Deploying Temporal on AWS ECS with Terraform
Comments
llmslave
What's funny is that in some sense, Temporal replaces a lot of the AWS stack. You don't really need queues, Step Functions, Lambdas, and the rest. I personally think it's a better compute model than the wildly complicated AWS infra. Deploying Temporal on compute primitives is simply better, and allows you to be cloud agnostic.
causal
I sometimes suspect AWS deliberately looks for ways to extract low-overhead tasks into dedicated services for the simple reason that many people will pay for the service without thinking about whether they really need it.
llmslave
It's very easy to add AWS services, but after building them into a stack over a few years, it's basically impossible to remove them.
bithavoc
yes, one word: IAM
swyx
The same guys who worked on AWS Step Functions built Temporal. They just learned over time.
DoofWarrior
Why go with Fargate instead of EC2?
norapap
We went with Fargate because it keeps things lean: no servers to manage, no patching, no scaling headaches. It's perfect for our bursty workloads, since we only pay when containers actually run. Plus autoscaling just works.
In the GitHub repo you can find comments showing how to switch to EC2 if your workload needs it.
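For illustration, a minimal Terraform sketch of what a Fargate-based worker service can look like; the resource names, task definition, and networking references below are placeholders, not necessarily what the repo actually uses:

```hcl
# Sketch of an ECS service running Temporal workers on Fargate.
# Cluster, task definition, subnets, and security group are placeholders.
resource "aws_ecs_service" "temporal_worker" {
  name            = "temporal-worker"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.temporal_worker.arn
  desired_count   = 2
  launch_type     = "FARGATE"

  network_configuration {
    subnets          = var.private_subnet_ids
    security_groups  = [aws_security_group.worker.id]
    assign_public_ip = false
  }
}
```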
leetrout
Just as an aside for the Fargate convo... we switched from Fargate to an EC2 Auto Scaling group so we could run a custom AMI with our container images (which are larger than I'd like) pre-baked, and we went from a ~3 minute startup on Fargate to ~30 seconds with the ASG when it's not triggering a scaling action.
We're using Prefect, not Temporal, and each Prefect flow launches in a discrete ECS task, so the waiting added up.
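For anyone wanting to try the same switch, it usually boils down to a launch template on the pre-baked AMI plus an ECS capacity provider; a rough Terraform sketch, with all names, the AMI variable, and the instance type being illustrative:

```hcl
# Sketch of the EC2 path: a launch template on a custom AMI with the
# container images already pulled, wired to ECS via a capacity provider.
resource "aws_launch_template" "ecs_worker" {
  name_prefix   = "ecs-worker-"
  image_id      = var.prebaked_ami_id # AMI with images pre-baked
  instance_type = "m6i.large"
  user_data = base64encode(<<-EOT
    #!/bin/bash
    echo "ECS_CLUSTER=${aws_ecs_cluster.main.name}" >> /etc/ecs/ecs.config
  EOT
  )
}

resource "aws_autoscaling_group" "ecs_worker" {
  name                = "ecs-worker"
  min_size            = 1
  max_size            = 10
  vpc_zone_identifier = var.private_subnet_ids

  launch_template {
    id      = aws_launch_template.ecs_worker.id
    version = "$Latest"
  }
}

resource "aws_ecs_capacity_provider" "worker" {
  name = "worker-asg"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs_worker.arn

    managed_scaling {
      status = "ENABLED"
    }
  }
}
```

The capacity provider then gets attached to the cluster and referenced from the service's capacity provider strategy in place of `launch_type = "FARGATE"`.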
jen20
This article is really about hosting Temporal _workers_ in ECS - which is the "easy" part - not running the Temporal service itself. That would be a valuable follow-up!
whalesalad
99.9% sure the entire article was written by Claude or ChatGPT, so you can probably direct that question at the source. Make sure to end your prompt with "no emojis".
> Autoscaling is configured via CloudWatch alarms on CPU usage:
> Scale-out policy adds workers when CPU > 30%.
> Scale-in policy removes idle workers when CPU < 20%.
Does this handle the case where there are longer-running activities that have low CPU usage? Couldn't these be canceled during scale-in?
Temporal would retry them, but it would make some workflow runs take longer, which could be annoying for some user-interactive workflows.
Otherwise I've seen setups that hit the metrics endpoint to query things like `worker_task_slots_available` to scale up, or query pending activities, pending workflows, etc. to scale down per worker.
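As a rough illustration of that metrics-driven approach, assuming the worker's Prometheus metrics are already forwarded to CloudWatch (the article doesn't set this up; the namespace, metric name, and target value below are placeholders):

```hcl
# Sketch: scale the ECS worker service on a Temporal worker metric
# instead of CPU. Assumes the metric is published to CloudWatch.
resource "aws_appautoscaling_target" "worker" {
  service_namespace  = "ecs"
  resource_id        = "service/${aws_ecs_cluster.main.name}/${aws_ecs_service.temporal_worker.name}"
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 2
  max_capacity       = 20
}

resource "aws_appautoscaling_policy" "task_slots" {
  name               = "temporal-task-slots"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.worker.service_namespace
  resource_id        = aws_appautoscaling_target.worker.resource_id
  scalable_dimension = aws_appautoscaling_target.worker.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 5 # aim to keep ~5 free task slots per worker

    customized_metric_specification {
      metric_name = "temporal_worker_task_slots_available"
      namespace   = "Temporal/Workers"
      statistic   = "Average"
    }
  }
}
```

Scaling on available task slots or pending work also sidesteps the low-CPU, long-running activity problem mentioned above, since idle CPU no longer triggers scale-in on its own.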