How can we improve Azure Container Instances?

Larger memory sizes - at least 32 GB, ideally 64 GB

Current memory sizes are too small for my usage.

I need something that starts quickly, loads a 1-50 GB dataset, processes it in memory for half an hour, and then deallocates.
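
For reference, a minimal sketch of that load-process-deallocate cycle using the azure-mgmt-containerinstance Python SDK. The subscription ID, resource group "my-rg", region, and image name are placeholders, and the 14 GB request reflects today's per-group ceiling rather than the 32-64 GB this idea asks for:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerinstance import ContainerInstanceManagementClient
    from azure.mgmt.containerinstance.models import (
        Container, ContainerGroup, ContainerGroupRestartPolicy,
        OperatingSystemTypes, ResourceRequests, ResourceRequirements,
    )

    client = ContainerInstanceManagementClient(
        DefaultAzureCredential(), "<subscription-id>"  # placeholder
    )

    group = ContainerGroup(
        location="westeurope",  # placeholder region
        os_type=OperatingSystemTypes.LINUX,
        restart_policy=ContainerGroupRestartPolicy.NEVER,  # run once, then stop
        containers=[
            Container(
                name="dataset-job",
                image="myregistry.azurecr.io/dataset-job",  # placeholder image
                resources=ResourceRequirements(
                    # 14 GB is the current per-group maximum; this idea
                    # asks to be able to request 32-64 GB here instead.
                    requests=ResourceRequests(cpu=4.0, memory_in_gb=14.0),
                ),
            )
        ],
    )

    # Create the group, let the in-memory job run, then deallocate.
    client.container_groups.begin_create_or_update("my-rg", "dataset-job", group).result()
    # ... wait for the container to finish (poll state or fetch logs) ...
    client.container_groups.begin_delete("my-rg", "dataset-job").result()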

23 votes

Jacek shared this idea

9 comments

  • Tobias Zimmergren commented

    I'm moving away from AKS in favor of ACI in some of my scenarios. This works very well most of the time, but sometimes I need burst compute and memory-intensive workloads, something I can easily achieve with AKS because I can decide the underlying node size and pick something with 16 or more cores and a lot of memory.

    I think it would definitely increase adoption of Container Instances if it could handle bigger workloads. For my tasks running on AKS, I hit the best performance around 8 cores and 32 GB, but with ACI I have to spin up multiple ACI groups and then split the tasks across them to get reasonable performance (roughly as sketched below).
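
    A rough sketch of that fan-out workaround, assuming the same Python SDK as in the sketch above; the shard count, group names, SHARD_INDEX variable, and image are all hypothetical:

        from azure.mgmt.containerinstance import ContainerInstanceManagementClient
        from azure.mgmt.containerinstance.models import (
            Container, ContainerGroup, ContainerGroupRestartPolicy,
            EnvironmentVariable, OperatingSystemTypes, ResourceRequests,
            ResourceRequirements,
        )

        def fan_out(client: ContainerInstanceManagementClient, n_shards: int) -> None:
            # One container group per task shard, each capped at the
            # current per-group limit that forces the split to begin with.
            for shard in range(n_shards):
                group = ContainerGroup(
                    location="westeurope",  # placeholder region
                    os_type=OperatingSystemTypes.LINUX,
                    restart_policy=ContainerGroupRestartPolicy.NEVER,
                    containers=[
                        Container(
                            name=f"job-shard-{shard}",
                            image="myregistry.azurecr.io/task",  # placeholder
                            environment_variables=[
                                # Tell each container which slice of work it owns.
                                EnvironmentVariable(name="SHARD_INDEX", value=str(shard)),
                            ],
                            resources=ResourceRequirements(
                                requests=ResourceRequests(cpu=4.0, memory_in_gb=14.0),
                            ),
                        )
                    ],
                )
                client.container_groups.begin_create_or_update(
                    "my-rg", f"job-shard-{shard}", group
                ).result()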

  • Akshay Raj Gollahalli commented

    This is required for us as well. We are planning to use this as a batch process for some ML-related work, and 14 GB is nothing, really nothing. I think 64 GB is too little as well; you might want to increase it further, since the underlying VMs have ~448 GB, I believe.

  • Jacek commented

    For now I am waiting for approval, and we will probably start with smaller data sizes.

    There is no need for many CPUs - 2 or 4 are perfectly enough.

    Out of 329 datasets, 49 require more than 14 GB, 7 require more than 28 GB, and 2 are over 56 GB.

    So you could start with 4 CPUs and 28 GB and see how often it is used.

    As I wrote at the beginning, I am waiting for approval, so we are not using ACI at the moment.

    Our use case is autoscaling with Docker Machine.

  • Anonymous commented

    There are two issues: some regions have tighter per-deployment resource limits, and there is also the total amount of resources available in any one region.

    Our two main regions are AustraliaEast & UkSouth; based on this table, we are very limited: https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quotas

    So primarily, Windows support and more CPU & RAM in AustraliaEast would be immediately useful.

    Beyond that, up to 4 CPUs & 32 GB of memory would allow us to use ACI for burst workloads that we don't want to run inside our k8s cluster.
