37 votes · 0 comments · Azure Monitor-Log Analytics » Solutions / Packs Gallery and new IP ideas
6 votes · TJ Cornish shared this idea
Agreed. Previously accrued credits should be kept when restarting a deallocated machine.
We are looking into this requirement. If you have any further requirements in this area, please let us know in the comments.
274 votes · planned · Admin: Azure IaaS Engineering Team (Microsoft Azure) responded
This is still coming. The work is being completed now and we will be able to expose it in a few months.
195 votes · planned · Admin: Azure IaaS Engineering Team (Microsoft Azure) responded
Currently you can increase the size of a disk when the VM is in the stop-deallocated state. We are also working on a solution where a disk can be increased in size while attached to a running VM.
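The workaround described above (resize while stop-deallocated) can be scripted. A minimal Azure CLI sketch; the resource group, VM, and disk names below are hypothetical placeholders, not taken from this thread:

```shell
# Stop-deallocate the VM so the attached managed disk can be resized.
az vm deallocate --resource-group myResourceGroup --name myVM

# Grow the managed disk; Azure disks can only be grown, never shrunk.
az disk update --resource-group myResourceGroup --name myDataDisk --size-gb 256

# Start the VM again; the guest OS still needs to extend the
# partition and filesystem to use the new space.
az vm start --resource-group myResourceGroup --name myVM
```

Note that deallocating releases the VM's dynamic public IP unless it is configured as static, so plan the maintenance window accordingly.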
Agreed, and total VM throughput is hard to find in the documentation. The VM selection screen proudly shows "8,000 IOPS" or similar, but in practice our general computing tasks never come close to the IOPS limit; we are limited by the VM throughput (MB/s) cap instead.
This is especially frustrating because a single transfer between drives on the same machine counts against the cap twice: copying from the E drive to the F drive means your effective throughput is only half of what is advertised as the machine capability (on a 96 MB/s cap, a local copy tops out around 48 MB/s).
Machines like the E4s_v3 have 4 vCPUs and relatively a lot of RAM (32 GB) compared to a typical client desktop or laptop, but the VM throughput limit is far worse than the experience on a typical client machine. It is very frustrating to have an application that runs faster on a $1,000 desktop computer than on a $600/month Azure VM. Increasing to the next VM size gives us cores and RAM we don't need, and now we're talking $1,000/month to get performance comparable to a desktop that costs $1,000 to purchase.
I just found out about the new "L-series" storage-optimized VMs, which unfortunately aren't available in our region yet, but even these have a throughput limit only about 30% higher than the E series (125 MB/s for the L4s vs. 96 MB/s for the E4s_v3). Limits across the board (A series, D series, E series, etc.) should be at least 2x higher, and "storage optimized" machines should be faster still.
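For anyone trying to confirm which limit they are hitting, a quick sequential-write check makes the MB/s cap visible. A sketch using `dd`; the file path and transfer size are arbitrary placeholders:

```shell
# Write 256 MiB sequentially and report throughput. conv=fsync forces
# the data to disk so the page cache doesn't inflate the number.
dd if=/dev/zero of=/tmp/throughput-test.bin bs=1M count=256 conv=fsync

# Remove the test file afterwards.
rm -f /tmp/throughput-test.bin
```

If the reported MB/s plateaus near the documented per-VM cap even at large block sizes, the bottleneck is the VM throughput limit rather than IOPS.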
Thanks for the valid suggestion. Your feedback is now open for the user community to upvote, which allows us to prioritize your request against our existing feature list and gives us insight into the potential impact of implementing the suggested feature.
336 votes · under review · Admin: Azure IaaS Engineering Team (Microsoft Azure) responded
The status of this item has been moved back to Under Review. We initially planned to add VHDX support as part of our support for Hyper-V Gen2 VMs, but we ended up using the VHD format for Gen2 VMs as well. Some aspects of the Azure infrastructure do not cleanly support VHDX OS or data disks, so this feature depends on those internal services being updated, which is an ongoing process.
Any update on timing? We're testing Azure Site Recovery, and the down-conversion of VHDX to VHD means our failover time is 30 minutes or more for a small virtual machine, versus just a few minutes for a machine that doesn't require a drive conversion.
Downgrading our production servers to VHD is not an acceptable tradeoff.
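For context on why that conversion step is slow: going from VHDX to the fixed-size VHD that Azure expects rewrites the entire disk image. One way to do the conversion outside Hyper-V is `qemu-img`; a sketch where the file names are placeholders (qemu calls the VHD format "vpc", and `subformat=fixed` is used because Azure only accepts fixed-size VHDs):

```shell
# Convert a Hyper-V VHDX image to a fixed-size VHD for Azure.
# Source and destination paths are hypothetical placeholders.
qemu-img convert -f vhdx -O vpc -o subformat=fixed disk.vhdx disk.vhd
```

The full-disk rewrite is roughly proportional to the provisioned disk size, which lines up with the multi-minute failover overhead described above.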