SQL DW doesn't offer the longer-term backup retention capability recently released for SQL DB. The current workaround is to restore a copy and pause it. It would be great to have this capability natively for compliance and auditing requirements.130 votes
Thank you for voting for this feature! We are aware of this scenario and are looking into ways of supporting this. In the meantime, stay tuned for an update and please continue voting for this feature.
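Until then, a rough sketch of the restore-and-pause workaround with the Az PowerShell module (the resource group, server, database, and retention names below are placeholders, not part of the original request):

```powershell
# Sketch of the "restore and pause" workaround for longer-term retention.
$rg     = "myResourceGroup"
$server = "myserver"

# Reference the live data warehouse.
$dw = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName "mydw"

# Restore a point-in-time copy under a retention-specific name.
Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-1) `
    -ResourceGroupName $rg -ServerName $server `
    -TargetDatabaseName "mydw_retained_2019q4" `
    -ResourceId $dw.ResourceId

# Pause the restored copy so it incurs storage cost only, not compute.
Suspend-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName "mydw_retained_2019q4"
```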
The current workaround for cross subscription restore is:
1. Restore to a new logical server in the same prod subscription
2. ‘Move’ the new logical server with the restored data warehouse to the new subscription
It would be faster and simpler to enable restore directly to a different subscription.23 votes
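A sketch of that two-step workaround with Az cmdlets (server, resource group, and subscription values are placeholders; the target logical server must already exist or be created first with New-AzSqlServer):

```powershell
# Step 1: restore the data warehouse onto a new logical server in the prod subscription.
$source = Get-AzSqlDatabase -ResourceGroupName "prod-rg" -ServerName "prod-server" -DatabaseName "mydw"

Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).AddHours(-2) `
    -ResourceGroupName "prod-rg" -ServerName "prod-restore-server" `
    -TargetDatabaseName "mydw_restored" -ResourceId $source.ResourceId

# Step 2: move the new logical server (and the restored DW on it) to the other subscription.
$serverResource = Get-AzResource -ResourceGroupName "prod-rg" `
    -ResourceType "Microsoft.Sql/servers" -ResourceName "prod-restore-server"

Move-AzResource -ResourceId $serverResource.ResourceId `
    -DestinationSubscriptionId "<target-subscription-id>" `
    -DestinationResourceGroupName "target-rg"
```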
The DBCC SHRINKDATABASE command should be supported whether TDE is on or off, because extra storage cost is incurred when the unallocated space is large.82 votes
Thank you for all the feedback folks. We understand the scenario and are actively working on improving this experience. We will reach back out when it is addressed and share when we have an update.
The ability to restore a single table from backup to an existing DW would greatly assist in recovering from errors.53 votes
It would be absolutely magic if we had something like Snowflake:
- Automatically suspends the warehouse if no query has been issued during a fixed amount of time
- Automatically resumes the warehouse when a query is issued6 votes
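Nothing like this is built in today; one way to approximate the suspend half is a scheduled script (for example an Azure Automation runbook) that pauses the warehouse when no requests are active. A rough sketch, with placeholder names and credentials:

```powershell
# Requires the SqlServer module (Invoke-Sqlcmd) and the Az.Sql module.
Import-Module SqlServer

# Count running or queued requests other than this monitoring query itself.
$query = @"
SELECT COUNT(*) AS cnt
FROM sys.dm_pdw_exec_requests
WHERE [status] IN ('Running', 'Queued')
  AND [session_id] <> SESSION_ID();
"@

$active = Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "mydw" `
    -Username $sqlUser -Password $sqlPassword -Query $query

if ($active.cnt -eq 0) {
    # Nothing running: pause the warehouse to stop compute billing.
    Suspend-AzSqlDatabase -ResourceGroupName "dev-rg" -ServerName "myserver" -DatabaseName "mydw"
}
```

Automatic resume is harder to script this way, since a paused warehouse refuses connections, which is part of why a native Snowflake-style feature would help.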
Primary and secondary maintenance windows must be mutually exclusive (either Saturday / Sunday or during weekdays). We run batch jobs daily, with the exception of Saturday, so we must schedule maintenance on Saturday.3 votes
Especially in dev environments, a currently running DW with no active queries is a waste. Some guard rails to protect users from themselves would be useful.
Something like the equivalent feature in Databricks.1 vote
Resource Governance - Resource Pools - control over CPU, physical IO, memory, priority, run-time cap, max requests, concurrency, and request timeout (the SQL Server syntax being referenced is sketched after this item).
1. Ability to manage workloads effectively
2. Ability to specify limits on the amount of CPU, physical IO and memory
3. User-defined resource pools
a. Memory size
b. Memory cap
c. Maximum requests
d. Grant time-out
e. Run-time cap73 votes
We are in the very early stages of planning this improvement.
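For reference, a sketch of the SQL Server (box product) Resource Governor syntax the request is describing; none of this is available in SQL Data Warehouse today, and the names and values are illustrative only:

```sql
-- SQL Server Resource Governor: a pool caps CPU/memory, and a workload group layers
-- per-request limits (memory grant, grant time-out, run-time cap, max requests) on top.
CREATE RESOURCE POOL etl_pool
WITH (
    MIN_CPU_PERCENT    = 0,
    MAX_CPU_PERCENT    = 40,
    MIN_MEMORY_PERCENT = 0,
    MAX_MEMORY_PERCENT = 30
);

CREATE WORKLOAD GROUP etl_group
WITH (
    IMPORTANCE = MEDIUM,
    REQUEST_MAX_MEMORY_GRANT_PERCENT = 25,  -- memory cap per request
    REQUEST_MEMORY_GRANT_TIMEOUT_SEC = 60,  -- grant time-out
    REQUEST_MAX_CPU_TIME_SEC = 1800,        -- run-time cap
    GROUP_MAX_REQUESTS = 10                 -- maximum concurrent requests
)
USING etl_pool;

ALTER RESOURCE GOVERNOR RECONFIGURE;
```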
The current VNET endpoint solution does not allow connections via ExpressRoute.
Allow a private IP from the VNET to be assigned to the data warehouse, so that we can easily route to the warehouse from on-premises via ExpressRoute, removing the need for any complex peering or other IT infrastructure involvement.4 votes
sp_send_dbmail needs to be supported so that mails can be sent from stored procedures created on Azure.
We should have the same sp_send_dbmail capabilities that are available with SQL Server databases.6 votes
Thank you for all the feedback folks. Please comment on your scenario below. In the meantime, you can create Azure alerts on metrics and logs, together with Azure Functions, to send emails.
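A sketch of the alert-based approach mentioned in the response, using Az cmdlets; the metric name, threshold, and all resource names are illustrative and should be verified (for example with Get-AzMetricDefinition):

```powershell
# Target the data warehouse resource.
$dwId = (Get-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" -DatabaseName "mydw").ResourceId

# Action group with an email receiver (an Azure Function receiver could be added similarly).
$email = New-AzActionGroupReceiver -Name "dba-email" -EmailReceiver -EmailAddress "dba@contoso.com"
$ag    = Set-AzActionGroup -ResourceGroupName "myRG" -Name "dw-alerts" -ShortName "dwalerts" -Receiver $email

# Metric alert: fire when average DWU used stays above the threshold.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "dwu_used" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 900

Add-AzMetricAlertRuleV2 -Name "dwu-high" -ResourceGroupName "myRG" `
    -TargetResourceId $dwId -Condition $criteria -ActionGroupId $ag.Id `
    -WindowSize 00:15:00 -Frequency 00:05:00 -Severity 3
```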
Increase the concurrency limit from 32 to unlimited and keep it tweakable, so that customers can vary it according to their needs.17 votes
Concurrency was increased to 128. Unlimited concurrency requires further analysis.
It would be great if table statistics were automatically created and updated in Azure Data Warehouse.43 votes
We are working on this now. It should be out in the next few months!
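Until automatic statistics arrive, statistics have to be created and refreshed explicitly; a minimal sketch (table and column names are placeholders):

```sql
-- Create a single-column statistics object on a frequently joined/filtered column.
CREATE STATISTICS stat_FactSale_SaleDateKey ON dbo.FactSale (SaleDateKey);

-- Refresh statistics after large loads so the optimizer has current row counts.
UPDATE STATISTICS dbo.FactSale;
```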
It would be great if we could have a SQL command to pause/resume the DW instance, like we do for scale up/down (run against the master DB). This would let us manage it from ADF without having to trigger the REST API or PowerShell.43 votes
Thank you for voting for this feature. We understand this scenario and will consider this in a future release. For now, please continue monitoring this item and have your team vote for this feature. Thank you for your patience.
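For now the pause/resume has to happen outside T-SQL; a sketch of the PowerShell equivalent, which ADF can reach via an Azure Automation runbook or webhook (names are placeholders):

```powershell
# Pause (compute billing stops, storage remains).
Suspend-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" -DatabaseName "mydw"

# Resume before the next load window.
Resume-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" -DatabaseName "mydw"
```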
When using PolyBase to load into the Data Warehouse via Data Factory, the user requires CONTROL permission on the database.
Can this be limited to a schema owner, or made more granular at the database level?10 votes
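For context, this is the database-scoped grant currently required for PolyBase loads through Data Factory (database and user names are placeholders); the ask is for something narrower, for example schema-scoped:

```sql
-- Current requirement: CONTROL on the whole database for the loading principal.
GRANT CONTROL ON DATABASE::mydw TO adf_load_user;
```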
You can see what the database is scaled to, i.e. DWU 200, but how do you know how much is actually being used over time? The portal displays a graph of both the DWU limit and the DWU used, but there is no way to programmatically monitor how much is being used.29 votes
We are actively improving our monitoring experience. Currently we have ‘DWU Used’ in the portal which is a blend between CPU and IO to indicate data warehouse utilization. We also have future improvements on our road map such as Query Data Store and integrating with Azure Monitor for near real time troubleshooting in the Azure portal. If anyone has any other feedback, please elaborate on your scenario on this thread. Thank you for your continued support!
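The metrics behind the portal chart can already be pulled programmatically from Azure Monitor; a sketch with Az cmdlets (metric names should be confirmed with Get-AzMetricDefinition, and the resource names are placeholders):

```powershell
$dwId = (Get-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" -DatabaseName "mydw").ResourceId

# List the metric names exposed for the data warehouse resource.
Get-AzMetricDefinition -ResourceId $dwId | Select-Object Name

# Pull average DWU used over the last 24 hours in 15-minute grains.
Get-AzMetric -ResourceId $dwId -MetricName "dwu_used" `
    -StartTime (Get-Date).AddHours(-24) -EndTime (Get-Date) `
    -TimeGrain 00:15:00 -AggregationType Average
```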
The LABEL column in the sys.dm_pdw_exec_requests DMV is limited to 255 characters.
This field needs to be much larger to store rich metadata/context of some queries.4 votes
Thank you for all the feedback folks. Unfortunately this is taking longer than we’d like. We will reach back out when it is on the roadmap and can share when we have an update.
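For readers unfamiliar with the feature, labels are attached per query and read back from the DMV; the [label] column below is the 255-character field in question (the table name is a placeholder):

```sql
-- Tag a query with a label so it can be traced in the DMVs.
SELECT COUNT(*)
FROM dbo.FactSale
OPTION (LABEL = 'nightly_load : step 12 : FactSale row count check');

-- Read it back; [label] is truncated at 255 characters today.
SELECT request_id, [status], submit_time, [label]
FROM sys.dm_pdw_exec_requests
WHERE [label] LIKE 'nightly_load%'
ORDER BY submit_time DESC;
```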
At the moment the automated backup process takes place every 8 hours unless the database is paused. Upon resumption of the database, it appears that it has to be online for 8 hours before the next backup is taken.
If the system is resumed and then paused again, it is possible to go for a long duration without backups. Could you please evaluate taking an automatic backup either when the database is paused or when it is resumed?12 votes
We are actively exploring ways to enable event-driven backups. Currently this scenario can be addressed by User Defined Restore Points which is on our road map for this calendar year. Stay tuned and thank you for your patience.
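Once user-defined restore points are available, the pause gap described above could be closed by taking one just before pausing; a sketch (names are placeholders):

```powershell
# Create a user-defined restore point, then pause.
New-AzSqlDatabaseRestorePoint -ResourceGroupName "myRG" -ServerName "myserver" `
    -DatabaseName "mydw" -RestorePointLabel "pre-pause-$(Get-Date -Format yyyyMMddHHmm)"

Suspend-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "myserver" -DatabaseName "mydw"
```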
A customer asked if there are plans to implement some sort of elastic pools where you can include many data warehouses and share DWUs (eDWUs?) in the same way elastic database pools work with SQL Databases.29 votes
Thank you for all the feedback folks. This is on our radar and we will consider this in a future release. We will reach back out when it is on the roadmap and can share when we have an update. Please vote for this feature and comment on your scenario below.
The charts in the SQL DW blade in the portal and the ability to add alerts are very helpful. Please add additional metrics. The two metrics that I think would be most helpful (both can be derived from DMVs today, as sketched after this item) are:
Number of Queued Queries (queries get queued once you've exhausted the 32 concurrent queries or the available concurrency slots, or when a query needs more concurrency slots than are available)
Number of Concurrency Slots Available (the sum of concurrency slots used by currently running queries; this would help surface whether you need to scale to more DWUs)27 votes
Thank you for voting for this folks! This is on our radar. We have plans to improve workload management for Azure SQL Data Warehouse which includes monitoring capabilities within the Azure portal. We’d love to hear your feedback so please comment on your scenario below.
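Until these appear as portal metrics, both figures can be approximated from the DMVs; a sketch (the slot totals depend on the current DWU and resource classes, so treat the numbers as indicative):

```sql
-- Queries currently waiting for a concurrency slot.
SELECT COUNT(*) AS queued_queries
FROM sys.dm_pdw_exec_requests
WHERE [status] = 'Queued';

-- Concurrency slots granted vs. still being waited on.
SELECT [state], SUM(concurrency_slots_used) AS slots
FROM sys.dm_pdw_resource_waits
GROUP BY [state];
```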
SQL Data Warehouse scale stepping should be much smaller than 100 DWU (10 DWU or even 1 DWU)!
Please fix that ASAP!
Thank you!95 votes
Thank you for all the feedback folks. We are currently evaluating options for lower click stops for SQL Data Warehouse. Please continue to vote and comment on your scenario below. Thank you for your patience.