SQL DW doesn't support the longer-term backup retention recently released for SQL DB. The current workaround is to restore and pause. It would be great to have this capability for compliance and auditing requirements.150 votes
Thank you for voting for this feature! We are aware of this scenario and are looking into ways of supporting this. In the meantime, stay tuned for an update and please continue voting for this feature.
The current workaround for cross-subscription restore is:
1. Restore to a new logical server in the same prod subscription
2. ‘Move’ the new logical server with the restored data warehouse to new subscription
It would be faster and simpler to enable restore directly to a different subscription.33 votes
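Step 2 of the workaround above can be scripted against the Azure Resource Manager "move resources" operation. A minimal sketch, assuming placeholder subscription IDs, resource group names, and server name (the api-version shown should be verified against current docs):

```python
# Sketch of step 2 of the workaround: moving the restored logical server to
# another subscription with the ARM moveResources call. All IDs and names
# below are placeholders.
import json

SOURCE_SUB = "00000000-0000-0000-0000-000000000000"   # placeholder
TARGET_SUB = "11111111-1111-1111-1111-111111111111"   # placeholder
SOURCE_RG, TARGET_RG = "prod-rg", "target-rg"         # placeholders
SERVER = "restored-logical-server"                    # placeholder

def build_move_request(source_sub, source_rg, target_sub, target_rg, server):
    """Build the URL and JSON body for the ARM moveResources operation."""
    url = (f"https://management.azure.com/subscriptions/{source_sub}"
           f"/resourceGroups/{source_rg}/moveResources"
           f"?api-version=2021-04-01")
    body = {
        # Full resource ID of the restored logical server to move.
        "resources": [
            f"/subscriptions/{source_sub}/resourceGroups/{source_rg}"
            f"/providers/Microsoft.Sql/servers/{server}"
        ],
        # Destination resource group in the target subscription.
        "targetResourceGroup":
            f"/subscriptions/{target_sub}/resourceGroups/{target_rg}",
    }
    return url, json.dumps(body)

url, body = build_move_request(SOURCE_SUB, SOURCE_RG, TARGET_SUB, TARGET_RG, SERVER)
# POST `body` to `url` with a bearer token to start the move.
```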
It would be absolutely magic if we had something like Snowflake:
- Automatically suspends the warehouse if no query has been issued during a fixed amount of time
- Automatically resumes the warehouse when a query is issued10 votes
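The requested behaviour can be sketched as a simple idle check that a scheduler (e.g. a timer-triggered function) would run periodically; the actual pause call is out of scope here, and the timeout value is an assumption:

```python
# Minimal sketch of the auto-suspend decision. A scheduler would run this
# periodically and, when it returns True, invoke the pause operation
# (not shown). The 15-minute window is an assumed example value.
IDLE_TIMEOUT_SECONDS = 15 * 60

def should_suspend(seconds_since_last_query, idle_timeout=IDLE_TIMEOUT_SECONDS):
    """True when no query has been issued within the idle window."""
    return seconds_since_last_query >= idle_timeout
```

Auto-resume would be the mirror image: intercept an incoming query, and if the warehouse is paused, resume it before executing.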
The SHRINKDATABASE command should be supported whether TDE is on or off, because extra cost is charged when the unallocated space is large.86 votes
Thank you for all the feedback folks. We understand the scenario and are actively working on improving this experience. We will reach back out when it is addressed and share when we have an update.
Currently SQL DW doesn't have an option to track deadlocks. It would be good if that option were available in the Azure portal, as it is for SQL Database.2 votes
The ability to restore a single table from backup, to an existing DW, would greatly assist in recovering from errors.55 votes
Provide ActiveQueryCount as a result output under .properties for the REST API database state parameter.
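To illustrate the request, this sketch parses a response shaped like today's database GET result; note that `ActiveQueryCount` is the *requested* addition and does not exist in the current API:

```python
# Sketch of reading database state from the REST API response. The
# ActiveQueryCount field is the requested addition, shown here where it
# would live under .properties; it is not in the current response.
import json

sample_response = json.loads("""
{
  "name": "mydw",
  "properties": {
    "status": "Online",
    "ActiveQueryCount": 3
  }
}
""")

state = sample_response["properties"]["status"]
# With the requested field, callers could also do:
active = sample_response["properties"].get("ActiveQueryCount")
```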
Primary and secondary maintenance windows must be mutually exclusive (either Saturday/Sunday or during weekdays). We run batch jobs daily, with the exception of Saturday, so we must schedule maintenance on Saturday.3 votes
Especially in dev environments, a running DW with no active queries is a waste. Some guard rails to protect users from themselves would be useful.
Something like the equivalent feature in Databricks.1 vote
Resource Governance - Resource Pools - Control CPU, physical IO, memory, priority, run-time cap, max requests, concurrency, request timeout...
1. Ability to manage workloads effectively
2. Enables specifying limits on the amount of CPU, physical IO and memory
3. User-Defined Resource Pools
a. Memory size
b. Memory cap
c. Maximum requests
d. Grant time-out
e. Run-time cap75 votes
We are in the very early stages of planning this improvement.
The current VNet endpoint solution does not allow connections via ExpressRoute.
Allow a private IP from the VNet to be assigned to the data warehouse, so that we can easily route to the warehouse from on-premises via ExpressRoute, removing the need for any complex peering or other IT infrastructure involvement.6 votes
Ideally all billable components of a Data Warehouse should be visible via the Azure Portal.
We had a client recently who was trying to reconcile their billing, and could not do so as the DW size did not appear in the portal. This then required the DBA team to retrieve the size via the client admin tools: https://docs.microsoft.com/en-us/sql/relational-databases/databases/display-data-and-log-space-information-for-a-database?view=sql-server-2017
Trending on billable metrics potentially also helps customers forecast their upcoming Azure costs.2 votes
sp_send_dbmail needs to be supported to send mails from stored procedures created on Azure.
We should have the capabilities of sp_send_dbmail, which is available with on-premises databases.7 votes
Thank you for all the feedback folks. Please comment on your scenario below. You can create Azure alerts for metrics and logs along with Azure functions to send emails.
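The suggested workaround can be sketched as a small function-style email sender standing in for sp_send_dbmail; the SMTP host and addresses are placeholders, and the actual send is commented out:

```python
# Minimal sketch of the suggested workaround: an Azure Function (or any
# small script) that builds and sends a notification email in place of
# sp_send_dbmail. Host and addresses are placeholders.
import smtplib
from email.message import EmailMessage

def build_notification(subject, body, sender, recipient):
    """Assemble a plain-text notification email."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content(body)
    return msg

msg = build_notification(
    subject="DW load finished",
    body="Nightly load completed at 03:12 UTC.",
    sender="dw-alerts@example.com",      # placeholder address
    recipient="ops-team@example.com",    # placeholder address
)

# To actually send (requires a reachable SMTP host):
# with smtplib.SMTP("smtp.example.com") as s:
#     s.send_message(msg)
```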
Increase the concurrency limit from 32 to unlimited, and keep it tweakable so that customers can vary it according to their needs.17 votes
Concurrency was increased to 128. Unlimited concurrency requires further analysis.
Implement the sys.dm_db_stats_properties DMV to expose the modification counter and get a better idea of when stats should be updated. Using STATS_DATE isn't a complete solution, as you have no idea whether any rows have changed since that date.3 votes
Thank you for all the feedback folks. We are continuously improving the manageability experience with SQL Data Warehouse which includes automatic statistics. We will reach back out when this is on the roadmap and can share when we have an update.
It would be great if table statistics were automatically created and updated in Azure Data Warehouse.47 votes
Working on now. Should be out in the next few months!
It would be great if we could have a SQL command to pause/resume the DW instance, like we have for scale up/down (run against the master database). This would let us manage it from ADF without having to trigger the REST API or PowerShell.44 votes
Thank you for voting for this feature. We understand this scenario and will consider this in a future release. For now, please continue monitoring this item and have your team vote for this feature. Thank you for your patience.
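In the meantime, the pause/resume REST calls can be driven from ADF (e.g. via a Web activity) or any script. This sketch only builds the request URLs; subscription, resource group, server, and database names are placeholders, and the api-version should be checked against current docs:

```python
# Sketch of the pause/resume REST endpoints for a SQL DW database.
# All identifiers are placeholders; POST to the URL with a bearer token.
BASE = "https://management.azure.com"
API = "api-version=2014-04-01"  # verify against current REST API docs

def dw_action_url(sub, rg, server, db, action):
    """Build the URL for POSTing a pause or resume action."""
    assert action in ("pause", "resume")
    return (f"{BASE}/subscriptions/{sub}/resourceGroups/{rg}"
            f"/providers/Microsoft.Sql/servers/{server}"
            f"/databases/{db}/{action}?{API}")

pause_url = dw_action_url("sub-id", "my-rg", "myserver", "mydw", "pause")
resume_url = dw_action_url("sub-id", "my-rg", "myserver", "mydw", "resume")
```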
When using PolyBase to load into the Data Warehouse via Data Factory, the CONTROL permission on the database is required for the user.
Can this be limited to a schema owner, or made more granular at the database level?11 votes
You can see what the database is scaled to, e.g. DWU 200, but how do you know how much is actually being used over time? The portal displays a graph of both the DWU limit and the DWU used, but there is no way to programmatically monitor how much is being used.31 votes
We are actively improving our monitoring experience. Currently we have ‘DWU Used’ in the portal which is a blend between CPU and IO to indicate data warehouse utilization. We also have future improvements on our road map such as Query Data Store and integrating with Azure Monitor for near real time troubleshooting in the Azure portal. If anyone has any other feedback, please elaborate on your scenario on this thread. Thank you for your continued support!
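For programmatic access, the same metric is surfaced through the Azure Monitor metrics REST API. A sketch that builds the query URL, where the metric name `dwu_used`, the interval, and the api-version are assumptions to verify against the metrics reference:

```python
# Sketch of querying 'DWU used' via the Azure Monitor metrics REST API.
# Resource IDs are placeholders; metric name and api-version should be
# verified against the supported-metrics reference.
resource_id = ("/subscriptions/sub-id/resourceGroups/my-rg"
               "/providers/Microsoft.Sql/servers/myserver/databases/mydw")

def metrics_url(resource_id, metric="dwu_used", interval="PT1H"):
    """Build an Azure Monitor metrics query URL for a DW database."""
    return (f"https://management.azure.com{resource_id}"
            f"/providers/Microsoft.Insights/metrics"
            f"?metricnames={metric}&interval={interval}"
            f"&api-version=2018-01-01")

url = metrics_url(resource_id)
# GET `url` with a bearer token; the response contains timestamped values.
```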
The LABEL column in the sys.dm_pdw_exec_requests DMV is limited to 255 characters.
This field needs to be much larger to store rich metadata/context for some queries.4 votes
Thank you for all the feedback folks. Unfortunately this is taking longer than we’d like. We will reach back out when it is on the roadmap and can share when we have an update.