SQL Data Warehouse

Do you have an idea or suggestion based on your experience with SQL Data Warehouse? We would love to hear it! Please take a few minutes to submit your idea or vote up an idea submitted by another SQL Data Warehouse customer. All of the feedback you share in these forums will be monitored and reviewed by the SQL Data Warehouse engineering team. By suggesting or voting for ideas here, you will also be one of the first to know when we begin work on your feature requests and when we release the feature.

Remember that this site is only for feature suggestions and ideas! If you have technical questions please try our documentation, MSDN forum, or StackOverflow. If you need support, please open a support ticket with us.

How can we improve Microsoft Azure SQL Data Warehouse?

Enter your idea and we'll search to see if someone has already suggested it.

If a similar idea already exists, you can support and comment on it.

If it doesn't exist, you can post your idea so others can support it.

Enter your idea and we'll search to see if someone has already suggested it.

  1. hash distribute fact table on multiple columns

    A large fact table often has multiple join keys to multiple dimension tables, so while it can be optimised for one join, it always creates data movement for the other joins. Hash distribution should accept multiple columns so that all joins are optimised and dimension joins create no data movement.
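    A minimal sketch of the limitation (table and column names below are illustrative, not from the idea itself): today `DISTRIBUTION = HASH` accepts exactly one column, so only one join key can be co-located, and replicating small dimension tables is only a partial workaround.

```sql
-- Only a single distribution column is accepted today:
CREATE TABLE dbo.FactSales
(
    SaleId      BIGINT NOT NULL,
    CustomerKey INT    NOT NULL,
    ProductKey  INT    NOT NULL,
    Amount      DECIMAL(18, 2)
)
WITH
(
    DISTRIBUTION = HASH (CustomerKey),  -- joins on ProductKey still shuffle
    CLUSTERED COLUMNSTORE INDEX
);

-- Partial workaround: replicate small dimension tables so that joining them
-- never moves fact rows, regardless of the fact table's distribution key.
CREATE TABLE dbo.DimProduct
(
    ProductKey  INT NOT NULL,
    ProductName NVARCHAR(100)
)
WITH (DISTRIBUTION = REPLICATE);
```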

    46 votes  ·  3 comments  ·  Performance
    • Resource class override

      Having a fixed resource class assigned to a user is too rigid. Most queries can be executed efficiently with a small resource class, but certain operations (e.g. rebuilding an index) should run with a larger one. I need to switch between several user accounts just to have appropriate resources for whatever operation or query I need to run. Users assigned to xlargerc can only execute a single query at a time.

      Would be great to be able to set/change the resource class used for executing each statement. Alternatively, being able to execute as a different user could…
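      Until a per-statement override exists, the usual workaround is to flip role membership, since resource classes are implemented as database roles (the user name below is illustrative):

```sql
-- Temporarily promote a user for a heavy operation, then demote again.
EXEC sp_addrolemember  'xlargerc', 'etl_user';   -- grant the large resource class
-- ... run the index rebuild as etl_user ...
EXEC sp_droprolemember 'xlargerc', 'etl_user';   -- revert to the default smallrc
```

      Note that the change applies to queries started after the role change, which is exactly the per-statement granularity this idea asks to avoid managing by hand.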

      32 votes  ·  4 comments  ·  Performance
      • Support table variables

        Prior to data analysis, it is often necessary to collect some initial information and store it in table format. Creating and dropping a table or a temp table takes too much time (seconds), since it has to be distributed over multiple nodes. Table variables would solve this performance issue.
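        For illustration, the requested syntax versus today's temp-table alternative (object names are made up):

```sql
-- Requested: a table variable, created without cross-node coordination.
DECLARE @staging TABLE (Id INT, Val NVARCHAR(50));   -- not supported in SQL DW today

-- Current alternative: a session-scoped temp table, which is still a
-- distributed object and therefore slower to create and drop.
CREATE TABLE #staging
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP)
AS SELECT Id, Val FROM dbo.SourceTable WHERE 1 = 0;  -- empty copy of the shape
```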

        27 votes  ·  0 comments  ·  Performance
        • Enable Dynamic compute - scaling up the DW should not kill any existing query

          Scaling the warehouse up or down kills all ongoing sessions.
          Adding or removing compute capacity should be truly dynamic: when I add more compute by scaling up, the DW should absorb the additional capacity without killing existing queries.
          That compute could instead be brought into play for new queries submitted after the addition.

          Thanks,
          Mayank

          23 votes  ·  0 comments  ·  Performance
          • Support for updating statistics on a single partition

            This would increase update statistics performance for users who are currently loading data into a single partition and updating statistics afterwards.

            Otherwise with growing data size, updating statistics takes longer after every load.
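            The request, sketched against the syntax boxed SQL Server already has (table and statistics names are illustrative):

```sql
-- Today: statistics are updated across the whole table after each load.
UPDATE STATISTICS dbo.FactSales (stats_ingressDay);

-- Requested: the partition-level form available in SQL Server,
-- which resamples only the freshly loaded partition.
UPDATE STATISTICS dbo.FactSales (stats_ingressDay)
WITH RESAMPLE ON PARTITIONS (42);   -- not currently supported in SQL DW
```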

            19 votes  ·  2 comments  ·  Performance
            • Support rebuilding DW indexes with Online = ON so the table is accessible during the rebuild

              This works:
              ALTER INDEX ALL ON VulnerabilityScope REBUILD
              But while the rebuild takes place, the entire table is locked against any query, even selecting a single row. Users complain when the table is inaccessible during the 30-minute rebuild.

              This currently returns an error:
              ALTER INDEX ALL ON VulnerabilityScope REBUILD WITH (ONLINE = ON)
              Parse error at line: 1, column: 48: Incorrect syntax near 'ONLINE'.

              Please support rebuilding DW indexes with Online = ON
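              A workaround while `ONLINE = ON` is unsupported, assuming the table has a clustered columnstore index: `REORGANIZE` runs online and keeps the table readable, at the cost of a less thorough defragmentation than a full rebuild.

```sql
-- Online alternative to REBUILD for a CCI (table name from the idea above):
ALTER INDEX ALL ON VulnerabilityScope REORGANIZE;
```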

              17 votes  ·  1 comment  ·  Performance

                Thank you for the feedback, folks. For now, you can use ALTER INDEX … REORGANIZE as the online operation when rebuilding your CCI. We will reach out when this is on our roadmap. Please describe your scenario for this capability below and continue to vote. Thanks for your patience.

              • Performance with respect to dynamic SQL

                In our solutions we load data from CSV files into SQL DW internal tables using PolyBase/external tables.

                Per roll-out we face the challenge that the schema (columns) in the CSV may differ for the same table. We therefore implemented logic to look up the column names and data types from a dictionary table, and we create the external and internal table schemas dynamically. As we have roughly 500 tables with up to 20 columns per table, automating this process is the only way to go.

                The issue we face is that, compared to an on-premises SQL Server, the dynamic…

                16 votes  ·  2 comments  ·  Performance
                • Better Database & Query Monitoring through Azure Portal

                  Enable a better experience for monitoring and analysing queries and DWH/query performance. Currently the SQL DW blade in the Azure Portal only shows very basic information about queries and their execution, as well as DTUs, with no way to customize the charts or the (query) lists. Only the last 50 queries are shown in the list, there are no overall reports or statistics (e.g. long-running queries), and there is no way to filter the list or show other metrics. Please enable a monitoring experience for query and DB performance similar to what SQL DB offers (Query…

                  14 votes  ·  0 comments  ·  Performance

                    We are actively improving our monitoring experience. We have future improvements on our road map such as Query Data Store and integrating with Azure Monitor (logs and metrics) for near real time troubleshooting in the Azure portal. If anyone has any other feedback, please elaborate on your scenario on this thread and have your team continue to vote. Thank you for your continued support!

                  • Enable distribution elimination in the generated query plan if the query predicate contains the distribution key column as a filter

                    Say we have the following table:

                    CREATE TABLE [dbo].[Foo]
                    (
                        [columnWithGuids] [uniqueidentifier] NULL,
                        [column1] [int] NULL,
                        [column2] [bigint] NULL,
                        [ingressDay] [date] NULL
                    )
                    WITH
                    (
                        DISTRIBUTION = HASH ( [columnWithGuids] ),
                        CLUSTERED COLUMNSTORE INDEX,
                        PARTITION
                        (
                            [ingressDay] RANGE RIGHT FOR VALUES (N'2016-01-01T00:00:00.000', N'2016-01-02T00:00:00.000')
                        )
                    )

                    And we have a query:

                    SELECT
                        SUM([column1]) AS [Total],
                        [ingressDay]
                    FROM [dbo].[Foo]
                    WHERE [columnWithGuids] = '40300bab-03aa-8d51-7ecb-904dfaf47ec7'
                    GROUP BY
                        [ingressDay];

                    Currently, Azure SQL DW does not eliminate distributions to interrogate based on a predicate. This means that in the above case, even though only 1 distribution will ever have the data we're looking for, it will…

                    13 votes  ·  1 comment  ·  Performance
                    • Functionality to upload data to ADW where upload queries do not count in 32 concurrent queries limit

                      Today we use PDW/APS, and when we upload data to PDW using DWLoader, it does not count toward the 32-concurrent-query limit; this ensures users have the whole query window to use the data warehouse.
                      However, in ADW, when we ingest data with Azure Data Factory or manually using PolyBase, since it is technically a query, it counts toward the 32-concurrent-query limit.
                      It would be great to have a feature similar to PDW's.

                      10 votes  ·  0 comments  ·  Performance
                      • Add recommended statistics to EXPLAIN output

                        Enable needed statistics to be gathered in an automated fashion, by either showing recommendations in EXPLAIN output or providing a tool that analyses a specified workload.

                        9 votes  ·  0 comments  ·  Performance
                        • Add Query Store functionality

                          Same as Query Store in SQL Server 2016+
                          https://msdn.microsoft.com/en-AU/library/dn817826.aspx

                          It would be really helpful if we can keep track of all queries executed on the server together with corresponding CPU cycles, query plan(s), logical and physical reads, etc.

                          8 votes  ·  0 comments  ·  Performance
                          • CREATE SCHEMA causes pervasive blocking

                            When CREATE SCHEMA is issued, it is blocked by active work and blocks (almost) all new work.

                            CREATE SCHEMA should be nearly transparent to the operations of a database.

                            7 votes  ·  0 comments  ·  Performance
                            • Expose DMVs that display the actual execution plan (sqlplan) of a query on a node after the query executes

                              Currently, DBCC PDW_SHOWEXECUTIONPLAN only displays the *estimated* query plan of the step being executed on a compute node, *while* the query is running. There is no way to see the actual execution plan of the query step on the node.

                              This prevents us from seeing the details of the query being executed such as the stats on how many rows were estimated vs actual rows returned, or rowgroups eliminated by the ColumnStore index, or any gross under/over estimates in the row counts owing to missing/bad statistics.
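                              For context, the estimated-plan tooling that exists today (the request id and the distribution/spid values below are illustrative):

```sql
-- Find the running step for a request...
SELECT request_id, step_index, operation_type, distribution_type
FROM   sys.dm_pdw_request_steps
WHERE  request_id = 'QID1234';

-- ...then view the *estimated* plan for that step on one distribution.
DBCC PDW_SHOWEXECUTIONPLAN (1, 55);   -- (distribution id, spid)
```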

                              6 votes  ·  1 comment  ·  Performance

                                Thank you for voting for this feature! We are aware of this scenario and are looking into ways of supporting this and improving our query troubleshooting experience. In the meantime, stay tuned for an update and please continue voting for this feature.

                                • DW100 for Gen2

                                  We would like to scale down to DW100 during low demand.

                                  The minimum DW for Gen2 is DW1000. This represents a 10x cost increase during these hours.

                                  We are blocked from migrating until there is a DW100 equivalent.

                                  2 votes  ·  1 comment  ·  Performance
                                  • Workflow capability within Azure Data warehouse to run parallel queries.

                                    To maximise parallel execution and the MPP architecture, it would be great to have a workflow capability that allows users to run queries in parallel under a defined resource class.
                                    I appreciate this could be against the SQL engine design, but looking at the architecture we might be able to do something like this:

                                    workflow-Parallel (ResourceClassUser = XlargeRCuser, OptionalAllocMaxMemory = 0.5)
                                    {
                                        EXEC SP_Transform1;
                                        EXEC SP_Transform2;
                                        EXEC SP_Transform3;
                                        EXEC SP_Transform4;
                                    }

                                    where the 4 sprocs run in parallel under the defined RC, with max memory allocation as a percentage/ratio of the total available for that RC (half in this case).

                                    2 votes  ·  0 comments  ·  Performance
                                    • Ignore update when there are no changes

                                      In a DW project, there are usually scenarios where a full data load of a dimension happens. Even if no attributes change, the timestamp of the records gets changed, and that causes a cascading impact in the DW system, wherein dependent facts and dimensions are fully processed. This is a very resource-intensive task and causes the recreation of entire DW facts.

                                      To avoid this, it would be good to have T-SQL syntax that checks the required attributes between the incoming record and the destination, and updates if and only if an attribute has changed. E.g.:

                                      UPDATE Emp UPDATE WHEN Name, DOB CHANGED …
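                                      Until such syntax exists, one sketch of a workaround is to predicate the update on the tracked attributes actually differing (table and column names are illustrative; note that joins in SQL DW UPDATE statements have historically carried restrictions, hence the implicit-join form):

```sql
-- Only touch rows whose tracked attributes differ, so timestamps stay stable.
UPDATE d
SET    d.Name = s.Name,
       d.DOB  = s.DOB,
       d.ModifiedDate = GETDATE()
FROM   dbo.Emp d, stage.Emp s
WHERE  s.EmpId = d.EmpId
  AND (d.Name <> s.Name OR d.DOB <> s.DOB);
```

                                      A production version would also need NULL-safe comparisons for the tracked columns.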

                                      2 votes  ·  0 comments  ·  Performance
                                      • unicode compression

                                        Support Unicode compression or similar.

                                        As per https://docs.microsoft.com/en-us/sql/relational-databases/data-compression/data-compression unicode compression is not supported in SQL DW.

                                        We currently have to trade off between significantly increased data size and support for non-Latin characters. With predominantly Latin data, it would be good not to have to choose.
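                                        The trade-off, for illustration (made-up table definitions):

```sql
-- Latin-only, 1 byte per character:
CREATE TABLE dbo.Names_Varchar  (Name VARCHAR(100))
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);

-- Full Unicode, 2 bytes per character, with no Unicode
-- compression available to claw the space back:
CREATE TABLE dbo.Names_Nvarchar (Name NVARCHAR(100))
WITH (DISTRIBUTION = ROUND_ROBIN, HEAP);
```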

                                        2 votes  ·  0 comments  ·  Performance