Update: Microsoft is moving away from UserVoice sites on a product-by-product basis throughout the 2021 calendar year. We will use first-party solutions for customer feedback.

Azure Synapse Analytics

We would love to hear your ideas for new features for Azure Synapse Analytics. Below, enter a new idea or upvote an existing one. The Synapse engineering team pays attention to all requests.

If you need a technical question answered or other help, try one of these options: the documentation, the MSDN forum, or Stack Overflow. If you need support, please open a support ticket.

  1. Allow appending for Apache Spark pool to Synapse SQL connector

    Allow this connector to append data to an existing dedicated SQL pool table instead of requiring the table to be dropped first. This is a barrier to adoption for some customers. I am not sure whether this is related to PolyBase external table limitations.

    1 vote · 0 comments · Workspace/Spark
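The behavior being requested matches Spark's standard save-mode semantics. A minimal sketch of the desired call shape, with the writer injected so it runs without a Spark session; the function name and fake writer are illustrative, not part of any Synapse API:

```python
# Desired behavior: honor Spark's standard "append" save mode instead of
# dropping and recreating the dedicated SQL pool table. The writer is
# injected here so the sketch stays self-contained; in a real session it
# would be a DataFrameWriter, i.e. df.write.
def append_to_table(writer, table):
    """Ask an injected writer for an append-mode save into `table`."""
    return writer.mode("append").saveAsTable(table)
```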
  2. Add support for passing variables when using the %run command

    When using the %run magic command to reference another notebook, it would be great if the user could use variables for notebook paths, to adapt to a wider range of use cases.

    1 vote · 0 comments · Workspace/Spark
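To illustrate the request: today the %run target must be a literal, so per-environment paths end up hard-coded. A sketch of the kind of path composition that variable support would enable; all paths and names below are hypothetical:

```python
# Hypothetical illustration of parameterized notebook references.
def notebook_path(base, env, name):
    """Compose a notebook path from environment-specific parts."""
    return f"{base}/{env}/{name}"

# Desired usage once variable substitution is supported (not valid today):
#   path = notebook_path("/shared", "dev", "setup")
#   %run {path}
```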
  3. Connect to a Spark database through SSMS and view data based on ACL in Storage

    We would like to access external tables generated from Parquet files in a storage account/ADLS. We would like to grant users permission to see these tables by logging on to our on-demand server using a SQL client like SSMS, or a reporting tool like Power BI. Today this is only possible if an AD user or group is sysadmin, which is not viable in our use case. In that case the user would also have access to all tables across the on-demand server, as the AD object ID is only granted permissions according to IAM on Azure Storage or…

    1 vote · 0 comments · Workspace/Spark
  4. Add support for Spark pool vCore usage metrics

    Customers would like Spark pool vCore usage metrics. Currently there are no metrics for determining vCore usage, such as average or peak usage. Spark pool vCore usage is very important for customers to establish a workload baseline and plan future expansion of their Spark pools.

    1 vote · 0 comments · Workspace/Spark
  5. 2 votes · 0 comments · Workspace/Spark
  6. Allow private hosted pip repositories in Synapse

    Allow privately hosted pip repositories in Synapse. Private packages are commonly used, and the current workaround of copying all the code is pretty obnoxious.

    7 votes · 0 comments · Workspace/Spark
  7. Need %pip installer within the workspace notebook, so we can install libraries as needed immediately

    9 votes · 0 comments · Workspace/Spark
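Where a %pip magic is unavailable, a common workaround is to shell out to the pip that matches the running kernel; whether the installed package is then visible to live Spark executors is environment-dependent. A minimal sketch, with illustrative function names:

```python
# Workaround sketch: invoke the pip belonging to the current interpreter.
# Assumes the environment permits writes, which may not hold in a managed pool.
import subprocess
import sys

def pip_install_cmd(package):
    """Build the `python -m pip install <pkg>` command for the current kernel."""
    return [sys.executable, "-m", "pip", "install", package]

def pip_install(package):
    """Run the install command; returns pip's exit code."""
    return subprocess.call(pip_install_cmd(package))
```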
  8. Connecting remotely to Azure Spark Pool for notebook hosting.

    We would like to host a notebook service on top of Azure Synapse Analytics, with a custom frontend.

    This requires the possibility of connecting the notebook to a Python-kernel running in a remote Azure Spark instance.

    According to the answer provided here in the Microsoft forums, this is currently not available.

    https://docs.microsoft.com/en-us/answers/questions/165607/connecting-a-notebook-remotely-to-an-azure-spark-p.html

    4 votes · 0 comments · Workspace/Spark
  9. Better error messaging for Synapse Spark Pools

    Today I'm getting an error message across multiple tenant workspaces (tested to make sure it wasn't a setup issue) when trying to start Spark clusters: "Failed to start cluster:". That doesn't really help identify what the issue might be. Last week I received the error "CLUSTERINTERMINALSTATEBEFORE_READY". Neither message is very useful, and both resolved on their own after waiting a while.

    Better error reporting, documentation of common errors and resolutions, and more stability on the Microsoft backend would go a long way toward increasing adoption.

    2 votes · 1 comment · Workspace/Spark
  10. Setting Custom Credentials for ADLS/ABS

    I would like to be able to use custom credentials when authenticating to ADLS or ABS when writing Spark dataframes out to those locations. This is a feature in Databricks, as shown here (https://docs.databricks.com/data/data-sources/azure/azure-datalake.html). Right now, when I write out Spark dataframes in Azure Synapse with Spark on Cosmos, it defaults to using my user credentials, but there are scenarios where I would want to use a service principal. This feature would be for those scenarios.

    1 vote · 0 comments · Workspace/Spark
  11. Support for Key Vault secret access from Spark/Python Notebook

    My customer needs access to their secrets stored in Azure Key Vault from within a Spark notebook. These credentials are used to access their Cognitive Services keys in order to complete their data engineering process. The libraries installed in Synapse conflict with the documentation provided by Key Vault.

    https://docs.microsoft.com/en-us/azure/key-vault/secrets/quick-create-python?tabs=cmd

    4 votes · 1 comment · Workspace/Spark
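A sketch of the retrieval step the request describes, with the Key Vault client injected so the logic stays SDK-independent. In Azure the client could be azure.keyvault.secrets.SecretClient, whose get_secret(name) returns an object carrying the value in its .value attribute; the helper name here is illustrative:

```python
# Sketch: fetch one secret via any client exposing get_secret(name).value
# (the shape of the azure-keyvault-secrets SecretClient). Injecting the
# client keeps this runnable without Azure credentials.
def fetch_secret(secret_client, secret_name):
    """Return a secret's value via a client exposing get_secret(name).value."""
    return secret_client.get_secret(secret_name).value
```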
  12. support for running notebooks from within notebooks using the %run magic

    IPython supports the use of the %run magic to run notebooks from within a notebook; having this support in the Synapse Spark environment would unblock migration from other environments.

    7 votes · 7 comments · Workspace/Spark
  13. Add support for wheel library distributions

    Support wheel distributions. Azure Databricks currently supports these:

    https://pythonwheels.com/

    3 votes · 1 comment · Workspace/Spark
  14. Add support for Delta Lake v0.7 and Spark 3.0 in Spark pools

    Delta Lake v0.6.1 doesn't support much of the ACID functionality. It would be great to upgrade the Delta Lake and Spark versions to utilize the functionality supported by Databricks.

    20 votes · 2 comments · Workspace/Spark
  15. Make available in Spark pools some utilities that already exist in Databricks

    It would be good to have some Spark utilities like those already available in Databricks, such as executing one notebook from another notebook and passing it parameters.

    Example:
    dbutils.notebook.run('NotebookName', 3600, parameters)

    This is much needed for building a dynamic notebook that can trigger the execution of another notebook.

    7 votes · 1 comment · Workspace/Spark
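The Databricks call quoted above can be sketched as a thin wrapper with the runner injected, which keeps the example self-contained; in Databricks the runner would be dbutils.notebook.run itself, and the wrapper name here is illustrative:

```python
# Sketch of the dbutils.notebook.run(name, timeout, parameters) call shape.
# `runner` is any callable (name, timeout_s, params) -> result; injecting it
# lets the example run without a Databricks or Synapse session.
def run_notebook(runner, name, timeout_s=3600, parameters=None):
    """Invoke runner(name, timeout_s, parameters) and return its result."""
    return runner(name, timeout_s, parameters or {})
```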
  16. Apache Spark - Allow Sharing a cluster across multiple users

    Currently, an entirely new cluster is spun up for every user who starts an Apache Spark session using notebooks. Please add the ability to share a single physical cluster across multiple users. Spinning up a new cluster (with a minimum of 3 nodes) for every user is very expensive.

    7 votes · 3 comments · Workspace/Spark
  17. Connect Spark Pool to Power BI

    Please support the ability to connect Spark pool to Power BI by exposing the cluster connection information for Spark pool.

    Currently, Spark data needs to be loaded into a different data source such as Azure SQL Data Warehouse (ADW) before Power BI can use the data.

    However, Power BI has the capability to connect to Spark (https://docs.microsoft.com/en-us/azure/databricks/integrations/bi/power-bi#step-2-get-azure-databricks-connection-information).

    Adding ADW as an intermediate step from Spark to Power BI just adds unnecessary delay in syncing the data between Spark and ADW.

    Potentially relevant discussion post: https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/40706374-jdbc-connection-to-spark-pool

    4 votes · 0 comments · Workspace/Spark
  18. JDBC connection to Spark Pool

    Please support JDBC connection to Synapse Spark Pool.

    28 votes · 1 comment · Workspace/Spark