Azure Synapse Analytics

We would love to hear your ideas for new features for Azure Synapse Analytics. Below, enter a new idea or upvote an existing one. The Synapse engineering team pays attention to all requests.

If you instead need a technical question answered or other help, try these options: Documentation, MSDN forum, and Stack Overflow. If you need support, please open a support ticket.

  1. JDBC connection to Spark Pool

    Please support JDBC connection to Synapse Spark Pool.

    22 votes

    0 comments  ·  Workspace/Spark
  2. Make available in Spark pools some utilities that already exist in Databricks

    It would be good to have some Spark utilities like those already available in Databricks, such as executing one notebook from another and passing it parameters.

    Example (Databricks syntax):
    dbutils.notebook.run('NotebookName', 3600, parameters)

    This is essential for building a dynamic notebook that can trigger the execution of another notebook.

    6 votes

    1 comment  ·  Workspace/Spark
  3. Apache Spark - Allow Sharing a cluster across multiple users

    Currently, an entirely new cluster is spun up for every user who starts an Apache Spark session from a notebook. Please add the ability to share a single physical cluster across multiple users. Spinning up a new cluster (with a minimum of three nodes) for every user is very expensive.

    5 votes

    2 comments  ·  Workspace/Spark
  4. Support for Key Vault secret access from Spark/Python Notebook

    My customer needs access to secrets stored in Azure Key Vault from within a Spark notebook. These credentials are used to access their Cognitive Services keys as part of their data engineering process. The libraries installed in Synapse conflict with those used in the Key Vault documentation:

    https://docs.microsoft.com/en-us/azure/key-vault/secrets/quick-create-python?tabs=cmd
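    For context, the quickstart linked above retrieves secrets through the azure-keyvault-secrets SDK. A hedged sketch of that pattern follows; the vault and secret names are hypothetical, and the SDK calls are shown as comments because they require the Azure packages and a signed-in identity:

```python
def vault_url(vault_name: str) -> str:
    # Key Vault endpoints follow a fixed URL scheme, which the SDK's
    # SecretClient takes as its vault_url argument.
    return f"https://{vault_name}.vault.azure.net"


# Documented SDK usage from the linked quickstart (not executed here;
# requires azure-identity and azure-keyvault-secrets plus credentials):
#
#   from azure.identity import DefaultAzureCredential
#   from azure.keyvault.secrets import SecretClient
#
#   client = SecretClient(vault_url=vault_url("my-vault"),
#                         credential=DefaultAzureCredential())
#   key = client.get_secret("cognitive-services-key").value

print(vault_url("my-vault"))  # -> https://my-vault.vault.azure.net
```

    The conflict reported above is between the library versions preinstalled in the Synapse Spark pool and the versions this quickstart assumes, which is why the documented steps fail as written.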

    1 vote

    0 comments  ·  Workspace/Spark
  5. Support for running notebooks from within notebooks using the %run magic

    IPython supports the %run magic to run one notebook from within another; having this support in the Synapse Spark environment would unblock migration from other environments.

    1 vote

    1 comment  ·  Workspace/Spark
  6. Add support for wheel library distributions

    Support the wheel format for library distribution; Azure Databricks currently supports these:

    https://pythonwheels.com/

    1 vote

    1 comment  ·  Workspace/Spark
  7. Add support for Delta Lake v0.7 and Spark 3.0 in Spark pools

    Delta Lake v0.6.1 doesn't support much of the ACID functionality. It would be great to upgrade the Delta Lake and Spark versions to get the functionality already supported by Databricks.

    1 vote

    1 comment  ·  Workspace/Spark
  8. Connect Spark Pool to Power BI

    Please support the ability to connect Spark pool to Power BI by exposing the cluster connection information for Spark pool.

    Currently, Spark data needs to be loaded into a different data source such as Azure SQL Data Warehouse (ADW) before Power BI can use the data.

    However, Power BI has the capability to connect to Spark (https://docs.microsoft.com/en-us/azure/databricks/integrations/bi/power-bi#step-2-get-azure-databricks-connection-information).

    Adding ADW as an intermediate step from Spark to Power BI just adds unnecessary delay in syncing the data between Spark and ADW.

    Potentially relevant discussion post: https://feedback.azure.com/forums/307516-azure-synapse-analytics/suggestions/40706374-jdbc-connection-to-spark-pool

    1 vote

    0 comments  ·  Workspace/Spark