Alex Bedig

My feedback

  1. 15 votes

    2 comments  ·  Azure Event Grid » Features
    Alex Bedig commented  · 

    Here's a GitHub issue discussing the same question: https://github.com/MicrosoftDocs/azure-docs/issues/19444

    Alex Bedig supported this idea  · 
  2. 111 votes

    under review  ·  8 comments  ·  Azure IoT (Hub, DPS, SDKs)
    Alex Bedig commented  · 

    We are considering the tradeoffs of IoT Hub -> Event Hub vs. APIM -> Event Hub for 'unmanaged devices': scenarios where another system is responsible for rotating an API key, and where devices want to HTTP POST their telemetry data via applications we do not control.

    If the goal is to use SAS tokens with HTTP POST against IoT Hub, the 365-day maximum SAS token lifetime seems like a blocker, or at least one more responsibility pushed onto an application we do not control.
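
    For reference, this is roughly what generating a device-scoped SAS token looks like as I understand the documented scheme (Python sketch; the hub name, device id, and key are placeholders I made up):

        import base64
        import hashlib
        import hmac
        import time
        import urllib.parse

        def generate_sas_token(resource_uri, device_key_b64, ttl_seconds):
            # IoT Hub SAS tokens are an HMAC-SHA256 over
            # "<url-encoded resource URI>\n<expiry>", signed with the
            # device's base64-decoded symmetric key.
            expiry = str(int(time.time()) + ttl_seconds)
            to_sign = urllib.parse.quote_plus(resource_uri) + "\n" + expiry
            key = base64.b64decode(device_key_b64)
            sig = base64.b64encode(
                hmac.new(key, to_sign.encode("utf-8"), hashlib.sha256).digest()
            ).decode()
            return ("SharedAccessSignature sr=" + urllib.parse.quote_plus(resource_uri)
                    + "&sig=" + urllib.parse.quote_plus(sig)
                    + "&se=" + expiry)

        # Whatever application the customer runs has to refresh this before it
        # expires and attach it to every HTTP POST, which is exactly the extra
        # responsibility I am worried about handing to code we do not control.
        token = generate_sas_token("myhub.azure-devices.net/devices/device-001",
                                   "ZmFrZS1kZXZpY2Uta2V5", ttl_seconds=3600)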

    Assuming we can get around that: because we do not control the applications, we assume we need ways to limit the impact of *devices* behaving badly, i.e. per-device throttling. That said, we are willing to impose fairly severe limits on what devices are allowed to do - these are legacy devices, after all.

    It seems the 'IoT Hub way' would be to create a different IoT hub for each tenant, so that at least the worst a customer could do is take down only their own devices on our platform. Since each Azure subscription can have 50 IoT hubs (https://docs.microsoft.com/en-us/azure/iot-hub/iot-hub-devguide-quotas-throttling), that gives us some room but doesn't seem exactly right either.

    One leading alternative is to use Client Driven Throttling (https://docs.microsoft.com/en-us/azure/api-management/api-management-sample-flexible-throttling#client-driven-throttling) along with an HMAC-based API key verification process to statelessly rate-limit every device. The downside is implementing our own API key verification process, although APIM has the dependencies it needs to do an HMAC integrity check easily.
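
    To make that concrete, here is one way the stateless HMAC check could work (this is purely our own key format and naming, not anything APIM prescribes): the key we issue is the device id plus an HMAC of that id under a master secret, so verification needs no per-device lookup, and the verified device id becomes the counter key for the throttling policy.

        import base64
        import hashlib
        import hmac

        # Hypothetical master secret; in practice this would come from Key Vault
        # or similar, not source code.
        MASTER_SECRET = b"replace-with-a-real-secret"

        def issue_api_key(device_id):
            # Key format (our convention): "<device_id>.<base64url HMAC-SHA256(secret, device_id)>"
            mac = hmac.new(MASTER_SECRET, device_id.encode("utf-8"), hashlib.sha256).digest()
            return device_id + "." + base64.urlsafe_b64encode(mac).decode()

        def verify_api_key(api_key):
            # Returns the device id if the key checks out, else None.
            # Stateless: no per-device record is needed to validate a key.
            try:
                device_id, supplied_mac = api_key.rsplit(".", 1)
            except ValueError:
                return None
            expected = base64.urlsafe_b64encode(
                hmac.new(MASTER_SECRET, device_id.encode("utf-8"), hashlib.sha256).digest()
            ).decode()
            return device_id if hmac.compare_digest(supplied_mac, expected) else None

        # The verified device id is what we would feed into the per-key
        # rate-limit counter so limits apply per device.
        assert verify_api_key(issue_api_key("device-001")) == "device-001"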

    The other upside of APIM is that it currently supports custom domains, i.e. we are not stuck with devices in the field reporting to Azure-owned domain names (assuming I'm reading this correctly: https://feedback.azure.com/forums/321918-azure-iot/suggestions/17908789-add-the-custom-domain-name-support-for-iothub-endp).

    IoT Hub looks attractive when we are deploying a fleet of our own devices, especially if we can control or verify the applications running on them. It seems less straightforward for more "at arm's length" devices that customers want to ship data from without necessarily replacing, and that run a wide range of data management applications (i.e. HTTP POST is the least common denominator).

    I would be interested in how the IoT Hub team thinks about this scenario. If "it won't work at scale" is part of the answer, I can appreciate that perspective.

    Alex Bedig supported this idea  · 
  3. 34 votes

    4 comments  ·  Azure Active Directory » B2B
    Alex Bedig commented  · 

    One use case is streamlining the migration from an internal app to a customer-facing one. As it stands today, there are separate customer records, and the developer has to build a migration and synchronization process between the two tenants; it is not a matter of "you have users in your directory, now we can create a different authentication path and support the same user accounts via configuration." Fixing this would reduce the friction of using B2B's invitation system as an entry point for partners during early betas (it is super easy to get started), prior to rolling out B2C as the primary front door to the commercial offering.

    Alex Bedig supported this idea  · 
  4. 1,235 votes

    32 comments  ·  API Management » Defining APIs
    Alex Bedig supported this idea  · 
  5. 186 votes

    Alex Bedig supported this idea  · 
