Do you have a comment or suggestion to improve SQL Server? We’d love to hear it!

new virtual table: errors. It would be analogous to the deleted and inserted tables

If a constraint violation happens in a DML statement whose input was a dataset, the offending data in the source is difficult to find. The statement fails, and the data source has to be searched (and possibly recreated) and checked for the violation.
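A minimal sketch of the pain point, using hypothetical #source/#target tables: one bad row causes the whole multi-row INSERT to fail, and the error message does not identify which source row was at fault.

```sql
-- Hypothetical tables illustrating the problem described above.
CREATE TABLE #target (id int NOT NULL PRIMARY KEY, qty int NOT NULL CHECK (qty >= 0));
CREATE TABLE #source (id int, qty int);
INSERT #source VALUES (1, 10), (2, -5), (3, 7);  -- the row with id = 2 violates the CHECK

INSERT #target (id, qty)
SELECT id, qty FROM #source;
-- Msg 547: The INSERT statement conflicted with the CHECK constraint ...
-- The entire statement fails; nothing in the error says id = 2 was the culprit.
```

With the proposed errors virtual table, the offending row(s) would instead be available directly after the failure, sparing the manual search of the source.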

Thread from the forum.

2 votes

    Microsoft SQL Server (Product Manager, Microsoft Azure) shared this idea

    Upvotes: 650

    <=-=Dec 20 2012 1:14PM=-=>

    Oh god yes. If you do this please be sure to include rows not inserted due to IGNORE_DUP_KEY settings on indexes and constraints.

    <=-=Dec 20 2012 1:17PM=-=>

    @Rob: No, please do not include those rows! If I’ve added IGNORE_DUP_KEY it’s because I want to ignore them. They’re NOT errors. (Okay, now we can see how difficult this feature would be to implement…)

    <=-=Dec 20 2012 1:22PM=-=>

    “It could also include a column for the constraint name that was violated.”

    An error may be caused by something other than a constraint violation, e.g. a data type violation:

    INSERT @t SELECT 'abc'

    … so I’d rather the virtual table have a column with the actual error message. (Which would include the constraint name anyway.) And maybe a column for the error number.
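For context, the commenter's fragment filled out so it runs as-is (the declaration of @t as a table variable with a single int column i is an assumption):

```sql
DECLARE @t table (i int);
INSERT @t SELECT 'abc';
-- Msg 245: Conversion failed when converting the varchar value 'abc' to data type int.
```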

    <=-=Dec 20 2012 1:24PM=-=>

    Of course the last comment illustrates another point about the difficulty of implementing this. In the case I’ve pasted, what’s the output of the virtual table? [i] can’t be typed as INT if you want to see the row that you were trying to insert; doing that would create an unworkable situation.

    <=-=Dec 20 2012 1:34PM=-=>

    Crap, hit submit accidentally.

    It would also be nice for the errors virtual table to include the identity and sequence values of failed rows, which it appears would be automatic. Repro of gaps due to IGNORE_DUP_KEY below:

    create table #z(i int not null identity(1,1),
    b int not null primary key with (ignore_dup_key=on));
    insert #z(b) values(1),(1),(2),(2),(3);
    select * from #z;
    drop table #z;

    create sequence s0 as int start with 1 increment by 1 no cycle;
    create table #z(i int not null default next value for s0,
    b int not null primary key with (ignore_dup_key=on));
    insert #z(b) values(1),(1),(2),(2),(3);
    select * from #z;
    drop table #z;drop sequence s0;

    Another suggestion: instead of adding a column to the virtual table listing the violated constraint, perhaps add an $error directive similar to the $action available with the OUTPUT clause of the MERGE statement. That would preserve the original table structure, just like inserted and deleted. And if possible, when multiple violations occur for the same row, $error would list them all, either as XML or CSV.
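For reference, the $action directive this comment alludes to already exists in MERGE's OUTPUT clause; a sketch with hypothetical tables (the proposed $error would follow the same pattern):

```sql
CREATE TABLE #target (id int PRIMARY KEY, qty int);
CREATE TABLE #source (id int, qty int);
INSERT #source VALUES (1, 10), (2, 20);

MERGE #target AS t
USING #source AS s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.qty = s.qty
WHEN NOT MATCHED THEN INSERT (id, qty) VALUES (s.id, s.qty)
OUTPUT $action, inserted.id, inserted.qty;  -- $action is 'INSERT', 'UPDATE' or 'DELETE'
```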

    <=-=Dec 20 2012 1:41PM=-=>

    “The statement would still fail as it currently does but the offending row(s) would be salvaged for postmortem analysis.”

    I disagree here. I actually think that would make the situation confusing and difficult. What I’d much rather have happen is that the rows that don’t hit a violation will go into the table, and the rows that do hit a violation will end up in the errors virtual table. This puts things in the user’s hands. If you want to roll back, roll back (of course you’ll need an explicit transaction for that). If you want to commit, commit. Either way, it’s one step rather than two, and turns this feature into a wonderful way of setting up a “data quarantine” in T-SQL — something that we usually need SSIS or some other external ETL tool to achieve.
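Today that quarantine has to be assembled by hand with pre-validation, roughly like this (all names hypothetical; note this approach cannot catch violations that only surface at insert time, such as concurrent duplicate keys):

```sql
CREATE TABLE #src (id int, qty int);
INSERT #src VALUES (1, 10), (2, -5), (3, 7);

CREATE TABLE #tgt (id int PRIMARY KEY, qty int CHECK (qty >= 0));
CREATE TABLE #quarantine (id int, qty int, reason varchar(100));

-- Divert rows that would violate the CHECK, then insert the rest.
INSERT #quarantine (id, qty, reason)
SELECT id, qty, 'qty < 0' FROM #src WHERE qty < 0;

INSERT #tgt (id, qty)
SELECT id, qty FROM #src WHERE qty >= 0;
```

The proposed errors virtual table would collapse this into a single statement, with no duplicated predicate logic between the two branches.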

    <=-=Dec 20 2012 1:41PM=-=>

    I’d want this for all T-SQL statements, not just DML. All database APIs except T-SQL can harvest a collection of errors, rather than just a single error. This table should include all error properties (e.g. error level) so that informational messages (e.g. level 0) could be filtered out, or included if desired.

    <=-=Dec 20 2012 1:55PM=-=>

    I agree wholeheartedly with Bob’s suggestion.


    Perhaps they’re not “errors” if IGNORE_DUP_KEY is set, but such gaps would occur for other violations. This new table should represent everything not inserted from a single statement regardless of why, for the sake of atomicity. It’s easier to filter out a particular violation like IGNORE_DUP_KEY if one doesn’t care, but to ignore them entirely would make a full ETL reconciliation more difficult.

    Regarding the type coercion failure you mention, an $error directive that returns XML with additional source type would be trivial to implement (and TRY_CONVERT() is available). The affected column in the errors virtual table could be set to NULL with supplementary $error info, or worst case returned as sql_variant.
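TRY_CONVERT (available since SQL Server 2012) is the building block being referenced; it returns NULL instead of raising a conversion error:

```sql
SELECT TRY_CONVERT(int, '42')  AS good_value,  -- 42
       TRY_CONVERT(int, 'abc') AS bad_value;   -- NULL
```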

    <=-=Dec 20 2012 3:12PM=-=>

    Bob, aren’t you thinking of a different thing? Something like an error_table() function to supplement the current error_message() etc., which are scalar and would only return data in a CATCH handler.

    I think for a DML statement with an OUTPUT of errors (or a special clause), errors would not be raised and would not trigger the CATCH handler, but would instead end up in the table with one row per failed row. Presumably errors would need a standard shape, with one xml column for the data and one for the errors, since there may be multiple errors for the same row.

    Yes, this is an interesting feature if it’s done right, but it requires some thinking to get it right.

    <=-=Dec 31 2012 3:37PM=-=>

    I think this is a fantastic idea; it would have saved my team months of effort building cross-system ETL for a migration.
    I’ve put my thoughts in a blog post.

    I discuss what errors should be logged and at what level.

    <=-=Jan 16 2013 1:46PM=-=>

    Thanks for the great feedback; we will evaluate this request for future releases.

    <=-=Apr 29 2013 2:40PM=-=>


    Thank you for submitting this feedback. After carefully evaluating all of the bugs in our pipeline, we are closing bugs that we will not fix in the current or future versions of SQL Server. The reason for closing this bug is that the scenario reported is not common enough, and given the risk of implementing a fix it unfortunately does not meet the bar for the current version of the product.

    If you have more supporting information and details or feel strongly that we should reconsider, please do resubmit this item for review.

    Thanks again for reporting the product issue and continued support in improving our product.
    Gus Apostol, SQL Server Program Manager

    <=-=May 14 2013 3:13PM=-=>

    Not common enough? Come on! Sure, this is not a small order, and it will require some effort to get this right. But if you do, it will be revolutionary. Not common enough? Bah!

    <=-=May 15 2013 9:22AM=-=>

    Gus: As someone who focuses on data warehousing, I see use cases for this feature on a daily basis. And I regularly deal with problems caused by bad data, which can’t be caught and handled nicely in T-SQL.

    If properly implemented I would use this in every single ETL process, as I’m sure would the vast majority of other data warehouse developers. The scenarios here are extremely common and very much in need of a fix.

    Please reconsider.

    <=-=May 15 2013 10:00AM=-=>

    This would be hugely helpful. I also work with large datasets and move data between tables regularly, and especially with very wide tables tracking down the error can be very problematic.

    <=-=May 15 2013 10:04AM=-=>

    This would be an INCREDIBLY useful feature. I see widespread applications for this. MSFT, please be a little more open-minded about this suggestion.

    <=-=May 15 2013 10:06AM=-=>

    This feature would definitely help. I request that Microsoft please consider adding this to the product.


    <=-=May 15 2013 10:07AM=-=>

    This is a great suggestion (not a bug) to handle the extremely common occurrence of data exceptions during set-based manipulation, and I believe it should really be considered for implementation. Perhaps have a database/server setting for exception handling (or something similar) that would either fail the batch as currently implemented or route records into the virtual exception table as discussed above.

    <=-=May 15 2013 10:21AM=-=>

    I have to agree with others on this one. This is quite common when trying to manually handle inserts and updates. All we know is that something caused a violation, with no help identifying what that something was. We then have to spend much more time tracking down the problem data instead of having some helpful way to get more information about it. Yes, the case described shouldn’t be common in an OLTP system with a well-designed interface and data scrubbing, but it’s quite common for ETL and ad-hoc operations.

    <=-=May 15 2013 10:42AM=-=>

    This is absolutely needed. I have to resort to dumping files out to text editors and manually searching for possible culprits.

    <=-=May 15 2013 11:10AM=-=>

    This would be a very handy thing to have. Thanks.

    <=-=May 15 2013 11:11AM=-=>

    not common enough?

    are you serious?

    you should take a leaf out of the book of the team doing excellent work listening to the devs

    <=-=May 15 2013 12:18PM=-=>

    This feature would be extremely useful and welcomed! The cumulative time this would have saved everyone… Please, take this one seriously. This would be a boon to data development work.

    <=-=May 15 2013 12:33PM=-=>

    Not common enough?

    I hope that’s an attempt at humor.

    <=-=May 15 2013 1:04PM=-=>

    This feature is called DML Error Logging in Oracle and has existed since 10gR2.
    This is an amazing feature that really would be great to have in SQL Server.
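For comparison, Oracle's DML Error Logging looks roughly like this (table names hypothetical; see the Oracle documentation for the full syntax):

```sql
-- Create the companion error log table (named ERR$_TARGET_TBL by default).
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TBL');
END;
/

-- Failing rows are written to the log table; the rest are inserted.
INSERT INTO target_tbl
SELECT * FROM staging_tbl
LOG ERRORS INTO err$_target_tbl ('nightly load') REJECT LIMIT UNLIMITED;
```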

    <=-=May 15 2013 2:28PM=-=>

    This would save incredible amounts of time. I can’t think of any scenario in which this is not a common situation. I encounter this problem regularly and have at times spent hours debugging. Knowing all of the data that would cause the operation to fail would be of immense benefit.

    <=-=May 15 2013 4:40PM=-=>

    The various ways this would affect the quer

    1 comment

      • Brendt Hess commented:

        Anyone who has ever worked ETL with a file format controlled by someone else would use this option. However, I would prefer an option in the statement that activates this to specify rollback-for-all or keep-successful-rows. Something like "WITH (Error_Table, Rollback_On_Errors)" and "WITH (Error_Table, Keep_Valid)", perhaps. The Rollback option should probably be the default, since atomic operations should be the default (all succeed or all fail) unless something directs a different behavior.
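Purely illustrative, not real T-SQL: the proposal above might read something like this, with the rollback variant as the default.

```sql
-- Hypothetical syntax sketched from the comment above; none of these
-- hints or the errors table exist in any shipped version of SQL Server.
INSERT target WITH (Error_Table, Keep_Valid)
SELECT * FROM source;

SELECT * FROM errors;  -- failed rows only, in inserted/deleted style
```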
