Jeff Moden

My feedback

  1. 343 votes
    under review  ·  14 comments  ·  SQL Server » Suggestions
    Jeff Moden commented  · 

    I definitely voted for this. I'd also like to add that there should be (some incredibly useful) options to do the following:

    1. Export CSV and TSV files according to the RFC 4180 standard. This also includes automatically double-quoting all character-based fields whether or not an in-field delimiter is present.
    2. Support for multi-character delimiters (FIELDTERMINATOR).
    3. Support for multi-character row terminators (ROWTERMINATOR).
    4. Support for delimiters and terminators in the ASCII character range below 32.
    5. Include the same delimiters, text identifiers, and row terminator in the optional header so that BCP and BULK INSERT can successfully use FIRSTROW (because BCP and BULK INSERT currently count delimiters and terminators rather than just looking for row terminators).
    6. Ensure that the final line of output includes the same row terminator characters as all of the other rows to avoid the dreaded "Unexpected End of File" error.

    Now, if you really want to do a bang up job, add the following options (pretty please!):

    7. Add an option to create a row above the optional header that contains datatype information. It would be really helpful if these datatypes were exactly the same as they appear from the source table in T-SQL. That would allow a separate read of the file to get the datatype information for the file to be used to automate the creation of a target table. Again, this row must be delimited and terminated exactly the same as all the data rows so that BCP and BULK INSERT can successfully read the data in a separate read from the file using FIRSTROW.
    8. Have 4 "file manifest" options to automatically append to the target file name: the date in the ISO (yyyymmdd) format, the time in HHMMSS format, the count of the number of DATA lines in the file (not including the header, etc.), and an option to set the leading delimiter for each of these optional add-ons to the file name. DO NOT INCLUDE THIS INFORMATION AT THE BEGINNING OF THE DATA IN THE FILE ITSELF BECAUSE IT WOULD DESTROY THE ABILITY OF BCP AND BULK INSERT TO TAKE ADVANTAGE OF FIRSTROW!
    9. Optionally include a "LineNo" column at the beginning of all lines. The first data line should be numbered "1". It must be included in any optional lines (like the header and datatype lines) to preserve the number of delimiters in each line so that BCP and BULK INSERT can use FIRSTROW to find the first data line. I recommend that the optional header line be numbered "0" and the optional datatype line be numbered "-1". All of this would make troubleshooting both imports and exports a whole lot easier and would provide another bit of "manifest" information to ensure that no rows were lost in transmission or imports. Of course, this optional field must be delimited the same as all the other rows including any quoted (or otherwise) text identifiers. (A sketch of the resulting file layout follows below.)
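
    To make the FIRSTROW interplay above concrete, here's a rough sketch of the proposed file layout and how it would be consumed. The file, table, and column names are hypothetical, and the load side assumes SQL Server 2017+'s FORMAT = 'CSV' support; the point is simply that the optional datatype (-1) and header (0) lines must carry the same delimiters and row terminator as the data lines for FIRSTROW to skip them reliably.

    -- Hypothetical export file (Customer_20190101_073000_000001234.csv):
    --   "-1","INT","VARCHAR(50)","DATETIME2(3)"              <- optional datatype line
    --   "0","CustomerID","CustomerName","LastUpdate"         <- optional header line
    --   "1","42","Acme, Inc.","2019-01-01 00:00:00.000"      <- first data line
    --
    BULK INSERT dbo.CustomerStage              -- hypothetical staging table
    FROM 'C:\Import\Customer_20190101_073000_000001234.csv'
    WITH (
          FORMAT          = 'CSV'              -- RFC 4180 quoting (SQL Server 2017+)
        , FIELDQUOTE      = '"'
        , FIELDTERMINATOR = ','
        , ROWTERMINATOR   = '\n'
        , FIRSTROW        = 3                  -- skips lines -1 and 0 only because they
        );                                     --   are delimited exactly like the data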

    Jeff Moden supported this idea  · 
  2. 137 votes
    7 comments  ·  SQL Server » Suggestions

    Upvotes: 278

    <=-=Feb 1 2008 7:18PM=-=>

    Thanks for the valuable suggestion.

    This seems more like adding the sequence support which we’re seriously considering for the next major release.

    Meanwhile, would using identity column help?

    <=-=Feb 2 2008 2:11AM=-=>

    It does not seem that you understood the request. This definitely has nothing to do with
    IDENTITY. I am less versed about sequences, but I don’t think they will cut it either. If you think
    it does, maybe you could provide an example? Take this problem: For the Orders table in
    Northwind, write a query that lists the number of orders for all days in 1997. The result set should
    include all 365 days, and list zero for days without any orders.

    This is a typical problem where you need a table of numbers (or dates). While it’s easy to
    create such a table, I argue in this request that…

    Jeff Moden commented  · 

    @Schnepel, Enrico ,

    While that's clever code, it's terribly slow. If we take disk and display out of the picture by dumping the results to a "throw-away" variable, it takes 9 seconds on the box I'm currently using. Here's the test code.

    -- Dump the result into a throwaway variable to take disk and display out
    -- of the picture; this is the rCTE-based function and takes ~9 seconds here.
    DECLARE @BitBucket INT;
    SELECT @BitBucket = Number
    FROM tfNumberGenerator(1, 1000000)
    OPTION (MAXRECURSION 18)
    ;

    A large part of the problem is that you used an rCTE, and you can usually beat an rCTE in both duration and resource usage with a properly written WHILE loop. See the following article for more on the horror of using rCTEs that merely produce a count.

    https://www.sqlservercentral.com/articles/hidden-rbar-counting-with-recursive-ctes

    The following code produces the same output but operates in less than 1ms.

    -- Same throwaway-variable test, but using the non-recursive fnTally
    -- function (link below); runs in under 1ms here.
    DECLARE @BitBucket INT;
    SELECT @BitBucket = t.N
    FROM dbo.fnTally(1,1000000) t
    ;

    You can find the fnTally function I used at the following URL...

    https://www.sqlservercentral.com/scripts/create-a-tally-function-fntally

    The real key here is that we don't need to know how to produce such an output. We've been doing that very successfully for a lot longer than this request has been in place and that's been more than 11 years at the time of this writing.

    What Erland and people like me want is for MS to include such a sequence generator in the intrinsic functions of T-SQL, and we'd like it to be at least as fast (preferably faster, which should be possible with the machine-language speeds behind the scenes) so that we don't always have to build it ourselves. There are a few other RDBMSs that have such a thing built in and we just want one built into SQL Server.
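
    For anyone who doesn't want to follow the links, here's a minimal sketch of the cascading-CTE pattern that tally functions like fnTally are built on. This is NOT the actual fnTally source (see the link above for that), and the name and row cap here are made up; the point is that cross joining small constant row sets generates rows with no recursion at all, which is why it avoids the rCTE problem described in the article above.

    -- Minimal sketch of a cascading-CTE "tally" function (hypothetical name;
    -- see the fnTally link above for the real thing).  No recursion: row counts
    -- come from cross joins of constant row sets.
    CREATE FUNCTION dbo.fnTally_Sketch (@MaxN BIGINT)
    RETURNS TABLE WITH SCHEMABINDING AS
    RETURN
        WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)) -- 10 rows
           , E4(N) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c CROSS JOIN E1 d)  -- 10,000 rows
           , E8(N) AS (SELECT 1 FROM E4 a CROSS JOIN E4 b)                                  -- 100,000,000 rows max
        SELECT TOP (@MaxN) N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM   E8;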

    Jeff Moden commented  · 

    @Michal Dobšovič wrote: "You can use Common Table Expressions for this:"

    You certainly can... for a "one off". It's not something that you'd want to build into production code, because a recursive CTE such as the one you used is incredibly inefficient, uses a comparatively huge and totally unnecessary amount of resources, and is actually slower than a well-written WHILE loop.

    It is NOT something that you'd ever want to build into code, which is frequently necessary.

    Jeff Moden commented  · 

    Lordy. On the old CONNECT site, this suggestion had 278 upvotes. On this "new" site, it has only 96. Shows you how "good" the migration from the old site was.

    This excellent suggestion has been "unplanned" for more than a decade. It would have taken less time to implement this as a high-performance CLR-based function than it took to create the mistake called STRING_SPLIT() (which is missing some very necessary functionality that the public is well aware of). Can you please implement this idea and test it to make sure it doesn't end up having performance issues like the FORMAT function does (which also needs to be fixed)?

    Jeff Moden supported this idea  · 
  3. 2 votes
    1 comment  ·  SQL Server » Setup + Deployment

    Upvotes: 5

    <=-=Jul 10 2009 8:23AM=-=>

    Thank you for your feedback; this is a known issue and I’ve closed your bug as a duplicate of the work item we are tracking to fix this issue in our next major release.

    Thanks,

    Amy Lewis

    <=-=Jul 29 2009 10:11AM=-=>

    I wholeheartedly agree with the above comments. It’s very confusing, especially if one hasn’t consulted BOL, and why would I consult BOL when it seems so evident. Thanks!

    <=-=Jan 24 2011 12:27PM=-=>

    Thank you for your feedback on this issue. Our team analyzed what it would take to fix this, and, unfortunately, the bug didn’t meet our criteria for the next release. There are several workarounds available. Please refer to the SQL Agent blog for an explanation of the behavior. The link to the blog is here:
    http://blogs.msdn.com/b/sqlagent/archive/2011/01/19/sql-agent-quick-tip-deleting-history.aspx

    Jeff Moden commented  · 

    Heh... after nearly a decade, this issue has still not been addressed. At least put a warning on the screen that it's a one-time task and not a permanent automatic setting, so those not "in the know" won't have MSDB become their single largest database if they're running several jobs that are required to execute once per minute.

    Also, your quick tip link no longer works. You should update that to help folks out.
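
    In the meantime, for anyone bitten by this, a recurring purge can be scheduled in its own Agent job; here's a minimal sketch (the 30-day retention is just an assumption, pick your own):

    -- Minimal sketch of a recurring msdb job-history purge (the 30-day
    -- retention is an arbitrary assumption).  Without something like this,
    -- the dialog setting really is a one-time trim, not a policy.
    DECLARE @OldestDate DATETIME = DATEADD(DAY, -30, GETDATE());
    EXEC msdb.dbo.sp_purge_jobhistory @oldest_date = @OldestDate;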

  4. 2 votes
    1 comment  ·  SQL Server » Bugs

    Upvotes: 7

    <=-=Sep 3 2009 9:08AM=-=>

    Thank you for your feedback; this is a known issue and I’ve closed your bug as a duplicate of the work item we are tracking to fix this issue in our next major release.

    Thanks,

    Amy Lewis

    <=-=Sep 14 2009 4:48PM=-=>

    This needs to be fixed for R2. We run some jobs like log shipping backup/copy/restore at an interval of every 1 minute. This generates a huge amount of unnecessary log/history records that need to be cleaned up on a regular basis. Our current hardware architecture and msdb location do not support this kind of unrestricted growth. Please re-activate this bug and fix it for the next CTP release. – Eric Holling

    <=-=Sep 14 2009 10:30PM=-=>

    Hi Amy,
    We need this fixed in R2, please check the comments in the community forum by Eric Holling as below:

    “This needs to be fixed for R2.…

    Jeff Moden commented  · 

    Heh... after nearly a decade, this issue has still not been addressed. At least put a warning on the screen that it's a one-time task and not a permanent automatic setting, so those not "in the know" won't have MSDB become their single largest database if they're running several jobs that are required to execute once per minute.

  5. 44 votes
    3 comments  ·  SQL Server » Suggestions

    Upvotes: 407

    <=-=Oct 11 2007 4:28AM=-=>

    ANSI SQL-99 actually defines a syntax (section 11.49) for function definitions that can be used for an inline scalar function (a body consisting of a single RETURN statement) – inlining being an implementation rather than a definitional issue. There are other database products (e.g. IBM DB2 UDB) that already support this, so going the ANSI way would assist portability.

    The enhancement is clearly necessary.

    <=-=Oct 11 2007 8:45AM=-=>

    In other programming languages, this might be a macro expansion, where an include file contains the macro, which expands into inline code at compile time. (I believe that this was discussed as a possibility in the very early days of Yukon.) In any case, however implemented, Inline Scalar UDF would be a major improvement over the speed-bump that exists now.

    I was going to post this myself, but happily add my vote to the existing…

    Jeff Moden commented  · 

    If I understand the "fix" correctly, "It Depends". The system CAN decide not to inline a scalar function; this isn't really under user control.
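
    For anyone checking their own functions, here's a rough sketch of the knobs that do exist around the SQL Server 2019 feature (the function name is hypothetical); even with these, the engine can still decline to inline a given call site.

    -- Rough sketch of the user-visible knobs around scalar UDF inlining in
    -- SQL Server 2019+ (function name is hypothetical).  Even with
    -- INLINE = ON, the optimizer can still decline to inline a call site.
    CREATE FUNCTION dbo.fn_DoubleIt (@Value INT)
    RETURNS INT
    WITH SCHEMABINDING, INLINE = ON      -- request inlining for this function
    AS
    BEGIN
        RETURN @Value * 2;
    END;
    GO
    -- Does the engine consider the function inlineable at all?
    SELECT OBJECT_NAME(object_id) AS FunctionName, is_inlineable
    FROM   sys.sql_modules
    WHERE  object_id = OBJECT_ID(N'dbo.fn_DoubleIt');
    GO
    -- Or switch the feature off for the whole database.
    ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;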

    Jeff Moden supported this idea  · 
  6. 49 votes
    under review  ·  0 comments  ·  SQL Server » Bugs
    Jeff Moden supported this idea  · 
  7. 217 votes
    3 comments  ·  SQL Server » Suggestions

    Upvotes: 127

    <=-=Jun 23 2015 8:37AM=-=>

    I’m the first to post a useful comment. This must make me special.

    Seriously though, this would be an excellent alternative to having to create a new “scratchdb” to hold my interim ETL data. It would be a major plus in simplifying the design of a high-performance app.

    <=-=Jul 3 2015 5:04AM=-=>

    In 2014, memory-optimized tables and delayed durability can be used to help mitigate these issues. However, neither of these is always a completely viable solution. Brent’s proposed solution is likely the simplest way to achieve this with the least amount of unwanted impact. It is important to note that other platforms implement similar functionality as well, notably Oracle.

    <=-=Nov 29 2016 3:58PM=-=>

    There are so many good things about this suggestion. I am amazed that SQL does not have the capability to turn off logging for certain tables that you define as no…
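
    For reference, here is a sketch of the two partial mitigations mentioned in the Jul 3 2015 comment. The database and table names are hypothetical, and the memory-optimized table assumes the database already has a memory-optimized filegroup.

    -- Sketch of the two partial mitigations mentioned above (hypothetical
    -- names).  Delayed durability trades a small window of possible data loss
    -- for fewer log-flush waits; a SCHEMA_ONLY memory-optimized table logs no
    -- data at all but loses its contents on restart.
    ALTER DATABASE ScratchDB SET DELAYED_DURABILITY = FORCED;
    GO
    CREATE TABLE dbo.EtlScratch
    (
        RowID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload VARCHAR(8000) NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);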

    Jeff Moden supported this idea  · 
  8. 91 votes
    53 comments  ·  SQL Server » Other
    Jeff Moden commented  · 

    I was hoping that MS would improve it, not remove it. *******, Microsoft! Stop deprecating/removing stuff.

    Jeff Moden supported this idea  · 
  9. 4 votes
    2 comments  ·  SQL Server » Bugs

    Upvotes: 4

    <=-=Jun 6 2008 10:29AM=-=>

    Thanks for identifying this. We will consider this fix in the next release.

    <=-=Jun 20 2015 1:30PM=-=>

    In SQL Server 2012 a Sort operator is still used when bulk loading into an empty table with an Identity property, even though the data file is sorted and BULK INSERT is executed with the ORDER hint.
    This issue was opened in 2008 and it’s still active; could you please fix it finally?
    It was really problematic back in the days when no workaround, such as the one proposed in this issue, was known, but do we really need to apply workarounds?
    This sort operator is not needed, that’s for sure.

    <=-=Sep 26 2017 7:15PM=-=>

    Still present in SQL Server 2017. This really should be addressed.

    Jeff Moden commented  · 

    Dear Microsoft... this absolutely crushed my efforts in the area of performance and the insane amount of TempDB required when doing data reduction of a 2.1TB database down to 900GB. As reported by the original poster, copying about half of a 246GB table took only 17 minutes without SET IDENTITY_INSERT ON and an hour and 16 minutes with it on. In the former, TempDB didn't grow at all. In the latter, there was indeed (as the OP posted) a Sort present (which reported Sort_In_TempDB) even though both tables were identical, minimal logging was active, and the copy was made in the same order as both clustered indexes.

    As more "big" data becomes available in our databases (half of a 246GB table is nothing nowadays), such actions will become more and more common.

    Please schedule this for a fix. It's only going to get more important as time wears on. To be honest, this seems like a bug that few have been aware of simply because they've not attempted such actions yet, but many soon will.

    Jeff Moden commented  · 

    This also occurs when using table-to-table inserts into a table using SET IDENTITY_INSERT, even when both tables have the same clustered index (and no non-clustered indexes) and Minimal Logging is being used, which 1) totally defeats the advantage of using Minimal Logging and 2) caused my TempDB (which lives on a 100GB drive and has worked for years with no growth past 16GB) to explode and the query to die when it tried to grow past 100GB.

    Since this has, according to the post prior to mine, been a problem since 2008, it's time to fix it, please. This absolutely kills methods for reducing data without blowing out the log file with deletes.
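
    For clarity, here's a minimal sketch of the copy pattern being described (table and column names are hypothetical): both tables share the same clustered index, the target is empty, and TABLOCK is used for minimal logging, yet with IDENTITY_INSERT ON the plan still picks up an unnecessary Sort that spills into TempDB.

    -- Minimal sketch of the problem pattern (hypothetical names).  Identical
    -- clustered indexes, no nonclustered indexes, empty target, TABLOCK for
    -- minimal logging -- yet IDENTITY_INSERT ON still produces a Sort.
    SET IDENTITY_INSERT dbo.BigTable_New ON;

    INSERT INTO dbo.BigTable_New WITH (TABLOCK)
           (BigTableID, SomeDate, SomePayload)
    SELECT  BigTableID, SomeDate, SomePayload
    FROM    dbo.BigTable_Old
    ORDER BY BigTableID;          -- same order as both clustered indexes

    SET IDENTITY_INSERT dbo.BigTable_New OFF;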

    Jeff Moden supported this idea  · 
  10. 224 votes
    9 comments  ·  SQL Server » Suggestions

    Upvotes: 72

    <=-=Mar 10 2016 11:26AM=-=>

    It’s a shame that this was submitted as just a “suggestion”. It should actually be listed as a “bug” because there’s only a comparatively small set of use cases where enumeration of the result set of elements is not important.

    <=-=Mar 11 2016 12:47PM=-=>

    I agree that an order column is required; one example use case is where two lists are passed in, and ordinal positions in one list correspond to positions in the other.

    <=-=Mar 11 2016 3:12PM=-=>

    Please see the related suggestion: STRING_SPLIT needs “RemoveEmptyEntries” option, like String.Split in .NET ( https://connect.microsoft.com/SQLServer/feedback/details/2462002/ ).

    <=-=Mar 12 2016 12:02PM=-=>

    This kind of function is primarily needed for de-serializing previously serialized arrays of values of any type that is formattable as text.
    I therefore recommend that the result set of this function work well with this use case.

    With de-serialized arrays there is a need to…

    Jeff Moden commented  · 

    There are a couple of suggestions on this thread that state that people want it to automatically remove "empty" elements like a similar function does in .NET. There's also a suggestion or two to have it return the length of each element. While that may seem useful, the use cases for it are not that frequent. I wouldn't do anything to it that would cause a slowdown other than adding the requested ordinal position, which shouldn't slow it down in any measurable fashion. The extra functionality can be handled post-execution by the user code.

    It would, indeed, be nice if it could handle multi-character delimiters but, again, don't do it if it causes any measurable slowdown. More than 90% of the time (IMHO), string splitters are only needed to work against a single character delimiter.
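
    As an aside, for anyone who needs a reliable ordinal today, a common interim workaround is to go through OPENJSON instead, since its [key] column carries the array index. Here's a sketch; it assumes the elements contain nothing that needs JSON escaping.

    -- Interim workaround sketch: OPENJSON's [key] column is the zero-based
    -- array index, which supplies the ordinal that STRING_SPLIT lacks.
    -- Assumes the elements contain nothing that needs JSON escaping.
    DECLARE @List VARCHAR(8000) = 'able,baker,charlie';

    SELECT  Ordinal = CONVERT(INT, j.[key]) + 1
          , Item    = j.[value]
    FROM    OPENJSON('["' + REPLACE(@List, ',', '","') + '"]') j
    ORDER BY Ordinal;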

    Jeff Moden supported this idea  · 
  11. 8 votes
    2 comments  ·  SQL Server » Bugs
    Jeff Moden commented  · 

    p.s. And, yes... there are HUGE advantages to storing all LOB data off-row!

    Jeff Moden commented  · 

    I second this request. It would also be nice to have LOB_COMPACTION actually work on off-row LOB data, which is where LOB data typically lives, because it doesn't work there today. This is especially important not only to the ShrinkFile process but to the very reason one may need to do a ShrinkFile to begin with: the interleaved deletion of large quantities of LOB data. LOB_COMPACTION needs to be made to actually work even if you don't do a ShrinkFile.
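
    For context, this is the operation under discussion (the table name is hypothetical); the complaint is that it reclaims little or nothing when the LOB data is stored off-row, which is exactly where large interleaved deletes leave the holes.

    -- The operation under discussion (hypothetical table name).  Today this
    -- reclaims little or nothing for off-row LOB data.
    ALTER INDEX ALL ON dbo.DocumentStore
        REORGANIZE WITH (LOB_COMPACTION = ON);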

    Jeff Moden supported this idea  · 
  12. 4 votes
    2 comments  ·  SQL Server » Suggestions

    Upvotes: 10

    <=-=Mar 8 2011 9:34AM=-=>

    Thanks for contacting the SQL Server team with the feedback. We will consider this in a future release. At this time, you can compress out-of-row BLOBs by storing them as FILESTREAM on a compressed volume, but I do understand that this is not your scenario, as the data types you mentioned are not suitable for FILESTREAM.

    Thanks
    Sunil

    <=-=Mar 8 2011 1:30PM=-=>

    Sunil, thanks for the feedback. I had considered this option, but we’re using database mirroring as a resiliency solution which, unfortunately, doesn’t support Filestream.

    Thanks
    Chris

    <=-=Jan 15 2012 11:47AM=-=>

    Chris, I updated the title of your request. You are not really asking for PAGE compression of out-of-row or LOB data; what you are saying is that we need the ability to compress this data as well. Also, I added a variation for UNICODE compression as well.

    thanks
    Sunil

    <=-=Dec 1 2014…
    Jeff Moden commented  · 

    I second the problem. We are frequently required to delete blob data by setting it to NULL, in a mostly vain attempt to recover disk space. LOB_COMPACTION simply does not work for out-of-row storage and it really needs to. It was quite the surprise, as well, because all of the documentation suggests that it will.

    Referring to the final message above by Sunil, do we really need to open a new item? Can't you just take this one to the bank?
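
    As a partial stopgap (SQL Server 2016+), the LOB contents can be compressed explicitly with COMPRESS(). Here's a sketch with hypothetical table and column names; it assumes the column is NVARCHAR(MAX), and the application (or a view) has to DECOMPRESS() on the way out.

    -- Partial stopgap sketch (SQL Server 2016+, hypothetical names): compress
    -- the LOB contents explicitly, since PAGE compression ignores off-row data.
    ALTER TABLE dbo.DocumentStore ADD DocumentCompressed VARBINARY(MAX) NULL;
    GO
    UPDATE dbo.DocumentStore
    SET    DocumentCompressed = COMPRESS(DocumentText)    -- GZIP of the NVARCHAR(MAX) column
    WHERE  DocumentText IS NOT NULL;

    -- Reading it back requires an explicit DECOMPRESS() and cast.
    SELECT DocumentID
         , DocumentText = CAST(DECOMPRESS(DocumentCompressed) AS NVARCHAR(MAX))
    FROM   dbo.DocumentStore;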

    Jeff Moden supported this idea  · 
  13. 10 votes
    4 comments  ·  SQL Server » Suggestions
    Jeff Moden commented  · 

    Very cool. Thanks for the feedback. Just remember though... it should be an easily settable option so that you don't have to redo it again when all the people that like the brackets are deprived of them. ;-)

    Jeff Moden supported this idea  · 
  14. 4 votes
    under review  ·  0 comments  ·  SQL Server » Suggestions
    Jeff Moden supported this idea  · 
  15. 6 votes
    under review  ·  1 comment  ·  SQL Server » Suggestions
    Jeff Moden supported this idea  · 
  16. 280 votes
    5 comments  ·  SQL Server » Suggestions

    Upvotes: 8

    <=-=Mar 5 2017 2:42PM=-=>

    Thanks for this idea. This is a valid requirement and I hope that it will get more votes. Currently we cannot confirm when it will be added, but it is in our backlog.

    <=-=May 22 2017 5:03AM=-=>

    Would like it very much, particularly since you already have, in CONCAT(), a function that takes a variable number of parameters and does something with it…

    <=-=Jun 5 2017 12:31PM=-=>

    GREATEST / LEAST functions would be a fantastic addition.

    <=-=Nov 14 2017 3:42PM=-=>

    The workarounds using CROSS APPLY or CASE expressions are difficult to manage and read. I’d love to see these implemented.
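
    For reference, the CROSS APPLY workaround mentioned above looks something like this (table and column names are hypothetical); it works, but it's exactly the kind of noise GREATEST/LEAST would remove.

    -- The CROSS APPLY (VALUES ...) workaround mentioned above (hypothetical
    -- names).  MAX over the unpivoted values stands in for GREATEST.
    SELECT  o.OrderID
          , x.GreatestDate
    FROM    dbo.Orders o
    CROSS APPLY (SELECT GreatestDate = MAX(v.d)
                 FROM  (VALUES (o.OrderDate), (o.ShipDate), (o.InvoiceDate)) v(d)
                ) x;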

    Jeff Moden supported this idea  · 
  17. 104 votes
    under review  ·  2 comments  ·  SQL Server » Suggestions
    Jeff Moden supported this idea  · 
  18. 108 votes
    under review  ·  5 comments  ·  SQL Server » Suggestions
    Jeff Moden supported this idea  · 
  19. 302 votes
    6 comments  ·  SQL Server » Suggestions

    Upvotes: 249

    <=-=Oct 4 2016 2:15PM=-=>

    This is similar to my feedback located here: https://connect.microsoft.com/SQLServer/feedback/details/2769130/sql-2016-temporal-tables-with-triggers. The solution I suggest is similar to this one and could be used in many different scenarios when developing with SQL Server.

    <=-=Jan 10 2017 9:43AM=-=>

    I would love this too. The issue is that many/most applications don’t use Windows Auth in the connection to SQL Server (connection pooling issues, etc), so SQL can’t get the User information

    <=-=Jan 10 2017 9:51AM=-=>

    @Sanford

    Just to clarify, I am not asking for an “automatically store user name” feature. I’m asking for “automatically store whatever I want, based on whatever expression I provide.” So the fact that the user name may or may not be available is not really relevant.

    That said, there are plenty of workarounds for various situations. I imagine that if you’re using a shared connection, you can get some notion of “user” some other…
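
    One of the workaround shapes being alluded to, sketched with hypothetical table, column, and trigger names: defaults cover the INSERT side and a trigger covers UPDATEs, which is exactly the boilerplate a declarative "store this expression automatically" feature would remove.

    -- One workaround shape (hypothetical names): defaults handle INSERTs and
    -- an AFTER UPDATE trigger handles UPDATEs.  This is the boilerplate the
    -- suggestion would make declarative.
    ALTER TABLE dbo.Orders ADD
        ModifiedBy SYSNAME      NOT NULL CONSTRAINT DF_Orders_ModifiedBy DEFAULT (ORIGINAL_LOGIN()),
        ModifiedAt DATETIME2(3) NOT NULL CONSTRAINT DF_Orders_ModifiedAt DEFAULT (SYSUTCDATETIME());
    GO
    CREATE TRIGGER dbo.trg_Orders_SetModified ON dbo.Orders
    AFTER UPDATE AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE o
        SET    ModifiedBy = ORIGINAL_LOGIN()
             , ModifiedAt = SYSUTCDATETIME()
        FROM   dbo.Orders o
        JOIN   inserted i ON i.OrderID = o.OrderID;
    END;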

    Jeff Moden supported this idea  · 
  20. 357 votes
    11 comments  ·  SQL Server » Suggestions
    Jeff Moden supported this idea  · 