
Sunday, March 11, 2012

Blocking Problems

We have a stored proc that is causing a lot of blocking
problems. Basically, it selects a field based on the PK
and returns the value (0 or 1) as an output parameter:
SELECT @Anonymous_Bool = Anonymous_Bool
FROM Member_Properties
WHERE GUID = @GUID
The estimated execution plan shows that it does a
clustered index seek. When we tested it out it was fine,
but when it's in production, it looks like it starts
causing massive blocking problems. I don't see how the
proc itself could be a problem since it is very
efficient.
Has anyone else encountered a similar problem? Could it
possibly be something in the front end code? Or maybe
connection pooling? Any ideas on a direction to look
would be much appreciated...|||Is @Guid a uniqueidentifier datatype? If it is not but the GUID column is, it may
have an issue actually using the index properly. But I suspect you are
seeing the results of the connection changing the isolation level to
serializable. Or you could be seeing the effects of page splitting because the
GUID is the clustered key. Check with Profiler to see what is going on with the
connections doing the blocking. In my opinion GUIDs are a horrible PK,
especially if you cluster on them. There are few if any alternatives that
can perform worse than a GUID in a clustered index on a table with new rows
being inserted.
--
Andrew J. Kelly
SQL Server MVP
"A. Sugrue" <sugruea@.hotmail.com> wrote in message
news:058701c356ec$ec57e640$a401280a@.phx.gbl...
> We have a stored proc that is causing alot of blocking
> problems. Basically, it selects a field based on the PK
> and returns the value (0 or 1) as an output parameter:
> SELECT @.Anonymous_Bool = Anonymous_Bool
> FROM Member_Properties
> WHERE GUID = @.GUID
> The estimated execution plan shows that it does a
> clustered index seek. When we tested it out it was fine,
> but when it's in production, it looks like it starts
> causing massive blocking problems. I don't see how the
> proc itself could be a problem since it is very
> efficient.
> Has anyone else encountered a similar problem? could it
> possibly be something in the front end code? or maybe
> connection pooling? any ideas on a direction to look
> would be much appreciated...|||Yes it is a uniqueidentifier and is the PK with a
clustered index and it is also the rowguidcol for merge
replication.
>--Original Message--
>Is @.Guid a UniqueIdentifier datatype? If not and the
GUID column is it may
>have an issue actually using the index properly. But I
suspect you are
>seeing the results of the connection changing the
isolation level to
>serializable. Or you can be seeing the effects of page
splitting due to the
>Guid is clustered. Check to see what is going on with
profile on the
>connections doing the blocking. In my opinion Guid's are
a horrible PK and
>especially if you cluster on them. There are few if any
alternatives that
>can perform worse than a Guid in a clustered index on a
table with new rows
>being inserted.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"A. Sugrue" <sugruea@.hotmail.com> wrote in message
>news:058701c356ec$ec57e640$a401280a@.phx.gbl...
>> We have a stored proc that is causing alot of blocking
>> problems. Basically, it selects a field based on the PK
>> and returns the value (0 or 1) as an output parameter:
>> SELECT @.Anonymous_Bool = Anonymous_Bool
>> FROM Member_Properties
>> WHERE GUID = @.GUID
>> The estimated execution plan shows that it does a
>> clustered index seek. When we tested it out it was
fine,
>> but when it's in production, it looks like it starts
>> causing massive blocking problems. I don't see how the
>> proc itself could be a problem since it is very
>> efficient.
>> Has anyone else encountered a similar problem? could it
>> possibly be something in the front end code? or maybe
>> connection pooling? any ideas on a direction to look
>> would be much appreciated...
>
>.
>|||No, but since we already had a uniqueidentifier there as
the PK, there was no need to add another column to be the
rowguidcol.
>--Original Message--
>Having a Guid for merge replication purposes is fine but
that does not mean
>it has to be the PK.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"A. Sugrue" <sugruea@.hotmail.com> wrote in message
>news:053f01c357ae$41b2aaa0$a301280a@.phx.gbl...
>> Yes it is a uniqueidentifier and is the PK with a
>> clustered index and it is also the rowguidcol for merge
>> replication.
>> >--Original Message--
>> >Is @.Guid a UniqueIdentifier datatype? If not and the
>> GUID column is it may
>> >have an issue actually using the index properly. But I
>> suspect you are
>> >seeing the results of the connection changing the
>> isolation level to
>> >serializable. Or you can be seeing the effects of page
>> splitting due to the
>> >Guid is clustered. Check to see what is going on with
>> profile on the
>> >connections doing the blocking. In my opinion Guid's
are
>> a horrible PK and
>> >especially if you cluster on them. There are few if
any
>> alternatives that
>> >can perform worse than a Guid in a clustered index on a
>> table with new rows
>> >being inserted.
>> >
>> >--
>> >
>> >Andrew J. Kelly
>> >SQL Server MVP
>> >
>> >
>> >"A. Sugrue" <sugruea@.hotmail.com> wrote in message
>> >news:058701c356ec$ec57e640$a401280a@.phx.gbl...
>> >> We have a stored proc that is causing alot of
blocking
>> >> problems. Basically, it selects a field based on
the PK
>> >> and returns the value (0 or 1) as an output
parameter:
>> >>
>> >> SELECT @.Anonymous_Bool = Anonymous_Bool
>> >> FROM Member_Properties
>> >> WHERE GUID = @.GUID
>> >>
>> >> The estimated execution plan shows that it does a
>> >> clustered index seek. When we tested it out it was
>> fine,
>> >> but when it's in production, it looks like it starts
>> >> causing massive blocking problems. I don't see how
the
>> >> proc itself could be a problem since it is very
>> >> efficient.
>> >>
>> >> Has anyone else encountered a similar problem?
could it
>> >> possibly be something in the front end code? or
maybe
>> >> connection pooling? any ideas on a direction to look
>> >> would be much appreciated...
>> >
>> >
>> >.
>> >
>
>.
>
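As a starting point for the kind of check Andrew describes, a minimal sketch
(assuming a SQL Server 2000-era instance and a sysadmin connection) that shows
which spids are blocked and who is blocking them is:

-- Sessions that are currently blocked, and the spid blocking each one
SELECT spid, blocked, waittype, waitresource, cmd, hostname, program_name
FROM master..sysprocesses
WHERE blocked <> 0

-- Inspect what a blocking spid is actually running
-- (replace 53 with a blocking spid reported by the query above)
DBCC INPUTBUFFER (53)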

Thursday, March 8, 2012

blocking and deadlock detection

I'm using sql 2005 standard. There's a 3rd party app that I suspect
may be causing either some sort of block or deadlock when certain operations
are performed. Can anyone suggest a good way of configuring the profiler (or
any other method) to help in identifying problem spids?
tia
MG
Have a look here: http://support.microsoft.com/default.aspx/kb/271509
Andrew J. Kelly SQL MVP
Solid Quality Mentors
"Hurme" <michael.geles@.thomson.com> wrote in message
news:6927521C-B6AC-4B6F-B506-4D29039AE885@.microsoft.com...
> I'm using sql 2005 standard. There's a that a 3rd party app that I suspect
> may be causing either some sort of block or deadlock when certain
> operations
> are performed. Can anyone suggest a good way of configuring the profiler
> (or
> any other method) to help in identifying problem spids?
> tia
> --
> MG
|||You can also enable a trace flag to get detailed information about
deadlocks. See here: http://msdn2.microsoft.com/en-us/library/ms178104.aspx
Kevin G. Boles
Indicium Resources, Inc.
SQL Server MVP
kgboles a earthlink dt net
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eCd19JDaIHA.220@.TK2MSFTNGP04.phx.gbl...
> Have a look here: http://support.microsoft.com/default.aspx/kb/271509
>
> --
> Andrew J. Kelly SQL MVP
> Solid Quality Mentors
>
> "Hurme" <michael.geles@.thomson.com> wrote in message
> news:6927521C-B6AC-4B6F-B506-4D29039AE885@.microsoft.com...
>
|||Hi
http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx
http://blogs.msdn.com/bartd/archive/2006/09/25/770928.aspx
"Hurme" <michael.geles@.thomson.com> wrote in message
news:6927521C-B6AC-4B6F-B506-4D29039AE885@.microsoft.com...
> I'm using sql 2005 standard. There's a that a 3rd party app that I suspect
> may be causing either some sort of block or deadlock when certain
> operations
> are performed. Can anyone suggest a good way of configuring the profiler
> (or
> any other method) to help in identifying problem spids?
> tia
> --
> MG
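A minimal sketch of the trace-flag approach mentioned above (run from a
sysadmin connection on SQL Server 2005; the deadlock details are written to
the SQL Server error log):

-- Enable detailed deadlock reporting server-wide
DBCC TRACEON (1222, -1)   -- SQL 2005 XML-style deadlock report
DBCC TRACEON (1204, -1)   -- older-style report, also available

-- Disable again once the problem has been captured
DBCC TRACEOFF (1222, -1)
DBCC TRACEOFF (1204, -1)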

Saturday, February 25, 2012

BLOB Field in a table causing huge wastage?

Our backups are extremely slow as the docs table in the site database is
extremely large.
I've tried to shrink the file and shrink the database using DBCC
ShrinkDatabase and then DBCC ShrinkFile with no luck.
On examining the database I found this...
Table_Name data_KB unused_KB unused_pct
-- -- -- --
Docs 16490400 KB 24676360 KB 60%
DocVersions 1957800 KB 3915816 KB 67%
DBCC ShowContig (Docs) shows the following
DBCC SHOWCONTIG scanning 'Docs' table...
Table: 'Docs' (1993058136); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 13643
- Extents Scanned.......................: 1761
- Extent Switches.......................: 1760
- Avg. Pages per Extent..................: 7.7
- Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
- Logical Scan Fragmentation ..............: 0.01%
- Extent Scan Fragmentation ...............: 0.28%
- Avg. Bytes Free per Page................: 210.6
- Avg. Page Density (full)................: 97.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
so as far as I can see the table isn't fragmented but is extremely wasteful.
My thinking is this...
Files in the docs table range widely in size from a few hundred KB to 50MB.
Am I right in thinking that the "content" column in Docs is set to a size
that accommodates the largest file (i.e. 50MB) and therefore up to 49.90MB are
wasted on other rows?
I really need someone's help in reducing the size of the database, I've tried
reindexing the database too, again with no difference other than the extent
scan fragmentation then goes shooting up to 99.97%|||Hi Mauro
> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
There are no files in a table. Do you mean LOB data (text, image or ntext)?
How is the table defined?
How is the LOB data inserted?
Is the LOB data ever modified? If so, how?
What version are you using?
> Am i right in thinking that the "content" column in Docs is set to a size
> that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> are
> wasted on other rows?
This is NOT correct. How would SQL Server know the largest size you are
planning to insert? The largest size a LOB column can hold is 2 GB and it
obviously isn't reserving 2 GB per row.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
> Our backups are extremely slow as the docs table in the site database is
> extremely large.
> I've tried to shrink the file and shrink the database using DBCC
> ShrinkDatabase and then DBCC ShrinkFile with no luck.
> On examining the database i found this...
> Table_Name data_KB unused_KB unused_pct
> -- -- -- --
> Docs 16490400 KB 24676360 KB 60%
> DocVersions 1957800 KB 3915816 KB 67%
> DBCC ShowContig (Docs) shows the following
> DBCC SHOWCONTIG scanning 'Docs' table...
> Table: 'Docs' (1993058136); index ID: 1, database ID: 15
> TABLE level scan performed.
> - Pages Scanned........................: 13643
> - Extents Scanned.......................: 1761
> - Extent Switches.......................: 1760
> - Avg. Pages per Extent..................: 7.7
> - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
> - Logical Scan Fragmentation ..............: 0.01%
> - Extent Scan Fragmentation ...............: 0.28%
> - Avg. Bytes Free per Page................: 210.6
> - Avg. Page Density (full)................: 97.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
> so as far as i can see the table isnt fragmented but is extremely
> wasteful.
> My thinking is this...
> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
> Am i right in thinking that the "content" column in Docs is set to a size
> that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> are
> wasted on other rows?
> I really need someones help in reducing the size of the database, i've
> tried
> reindexing the database too, again with no difference other than the
> extent
> scan fragmentation then goes shooting up to 99.97%
>|||I noticed full text indexing was on in the table, so I disabled it for the
table and across the entire SharePoint site. This again had no effect. For
curiosity's sake, I am going to DTS the entire table row by row to another
table and see if it is 26GB as it reports or 40GB as is shown by the file
size.
"Kalen Delaney" wrote:
> Hi Mauro
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
It's defined by SharePoint Server 2003 (in its unmodified state); there are
three BLOB fields.
> How is the LOB data inserted?
As far as I know it's inserted using a normal insert statement, and also
inserted by SharePoint through the SharePoint user interface.
> Is the LOB data ever modified? If so, how?
Not as such; it can be overwritten or deleted by a client app. No one can
actually edit the LOB that is inserted directly to the database; however,
SharePoint does allow Microsoft Office integration, which allows us to edit MS
Office documents (and any other documents which have Internet Explorer
integration).
> What version are you using?
It's SQL Server 2000 with SP4.
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
Thanks for the help, on closer inspection (looking at the file sizes
themselves) I've found SQL Server was reporting the wrong size. I've since
forced the db to update the stats and it's now reporting the correct size.
"Kalen Delaney" wrote:
> Hi Mauro
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
> How is the LOB data inserted?
> Is the LOB data ever modified? If so, how?
> What version are you using?
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
> news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
> > Our backups are extremely slow as the docs table in the site database is
> > extremely large.
> >
> > I've tried to shrink the file and shrink the database using DBCC
> > ShrinkDatabase and then DBCC ShrinkFile with no luck.
> >
> > On examining the database i found this...
> > Table_Name data_KB unused_KB unused_pct
> > -- -- -- --
> > Docs 16490400 KB 24676360 KB 60%
> > DocVersions 1957800 KB 3915816 KB 67%
> >
> > DBCC ShowContig (Docs) shows the following
> > DBCC SHOWCONTIG scanning 'Docs' table...
> > Table: 'Docs' (1993058136); index ID: 1, database ID: 15
> > TABLE level scan performed.
> > - Pages Scanned........................: 13643
> > - Extents Scanned.......................: 1761
> > - Extent Switches.......................: 1760
> > - Avg. Pages per Extent..................: 7.7
> > - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
> > - Logical Scan Fragmentation ..............: 0.01%
> > - Extent Scan Fragmentation ...............: 0.28%
> > - Avg. Bytes Free per Page................: 210.6
> > - Avg. Page Density (full)................: 97.40%
> > DBCC execution completed. If DBCC printed error messages, contact your
> > system administrator.
> >
> > so as far as i can see the table isnt fragmented but is extremely
> > wasteful.
> >
> > My thinking is this...
> >
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> >
> > I really need someones help in reducing the size of the database, i've
> > tried
> > reindexing the database too, again with no difference other than the
> > extent
> > scan fragmentation then goes shooting up to 99.97%
> >
>
>|||Full text indexes are stored completely separately and are not considered
part of the space used by the table.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:BD339FF2-F59C-434D-B270-83746525FD87@.microsoft.com...
>I noticed full text indexing was on in the table, so i disabled it for the
> table and across the entire sharepoint site. this again had no effect.
> For
> curiosity's sake, i am going to DTS the entire table row by row to another
> table and see if it is 26GB as it reports or 40GB as is shown by the file
> size.
> "Kalen Delaney" wrote:
>> Hi Mauro
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> There are no files in a table. Do you mean LOB data (text, image or
>> ntext)?
>> How is the table defined?
> its defined by Sharepoint Server 2003 (in its unmodified state) there are
> three blob fields
>> How is the LOB data inserted?
> as far as i know its inserted using a normal insert statement, and also
> inserted by sharepoint through the sharepoint user interface.
>> Is the LOB data ever modified? If so, how?
> not as such, it can be overwritten or deleted by a client app. no one can
> actually edit the LOB that is inserted directly to the database, however,
> sharepoint does allow microsoft office integration which allows us to edit
> MS
> Office documents (and any other documents which have internet explorer
> integration).
>> What version are you using?
> its SQL Server 2000 with SP4.
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> This is NOT correct. How would SQL Server know the largest size you are
>> planning to insert? The largest size a LOB column can hold is 2 GB and it
>> obviously isn't reserving 2 GB per row.
>> --
>> HTH
>> Kalen Delaney, SQL Server MVP
>>
>|||I'm glad you figured this out.
I usually suggest that running with updateusage should be your first step
if you are concerned about the values returned by sp_spaceused.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:7945DA78-EAA0-4248-8C66-9C33B4BF9BB3@.microsoft.com...
> thanks for the help, on closer inspection (looking at the file size's
> themselves) i've found sql server was reporting the wrong size. Ive since
> forced the db to update the stats and its now reporting the correct size.
> "Kalen Delaney" wrote:
>> Hi Mauro
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> There are no files in a table. Do you mean LOB data (text, image or
>> ntext)?
>> How is the table defined?
>> How is the LOB data inserted?
>> Is the LOB data ever modified? If so, how?
>> What version are you using?
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> This is NOT correct. How would SQL Server know the largest size you are
>> planning to insert? The largest size a LOB column can hold is 2 GB and it
>> obviously isn't reserving 2 GB per row.
>> --
>> HTH
>> Kalen Delaney, SQL Server MVP
>>
>> "Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
>> news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
>> > Our backups are extremely slow as the docs table in the site database
>> > is
>> > extremely large.
>> >
>> > I've tried to shrink the file and shrink the database using DBCC
>> > ShrinkDatabase and then DBCC ShrinkFile with no luck.
>> >
>> > On examining the database i found this...
>> > Table_Name data_KB unused_KB
>> > unused_pct
>> > -- -- -- --
>> > Docs 16490400 KB 24676360 KB 60%
>> > DocVersions 1957800 KB 3915816 KB 67%
>> >
>> > DBCC ShowContig (Docs) shows the following
>> > DBCC SHOWCONTIG scanning 'Docs' table...
>> > Table: 'Docs' (1993058136); index ID: 1, database ID: 15
>> > TABLE level scan performed.
>> > - Pages Scanned........................: 13643
>> > - Extents Scanned.......................: 1761
>> > - Extent Switches.......................: 1760
>> > - Avg. Pages per Extent..................: 7.7
>> > - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
>> > - Logical Scan Fragmentation ..............: 0.01%
>> > - Extent Scan Fragmentation ...............: 0.28%
>> > - Avg. Bytes Free per Page................: 210.6
>> > - Avg. Page Density (full)................: 97.40%
>> > DBCC execution completed. If DBCC printed error messages, contact your
>> > system administrator.
>> >
>> > so as far as i can see the table isnt fragmented but is extremely
>> > wasteful.
>> >
>> > My thinking is this...
>> >
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> >
>> > I really need someones help in reducing the size of the database, i've
>> > tried
>> > reindexing the database too, again with no difference other than the
>> > extent
>> > scan fragmentation then goes shooting up to 99.97%
>> >
>>
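For reference, the updateusage step mentioned above can be run like this
(a minimal sketch; 'YourSiteDB' is a placeholder for the SharePoint site
database name, and Docs is the table from this thread):

-- Correct the space-usage counters, then re-check the reported sizes
DBCC UPDATEUSAGE ('YourSiteDB', 'Docs')
EXEC sp_spaceused 'Docs', @updateusage = 'true'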

Thursday, February 16, 2012

Blank Page...

I cannot figure out why I have a blank page in front of my report. I have
hunted and hunted to see what I have set that would be causing this. The
blank page is generated when I print to paper, Adobe or Excel but not in the
preview.
Any suggestions?
Thanks!|||On Apr 18, 10:16 am, "Chris Marsh" <cma...@.synergy-intl.com> wrote:
> I cannot figure out why I have a blank page in front of my report. I have
> hunted and hunted to see what I have set that would be causing this. The
> blank page is generated when I print to paper, Adobe or Excel but not in the
> preview.
> Any suggestions?
> Thanks!
Firstly, you will want to make sure that all of your controls'
properties (where possible) are set to 'Fit table on one page if
possible.' Make sure that none of the controls' properties have a page
break after groups, etc. If this does not resolve the issue, try to
put the table controls, etc inside a rectangle. Hope this helps.
Regards,
Enrique Martinez
Sr. Software Consultant|||Thank you - the second suggestion did the trick!
"EMartinez" <emartinez.pr1@.gmail.com> wrote in message
news:1176911441.964042.82780@.b58g2000hsg.googlegroups.com...
> On Apr 18, 10:16 am, "Chris Marsh" <cma...@.synergy-intl.com> wrote:
>> I cannot figure out why I have a blank page in front of my report. I have
>> hunted and hunted to see what I have set that would be causing this. The
>> blank page is generated when I print to paper, Adobe or Excel but not in
>> the
>> preview.
>> Any suggestions?
>> Thanks!
> Firstly, you will want to make sure that all of your controls'
> properties (where possible) are set to 'Fit table on one page if
> possible.' Make sure that none of the controls' properties have a page
> break after groups, etc. If this does not resolve the issue, try to
> put the table controls, etc inside a rectangle. Hope this helps.
> Regards,
> Enrique Martinez
> Sr. Software Consultant
>|||On Apr 18, 1:57 pm, "Chris Marsh" <cma...@.synergy-intl.com> wrote:
> Thank you - the second suggestion did the trick!
> "EMartinez" <emartinez...@.gmail.com> wrote in message
> news:1176911441.964042.82780@.b58g2000hsg.googlegroups.com...
> > On Apr 18, 10:16 am, "Chris Marsh" <cma...@.synergy-intl.com> wrote:
> >> I cannot figure out why I have a blank page in front of my report. I have
> >> hunted and hunted to see what I have set that would be causing this. The
> >> blank page is generated when I print to paper, Adobe or Excel but not in
> >> the
> >> preview.
> >> Any suggestions?
> >> Thanks!
> > Firstly, you will want to make sure that all of your controls'
> > properties (where possible) are set to 'Fit table on one page if
> > possible.' Make sure that none of the controls' properties have a page
> > break after groups, etc. If this does not resolve the issue, try to
> > put the table controls, etc inside a rectangle. Hope this helps.
> > Regards,
> > Enrique Martinez
> > Sr. Software Consultant
You're welcome. Glad I could be of assistance.
Regards,
Enrique Martinez
Sr. Software Consultant

Tuesday, February 14, 2012

BlackMail by software manufacturer on SQL 6.5

We are using software that runs on SQL 6.5 and the manufacturer is causing all types of problems

I tried to trace what he is doing over a few steps

I could not. How has he disabled the trace, and can I enable it?

How, and will it have some repercussion?

i want to throw this vendor out

Plain Blackmail|||How is the vendor connecting to your environment? VPN? Disable the VPN account. Also, analyze your logins and see which ones are used by your app. If possible, change the password on the application account (I hope that account is not SA), and by all means change the SA password. But before you do all that make sure you're not breaching your Software License Agreement by monkeying with all this stuff.|||Refer to your other http://www.dbforums.com/t990068.html post about enabling SQL Trace.

Ensure to tighten the security on SQL by using secure password for SA account.|||I mean that when i set up the trace,each operation that we do on the vendors software

I need to track where it effecting on sql tables

Can a vendor disable that tracking, or do I have it set wrong or something?

I don't have a VPN connection for the vendor open

so he can't enter without our approval

The trace is not user friendly on 6.5|||Ensure you're using the correct password to connect to the SQL database.
There is no setting where you can bypass the Trace on SQL Server.
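If you do change the SA password as suggested above, a minimal sketch is
(the password values are placeholders; run it as a sysadmin):

-- Change the sa password: old password, new password, login name
EXEC sp_password 'OldSaPassword', 'NewStr0ngPassword', 'sa'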

Sunday, February 12, 2012

Bizarre slow query problem (again)

I previously posted about a problem where it seemed that changing the
case of the word "BY" in a SELECT query was causing it to run much much
faster.

Now I've hit the same thing again, where basically almost any change I
make to how the query is executed (so that it still performs the same
function) causes the performance to jump from a dismal 7 or 8 seconds
to instantaneous. It's a very simple query of the form:

SELECT Min(MyValue) FROM MyTable WHERE MyKey = @0

which was running fine until a moment ago, when it suddenly started
running hopelessly slowly. If I change anything in the query to
lowercase (or the Min to uppercase), it runs fine again. Last time
someone suggested something about a bad plan being cached, and after a
bit of research I found the commands DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE. Sure enough, after running these, the query started
running fine again. The question is

a) why is this happening? Is it a bug in my code, or in SQL server?
b) is it worth detecting it and fixing it automatically? I.e, should I
put some code in that notices that a query is running far too slowly,
then runs "DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE"? Or will that
cause other problems?

Thanks|||Hi
http://blogs.msdn.com/khen1234/arch.../02/424228.aspx
<wizofaus@.hotmail.comwrote in message
news:1168494297.719888.324130@.77g2000hsv.googlegro ups.com...

Quote:

Originally Posted by

>I previously posted about a problem where it seemed that changing the
case of the word "BY" in a SELECT query was causing it to run much much
faster.
>
Now I've hit the same thing again, where basically almost any change I
make to how the query is executed (so that it still performs the same
function) causes the performance to jump from a dismal 7 or 8 seconds
to instantaneous. It's a very simple query of the form:
>
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0
>
which was running fine until a moment ago, when it suddently started
running hopelessly slowly. If change anything in the query to
lowercase (or the Min to uppercase), it runs fine again. Last time
someone suggested something about a bad plan being cached, and after a
bit of research I found the commands DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE. Sure enough, after running these, the query started
running fine again. The question is
>
a) why is this happening? Is it a bug in my code, or in SQL server?
b) is it worth detecting it and fixing it automatically? I.e, should I
put some code in that notices that a query is running far too slowly,
then runs "DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE"? Or will that
cause other problems?
>
Thanks
>

|||Uri Dimant wrote:

Quote:

Originally Posted by

Hi
http://blogs.msdn.com/khen1234/arch.../02/424228.aspx
>


Thanks for that...amazingly enough it turned out that that was exactly my
problem, although I'm using ad-hoc queries rather than stored procs. I
did some more testing, and it turned out that it was because it was
executing the same query twice, the first time with an atypical
parameter value, and the second time with a more typical one, that the
query was running so slowly. That is, executing

SELECT Min(MyValue) FROM MyTable WHERE MyKey = @0, with @0 = 999
followed by
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @0, with @0 = 123

Caused the second query to run absurdly slowly, because in the first
case only very few rows in the table had MyKey = 999 whereas almost
every row had MyKey = 123. After doing a DBCC FREEPROCCACHE and
swapping the queries around, they both ran fine.
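In script form that sequence looks roughly like this (a sketch only, using
the MyTable/MyKey/@0 names from this thread, with sp_executesql standing in
for the app's parameterized calls):

DBCC FREEPROCCACHE    -- start from an empty plan cache (clears the whole instance)

EXEC sp_executesql
    N'SELECT Min(MyValue) FROM MyTable WHERE MyKey = @0',
    N'@0 int', @0 = 999    -- atypical value: few rows match, and this plan gets cached

EXEC sp_executesql
    N'SELECT Min(MyValue) FROM MyTable WHERE MyKey = @0',
    N'@0 int', @0 = 123    -- typical value: most rows match, but the cached plan is reused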

In the end I ended up de-parameterizing the query just for this case,
but now I'm worried - how can I be sure that my other queries won't
suffer from the same problem? Should I never use parameters because of
this possibility?|||In the end I ended up de-parameterizing the query just for this case,

Quote:

Originally Posted by

but now I'm worried - how can I be sure that my other queries won't
suffer from the same problem? Should I never use parameters because of
this possibility?
>


The ability to use parameters is very powerful; don't be afraid of using parameters,
just test it carefully

<wizofaus@.hotmail.comwrote in message
news:1168509843.760497.149900@.i56g2000hsf.googlegr oups.com...

Quote:

Originally Posted by

Uri Dimant wrote:

Quote:

Originally Posted by

>Hi
>http://blogs.msdn.com/khen1234/arch.../02/424228.aspx
>>


Thanks for that...amazingly enough it turned that that was exactly my
problem, although I'm using ad-hoc queries rather than stored procs. I
did some more testing, and it turned out that it was because it was
executing the same query twice, the first time with an atypical
parameter value, and the second time with a more typical one, that the
query was running so slowly. That is, executing
>
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0, with @.0 = 999
followed by
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0, with @.0 = 123
>
Caused the second query to run absurdly slowly, because in the first
case only very few rows in the table had MyKey = 999 whereas almost
every row had MyKey = 123. After doing a DBCC FREEPROCCACHE and
swapping the queries around, they both ran fine.
>
In the end I ended up de-parameterizing the query just for this case,
but now I'm worried - how can I be sure that my other queries won't
suffer from the same problem? Should I never use parameters because of
this possibility?
>

|||Uri Dimant wrote:

Quote:

Originally Posted by

Quote:

Originally Posted by

In the end I ended up de-parameterizing the query just for this case,
but now I'm worried - how can I be sure that my other queries won't
suffer from the same problem? Should I never use parameters because of
this possibility?


An ability using parameters is very powerful , don't afraid using parameters
, just test it carefuly
>


Sure, except that the content of the database is out of my control -
this particular scenario (where nearly all the records matched a
particular key, but the query was first run against a different key)
could easily arrise in a production environment. More to the point,
I've seen no evidence that I'm getting any performance benefits from
using parameterized queries.
I suspect I will at least add a configuration option to avoid
parameterized queries (relatively straightforward, as I have a layer of
code that handles query parameters) if I see a problem like this again.|||wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

Uri Dimant wrote:

Quote:

Originally Posted by

Quote:

Originally Posted by

>>In the end I ended up de-parameterizing the query just for this
>>case,


"de-parameterizing"? You mean changing to dynamic sql and leaving yourself
vulnerable to sql injection??

Quote:

Originally Posted by

Quote:

Originally Posted by

Quote:

Originally Posted by

>>but now I'm worried - how can I be sure that my other queries won't
>>suffer from the same problem? Should I never use parameters
>>because of this possibility?
>>>


>An ability using parameters is very powerful , don't afraid using
>parameters , just test it carefuly
>>


Sure, except that the content of the database is out of my control -
this particular scenario (where nearly all the records matched a
particular key, but the query was first run against a different key)
could easily arrise in a production environment. More to the point,
I've seen no evidence that I'm getting any performance benefits from
using parameterized queries.
I suspect I will at least add a configuration option to avoid
parameterized queries (relatively straightforward, as I have a layer
of
code that handles query parameters) if I see a problem like this
again.


This is a ridiculous overreaction. Problems due to parameter-sniffing are
too rare to justify eliminating the benefits of using parameters. Talk about
"throwing the baby out with te bath water".

The article showed two, no three, ways to alleviate the problems caused by
parameter sniffing and still use parameters. So what do you do? ignore the
article's advice and "de-parameterize" your query...

--
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||a) why is this happening? Is it a bug in my code, or in SQL server?

Quote:

Originally Posted by

b) is it worth detecting it and fixing it automatically? I.e, should I
put some code in that notices that a query is running far too slowly,
then runs "DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE"? Or will that
cause other problems?


Pardon me but

Why did you not say which version of SQL Server you are running on?
And why not mention the version of MDAC you have installed (since you posted
to microsoft.public.data.ado)?
And why not mention something about the Primary Keys & Indexes of the table
you are querying - I take it you have an index on MyKey?
And have you checked the documentation, whitepapers, MSDN for details on
query performance?
And have you checked the query plan in Query Analyser?

Cheers

Stephen Howe|||Bob Barrows [MVP] wrote:

Quote:

Originally Posted by

wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

Uri Dimant wrote:

Quote:

Originally Posted by

>In the end I ended up de-parameterizing the query just for this
>case,


>
"de-parameterizing"? You mean changing to dynamic sql and leaving yourself
vulnerable to sql injection??


No, because the parameter values are fully under my control - they are
not submitted directly by the user. At any rate, in this case it's
always just a simple integer.

Quote:

Originally Posted by

>

Quote:

Originally Posted by

Quote:

Originally Posted by

>but now I'm worried - how can I be sure that my other queries won't
>suffer from the same problem? Should I never use parameters
>because of this possibility?
>>
An ability using parameters is very powerful , don't afraid using
parameters , just test it carefuly
>


Sure, except that the content of the database is out of my control -
this particular scenario (where nearly all the records matched a
particular key, but the query was first run against a different key)
could easily arrise in a production environment. More to the point,
I've seen no evidence that I'm getting any performance benefits from
using parameterized queries.
I suspect I will at least add a configuration option to avoid
parameterized queries (relatively straightforward, as I have a layer
of
code that handles query parameters) if I see a problem like this
again.


>
This is a ridiculous overreaction. Problems due to parameter-sniffing are
too rare to justify eliminating the benefits of using parameters. Talk about
"throwing the baby out with te bath water".


Well, yes, but as I said, the testing I've done has revealed that my
app definitely is suffering badly from parameter-sniffing problems
(I've come across yet another one since), and that the only reliable
way I've found to solve it is to NOT use parameters, which doesn't seem
to be adversely affecting performance.

Quote:

Originally Posted by

>
The article showed two, no three, ways to alleviate the problems caused by
parameter sniffing and still use parameters. So what do you do? ignore the
article's advice and "de-parameterize" your query...


Yes, but they use stored procs. I'm trying to avoid stored procs in
order to keep RDBMS independence (although for the time being, we've no
immediate need to support other databases).|||wizof...@.hotmail.com wrote:

Quote:

Originally Posted by

>
Yes, but they use stored procs. I'm trying to avoid stored procs in
order to keep RDBMS independence (although for the time being, we've no
immediate need to support other databases).


I would not do that unless there is a very strong business reason to do
so.
IMO maintaining RDBMS independence is like living in an RV instead of
living in a house - you get less comfort for your money, you pay a high
price for your mobility.
Achieving true RDBMS independence is both complex and expensive.
Details here

http://www.devx.com/dbzone/Article/32852
Under most circumstances I would settle down and use SQL Server
proprietary features to get the biggest bang for my buck. Should a need
arise to move - I would move and settle down again.

--------
Alex Kuznetsov
http://sqlserver-tips.blogspot.com/
http://sqlserver-puzzles.blogspot.com/|||On 11 Jan 2007 08:35:45 -0800, "Alex Kuznetsov"
<AK_TIREDOFSPAM@.hotmail.COMwrote:

Quote:

Originally Posted by

>IMO maintaining RDBMS independence is like living in an RV instead of
>living in a house - you get less comfort for your money, you pay a high
>price for your mobility.


I like it!

Roy Harvey
Beacon Falls, CT|||IMO maintaining RDBMS independence is like living in an RV instead of

Quote:

Originally Posted by

living in a house - you get less comfort for your money, you pay a high
price for your mobility.
Achieving true RDBMS independence is both complex and expensive.


Hahaha, now that is a picture that will never make it into any of Joe
Celko's books :-)

Gert-Jan|||be EXTREMELY wary when using DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE on a production machine. They clear all cached SPs,
queries and plans. The instance is bound to run under extreme stress
for a considerable amount of time.

wizof...@.hotmail.com wrote:

Quote:

Originally Posted by

I previously posted about a problem where it seemed that changing the
case of the word "BY" in a SELECT query was causing it to run much much
faster.
>
Now I've hit the same thing again, where basically almost any change I
make to how the query is executed (so that it still performs the same
function) causes the performance to jump from a dismal 7 or 8 seconds
to instantaneous. It's a very simple query of the form:
>
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0
>
which was running fine until a moment ago, when it suddently started
running hopelessly slowly. If change anything in the query to
lowercase (or the Min to uppercase), it runs fine again. Last time
someone suggested something about a bad plan being cached, and after a
bit of research I found the commands DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE. Sure enough, after running these, the query started
running fine again. The question is
>
a) why is this happening? Is it a bug in my code, or in SQL server?
b) is it worth detecting it and fixing it automatically? I.e, should I
put some code in that notices that a query is running far too slowly,
then runs "DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE"? Or will that
cause other problems?
>
Thanks

|||swaroop.a...@.gmail.com wrote:

Quote:

Originally Posted by

be EXTREMELY wary when using DBCC DROPCLEANBUFFERS and DBCC
FREEPROCCACHE on a production machine. They clear all cached SPs,
queries and plans. The instance is bound to run under extreme stress
for a considerable amount of time.
>


The instance? You mean it affects all databases?
In this case, I determined I'd have to do it before every single query
call, so obviously that's not practical.|||(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

Sure, except that the content of the database is out of my control -
this particular scenario (where nearly all the records matched a
particular key, but the query was first run against a different key)
could easily arrise in a production environment. More to the point,
I've seen no evidence that I'm getting any performance benefits from
using parameterized queries.
I suspect I will at least add a configuration option to avoid
parameterized queries (relatively straightforward, as I have a layer of
code that handles query parameters) if I see a problem like this again.


Which version of SQL Server are you using? Here is a test that you can
run to actually see the benefit of
parameterised queries. First create this database:

CREATE DATABASE many_sps
go
USE many_sps
go
DECLARE @sql nvarchar(4000),
        @x int
SELECT @x = 1000
WHILE @x > 0
BEGIN
   SELECT @sql = 'CREATE PROCEDURE abc_' + ltrim(str(@x)) + '_sp @orderid int AS
   SELECT O.OrderID, O.OrderDate, O.CustomerID, C.CompanyName,
          Prodcnt = OD.cnt, Totalsum = OD.total
   FROM Northwind..Orders O
   JOIN Northwind..Customers C ON O.CustomerID = C.CustomerID
   JOIN (SELECT OrderID, cnt = COUNT(*), total = SUM(Quantity * UnitPrice)
         FROM Northwind..[Order Details]
         GROUP BY OrderID) AS OD ON OD.OrderID = O.OrderID
   WHERE O.OrderID = @orderid'
   EXEC(@sql)
   SELECT @x = @x - 1
END

(Don't worry if you don't have Northwind on your server, you are not going
to run these procedures.)

Then use F7 to get the Summary page, and navigate to the Stored Procedures
node for many_sps. Select some 200 procedures, right-click and select
Script As Create To New Query Window. Go for a cup of coffee - this will
take some time depending on your hardware.

When the operation has completed (or you have gotten tired of waiting
and killed SSMS), issue this command:

ALTER DATABASE db SET PARAMETERIZATION FORCED

Redo the scripting operation. It will now complete in five seconds.

The reason for this is that SQL Server Management Studio does not use
parameterised queries. For every procedure it scripts, Mgmt Studio
issues around five queries. All these queries make it into the
cache, which explodes in size, and all of them have to be compiled.

When you set a database to forced parameterisation, SQL Server will
auto-parameterise all statements (with some exceptions documented in
Books Online); normally it only auto-parameterises very simple queries.
In the case of Mgmt Studio it's really a go-faster switch.

So dismissing caching of parameterised queries can be a serious mistake.
But it is certainly true that there are situations where parameter
sniffing can be a problem. If it is possible for you to tell in the
appliocation "this is an odd value that needs a special plan", then
you can modify the query text by adding a redudant condition like
"AND 1 = 1". Actually as you have found, changing "BY" to "by" or even
adding extra spaces help. This is because the lookup in the cache
is done on a hash without first collapsing spaces or parsing the
query text.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Erland Sommarskog (esquel@.sommarskog.se) writes:

Quote:

Originally Posted by

When the operation has completed (or you have gotten tired of waiting
and killed SSMS), issue this command:
>
ALTER DATABASE db SET PARAMETERIZATION FORCED
>
Redo the scripting operation. It will now complete in five seconds.


By the way, this is something important for your application as well.
Say that a DBA finds out that your app is thrashing the cache by not
using parameterised queries, and sets the database to forced
parameterisation, you will get back the behaviour you have now.

A better remedy is to add OPTION (RECOMPILE) at the end of sensitive
queries. This forces a statement recompile, and the query will not be
put in the cache. This means that you can still use parameterised queries
and keep the other benefits of them (protection against SQL injection and
respect for the user's regional settings), and you still avoid thrashing
the cache.
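
On SQL 2005 that looks like this (a minimal sketch, using the table and
column names from earlier in the thread):

   SELECT MIN(MyValue)
   FROM   MyTable
   WHERE  MyKey = @key
   OPTION (RECOMPILE)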

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Erland Sommarskog wrote:

Quote:

Originally Posted by

When you set a database to forced parameterisation, SQL Server will
auto-parameterise all statements (with some exceptions documented in
Books Online); normally it only auto-parameterise very simple queries.
In the case of Mgmt Studio it's reallly a go-faster switch.
>
So dismissing caching of parameterised queries can be a serious mistake.
But it is certainly true that there are situations where parameter
sniffing can be a problem. If it is possible for you to tell in the
appliocation "this is an odd value that needs a special plan", then
you can modify the query text by adding a redudant condition like
"AND 1 = 1". Actually as you have found, changing "BY" to "by" or even
adding extra spaces help. This is because the lookup in the cache
is done on a hash without first collapsing spaces or parsing the
query text.
>


Thanks...one of the most helpful replies I've had on usenet for some
time now!

The problem is that it's pretty hard for me to know that a value is
"odd". In this case, like I said the query in this case is

SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0

Where @.0 is actually a value from a list that is obtained from
elsewhere (not the database). It loops through this list, calling the
same query for each one.

Now, I suppose I could first do

SELECT Count(*) FROM MyTable WHERE MyKey = @.0

and determine if the number was very low, and if so, de-parameterize it
or add a space or whatever, but then this second query would
potentially suffer from the same problem.

I suppose another alternative is to build another query first

SELECT MyKey, Min(MyValue) FROM MyTable GROUP BY MyKey

then use this to obtain the minimum value for each key, but there's
only so much time I can spend rewriting queries to side-step quirky
performance issues (the application has many many ad-hoc queries,
nearly all parameterized on the same key, so they are all prone to the
same problem).

BTW, this is under SQL Server 2000. I've yet to determine if this
particular problem does actually exist under 2005, which is what we're
using for the production environment. Will definitely do that before I
waste too much more time on it.|||(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

Thanks...one of the most helpful replies I've had on usenet for some
time now!
>
The problem is that it's pretty hard for me to know that a value is
"odd".


I can understand that this is not always simple. I never said it
would be. :-)

However, I think I have a cure for you:

Quote:

Originally Posted by

In this case, like I said the query in this case is
>
SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0
>
Where @.0 is actually a value from a list that is obtained from
elsewhere (not the database). It loops through this list, calling the
same query for each one.


Stop! Don't do that! The problems with query plans aside, this is an
inefficient use of SQL Server. Get all the data at once with:

SELECT t.MyKey, Min(t.MyValue)
FROM MyTable t
JOIN list_to_table(@.list) f ON t.MyKey = f.value
GROUP BY t.MyKey

Where list_to_table is a table-valued function that transforms the list
into a table. I have a whole bunch of such functions on
http://www.sommarskog.se/arrays-in-sql.html.
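
A minimal sketch of such a function, assuming a comma-separated list of
integer keys (the versions on that page are more robust and faster):

   CREATE FUNCTION list_to_table (@list nvarchar(4000))
   RETURNS @tbl TABLE (value int NOT NULL) AS
   BEGIN
      DECLARE @pos int, @next int
      SELECT @pos = 1
      WHILE @pos <= len(@list)
      BEGIN
         -- find the next comma; if there is none, take the rest of the string
         SELECT @next = charindex(',', @list, @pos)
         IF @next = 0
            SELECT @next = len(@list) + 1
         INSERT @tbl (value)
            SELECT convert(int, ltrim(rtrim(substring(@list, @pos, @next - @pos))))
         SELECT @pos = @next + 1
      END
      RETURN
   END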

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Stephen,

You forgot one item in that list: the version of the .NET Framework that
is used. This was posted to the ADO.NET group as well, you know.

:-)

Cor

"Stephen Howe" <stephenPOINThoweATtns-globalPOINTcomschreef in bericht
news:%23kg1j%23YNHHA.992@.TK2MSFTNGP04.phx.gbl...

Quote:

Originally Posted by

Quote:

Originally Posted by

>a) why is this happening? Is it a bug in my code, or in SQL server?
>b) is it worth detecting it and fixing it automatically? I.e, should I
>put some code in that notices that a query is running far too slowly,
>then runs "DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE"? Or will that
>cause other problems?


>
Pardon me but
>
Why did you not say which version of SQL Server you are running on?
And why not mention the version of MDAC you have installed (since you
posted
to microsoft.public.data.ado)?
And why not mention something about the Primary Keys & Indexes of the
table
you are querying - I take it you have an index on MyKey?
And have you checked the documentation, whitepapers, MSDN for details on
query performance?
And have you checked the query plan in Query Analyser?
>
Cheers
>
Stephen Howe
>
>

|||Erland Sommarskog wrote:

Quote:

Originally Posted by

(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

Thanks...one of the most helpful replies I've had on usenet for some
time now!

The problem is that it's pretty hard for me to know that a value is
"odd".


>
I can understand that this is not always simple. I didn't say this, in
hope it would be. :-)
>
However, I think I have a cure for you:
>

Quote:

Originally Posted by

In this case, like I said the query in this case is

SELECT Min(MyValue) FROM MyTable WHERE MyKey = @.0

Where @.0 is actually a value from a list that is obtained from
elsewhere (not the database). It loops through this list, calling the
same query for each one.


>
Stop! Don't do that! The problems with query plans aside, this is an
ineffecient use of SQL Server. Get all data at once with:
>
SELECT t.MyKey, Min(t.MyValue)
FROM MyTable t
JOIN list_to_table(@.list) f ON t.MyKey = f.value
GROUP BY t.MyKey
>
Where list_to_table is a table-valued function that transform the list
to a table. I have a whole bunch of such functions on
http://www.sommarskog.se/arrays-in-sql.html.
>


I agree this sort of thing would be preferable, and more efficient, but
providing I don't get the bad-plan problem, it's efficient enough as it
is, and I'm wary of making too many big changes at this point. That sort
of thing I'd prefer to leave for the next version.

The thing is, I just tried exporting all the records to a CSV file,
creating a clean database with the same tables/indices, re-importing
all the records, and the same problem DOESN'T happen: I can run that
query with the "odd" value first, then with the common value, and both
queries run fast. So something else has happened to my database that
surely I must be able to reset somehow?|||(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

I agree this sort of thing would be preferable, and more efficient, but
as it is, providing I don't get the bad plan problem, it's efficient
enough as it is, and I'm wary of making too many big changes at this
point. That sort of thing I'd prefer to leave for the next version.


But keep in mind that the solution you have now will not scale well. If
the data in production is ten times larger than you have expected, you
will get ten times more execution time, even with the good plan.

Quote:

Originally Posted by

The thing is, I just tried exporting all the records to a CSV file,
creating a clean database with the same tables/indices, re-importing
all the records, and the same problem DOESN'T happen: I can run that
query with the "odd" value first, then with the common value, and both
queries run fast. So something else has happened to my database that
surely I must be able to reset somehow?


I eavesdropped on a discussion at PASS in Seattle last year, where a guy
had done extensive tests and could repeat a scenario in which, depending
on the order he loaded the same data, he would get different plans,
good or bad. I presume that part of the answer lies in what exactly is in
the statistics. Normally, statistics are only samples, and if the
statistics do not reflect the data distribution well, your plans
will not always be the best.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||Erland Sommarskog wrote:

Quote:

Originally Posted by

(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

I agree this sort of thing would be preferable, and more efficient, but
as it is, providing I don't get the bad plan problem, it's efficient
enough as it is, and I'm wary of making too many big changes at this
point. That sort of thing I'd prefer to leave for the next version.


>
But keep in mind that the solution you have now will not scale well. If
the data in production is ten times larger than you have expected, you
will get ten times more execution time, even with the good plan.


Sure. I definitely plan on doing some query optimization and
consolidation for the next version. Your routines may well come in
handy, so thanks.

Quote:

Originally Posted by

>

Quote:

Originally Posted by

The thing is, I just tried exporting all the records to a CSV file,
creating a clean database with the same tables/indices, re-importing
all the records, and the same problem DOESN'T happen: I can run that
query with the "odd" value first, then with the common value, and both
queries run fast. So something else has happened to my database that
surely I must be able to reset somehow?


>
I eavesdropped a discussion at PASS in Seattle last year, when a guy
had done extensive tests, and he could repeat a scenario that depending
on which order he loaded the same data, he would get different plans,
good or bad. I presume that part of the answer lies what exactly is in
the statistics. Normally, statistics are only samples, and if the
statistics does not well reflect the data distribution, your plans
will not always be the best.
>


Well I found another solution - reindex the table.

I ran

dbcc dbreindex('mytable', ' ', 90) on the initial database, and now the
problem has gone away. My main concern was that if we did see this
problem occurring in production databases, how could we fix it, other
than changing the code, and at least now I have such a solution, and
it's a bit less drastic than exporting and re-importing all the data
(which potentially could have taken hours).

I'm not sure whether 90 is the best parameter value here, that was just
from the MSDN article.|||(wizofaus@.hotmail.com) writes:

Quote:

Originally Posted by

Well I found another solution - reindex the table.
>
I ran
>
dbcc dbreindex('mytable', ' ', 90) on the initial database, and now the
problem is gone away. My main concern was that if we did see this
problem occuring in productoion databases, how could we fix it, other
than changing the code, and at least now I have such a solution, and
it's a bit less drastic than exporting and re-importing all the data
(which potentially could have taken hours).


It's recommended to run a maintenance job that reindexes the table with
some frequency. The main reason for this is to keep fragmentation down. A
side effect is that statistics are updated with fullscan, that is, all
rows are considered. That gives the optimizer more accurate information.
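
For reference, a minimal sketch of the two related commands (table name
as used earlier in this thread; 90 is just the fill factor from the MSDN
article you mentioned):

   DBCC DBREINDEX('MyTable', '', 90)        -- rebuild all indexes; statistics
                                            -- get a fullscan update as a side effect
   UPDATE STATISTICS MyTable WITH FULLSCAN  -- if you only want the statistics
                                            -- refreshed, without the rebuild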

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||On 13 Jan 2007 18:15:11 -0800, wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

>Well I found another solution - reindex the table.
>
>I ran
>
>dbcc dbreindex('mytable', ' ', 90) on the initial database, and now the
>problem is gone away. My main concern was that if we did see this
>problem occuring in productoion databases, how could we fix it, other
>than changing the code, and at least now I have such a solution, and
>it's a bit less drastic than exporting and re-importing all the data
>(which potentially could have taken hours).


How large is your table?

Maybe a simple "update statistics" would also fix things?

J.|||JXStern wrote:

Quote:

Originally Posted by

On 13 Jan 2007 18:15:11 -0800, wizofaus@.hotmail.com wrote:
>

Quote:

Originally Posted by

Well I found another solution - reindex the table.

I ran

dbcc dbreindex('mytable', ' ', 90) on the initial database, and now the
problem is gone away. My main concern was that if we did see this
problem occuring in productoion databases, how could we fix it, other
than changing the code, and at least now I have such a solution, and
it's a bit less drastic than exporting and re-importing all the data
(which potentially could have taken hours).


>
How large is your table?


'bout 2 million records, 9 columns.

Quote:

Originally Posted by

>
Maybe a simple "update statistics" would also fix things?
>


Quite possibly - unfortunately I can't re-create the problem now to
test it!
Will be the first thing I try if I see the same problem again, though.|||On 15 Jan 2007 13:36:21 -0800, wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

Quote:

Originally Posted by

>How large is your table?


>
>'bout 2 million records, 9 columns.

Quote:

Originally Posted by

>>
>Maybe a simple "update statistics" would also fix things?
>>


>Quite possibly - unfortunately I can't re-create the problem now to
>test it!
>Will be the first thing I try if I see the same problem again, though.


Could probably reproduce it - insert 1,000,000 rows with key = 1, then a
handful of rows with keys 2-10. Query it for =1, it will scan, then
query it for =2. Or something like that.
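
A minimal sketch of that repro, with assumed table and column names;
whichever value is passed first to the parameterised statement determines
the plan that the other value then reuses:

   CREATE TABLE skew_test (id int IDENTITY PRIMARY KEY,
                           MyKey int NOT NULL,
                           MyValue int NOT NULL)
   CREATE INDEX ix_skew_key ON skew_test (MyKey)

   -- one row with key 1, then keep doubling until there are over a million
   INSERT skew_test (MyKey, MyValue) SELECT 1, 0
   WHILE (SELECT COUNT(*) FROM skew_test) < 1000000
      INSERT skew_test (MyKey, MyValue) SELECT 1, MyValue FROM skew_test

   -- a handful of rows with rare keys
   INSERT skew_test (MyKey, MyValue) SELECT 2, 0
   INSERT skew_test (MyKey, MyValue) SELECT 3, 0

   -- the first call compiles the plan, the second reuses it
   EXEC sp_executesql N'SELECT MIN(MyValue) FROM skew_test WHERE MyKey = @key',
                      N'@key int', @key = 1
   EXEC sp_executesql N'SELECT MIN(MyValue) FROM skew_test WHERE MyKey = @key',
                      N'@key int', @key = 2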

J.|||JXStern wrote:

Quote:

Originally Posted by

On 15 Jan 2007 13:36:21 -0800, wizofaus@.hotmail.com wrote:
>

Quote:

Originally Posted by

Quote:

Originally Posted by

How large is your table?


'bout 2 million records, 9 columns.

Quote:

Originally Posted by

>
Maybe a simple "update statistics" would also fix things?
>


Quite possibly - unfortunately I can't re-create the problem now to
test it!
Will be the first thing I try if I see the same problem again, though.


>
Could probably reproduce it - insert a 1,000,000 rows = 1, then a
handfull of rows numbered 2-10. Query it for =1, it will scan, then
query it for =2. Or something like that.
>


Not sure, but now I have another issue which does seem to point towards
parameters being a problem.

If I run a query with quite a large number of parameters, which is
basically

SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN (@.13, @.14, @.15...@.20)

it takes over 2 seconds, but when I substitute the last 8 parameters
(13-20) with their values, i.e.

SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN ('value1', 'value2', 'value3', 'value4'...'value8')

the query takes 15 milliseconds! (actually it took slightly longer -
about 400msec - on the first run - the previous query always takes ~2
seconds no matter how often I run it).

Curiously, the fewer parameters I substitute for the Key2 filter, the
slower it runs (I can't improve on the 15 msec by doing the same for
the Key1 filter), in a more or less linear relationship.

I've tried all the previously mentioned "tricks", including reindexing
the table, but no luck.

So it seems SQL Server may have problems optimizing when there are too
many parameters.
FWIW, I did try putting the parameter values in a temporary table and
even using a join, but the total time is significantly longer than
15ms.
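
For reference, a minimal sketch of the temp-table variant I tried (names
assumed; on SQL 2000 each value needs its own INSERT):

   CREATE TABLE #key2 (value varchar(50) NOT NULL)
   INSERT #key2 (value) SELECT 'value1'
   INSERT #key2 (value) SELECT 'value2'
   INSERT #key2 (value) SELECT 'value3'

   SELECT COUNT(*)
   FROM   MyTable t
   JOIN   #key2 k ON t.Key2 = k.value
   WHERE  t.Key1 IN (@key1a, @key1b, @key1c)   -- Key1 values stay parameterised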

Again, the parameter values are under my control, so there's no risk of
SQL injection, so if there's another good reason NOT to do my own
parameter substitution in this case, I'd be interested in hearing it.|||wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

Not sure, but now I have another issue which does seem to point
towards parameters being a problem.
>
If I run a query with quite a large number of parameters which
basically
>
SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN (@.13, @.14, @.15...@.20)
>
it takes over 2 seconds, but when I substitute the last 8 parameters
(13-20) with their values, i.e.
>
SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN ('value1', 'value2', 'value3', 'value4'...'value8')
>
the query takes 15 milliseconds! (actually it took slightly longer -
about 400msec - on the first run - the previous query always takes ~2
seconds no matter how often I run it).


Now you've got me really intrigued. Based on everything I've ever read, IN
comparisons are supposed to be non-sargable, and therefore non-optimizable.
Someone from the SQL Server groups please correct me if I am wrong. Oh wait,
maybe you have a sufficient number of values to cause the query engine to
use a temp table and join, in which case optimization can occur.

I suggest you use SQL Profiler to determine the difference in the
execution plans for each method. If the fast query plan involves an index
that is not used in the slow query plan, you can use an index hint to
force the query engine to use that index
(http://www.sql-server-performance.c...t_sql_where.asp)

Quote:

Originally Posted by

>
Curiously, the fewer parameters I substitute for the Key2 filter, the
slower it runs (I can't improve on the 15 msec by doing the same for
they Key1 filter), in a more or less linear relationship.
>
I've tried all the previously mentioned "tricks", including reindexing
the table, but no luck.


What about the trick that involves using local variables in your batch,
instead of directly using the parameters:

declare @.tmp1, @.tmp2, etc.
set @.tmp1=@.parm1
etc.
SELECT ...(@.tmp1, ...)

Personally, I would do this in a stored procedure, but you have ignored
previous advice to use stored procedures.
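
A minimal sketch of that trick in procedure form (names assumed); copying
the parameter into a local variable means the optimizer cannot sniff the
actual value at compile time:

   CREATE PROCEDURE get_min_value_sp @key int AS
   BEGIN
      DECLARE @localkey int
      SELECT @localkey = @key   -- hide the value from the optimizer
      SELECT MIN(MyValue) FROM MyTable WHERE MyKey = @localkey
   END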

Quote:

Originally Posted by

>
So it seems maybe SQL server has problems optimizing if there are too
many parameters.
FWIW, I did try putting the parameter values in a temporary table and
even using a join, but the total time is significantly longer than
15ms.


ISTR reading somewhere that with a sufficient number of values, the
query engine does this anyway behind the scenes.

Quote:

Originally Posted by

>
Again, the parameter values are under my control, so there's no risk
of SQL injection,


I always cringe when I hear somebody say this. Unless those values are
hard-coded into your application code, you have to be getting those values
from somewhere. If a user was involved at any point in the process that
generates those values, then you need to at least entertain the possibility
that some funny business may have occurred. Look in these articles for
"secondary sql injection". It is always a mistake to assume that your user
base is too ignorant to take advantage of these techniques:
http://mvp.unixwiz.net/techtips/sql-injection.html
http://www.sqlsecurity.com/DesktopDefault.aspx?tabid=23
http://www.nextgenss.com/papers/adv...l_injection.pdf
http://www.nextgenss.com/papers/mor...l_injection.pdf
The bottom line may turn out to be that you need to choose between secure
and fast.

--
Microsoft MVP - ASP/ASP.NET
Please reply to the newsgroup. This email account is my spam trap so I
don't check it very often. If you must reply off-line, then remove the
"NO SPAM"|||wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

>
JXStern wrote:

Quote:

Originally Posted by

On 15 Jan 2007 13:36:21 -0800, wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

>How large is your table?
>
>'bout 2 million records, 9 columns.
>>
>Maybe a simple "update statistics" would also fix things?
>>
>Quite possibly - unfortunately I can't re-create the problem now to
>test it!
>Will be the first thing I try if I see the same problem again, though.


Could probably reproduce it - insert a 1,000,000 rows = 1, then a
handfull of rows numbered 2-10. Query it for =1, it will scan, then
query it for =2. Or something like that.


Not sure, but now I have another issue which does seem to point towards
parameters being a problem.
>
If I run a query with quite a large number of parameters which
basically
>
SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN (@.13, @.14, @.15...@.20)
>
it takes over 2 seconds, but when I substitute the last 8 parameters
(13-20) with their values, i.e.
>
SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN ('value1', 'value2', 'value3', 'value4'...'value8')
>
the query takes 15 milliseconds! (actually it took slightly longer -
about 400msec - on the first run - the previous query always takes ~2
seconds no matter how often I run it).


Indeed, that is because with literals SQL Server will really compile the
statement based on the actual values. The optimizer will build a kind of
binary tree. It will also remove any duplicates (when applicable). So the
execution phase will be very fast.

On the other hand, the compilation phase is relatively expensive. And if
you have hundreds of values, the optimizer starts to choke.

Quote:

Originally Posted by

Curiously, the fewer parameters I substitute for the Key2 filter, the
slower it runs (I can't improve on the 15 msec by doing the same for
they Key1 filter), in a more or less linear relationship.


Explanation: see above

Quote:

Originally Posted by

I've tried all the previously mentioned "tricks", including reindexing
the table, but no luck.
>
So it seems maybe SQL server has problems optimizing if there are too
many parameters.
FWIW, I did try putting the parameter values in a temporary table and
even using a join, but the total time is significantly longer than
15ms.


It is hard to optimize a scenario like yours. You might try something
like this:

SELECT SUM(cnt)
FROM (
SELECT COUNT(*) AS cnt FROM MyTable WHERE Key1=@.0
UNION ALL
SELECT COUNT(*) AS cnt FROM MyTable WHERE Key1=@.1
UNION ALL
...
) AS T

Although the optimizer might automatically come up with a query plan
that reflects this strategy, I doubt that it actually will.

Please let me know if it actually increases your query performance.

HTH,
Gert-Jan|||Bob Barrows [MVP] wrote:

Quote:

Originally Posted by

wizofaus@.hotmail.com wrote:

Quote:

Originally Posted by

Not sure, but now I have another issue which does seem to point
towards parameters being a problem.

If I run a query with quite a large number of parameters which
basically

SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN (@.13, @.14, @.15...@.20)

it takes over 2 seconds, but when I substitute the last 8 parameters
(13-20) with their values, i.e.

SELECT Count(*) FROM MyTable WHERE Key1 IN (@.0, @.1, @.2...@.12) AND Key2
IN ('value1', 'value2', 'value3', 'value4'...'value8')

the query takes 15 milliseconds! (actually it took slightly longer -
about 400msec - on the first run - the previous query always takes ~2
seconds no matter how often I run it).


>
Now you've got me really intrigued. Based on everything I've ever read, IN
comparisons are supposed to be non-sargable, and therefore non-optimizable.
Someone from the SQL Server groups please correct me if I am wrong. Oh wait,
maybe you have a sufficient number of values to cause the query engine to
use a temp table and join, in which case optimization can occur.


But that's the thing - I tried doing that explicitly myself, and it's
considerably slower.

Quote:

Originally Posted by

>
I suggest you use SQL Profiler to determne the difference in the execution
plans for each method. If the fast query plan involves an index that is not
used in the slow query plan, you can use an index hint to force the query
engine to use that index
(http://www.sql-server-performance.c...t_sql_where.asp)
>

Quote:

Originally Posted by


Curiously, the fewer parameters I substitute for the Key2 filter, the
slower it runs (I can't improve on the 15 msec by doing the same for
they Key1 filter), in a more or less linear relationship.

I've tried all the previously mentioned "tricks", including reindexing
the table, but no luck.


>
What about the trick that involves using local variables in your batch,
instead of directly using the parameters:
>
declare @.tmp1, @.tmp2, etc.
set @.tmp1=@.parm1
etc.
SELECT ...(@.tmp1, ...)
>
Personally, I would do this in a stored procedure, but you have ignored
previous advice to use stored procedures.


Actually I tried stored procs as well, and it didn't seem to be
helping. Also, can you write a stored proc to take a variable number
of parameters?

Quote:

Originally Posted by

>

Quote:

Originally Posted by


So it seems maybe SQL server has problems optimizing if there are too
many parameters.
FWIW, I did try putting the parameter values in a temporary table and
even using a join, but the total time is significantly longer than
15ms.


>
ISTR reading somewhere that with a sufficient number of values, that the
query engine does this anyways behind the scenes.
>

Quote:

Originally Posted by


Again, the parameter values are under my control, so there's no risk
of SQL injection,


>
I always cringe when I hear somebody say this. Unless those values are
hard-coded into your application code, you have to be getting those values
from somewhere. If a user was involved at any point in the process that
generates those values, then you need to at least entertain the possibility
that some funny business may have occurred.


Perhaps, but in this case they are auto-generated - the user has no
control over what the actual values are (only how many there are).

At any rate, if the only thing to take into consideration here is
performance vs security, then I'm afraid performance does win. No-one
will use the application if it's a dog. And as it happens, the
database doesn't hold particularly sensitive (or irreplaceable) data
anyway.|||Bob Barrows [MVP] (reb01501@.NOyahoo.SPAMcom) writes:

Quote:

Originally Posted by

Now you've got me really intrigued. Based on everything I've ever read,
IN comparisons are supposed to be non-sargable, and therefore
non-optimizable. Someone from the SQL Server groups please correct me if
I am wrong.


col1 IN (val1, val2, ...)

is just a shortcut for

col1 = val1 OR col1 = val2 OR ...

and it's perfectly possible for the optimizer to work with IN to produce a
good plan. The main problem is that for many values, the optimization time
can exceed the real execution time by far.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx