Good morning all
I'm creating a table, drawn from the various Books Online (BOL) topics on SQL
Profiler event classes, and have discovered seeming duplicates in the SQL 2005
BOL topic "SQL Server Event Class Reference" (which appeared AFTER the
December update):
Broker:Forwarded Message Sent - EventClass 190 (was 139), designated as
Always 190
Progress Report: Online Index Operation - EventClass 190
Broker:Forwarded Message Dropped - EventClass 191 (was 140)
TM:Save Tran Starting - EventClass 191
Data File Auto Grow event class - EventClass 92
Log File Auto Grow event class - EventClass 92
Data File Auto Shrink event class - EventClass 95
Log File Auto Shrink event class - EventClass 95
Broker:Queue Disabled - EventClass 143 (REMOVED?)
Has anyone else discovered this, or is it by 'design'? And of course, has
anyone from MS got an idea of the 'right' event numbers?
Cheers
Charl wrote:
> Good morning all
> <SNIP>
Query the system view sys.trace_events
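For example, a minimal query against that view (column names as documented in the SQL Server 2005 catalog) would be:

```sql
-- List the Profiler event class IDs and names straight from the server,
-- rather than from BOL, so the numbers are authoritative for this instance.
SELECT trace_event_id, name
FROM sys.trace_events
ORDER BY trace_event_id;
```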
--
David Gugick
Quest Software
www.quest.com

Hi Charl
I addressed this in your other thread, posted 10 minutes earlier than this
one.
Please look at the new metadata view in SQL Server 2005 called
sys.trace_events.
--
HTH
Kalen Delaney, SQL Server MVP
www.solidqualitylearning.com
"Charl" <Charl@.discussions.microsoft.com> wrote in message
news:AC94110E-12E3-41C3-AE20-52F4F20AD891@.microsoft.com...
> Good morning all
> <SNIP>
Thanks to Kalen and David for the replies. I have joined the table previously
created from the BOL information to sys.trace_events. Having retained the
original data from BOL before the December update, and ignoring the Broker
event class changes, there is only one missing event class: 144 Broker:Mirror
Route.
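The join described above can be sketched as follows; the hand-keyed BOL table and its column names (dbo.BOLEventClass, EventClassID, Name) are hypothetical placeholders for whatever Charl actually created:

```sql
-- Hypothetical sketch: dbo.BOLEventClass is a table keyed in by hand from
-- the BOL event class reference. Find event classes documented in BOL but
-- absent from the server's own sys.trace_events view.
SELECT b.EventClassID, b.Name
FROM dbo.BOLEventClass AS b
LEFT JOIN sys.trace_events AS t
    ON t.trace_event_id = b.EventClassID
WHERE t.trace_event_id IS NULL;
```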
My point is that the BOL data is somewhat inaccurate in this case, especially
given that most DBAs needing to run Profiler will (I hope!) be referring to
BOL. When the event class reference lists an event class, it is surely fair
to expect the data to be valid?
Best regards,
Charl
"Kalen Delaney" wrote:
> Hi Charl
> I addressed this in your other thread, posted 10 minutes earlier than this
> one.
> Please look at the new metadata view in SQL Server 2005 called
> sys.trace_events.
> --
> HTH
> Kalen Delaney, SQL Server MVP
> www.solidqualitylearning.com
>
> <SNIP>

Charl wrote:
> Thanks to Kalen and David for the reply, I have joined the table
> previously created from the BOL information to sys.trace_events.
> Having retained the original data from BOL before the December update
> and ignoring the Broker eventclass changes there is only one missing
> eventclass - 144 Broker:Mirror Route.
> <SNIP>
You can send feedback about the page in question directly to Microsoft, using
the Send Feedback option at the top of the topic and the Send Feedback button
at the bottom.
David Gugick
Quest Software
Wednesday, March 7, 2012
Blocked process mystery
This morning, we discovered that our application was timing out on several
pages. We tracked it down to a table that couldn't be read without timing
out. In Enterprise Manager, we found a table (TAB) lock on the table that
was of mode IS that was blocking other spids. The text property of the lock
showed this lock to be on a simple reporting stored procedure, which just
did a SELECT on a couple of tables that should have only taken a second or
two. We tried to debug the problem for several minutes, but to no avail.
Finally, we killed the lock and the database problem was immediately solved.
A look at SQL Profiler (which we run continuously) showed that a query that
matched the text property of the lock and had the same pid as the lock had
been started last Tuesday and ended at roughly the same time as we killed
the lock. The index name listed with the lock in Enterprise Manager also
was strange, since the index listed was not used by the stored procedure
listed in the properties.
We have had several similar problems with our database in the past, but this
is the first time we didn't resort to just a reboot. Why would a simple
stored procedure executing a select cause such problems? Why would this
procedure be allowed to run for a week? Why would we experience no problems
until days after (the table being locked was core to almost every page in
the system and was fine until this morning)? Can the index listed in the
lock information be used to debug the problem?
An even bigger question is how to handle such a problem after it occurs.
Killing the spid seems to have caused some problems in the .NET application
and forced me to restart the app. Is there a more graceful method of rolling
back the offending process?
The spid you found did not actually hold a full table lock; it held an intent
shared (IS) lock, effectively indicating that it was going to lock rows or
pages within the table. What version of SQL Server are you running (and which
service pack)? If you are on SQL 2000 with SP3, then ::fn_get_sql can be
useful in the future to show what the currently executing SQL was for that
particular spid.
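A minimal sketch of using ::fn_get_sql (SQL Server 2000 SP3+); the spid value 55 is just an example and should be replaced with the blocking spid you observe:

```sql
-- Look up the statement currently executing for a given spid.
DECLARE @handle binary(20);

SELECT @handle = sql_handle
FROM master.dbo.sysprocesses
WHERE spid = 55;  -- example value: substitute the blocking spid

SELECT [text]
FROM ::fn_get_sql(@handle);
```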
I would also switch on trace flag 1204 on this server to ensure that you
capture the full details of any deadlocking in the SQL error log.
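The trace flag can be enabled at runtime as well as via the startup parameter; a minimal sketch:

```sql
-- Enable trace flag 1204 server-wide (-1) until the next restart.
DBCC TRACEON (1204, -1);
-- To make it permanent, add -T1204 to the SQL Server startup parameters.
```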
I would also look at your current configuration for the query governor cost
limit and possibly reduce it to an acceptable value in line with your longest
running query. If you have Profiler running regularly, you should be able to
determine what your longest running query is and set the query governor cost
limit accordingly, preventing such problems.
Re: stopping the spid: the only options are to kill the spid, or to find the
application that started the process and close that application.
The next time it happens, it will be useful to run a SELECT against
master.dbo.sysprocesses (NOLOCK) to identify the waittype, wait resource and
waittime for the query in question, as a means to establish whether there are
other hidden issues within your system.
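The sysprocesses check above can be sketched as a filtered query rather than a plain SELECT *; the filter shown (blocked or waiting spids only) is one reasonable choice, not the only one:

```sql
-- Inspect waiting and blocked sessions: who is blocked, by whom, and on what.
SELECT spid, blocked, waittype, waittime, lastwaittype, waitresource, cmd
FROM master.dbo.sysprocesses WITH (NOLOCK)
WHERE blocked <> 0
   OR waittype <> 0x0000;
```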
HTH
Olu Adedeji
"Stephen Brown" <nospam@.telusplanet.net> wrote in message
news:cin2ht$rbf$1@.utornnr1pp.grouptelecom.net...
> This morning, we discovered that our application was timing out on several
> <SNIP>