
Monday, March 19, 2012

Blocking...

Hi just read an article on the internet that states
"A quick way to resolve a blocking problem is to disconnect the
connection blocking the rest of the users. All you have to do is ask
your database administrator to execute the KILL statement. Obviously,
this is not a permanent solution, and won't really help anything except
take care of an immediate need."
Now when I run sp_who, I see that the spid that is blocking is the
same as the spid itself...
The above article states that killing the one that is blocking the
rest of the other spids is OK...
But what if it's the same SPID that is blocking itself? What does that signify,
and what is the solution?
Thanks

I'm not too sure what version of SQL Server you're using, but if it is SQL
2000 SP4, this might be related to latch waits, which now have enhanced
reporting: http://support.microsoft.com/default.aspx/kb/906344.
Alternatively it might be a parallel operation across multiple processors.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
|||On 31 Aug 2006 06:59:03 -0700, Double_B wrote:
>Hi just read an article on the internet that states
>"A quick way to resolve a blocking problem is to disconnect the
>connection blocking the rest of the users. All you have to do is ask
>your database administrator to execute the KILL statement. Obviously,
>this is not a permanent solution, and won't really help anything except
>take care of an immediate need."
Hi Double_B,
Also keep in mind that the transaction you kill has to roll back all
changes before it will release any locks. If it's mainly a reporting
job, that will probably be pretty fast - but if you roll back a job that
has been changing table data for the last two hours, you can expect the
rollback to take at least as long!
--
Hugo Kornelis, SQL Server MVP
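To see whether a spid is genuinely blocked by another session or is waiting on itself, the waiter list can be inspected directly. A minimal sketch (column semantics are from SQL 2000's master..sysprocesses; adjust for your version):

```sql
-- List blocked sessions and who is blocking them. A spid whose "blocked"
-- column equals its own spid is waiting on itself (typically a latch wait
-- or an intra-query parallelism wait), so KILL is not the right tool.
SELECT spid,
       blocked,        -- spid of the blocker; 0 means not blocked
       waittype,       -- non-zero while the session is waiting
       waitresource,   -- the lock/latch resource being waited on
       cmd,
       CASE WHEN blocked = spid
            THEN 'self (latch/parallelism)'
            ELSE 'another session' END AS blocker_kind
FROM master..sysprocesses
WHERE blocked <> 0;
```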

Blocking with RECEIVE and SqlDataReader for real-time updates

I have this scenario which is working fine, but would like to know if others have tried it or can recommend a better approach. Below is a brief description, but code should fully explain:

A process updates a SQL Server table via a stored proc, which in turn writes information to a service broker target queue. In my client/server architecture, I would like clients to see this new information as soon as it gets written to the target queue. To do this, I have a WCF service that calls a stored procedure; the stored proc contains an infinite loop which has a RECEIVE statement with an infinite timeout. Once RECEIVE reads data from the SB queue, data is retrieved via a SqlDataReader and results are sent to clients via a pub/sub. I then wait for more rows until RECEIVE unblocks and so on.

Stored Proc

Code Snippet

-- ...
WHILE 1 = 1
BEGIN
-- Param declarations

...

BEGIN TRANSACTION;
WAITFOR
(
-- Blocks while TargetQueue is empty
RECEIVE
TOP(1)
@conversation_handle = conversation_handle,
@message_type_name = message_type_name,
@conversation_group_id = conversation_group_id,
@message_body =
CASE
WHEN validation = 'X' THEN CAST(message_body AS XML)
ELSE CAST(N'' AS XML)
END
FROM [dbo].[TargetQueue] -- No time out!
)

-- Handle errors
...

-- Return received information. After this statement is executed,
-- reader.NextResult unblocks and reader.Read() can read these
-- new values
SELECT 'Conversation Group Id' = @conversation_group_id,
'Conversation Handle' = @conversation_handle,
'Message Type Name' = @message_type_name,
'Message Body' = @message_body ;

COMMIT TRANSACTION


END -- WHILE

C# Code

Code Snippet

// Create a SqlCommand, and initialize it to execute the above stored proc
// (set the command timeout to zero!)
...

// ExecuteReader blocks until RECEIVE reads data from the target queue
using (SqlDataReader reader = cmd.ExecuteReader())
{
// Begin an infinite loop
while (true)
{
// Process data retrieved by RECEIVE
while (reader.Read())
{
Trace.WriteLine("Conversation Group Id :" + reader.GetGuid(0) +
"Conversation Handle: " + reader.GetGuid(1) +
"Message Type Name : " + reader.GetString(2) +
"Message Body : " + reader.GetString(3));
// Send data via pub/sub
...
}

// Blocks until the stored procedure returns another result set,
// i.e., blocks until RECEIVE unblocks and retrieves more data from the queue
reader.NextResult();
} // while
}

There isn't anything wrong with your approach from what you show, but I do have a few questions:

Who ends the dialogs and how? The RECEIVE loop has to be prepared to process the EndDialog and Error messages, otherwise you will leak conversations and your database will grow indefinitely. Also, for the stored procedure you mention: how does it find the dialog to send on, and how does it end it?

Are the notifications important? Can you afford to lose one? You will lose notifications because WCF cannot guarantee reliable delivery.

What scale are you talking about? One single RECEIVE loop with TOP(1) will have problems if you expect thousands of notifications per second.

|||

Remus,

Thanks for your thoughtful questions. My comments are below:

>What scale are you talking about? One single RECEIVE loop with TOP(1) will have problems if you expect thousands of notifications per second

Currently there will be less than 100 (One hundred) notifications per day. It is likely however, that these notifications will come in bursts as the process that ultimately populates the target queue (via uspSendMessage below) kicks in multiple times during the day. I can remove TOP(1) if necessary.

>Are the notifications important? Can you afford to loose one? You will lose notifications because WCF is cannot guarantee reliable delivery

Yes, notifications to clients are very important as they will be used for trade position management. With WCF I am planning on using MSMQ to guarantee delivery to clients. Is this what you meant?

> Who ends the dialogs and how? how does the stored proc find the dialog to send on and how does it end it?

I have two stored procs. uspSendMessage updates a database table with new information, then does BEGIN DIALOG, SEND ON CONVERSATION, END CONVERSATION, and COMMIT TRAN. uspReceiveMessage, on the other hand, does a RECEIVE (it does not specify a conversation group id, in order to retrieve all data), a SELECT to send data to the SqlDataReader, END CONVERSATION, and then COMMIT TRAN.

Both stored proc from my test harness are shown below:

Code Snippet

CREATE PROCEDURE uspSendMessage
    -- Add the parameters for the stored procedure here
    @CompanyName nvarchar(40),
    @PhoneName nvarchar(24)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    BEGIN TRY
        -- Begin a transaction.
        BEGIN TRANSACTION;

        -- Insert data
        INSERT INTO Shippers (CompanyName, Phone) VALUES (@CompanyName, @PhoneName);

        -- Create the message.
        DECLARE @message XML;
        SET @message = N'<Shipper><CompanyName>' + @CompanyName + '</CompanyName><Phone>' + @PhoneName + '</Phone></Shipper>';

        -- Declare a variable to hold the conversation handle, and then start the conversation.
        DECLARE @conversationHandle UNIQUEIDENTIFIER;

        BEGIN DIALOG CONVERSATION @conversationHandle
            FROM SERVICE InitiatorService
            TO SERVICE 'TargetService'
            ON CONTRACT NewShipperContract
            WITH ENCRYPTION = OFF;

        -- Send the message on the dialog.
        SEND ON CONVERSATION @conversationHandle MESSAGE TYPE NewShipperMessage (@message);

        -- End the conversation.
        END CONVERSATION @conversationHandle;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        SELECT error_number() AS ErrorNum, error_message() AS ErrorMsg;

        -- Test XACT_STATE
        IF (XACT_STATE()) = -1 -- Uncommittable transaction state
        BEGIN
            ROLLBACK TRANSACTION;
        END;

        IF (XACT_STATE()) = 1 -- Transaction active
        BEGIN
            COMMIT TRANSACTION;
        END;
    END CATCH
END

Code Snippet

CREATE PROCEDURE uspReceiveMessage
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    WHILE 1 = 1
    BEGIN
        DECLARE @conversation_handle UNIQUEIDENTIFIER,
                @conversation_group_id UNIQUEIDENTIFIER,
                @message_body XML,
                @message_type_name NVARCHAR(128);

        BEGIN TRANSACTION;

        WAITFOR
        (
            RECEIVE TOP(1)
                @conversation_handle = conversation_handle,
                @message_type_name = message_type_name,
                @conversation_group_id = conversation_group_id,
                @message_body =
                    CASE
                        WHEN validation = 'X' THEN CAST(message_body AS XML)
                        ELSE CAST(N'<none/>' AS XML)
                    END
            FROM [dbo].[TargetQueue]
        );

        -- Handle error
        IF @@ERROR <> 0
        BEGIN
            COMMIT TRANSACTION;
            BREAK;
        END

        -- Show received information
        SELECT 'Conversation Group Id' = @conversation_group_id,
               'Conversation Handle' = @conversation_handle,
               'Message Type Name' = @message_type_name,
               'Message Body' = @message_body;

        -- If the message_type_name indicates that the message is an error
        -- or an end dialog message, end the conversation.
        IF @message_type_name = 'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
           OR @message_type_name = 'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        BEGIN
            END CONVERSATION @conversation_handle;
        END;

        COMMIT TRANSACTION;
    END -- WHILE
END

|||

Yazan Diranieh wrote:

>What scale are you talking about? One single RECEIVE loop with TOP(1) will have problems if you expect thousands of notifications per second

Currently there will be less than 100 (One hundred) notifications per day. It is likely however, that these notifications will come in bursts as the process that ultimately populates the target queue (via uspSendMessage below) kicks in multiple times during the day. I can remove TOP(1) if necessary.

If you hit scalability problems, removing TOP(1) will not help, as you are sending only one message per dialog anyway, so RECEIVE can only return one message at a time. See http://blogs.msdn.com/remusrusanu/archive/2007/04/24/reusing-conversations.aspx and http://blogs.msdn.com/remusrusanu/archive/2007/05/02/recycling-conversations.aspx on how to send more than one message per dialog. I wouldn't focus on this at the moment though.

Yazan Diranieh wrote:

>Are the notifications important? Can you afford to lose one? You will lose notifications because WCF cannot guarantee reliable delivery

Yes, notifications to clients are very important as they will be used for trade position management. With WCF I am planning on using MSMQ to guarantee delivery to clients. Is this what you meant?

As it is now, if the process crashes after the RECEIVE was committed but before the WCF send occurred (or reached the outgoing MSMQ queue), your notification is lost. You must use a distributed transaction and enroll both the SqlCommand.ExecuteReader and the WCF send in the same transaction context. This also means you cannot use one infinitely executing batch, but instead loop in CLR code and call one RECEIVE at a time.

Yazan Diranieh wrote:

> Who ends the dialogs and how? how does the stored proc find the dialog to send on and how does it end it?

I have two stored procs. uspSendMessage updates a database table with new information, then does BEGIN DIALOG, SEND ON CONVERSATION, END CONVERSATION, and COMMIT TRAN. uspReceiveMessage, on the other hand, does a RECEIVE (it does not specify a conversation group id, in order to retrieve all data), a SELECT to send data to the SqlDataReader, END CONVERSATION, and then COMMIT TRAN.

You are doing Fire-and-Forget, this will get you into trouble, see http://blogs.msdn.com/remusrusanu/archive/2006/04/06/570578.aspx. Try to end the conversation from the target side first and have the sender end its side as a response to the target's EndDialog message.
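A sketch of the pattern Remus describes, assuming an initiator-side queue named InitiatorQueue (the procedure name and queue name are placeholders, not part of the test harness above): the initiator does not end the conversation right after SEND; instead it waits for the target's EndDialog or Error message and only then ends its side.

```sql
-- Hypothetical initiator-side cleanup proc: ends the initiator's half of
-- the dialog only in response to the target's EndDialog/Error message.
CREATE PROCEDURE uspInitiatorEndDialog
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER, @msg_type NVARCHAR(128);
    BEGIN TRANSACTION;
    WAITFOR
    (
        RECEIVE TOP(1)
            @handle   = conversation_handle,
            @msg_type = message_type_name
        FROM [dbo].[InitiatorQueue]   -- assumed queue name
    ), TIMEOUT 5000;
    IF @msg_type IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                     N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        END CONVERSATION @handle;
    COMMIT TRANSACTION;
END
```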

|||Very informative. Thanks very much.

Blocking while getting Snapshot

We have a production database that is replicated(transactional).
Whenever we have to take a snapshot, it creates blocking on the
production tables and causes timeouts within the production
application. This is creating big problems and I am wondering if there
is a way to avoid the blocking which causes the timeouts.
Please have a look at the option to allow concurrent snapshot generation
(snapshot tab - 'do not lock tables...').
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
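For reference, the same option can be set in T-SQL when creating the publication; a sketch with a placeholder publication name (the 'do not lock tables' checkbox corresponds to the concurrent sync method on a transactional publication):

```sql
-- Concurrent snapshot generation: the snapshot is built without holding
-- share locks on the published tables for the duration of the run.
EXEC sp_addpublication
    @publication = N'MyPublication',   -- placeholder name
    @sync_method = N'concurrent',
    @repl_freq   = N'continuous',
    @status      = N'active';
```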
|||That works very well for us. Thanks for the information.
|||I have the same problem and tried to use the 'Do Not Lock Table' option.
However, the snapshot loading failed. Please help if you could post your
test results in this thread. Thanks a lot.
|||Please can you post up your complete error message.
Rgds,
Paul Ibison

Sunday, March 11, 2012

Blocking while creating index?

If I don't use online indexing, will there be blocking for the entire
duration of the index creation, be it clustered or non-clustered? I thought
there might be an exclusive lock on the table for the entire duration, where
even selects would be blocked... but it doesn't appear to be true.
Please let me know
Using SQL 2005
See BOL, Alter Index, Online OFF
"Table locks are applied for the duration of the index operation. An offline
index operation that creates, rebuilds, or drops a clustered index, or
rebuilds or drops a nonclustered index, acquires a Schema modification
(Sch-M) lock on the table. This prevents all user access to the underlying
table for the duration of the operation. An offline index operation that
creates a nonclustered index acquires a Shared (S) lock on the table. This
prevents updates to the underlying table but allows read operations, such as
SELECT statements."
Hope this helps,
Ben Nevarez
Senior Database Administrator
AIG SunAmerica
"Hassan" wrote:

> If I don't use online indexing, will there be blocking for the entire
> duration of the index creation, be it clustered or non-clustered? I thought
> there might be an exclusive lock on the table for the entire duration, where
> even selects would be blocked... but it doesn't appear to be true.
> Please let me know
> Using SQL 2005
>
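The two offline locking behaviors quoted above, and the online alternative, can be sketched as follows (table and index names are placeholders; ONLINE = ON requires SQL Server 2005 Enterprise Edition):

```sql
-- Offline nonclustered build: Shared (S) table lock - reads allowed, writes blocked.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (ONLINE = OFF);

-- Offline clustered build: Schema modification (Sch-M) lock - all access blocked.
CREATE CLUSTERED INDEX IX_Orders_OrderId
    ON dbo.Orders (OrderId)
    WITH (ONLINE = OFF);

-- Online build: concurrent reads and writes allowed for most of the operation.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH (ONLINE = ON);
```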


Blocking when using MS DTC

Hi,
I have a blocking situation occurring that I am having trouble sorting out. It occurs when using MS DTC and using SSIS as the transaction co-ordinator. I suppose you might argue that this should be in the SSIS forum but as the blocking is occurring in SQL Server I thought I'd try here as well.

I have 2 data-flows in SSIS. One of them INSERTs to mytable, the other UPDATEs mytable. The UPDATE is getting blocked by the INSERT which holds a lock on the table (I can see this thru sp_lock). Now although this is from SSIS all you really need to know is:

-These 2 operations occur using separate connections
-I am running the 2 operations under the same MS DTC transaction
-mytable has a PK on it. If I remove the PK then my observations so far suggest that the blocking problem disappears
-The default isolation level on the DTC transaction is Serializable. I have tried various others and got the same problem
-The locks are held by SPID=-2 (which, as far as I am aware, is DTC)

Also note that:
-I can put the 2 operations in the same data-flow which means they occur under the same connection

Now, I'm not too hot on transactions and DTC etc... Is there anything I can do here to stop my blocking problem? Or am I just scuppered?

Feel free to ask questions and I'll answer with as much detail as I can. I'd appreciate any advice anyone has in resolving this.

-Jamie

P.S. Here's the thread I posted on the SSIS forum about this: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=154834&SiteID=1

It seems as though page level locking is occurring. Does anyone have any advice as to how to go about eradicating this? I understand that row-level locking is possible now in SQL Server. In theory this would solve my problem because I am not updating records that I am inserting.

-Jamie
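One thing worth trying for the page-lock contention described above; a sketch assuming the PK index is named PK_mytable (SQL Server 2005 syntax; note that lock escalation to table level can still occur):

```sql
-- Disallow page locks on the PK index so the INSERT and UPDATE flows
-- contend on individual rows rather than whole pages.
ALTER INDEX PK_mytable ON dbo.mytable
    SET (ALLOW_PAGE_LOCKS = OFF);
```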


Blocking When doing bcp

Any ideas?

I execute a sproc from QA.
It interrogates a folder:

Select @Command_String = 'Dir ' + @FilePath + '\*.txt'
Insert Into XLAT_Folder exec master..xp_cmdshell @Command_String

Which I then Parse and get the details...

Set up a CURSOR (Booo hiss...)


DECLARE XLAT_Folder CURSOR FOR
SELECT Create_Time
, File_Size
, [File_Name]
FROM XLAT_Folder_Parsed
ORDER BY [File_Name]

OPEN XLAT_Folder
FETCH NEXT FROM XLAT_Folder INTO @Create_Time, @File_Size, @File_Name

WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRAN

Then, based on the methodology that the file name must match the table and format file (I check to make sure everything is out there)

I then bcp the data in using my connection pooling id (I'm logged on as sa in qa)

SET @cmd = 'bcp ' + @db_name + '..' + SUBSTRING(@File_Name,1,CHARINDEX('.',@File_Name)-1) + ' in '
+ @FilePathAndName
+ ' -f' + 'd:\Data\Tax\' + SUBSTRING(@File_Name,1,CHARINDEX('.',@File_Name)-1) + '.fmt'
+ ' -S' + @@servername + ' -U -P'

SET @Command_string = 'EXEC master..xp_cmdshell ''' + @cmd + '''' + ', NO_OUTPUT'

INSERT INTO #XLAT_BCP_Results(Col1) Exec(@Command_String)


-- Move data file to archive
COMMIT TRAN
FETCH NEXT FROM XLAT_Folder INTO @Create_Time, @File_Size, @File_Name
END -- WHILE

A spid is launched to do the bcp... I have 4 files... On the last load, the connection-pooling-launched spid gets blocked by the sa spid that launched the sproc... It doesn't happen all the time, just most of the time...

I've put a trace on in Profiler, but I don't see anything...I've picked the event class of deadlocks...but I never see it...

When I do sp_who, it shows the blocking...

I'm so confused...
|||Can you get the table/index that has the blocking lock on it?
|||Well, the blocking spid I found in sp_who is the QA window I launched the sproc from... the blocked spid is the independent bcp thread (well, not so independent)...

There's no RI between these code tables, and just a trigger...that moves the rows to history table...

I'll check sp_lock...
|||Every single lock on the "parent" spid has a status of "grant" and a type of IX...
|||I am pretty sure there was a way of telling a blocking lock from a regular lock. I will have to do a bit of checking, but this job thing has me a bit tied up.
|||In the output of sp_lock, look for a status of "WAIT". This is the process that is attempting to get at the locked resource and, of course, must wait for the first process to get done with it. Alternatively, you can check the waitresource column in the sysprocesses table. This will also tell you what the blocked process is waiting on.
This will give you the name of the table/index that you are getting hung up on. After this, I am afraid it is going to be a slog through the code to find the actual stumbling block.
Let us know how it comes out.
|||Thanks...

Been there...

It seems that even though I expected lines inside of a sproc to be serial, it looks like the delete doesn't complete before the sproc continues to the next line of code, which is the xp_cmdshell bcp...

I find this totally unbelievable, but in every instance, the independent xpshell thread (which is executed with a different login) is the one that is being blocked. And it is being blocked by the spid that executed the sproc...

With me so far...

I then placed the COMMIT immediately after the DELETE, and voilà!

No more problems...now tell me...are the lines in a sproc serial?

I always thought they were, and I'll be damned if I believe any differently, but I have NO other explanation.

Thanks for everyone's help.

WORK AROUND ...HO!
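The work-around can be sketched like this (table, database, and path names are placeholders for illustration):

```sql
-- Commit the transaction that held the locks *before* shelling out to bcp,
-- so the separate connection spawned by xp_cmdshell is not blocked by
-- locks still held by this spid.
BEGIN TRAN
DELETE FROM XLAT_Stage WHERE [File_Name] = @File_Name  -- placeholder table
COMMIT TRAN  -- releases the locks; bcp runs on its own connection

DECLARE @cmd varchar(500)
SET @cmd = 'bcp MyDb..XLAT_Stage in ' + @FilePathAndName
         + ' -fd:\Data\Tax\XLAT_Stage.fmt -S' + @@servername + ' -T'
EXEC master..xp_cmdshell @cmd, NO_OUTPUT
```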

Blocking updates of a table

Hi,
I've been trying to design a way for me to issue a transaction that:
1. Blocks all inserts on a table when row X has a certain value (call it A)
2. Adds a row to the table with row X containing A
3. Adds rows to another table
4. Unblocks inserts
5. Commits the transaction
Is this possible? Can anyone give me some pointers as to what to do?
Thanks in advance!

For (1) and (2)
While inserting the row that has a certain value (A), you can request a TABLOCKX. This will acquire X lock on the table and there by block other inserts by other concurrent transactions.

For (3): Is it like any other insert to another table? In that case, nothing special needs to be done

Thanks
Sunil Agarwal
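Sunil's suggestion for (1) and (2) can be sketched as follows (table and column names are placeholders):

```sql
BEGIN TRANSACTION;

-- TABLOCKX acquires an exclusive (X) table lock, so concurrent inserts
-- (and all other access) block until this transaction commits.
INSERT INTO dbo.MyTable WITH (TABLOCKX) (X) VALUES ('A');

-- (3) Ordinary inserts into another table within the same transaction.
INSERT INTO dbo.OtherTable (Col1) VALUES (N'related row');

COMMIT TRANSACTION;  -- releases the table lock, unblocking other inserts
```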

|||For 3 that is what I meant.
For 1 and 2, it sounds like TABLOCKX locks the whole table exclusively (from what I can find in Books Online) - have I read it correctly? Is there any way to block other transactions from inserting rows (while the first transaction is not yet committed) when a table key column (A in my original post) is a certain value (X in my original post), rather than just locking the whole table?
Thanks for your help!
For 1 and 2, it sounds like TABLOCKX locks the whole table exclusively (from what I can find in Books Online) - have I read it correctly?

sunil> yes

Is there any way to block other transactions from inserting rows (while the first transaction is not yet commited) when a table key column (A in my original post) is a certain value (X in my original post) - rather than just locking the whole table?

sunil> If you only want to block inserts by other transactions under the condition you have mentioned, but allow updates/selects, then there is no way.
thanks,
|||I assume you only want to block Inserts but not Updates, Deletes, and SELECTs.

You could try using an Insert trigger that would roll back the other transactions until a condition has been achieved.

Blocking UDP Port 1434

In order to protect from 'slammer' kind of virus, we are
planning to block UDP Port 1434 on all gateways. Named
instances seem to rely on UDP Port 1434. Since TCP port
the sql instance is listening on can be re-assigned, Can
we reassign the UDP port 1434 as well? Any input will be
appreciated. Thank you!

No. You can use client aliases to enable clients to resolve named instance
network endpoints
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
"Wayne" <anonymous@.discussions.microsoft.com> wrote in message
news:15ca401c41699$7b2a65c0$a301280a@.phx
.gbl...
> In order to protect from 'slammer' kind of virus, we are
> planning to block UDP Port 1434 on all gateways. Named
> instances seem to rely on UDP Port 1434. Since TCP port
> the sql instance is listening on can be re-assigned, Can
> we reassign the UDP port 1434 as well? Any input will be
> appreciated. Thank you!
>
|||We are talking about thousands of clients and I am not
sure all applications support the use of aliases. Is there
an easy way out? Will the use of a Proxy Server be a better
choice? Thanks for your input!

|||Not sure if this is of use to you. But still... You can configure SQL Server
to Listen on multiple ports. Refer to the KB article :
http://support.microsoft.com/defaul...kb;en-us;294453 for further
information.
HTH,
Vinod Kumar
MCSE, DBA, MCAD, MCSD
http://www.extremeexperts.com
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp
"Wayne" <anonymous@.discussions.microsoft.com> wrote in message
news:15eb901c416ab$59b613a0$a501280a@.phx
.gbl...
> We are talking about thousands of clients and I am not
> sure all applications support the use of aliases. Is there
> an easy way out? Will the use of Proxy Server be a better
> choice? Thanks for your input!

Blocking UDP Port 1434

In order to protect from 'slammer' kind of virus, we are
planning to block UDP Port 1434 on all gateways. Named
instances seem to rely on UDP Port 1434. Since TCP port
the sql instance is listening on can be re-assigned, Can
we reassign the UDP port 1434 as well? Any input will be
appreciated. Thank you!

Don't know that you can change the SQL Server Resolution
Service running on UDP port 1434. However, you can still
connect without using the resolution service as long as the
client knows the port to go through. You don't HAVE
to go through UDP 1434 to connect to a named instance.
-Sue
On Tue, 30 Mar 2004 12:55:25 -0800, "Wayne"
<anonymous@discussions.microsoft.com> wrote:

>In order to protect from 'slammer' kind of virus, we are
>planning to block UDP Port 1434 on all gateways. Named
>instances seem to rely on UDP Port 1434. Since TCP port
>the sql instance is listening on can be re-assigned, Can
>we reassign the UDP port 1434 as well? Any input will be
>appreciated. Thank you!
|||Thank you, Sue, for the input. I thought for named
instances, tcp ports can be coded on server and clients,
but NOT UDP port 1434. The following is what I get from
BOL. If it can be done without using UDP port 1434, how
should I configure the client to go through gateways?
'Have the network administrator configure the firewall to
forward the IP address and TCP port the instance of SQL
Server is listening on (using either 1433 for a default
instance, or the TCP port you configured a named instance
to listen on). Also configure the firewall to forward
requests for UDP port 1434 on the same IP address. SQL
Server 2000 uses UDP port 1434 to establish communications
links from applications.'
--

|||You can use the client network utility to set up an alias and specify the
port or you can specify it in a connection string
HTH
Jasper Smith (SQL Server MVP)
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
<anonymous@.discussions.microsoft.com> wrote in message
news:115da01c416a0$77cbcd40$a001280a@.phx.gbl...
> Thank you, Sue, for the input. I thought for named
> instances, tcp ports can be coded on server and clients,
> but NOT UDP port 1434. The following is what I get from
> BOL. If it can be done without using UDP port 1434, how
> should I configure the client to go through gateways?
> --
> 'Have the network administrator configure the firewall to
> forward the IP address and TCP port the instance of SQL
> Server is listening on (using either 1433 for a default
> instance, or the TCP port you configured a named instance
> to listen on). Also configure the firewall to forward
> requests for UDP port 1434 on the same IP address. SQL
> Server 2000 uses UDP port 1434 to establish communications
> links from applications.'
> --
>
|||Like Jasper said...you can code the port in the connection
or use an alias
Just to add a little clarification for you,
UDP 1434 is used to find the port number for the instance.
It's not used by the SQL Server instance or directly by
clients to connect to SQL Server. It's just to enumerate the
instances on a server and find the listening port for the
specific instance.
If you try to connect to SomeServer\MyNamedInstance and
that's what you specify for the connection, it hits UDP 1434
to use the SQL Server Resolution Service to find what port
number SomeServer\MyNamedInstance is listening on. You can
bypass that by specifying the port yourself and then there
is no need to go through UDP 1434.
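As an illustration (the port 2433 and database name here are made up; "server,port" in Data Source is the usual MDAC convention), a client that names the port directly never needs UDP 1434:

```
Provider=SQLOLEDB;Data Source=SomeServer,2433;Initial Catalog=pubs;Integrated Security=SSPI
```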
-Sue
On Tue, 30 Mar 2004 13:46:33 -0800,
<anonymous@.discussions.microsoft.com> wrote:
>Thank you, Sue, for the input. I thought for named
>instances, tcp ports can be coded on server and clients,
>but NOT UDP port 1434. The following is what I get from
>BOL. If it can be done without using UDP port 1434, how
>should I configure the client to go through gateways?
>--
>'Have the network administrator configure the firewall to
>forward the IP address and TCP port the instance of SQL
>Server is listening on (using either 1433 for a default
>instance, or the TCP port you configured a named instance
>to listen on). Also configure the firewall to forward
>requests for UDP port 1434 on the same IP address. SQL
>Server 2000 uses UDP port 1434 to establish communications
>links from applications.'
>--
>
|||THANK YOU for the clarification. I guess there is no easy
way out but to hardcode each client with a specific port
number if UDP 1434 is blocked. It would be nice if the
enumeration port UDP 1434 can be changed...

>--Original Message--
>Like Jasper said...you can code the port in the connection
>or use an alias
>Just to add a little clarification for you,
>UDP 1434 is used to find the port number for the instance.
>It's not used by the SQL Server instance or directly by
>clients to connect to SQL Server. It's just to enumerate the
>instances on a server and find the listening port for the
>specific instance.
>If you try to connect to SomeServer\MyNamedInstance and
>that's what you specify for the connection, it hits UDP 1434
>to use the SQL Server Resolution Service to find what port
>number SomeServer\MyNamedInstance is listening on. You can
>bypass that by specifying the port yourself and then there
>is no need to go through UDP 1434.
>-Sue
>On Tue, 30 Mar 2004 13:46:33 -0800,
><anonymous@.discussions.microsoft.com> wrote:
>
>.
I have two laptops, and I access the server as SomeServer\MyNamedInstance. One
laptop goes directly to port 1433; the other one goes to port 1434 first
and fails to connect. Where is the option which makes the second laptop go
to port 1434 first?
Posted using Wimdows.net NntpNews Component -
Post Made from http://www.SqlJunkies.com/newsgroups Our newsgroup engine supports Post Alerts, Ratings, and Searching.

Blocking Transactions

Hi Guys...

Another issue, this time regarding transactions.
It seems that blocking occurs very often when processes are being put into transactions.

For example, I have two tables, tableA and tableB.

I create a sequence container, and inside it, I put two tasks:
1. Execute SQL Task - this inserts data into tableA.
2. Data Flow Task - this moves data from tableA to tableB.
I set the sequence container's TransactionOption to Required.

When I run the package, it always gets stuck at task 2.
I tried running the SP_WHO2 command, and it shows that it's suspended, being blocked by id "-2".
Any idea to solve this?

Thanks.

Cheers,

Ryan Tan|||Since the transaction is still active when #2 runs, #1 probably takes exclusive (X) locks that prevent #2 from reading the data. It looks like you either need to commit the transaction after #1 is done, or change the connection manager to allow a less restrictive isolation level.
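Incidentally, a blocking id of -2 normally denotes an orphaned DTC (distributed) transaction rather than a regular session. A sketch of how to track one down from Query Analyzer (the GUID passed to KILL is illustrative; use the value the query actually returns):

```sql
-- Sessions blocked by the orphaned distributed transaction show blocked = -2
SELECT spid, blocked, lastwaittype, status
FROM master..sysprocesses
WHERE blocked = -2;

-- Find the Unit of Work (UOW) GUID of the orphaned transaction
SELECT DISTINCT req_transactionUOW
FROM master..syslockinfo
WHERE req_spid = -2;

-- KILL accepts a UOW GUID for orphaned distributed transactions
-- (replace the GUID below with the one returned above)
KILL 'B6C38AF8-768F-4CD5-9B1A-8FDBB5B0AE2C';
```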

Blocking Transaction

I have 3 simple packages called from a master package. The first package populates the time dimension (stored proc), the second populates the transaction type dimension (stored proc), and the final one populates the date dimension.

I set TransactionOption = Required in the master package; everything else (packages & components) has TransactionOption = Supported.

I have been testing transactions: I made the time and transaction type dimensions fail, and the package did roll back. I am stuck on the date dimension.

Within the date dimension I have a stored procedure which populates calendar date attributes into the Wrk_Date table. Then I have a data flow task which reads from that working table, but it can't access it. I tried running the SP_WHO2 command, and the status is SUSPENDED, blocked by id -2.

I saw someone had a similar problem and I did read Ash's comments. I tried changing the isolation level; it didn't help.
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=74372&SiteID=1

Any thoughts please?

Thanks

Sutha

Sutha,

I had enormous problems with this and eventually just dropped the use of distributed transactions altogether. In some cases it simply wouldn't work because of the way DTC holds onto resources. Since then I have used checkpoint files and it works just fine.

-Jamie

|||

Jamie

Thanks for the info, but it is annoying. Do you know if this is going to be fixed at any point?

The client is not happy having a checkpoint. They want to roll back the whole dimension load. For example, if we have got 6 dimensions and 5 of them loaded fine but the 6th one failed, they want to roll back the whole lot. What would be the solution in that scenario, please, if we can't use TransactionOption correctly?

Thanks

Sutha

|||

I don't think it's a case of it being fixed - it's just "the way it is". DTC isn't doing anything wrong - it's doing exactly what you've told it to, i.e. maintaining a lock on a table because of a running transaction.

It's a difficult one to broach with the client, I can see that. Maybe you could restructure your package so that the locks don't occur.

-Jamie

|||

Jamie

Thanks. I was going through it step by step and identified the reason for the locks. During step 2, if the date table exists in the stage database it TRUNCATEs the table, else it creates the table. The TRUNCATE statement seems to hold a lock and not release it further down the line. I have managed to get the rollback to work after removing the TRUNCATE.
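To illustrate what was likely happening (a sketch; the Wrk_Date name comes from this thread, the schema and the second connection are illustrative): TRUNCATE TABLE takes a schema-modification (Sch-M) lock that is held until the surrounding transaction ends.

```sql
-- Connection 1: inside the package's transaction
BEGIN TRAN;
TRUNCATE TABLE dbo.Wrk_Date;   -- Sch-M lock held until COMMIT/ROLLBACK
-- ... repopulate the table ...

-- Connection 2 (e.g. the data flow task, on its own connection):
-- this blocks until connection 1 commits or rolls back
SELECT * FROM dbo.Wrk_Date;
```

Dropping the TRUNCATE (or replacing it with a DELETE, which takes ordinary data locks) avoids holding the schema lock across the whole transaction, which is consistent with what was found above.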

Thanks

Sutha

Blocking Threshold Exceeded

I have a site that calls a stored procedure to populate data on an ASP page.
50% of the time when I access this page I get a timeout error. I looked in
the Event Viewer and noticed MSSQLSERVER errors with this description:
Error: 50001, Severity: 16, State: 1
Blocking Threshold Exceeded, Threshold at 3. Total Blockers at 44
The total blockers number changes occasionally but I don't know what this
means and how to fix it. I am not doing anything complex with the stored
procedure. It is a select statement which joins multiple tables. No
inserts, updates, or deletes are being done.
Does anyone have an idea how to correct this?
Thanks.|||Even though the procedure that performs the query is not
inserting/updating/deleting, is any other process doing so? When the query
actually completes, what is its average and maximum runtime? Incidentally,
error numbers of 50000 and above are user-defined, so the "Blocking Threshold
Exceeded" message comes from a custom alert or monitoring job on your server;
it is reporting the blocking, not causing it.
Also, the following article describes various methods for identifying what
processes are blocked, what process is doing the blocking, and what specific
T-SQL statement is blocking.
INF: Understanding and Resolving SQL Server 7.0 or 2000 Blocking Problems
http://support.microsoft.com/defaul...kb;EN-US;224453
If this is a reporting type query against data potentially with uncommitted
transactions, then you may want to consider using "set transaction isolation
level read uncommitted", so long as you understand how this can impact the
results. Read up on this in Books Online.
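A sketch of what that looks like (the table and column names are invented stand-ins for the real join in your procedure):

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- The reporting join no longer waits on writers' locks,
-- but it can read uncommitted ("dirty") rows.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- The per-table equivalent is the NOLOCK hint:
-- SELECT ... FROM dbo.Orders WITH (NOLOCK)
```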
"Jason" <Jason@.discussions.microsoft.com> wrote in message
news:A1346683-78D7-40CE-9F7F-3AE8521D392F@.microsoft.com...
>I have a site that calls a stored procedure to populate data on an ASP
>page.
> 50% of the time when I access this page I get a timeout error. I looked
> in
> the Event Viewer and noticed MSSQLSERVER errors with this description:
> Error: 50001, Severity: 16, State: 1
> Blocking Threshold Exceeded, Threshold at 3. Total Blockers at 44
> The total blockers number changes occasionaly but I don't know what this
> means and how to fix it. I am not doing anything complex with the stored
> procedure. It is a select statement which joins multiple tables. No
> inserts, updates, or deletes are being done.
> Does anyone have an idea how to correct this?
> Thanks.
>

Blocking Telnet command for SQL Server Detection 1433/1434

Any ideas how I can block telnet connections to SQL Server ports?

The only way is to turn off TCP/IP connections for SQL Server on the server.

HTH, jens SUessmeyer.

http://www.sqlserver2005.de

Blocking SQL server by machine name?

Hello,
I've Windows 2000 server with SQL 2000 server running.
I have a SQL user (let's call it miniSa) which is effectively "sa" on one SQL box.
And that account is used all over the place (VB apps, web apps, DTS
connections). Now I know that a certain person shouldn't have access to the SQL box, but
he does, using that account. The only way I can track him down is from SQL Profiler,
by his machine name.
Is there a way that I can block access to the SQL box from a specific machine name?
Thank you in advance.
SangHun"SangHunJung" <SangHunJung@.discussions.microsoft.com> wrote in message
news:F23A84F9-6350-4BE8-B002-62A9BD320BCB@.microsoft.com...
> I've Windows 2000 server with SQL 2000 server running.
> I have a SQL user (let's call miniSa) which is mostly "sa" on one SQL box.
> And that account is used to all over the places (VB apps,Web app, DTS
> connection). Now I know that a person shouldn't be access to the SQL box but
> he does using the account. Only way I can track down him is from SQL profile
> with his machine name.
> Is there a way that I can block the SQL box only from a specific machine name?
It's kind of ugly, but you could do TCP/IP filtering on the server level and
block the IP address of the computer that your "SQL user" uses. A better way
would be a review of the security implementation, eliminate this commonly
used account and implement Windows Authentication with nt group membership.
Steve|||Thanks for the reply Steve.
Using TCP/IP filtering is not an option because of the DHCP server. I could use the
MAC address, but that way the user may not be able to use any of the apps on the server,
and I don't want that to happen either. I just want SQL Server database access
denied from that PC.
I will work on the whole problem, but I need some time; of course I am also running
several projects, supporting developers, and handling admin issues......tough.
Any other suggestions?
SangHun
"Steve Thompson" wrote:

> "SangHunJung" <SangHunJung@.discussions.microsoft.com> wrote in message
> news:F23A84F9-6350-4BE8-B002-62A9BD320BCB@.microsoft.com...
> but
> profile
> name?
> It's kind of ugly, but you could do TCP/IP filtering on the server level and
> block the IP address of the computer that your "SQL user" uses. A better way
> would be a review of the security implementation, eliminate this commonly
> used account and implement Windows Authentication with nt group membership.
> Steve
>
>|||
> I will work on the whole problem but I need some time and ofcourse runing
> several projects, support developers, and admin issues......tough.
> Any other suggestions?
Yes, I still recommend my previous suggestion, some times there are no
+easy+ solutions. Sorry.
Steve
|||You can use IPSec to block a particular machine from contacting another
machine on the network.
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.

Blocking SPID ... no apparent reason

Hi,
I have a Java application that updates a SQL 2000 (SP3a) database.
The application handles different types of "jobs" which effectively update the DB.

One job in particular appears to block all subsequent jobs. It comprises a large number of inserts/updates in a single transaction. This is necessary as it is an "all or nothing" scenario - so we cannot break the transaction into smaller ones. The transaction appears to succeed as it reaches the COMMIT TRAN statement without error.
However the records do not get written to the database.
EM indicates a large number of locks held on the tables accessed by the transaction and these do not get released.

Using the SP sp_blocker_pss80, the blocking SPID has a waittime of 0 and a waittype of 0x0000 - the lastwaittype is WRITELOG and its status is AWAITING COMMAND

I am using MS SQLSERVER JDBC Driver SP2 (considering using jTDS)

I have tried
- increasing Transaction Log size
- Moving Transaction Log to a separate Disk
- Reducing Isolation Mode to Read Uncommitted
- Set AutoCOMMIT to true
- set Close Cursor on COMMIT
- set SelectMethod to Direct - (we use Cursor by default)

None of these have succeeded in fixing the issue.

The job will succeed if it is the first/only job to access the database.
But if another job precedes it - then the blocking occurs.
I have verified that the preceding job only holds shared database locks
before the blocking job is run.

Each job will use its own JDBC connections to access the database for reading
purposes, but all of the writing goes through the blocking SPID.

Any ideas?
Thanks, Liam|||If you close the JDBC connection (when the transaction is complete), does that fix the problem?

-PatP|||I wonder if it is not a deadlock situation. By default deadlocks are not logged. Try running the following:

dbcc traceon (-1, 1205)

then run the big update process. This should start logging deadlock information to the SQL Errorlog.

Except for the data not being written at the end, you have described exactly what locking is designed to do. While someone is writing data to the database, no one else can read that data until they are done. You can potentially try to reduce table locks by checking that the update process is using an appropriate index. Try running the Index Tuning Wizard, and see if it makes any suggestions. This should, of course, be done on a test box first.|||Hi,
Thanks for replies.
1) Connections cannot be terminated after batch jobs complete due to nature of the application

2) DBCC TRACEON resulted in a number of the following entries in the SQL Log

Starting deadlock search 5306
Target Resource Owner:
ResType:LockOwner Stype:'OR' Mode: S SPID:57 ECID:0 Ec:(0x484D9510) Value:0x4b0eb320
Node:1 ResType:LockOwner Stype:'OR' Mode: S SPID:57 ECID:0 Ec:(0x484D9510) Value:0x4b0eb320
Node:2 ResType:LockOwner Stype:'OR' Mode: S SPID:53 ECID:0 Ec:(0x42ECD510) Value:0x4b103b00
End deadlock search 5306 ... a deadlock was not found.

3) Index tuner - haven't tackled it yet|||I thought 1205 gave you information about deadlocks in progress. I was apparently wrong. Try this:

dbcc traceoff (-1, 1205)
go
dbcc traceon (-1, 1204)
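While waiting for a deadlock to be logged, it is also worth checking from another connection whether the writer SPID really still has an open transaction (a sketch; run it in the database being updated):

```sql
-- Reports the oldest active transaction in the current database,
-- including its SPID and start time
DBCC OPENTRAN;

-- On the suspect connection itself: a non-zero value means
-- an uncommitted transaction is still open
SELECT @@TRANCOUNT;
```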

Blocking seen when two filegroups are backed up in parallel

When I backup two filegroups in parallel I see one backup
blocking the other.
eg two filegroups consisting of one data file each. Both
data files live on the f drive. One filegroup backup
writes to the h drive, the other to the i drive. Running
the backups in parallel results in one backup blocking the
other.
I suspect this is because both my data files live on the f
drive. Can anyone confirm this as the problem ?
Thanks.|||It's probably more to do with the fact that they share the same log file and
backups need access to the logs and will put certain locks on them that
prevents some things from happening in parallel like that.
--
Andrew J. Kelly
SQL Server MVP
"anon" <anonymous@.discussions.microsoft.com> wrote in message
news:0db801c39f02$40fa1cb0$a401280a@.phx.gbl...
> When I backup two filegroups in parallel I see one backup
> blocking the other.
> eg two filegroups consisting of one data file each. Both
> data files live on the f drive. One filegroup backup
> writes to the h drive, the other to the i drive. Running
> the backups in parallel results in one backup blocking the
> other.
> I suspect this is because both my data files live on the f
> drive. Can anyone confirm this as the problem ?
> Thanks.
>|||Andrew - thanks for the reply. I think you're right.
Despite what SQL 2000 Books Online says under the
section 'Optimizing Database, Differential Database, and
File Backup Performance', I never see parallel backups.
I've now placed my two data files on different logical
drives and create the backups on two other logical drives
and still I get one backup blocked by the other.
Seems like this particular performance benefit of using
file groups doesn't really exist :( Again, nice if someone
could confirm this.
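For reference, the statements in question look something like this (database, filegroup, and file names are illustrative; each BACKUP is issued from its own connection):

```sql
-- Connection 1
BACKUP DATABASE MyDb
FILEGROUP = 'FG1'
TO DISK = 'h:\backup\MyDb_FG1.bak';

-- Connection 2, started at the same time
BACKUP DATABASE MyDb
FILEGROUP = 'FG2'
TO DISK = 'i:\backup\MyDb_FG2.bak';
```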
>--Original Message--
>It's probably more to do with the fact they share the same log file and
>backups need access to the logs and will put certain locks on them that
>prevents some things from happening in parallel like that.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"anon" <anonymous@.discussions.microsoft.com> wrote in message
>news:0db801c39f02$40fa1cb0$a401280a@.phx.gbl...
>> When I backup two filegroups in parallel I see one backup
>> blocking the other.
>> eg two filegroups consisting of one data file each. Both
>> data files live on the f drive. One filegroup backup
>> writes to the h drive, the other to the i drive. Running
>> the backups in parallel results in one backup blocking the
>> other.
>> I suspect this is because both my data files live on the f
>> drive. Can anyone confirm this as the problem ?
>> Thanks.
>
>.
>|||It can do "Parallel" backups as long as they aren't to the same database.
They are referring to backing up multiple db's at the same time.
--
Andrew J. Kelly
SQL Server MVP
"anon" <anonymous@.discussions.microsoft.com> wrote in message
news:08e001c39fa9$ea087ca0$a501280a@.phx.gbl...
> Andrew - thanks for the reply. I think you're right.
> Despite what SQL 2000 Books Online says under the
> section 'Optimizing Database, Differential Database, and
> File Backup Performance', I never see parallel backups.
> I've now placed my two data files on different logical
> drives and create the backups on two other logical drives
> and still I get one backup blocked by the other.
> Seems like this particular performance benefit of using
> file groups doesn't really exist :( Again, nice if someone
> could confirm this.
> >--Original Message--
> >It's probably more to do with the fact they share the same log file and
> >backups need access to the logs and will put certain locks on them that
> >prevents some things from happening in parallel like that.
> >
> >--
> >
> >Andrew J. Kelly
> >SQL Server MVP
> >
> >
> >"anon" <anonymous@.discussions.microsoft.com> wrote in message
> >news:0db801c39f02$40fa1cb0$a401280a@.phx.gbl...
> >> When I backup two filegroups in parallel I see one backup
> >> blocking the other.
> >>
> >> eg two filegroups consisting of one data file each. Both
> >> data files live on the f drive. One filegroup backup
> >> writes to the h drive, the other to the i drive. Running
> >> the backups in parallel results in one backup blocking the
> >> other.
> >>
> >> I suspect this is because both my data files live on the f
> >> drive. Can anyone confirm this as the problem ?
> >>
> >> Thanks.
> >>
> >
> >
> >.
> >