
Sunday, March 11, 2012

Blocking when using MS DTC

Hi,
I have a blocking situation occurring that I am having trouble sorting out. It occurs when using MS DTC and using SSIS as the transaction co-ordinator. I suppose you might argue that this should be in the SSIS forum but as the blocking is occurring in SQL Server I thought I'd try here as well.

I have 2 data-flows in SSIS. One of them INSERTs to mytable, the other UPDATEs mytable. The UPDATE is getting blocked by the INSERT which holds a lock on the table (I can see this thru sp_lock). Now although this is from SSIS all you really need to know is:

-These 2 operations occur using separate connections
-I am running the 2 operations under the same MS DTC transaction
-mytable has a PK on it. If I remove the PK then my observations so far suggest that the blocking problem disappears
-The default isolation level on the DTC transaction is Serializable. I have tried various others and got the same problem
-The locks are held by SPID=-2 (which, as far as I am aware, is DTC)

Also note that:
-I can put the 2 operations in the same data-flow which means they occur under the same connection

Now, I'm not too hot on transactions and DTC etc... Is there anything I can do here to stop my blocking problem? Or am I just scuppered?

Feel free to ask questions and I'll answer with as much detail as I can. I'd appreciate any advice anyone has on resolving this.

-Jamie

P.S. Here's the thread I posted on the SSIS forum about this: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=154834&SiteID=1

It seems as though page-level locking is occurring. Does anyone have any advice on how to go about eradicating it? I understand that row-level locking is now possible in SQL Server. In theory this would solve my problem because I am not updating the records that I am inserting.
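For what it's worth, one thing that might be worth trying (a sketch only; I'm assuming SQL Server 2000, and PK_mytable is a placeholder for whatever the PK index is actually called) is disallowing page locks on the PK index so the engine has to take row locks instead:

```sql
-- PK_mytable is a placeholder; substitute the actual name of the PK index.
-- With page locks disallowed, the engine uses row locks (or escalates
-- straight to a full table lock) on this index.
EXEC sp_indexoption 'mytable.PK_mytable', 'DisAllowPageLocks', TRUE
```

Note that under the Serializable isolation level key-range locks can still block neighbouring inserts, so this may not be a complete fix on its own.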

-Jamie

Blocking updates of a table

Hi,
I've been trying to design a way for me to issue a transaction that:
    1. Block all inserts on a table when row X has a certain value (call it A)
    2. Add a row to the table with row X containing A
    3. Add rows to another table
    4. Unblock inserts
    5. Commit the transaction
Is this possible? Can anyone give me some pointers as to what to do?
Thanks in advance!

For (1) and (2)
While inserting the row that has a certain value (A), you can request a TABLOCKX hint. This will acquire an X lock on the table and thereby block inserts by other concurrent transactions.
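A minimal sketch of what that looks like (the table and column names here are placeholders, not from the original question):

```sql
BEGIN TRANSACTION

-- (1) and (2): insert the row with the special value, taking an exclusive
-- table lock that blocks other transactions' inserts until we commit
INSERT INTO dbo.MyTable WITH (TABLOCKX) (X, OtherCol)
VALUES ('A', 'some value')

-- (3): add rows to another table inside the same transaction
INSERT INTO dbo.OtherTable (SomeCol)
VALUES ('related value')

-- (4) and (5): committing releases the lock, unblocking other inserts
COMMIT TRANSACTION
```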

For (3): Is it like any other insert to another table? In that case, nothing special needs to be done

Thanks
Sunil Agarwal

|||For 3 that is what I meant.
For 1 and 2, it sounds like TABLOCKX locks the whole table exclusively (from what I can find in Books Online) - have I read it correctly? Is there any way to block other transactions from inserting rows (while the first transaction is not yet commited) when a table key column (A in my original post) is a certain value (X in my original post) - rather than just locking the whole table?
Thanks for your help!
|||For 1 and 2, it sounds like TABLOCKX locks the whole table exclusively (from what I can find in Books Online) - have I read it correctly

sunil> yes

Is there any way to block other transactions from inserting rows (while the first transaction is not yet committed) when a table key column (X in my original post) has a certain value (A in my original post) - rather than just locking the whole table?

sunil> if you only want to block inserts by other transactions under the condition you have mentioned but allow updates/selects, then there is no way.
thanks,|||I assume you only want to block Inserts but not Updates, Deletes and SELECTS.

You could try using an Insert trigger that would rollback the other transactions until a condition has been achieved.
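A rough sketch of such a trigger (the table name is made up; the column X and value 'A' are the placeholders from the original question):

```sql
CREATE TRIGGER trg_BlockInserts ON dbo.MyTable
FOR INSERT
AS
-- If a blocking row (X = 'A') already exists, and this insert is not the
-- one that created it, refuse the insert by rolling the transaction back.
IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE X = 'A')
   AND NOT EXISTS (SELECT 1 FROM inserted WHERE X = 'A')
BEGIN
    RAISERROR ('Inserts are blocked while a row with X = ''A'' exists.', 16, 1)
    ROLLBACK TRANSACTION
END
```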

Blocking Transaction

I have 3 simple packages that get called from a master package. The first package populates the time dimension (stored proc), the second populates the transaction type dimension (stored proc) and the final one populates the date dimension.

I set TransactionOption = Required in the master package; everything else (package & component) is set to TransactionOption = Supported.

I have been testing transactions; basically I made the time and transaction type dimensions fail and the package did roll back. I am stuck on the date dimension.

Within the date dimension I've got a stored procedure which populates calendar date attributes into a Wrk_Date table. Then I have a data flow task which reads from that working table, but it can't access it. I tried running the SP_WHO2 command, and the status is SUSPENDED, blocked by id -2.

I saw someone had a similar problem and I did read Ash's comments. I did try to change the isolation level, but it didn't help.
http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=74372&SiteID=1

Any thoughts please?

Thanks

Sutha

Sutha,

I had enormous problems with this and eventually just dropped the use of distributed transactions altogether. In some cases it simply wouldn't work because of the way DTC holds onto resources. Since then I have used checkpoint files and it works just fine.

-Jamie

|||

Jamie

Thanks for the info, but it is annoying. Do you know if this is going to be fixed at any point?

The client is not happy having a checkpoint. They want to roll back the whole dimension load. For example, if we have got 6 dimensions and 5 of them loaded fine and the 6th one failed, then they want to roll back the whole lot. What would be the solution in that scenario please, if we can't use TransactionOption correctly?

Thanks

Sutha

|||

I don't think it's a case of it being fixed - it's just "the way it is". DTC isn't doing anything wrong - it's doing exactly what you've told it to, i.e. maintaining a lock on a table because of a running transaction.

It's a difficult one to broach with the client, I can see that. Maybe you could restructure your package so that the locks don't occur.

-Jamie

|||

Jamie

Thanks. I was going through step by step and identified the reason for the locks. During step 2, if the date table exists in the stage database it TRUNCATEs the table, else it creates the table. The TRUNCATE statement seems to hold a lock that is not released further down the line. I have managed to get the rollback to work after removing the TRUNCATE.
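For anyone hitting the same thing, a sketch of the workaround (the column definition is made up, since the real table definition isn't shown above):

```sql
-- TRUNCATE TABLE takes locks that the distributed transaction holds
-- until commit; a plain logged DELETE avoids that particular problem
-- and rolls back cleanly with the rest of the transaction.
IF EXISTS (SELECT 1 FROM sysobjects WHERE name = 'Wrk_Date' AND xtype = 'U')
    DELETE FROM dbo.Wrk_Date
ELSE
    CREATE TABLE dbo.Wrk_Date (CalendarDate datetime NOT NULL)
```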

Thanks

Sutha

Thursday, March 8, 2012

Blocking and Transactions OK?

I'm wondering if I'm doing this right. Wondering about the transactions
and error handling. (Do I even need to put BEGIN TRANSACTION AND
COMMIT TRANSACTION in there?)

I think that this sproc is causing occasional blocking:

Alter Procedure sprocINSERTSTUFF
@Col1Data int = Null,
@Col2Data nvarchar(255) = Null,
@Col3Data ntext = Null,
@UniqueID int OUTPUT

AS

set nocount on
set xact_abort on

DECLARE @err int
DECLARE @ServerDate DateTime
SELECT @ServerDate = GETUTCDATE()

BEGIN TRANSACTION

INSERT INTO
tblStuff (Col1, Col2, Col3, DateCreated, etc.)
VALUES
(@Col1Data, @Col2Data, @Col3Data, @ServerDate, etc.)

SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN
@err END

SELECT @UniqueID = SCOPE_IDENTITY()

COMMIT TRANSACTION

BEGIN TRANSACTION

INSERT INTO
tblMoreStuff (UniqueID, DateCreated, Col1, Col2, Col3)
Values
(@UniqueID, @ServerDate, @Col1Data, @Col2Data, 'Text Label: ' +
isnull(Cast(@Col3Data AS nvarchar(4000)),'<none>'))

SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN
@err END

COMMIT TRANSACTION
SELECT @err = @@error IF @err <> 0 RETURN @err

laurenq uantrell (laurenquantrell@hotmail.com) writes:
> I'm wondering if I'm doing this right. Wondering about the transactions
> and error handling. (Do I even need to put BEGIN TRANSACTION AND
> COMMIT TRANSACTION in there?)

Depends on your business requirements. If it's OK that a row gets
inserted into tblStuff, but not any row in tblMoreStuff, you can take
it out entirely. If you want both rows or none of the rows inserted,
you should take out the COMMIT and the BEGIN in the middle, to make
it one single transaction.
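In outline, keeping the original procedure's error-handling style, the all-or-nothing version of the body would look like this:

```sql
BEGIN TRANSACTION

INSERT INTO tblStuff (Col1, Col2, Col3, DateCreated)
VALUES (@Col1Data, @Col2Data, @Col3Data, @ServerDate)

SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

SELECT @UniqueID = SCOPE_IDENTITY()

-- No COMMIT/BEGIN here: both inserts now live in one transaction,
-- so a failure in the second insert rolls back the first as well.
INSERT INTO tblMoreStuff (UniqueID, DateCreated, Col1, Col2, Col3)
VALUES (@UniqueID, @ServerDate, @Col1Data, @Col2Data,
        'Text Label: ' + isnull(Cast(@Col3Data AS nvarchar(4000)), '<none>'))

SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN @err END

COMMIT TRANSACTION
```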

> I think that this sproc is causing occasional blocking:

Since it's only two plain insert statements, that's a bit surprising.
But if there is a long-running trigger on one of the tables you could
get blocking.

> @Col3Data ntext = Null,
>...
> 'Text Label: ' +
> isnull(Cast(@Col3Data AS nvarchar(4000)),'<none>')
>...

What's the point with accepting an ntext parameter, if you truncate it
anyway?

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland,
Thanks for your reply.

> @Col3Data ntext = Null,
>...
> 'Text Label: ' +
> isnull(Cast(@Col3Data AS nvarchar(4000)),'<none>')
>...

>What's the point with accepting an ntext parameter, if you truncate it anyway?

The second table is used to hold a history of changes to the first
table tblStuff and in this particular case it's not necessary (or
desirable) to have the unlimited text in any ntext column stored, so
it's truncated.|||I forgot to mention in my front end application (MS Access 2K) I use
the following:

On Error GoTo myErr:
' Execute the sproc that runs the Insert above
myErr:
CurrentProject.Connection.Execute "IF @@trancount > 0 ROLLBACK
TRANSACTION", , adExecuteNoRecords

I am wondering if this should be sufficient to not leave an open
transaction in the event of an error.|||laurenq uantrell (laurenquantrell@.hotmail.com) writes:
> I forgot to mention in my front end application (MS Access 2K) I use
> the following:
> On Error GoTo myErr:
> ' Execute the sproc that runs the Insert above
> myErr:
> CurrentProject.Connection.Execute "IF @@trancount > 0 ROLLBACK
> TRANSACTION", , adExecuteNoRecords
> I am wondering if this should be sufficient to not leave an open
> transaction in the event of an error.

In theory maybe. But good software practice is that every module
cleans up after itself, and does not rely on somebody else to do it.
Least of all one should trust an application that uses ADO.

Look at it this way: you have this handling in many stored procedures.
Maybe you happen to forget to insert it in some place. And you have
this handling in many places in your client code. Maybe you accidentally
leave it in some place. So keep a double safeguard.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||>Look at it this way: you have this handling in many stored procedures.
>Maybe you happen to forget to insert it in some place. And you have
>this handling in many places in your client code. Maybe you accidentally
>leave it in some place. So keep a double safeguard.

I'm not sure what you're suggesting...

I thought that this would clear up any error in the sproc:

SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN
@err END

COMMIT TRANSACTION
SELECT @err = @@error IF @err <> 0 RETURN @err

Are you suggesting I put:

IF @@trancount > 0 ROLLBACK TRANSACTION

in the sproc as well and not check for that in the front-end
application?|||laurenq uantrell (laurenquantrell@hotmail.com) writes:
>>Look at it this way: you have this handling in many stored procedures.
>>Maybe you happen to forget to insert it in some place. And you have
>>this handling in many places in your client code. Maybe you accidentally
>>leave it in some place. So keep a double safeguard.
> I'm not sure what you're suggesting...
> I thought that this would clear up any error in the sproc:
> SELECT @err = @@error IF @err <> 0 BEGIN ROLLBACK TRANSACTION RETURN
> @err END
>
> COMMIT TRANSACTION
> SELECT @err = @@error IF @err <> 0 RETURN @err
>
> Are you suggesting I put:
> IF @@trancount > 0 ROLLBACK TRANSACTION
> in the sproc as well and not check for that in the front-end
> application?

So I'm suggesting that you should keep things as they are. (But make
sure that your transaction scope is the right one. The procedure
you posted seemed funny to me with a COMMIT in the middle.)

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Erland. Thanks for the help and the good advice. The COMMIT in the
middle in this situation is because the first insert puts a new row of
data into a table, the second insert records the row/column info into a
row in a history table. In this situation, I want to be sure that the
new record is recorded in the primary data table even if the history is
not. If there is a failure on the second insert, I don't want to
rollback the first insert, even though the records might not end up in
sync (original and history) this would be corrected hopefully the next
time the record is updated.
I have considered and rejected using triggers for this purpose even
though that has been suggested.
Thanks again for your help in my understanding of how these things work!
lq|||laurenq uantrell (laurenquantrell@hotmail.com) writes:
> Erland. Thanks for the help and the good advice. The COMMIT in the
> middle in this situation is because the first insert puts a new row of
> data into a table, the second insert records the row/column info into a
> row in a history table. In this situation, I want to be sure that the
> new record is recorded in the primary data table even if the history is
> not. If there is a failure on the second insert, I don't want to
> rollback the first insert, even though the records might not end up in
> sync (original and history) this would be corrected hopefully the next
> time the record is updated.

Good. I just wanted to make sure that it's on purpose.

(I remember a system I worked with looooong ago. There was a stored
procedure that filled up a table, and it was one long transaction.
Unfortunately, it tended to fill up the transaction log. (This was
Sybase 4.x, no autogrow.) My colleagues found the fix - they inserted
some COMMIT/BEGIN. Oh well.)

> I have considered and rejected using triggers for this purpose even
> though that has been suggested.

Well, that would buy you a transaction for the current and history table.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Wednesday, March 7, 2012

Blocked transaction problem

Hello,

I am trying to execute the next query, but when I do, TABLE1 locks and the query does not finish.

SERVER2 is a linked server.

BEGIN TRAN
INSERT INTO TABLE1
SELECT * FROM SERVER2.DATABASE2.DBO.TABLE2 WHERE TAB_F1 IS NULL
COMMIT TRAN

I have same configuration in other 2 computers and it works ok.

What is the problem?

Thank you!!

This should not lock your transaction. But depending on the server you are using, it can be that all the data is retrieved from the linked server first so the filter can be applied on the calling server. That will take more time than just doing the SELECT on the remote server.
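If local filtering turns out to be the issue, one sketch of pushing the filter down to the remote server is OPENQUERY (server and table names taken from the original post):

```sql
BEGIN TRAN
-- OPENQUERY sends the pass-through query to SERVER2, so the
-- TAB_F1 IS NULL filter runs remotely and only matching rows travel
INSERT INTO TABLE1
SELECT * FROM OPENQUERY(SERVER2,
    'SELECT * FROM DATABASE2.DBO.TABLE2 WHERE TAB_F1 IS NULL')
COMMIT TRAN
```

Note this only addresses where the filter runs; DML against a linked server inside an explicit transaction will still be promoted to a distributed transaction through MSDTC.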

HTH, Jens K. Suessmeyer.


http://www.sqlserver2005.de

|||

Hello,

there are only 3 rows on remote server.

Both servers are SQL Server 2000.

|||What about if you run the query without any transaction? Is it the same problem? Do you see any locks on the table while it is stalled?

HTH, Jens K. Suessmeyer

http://www.sqlserver2005.de

|||

Hello,

if I run it without any transaction it finishes very fast.

But I want to know the reason for the problem, because I have one site that runs it this way, and I cannot install a new one with the same queries.

Thank you.

|||See which locks are allocated when running the command with transactions, to see what the blocking is based on.

HTH, Jens K. Suessmeyer.

http://www.sqlserver2005.de|||

You are essentially doing a distributed transaction because of the use of DML with a linked server table. So the delay might be attributed to network issues. Check the MSDTC configuration, including the network configuration, to make sure everything is fine. You can also see the transaction statistics in the MSDTC console (Component Services applet) to see the average response time. If that is close to your execution time then the problem is external to SQL Server and you should ask in the Windows newsgroups or search the MSKB for optimizing MSDTC setup.

If the problem is not related to MSDTC, then you can determine the wait types in SQL and troubleshoot the issue. Run following:

1. DBCC SQLPERF(WAITSTATS, 'CLEAR') WITH NO_INFOMSGS

2. your current batch

3. DBCC SQLPERF(WAITSTATS) WITH NO_INFOMSGS

Now, based on the wait type you can figure out what is the cause of the delay. Search MSKB articles for more details on using DBCC SQLPERF.
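Put together as a single batch (using the query from the original post as step 2), the steps above look like:

```sql
-- 1. Reset the accumulated wait statistics
DBCC SQLPERF(WAITSTATS, 'CLEAR') WITH NO_INFOMSGS

-- 2. Run the batch being investigated
BEGIN TRAN
INSERT INTO TABLE1
SELECT * FROM SERVER2.DATABASE2.DBO.TABLE2 WHERE TAB_F1 IS NULL
COMMIT TRAN

-- 3. Report the wait types and times accumulated since the reset
DBCC SQLPERF(WAITSTATS) WITH NO_INFOMSGS
```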

|||

Hello,

I cannot see anything because the execution of the query never ends.
I left it for up to 30 minutes and it did not end.

I can execute the query without a transaction and it finishes in a few milliseconds. So, it is possible to query the other server.

Could it be a problem with server names?
I mean, the name registered at SQL Server Enterprise Manager, the one at Linked Server and aliases (SQL Server Client Network Utility).

Currently all of them are the same, but at SQL Server installation time the server name was a different one.

Thanks!!

|||

Hi!

I have the same problem!!

A simple select against a second server within a TRAN

BEGIN TRAN
SELECT * FROM SERVER2.DATABASE2.DBO.TABLE2
COMMIT TRAN

execution of the query never ends

Without TRAN it finishes in a few milliseconds

Does anyone have any ideas to resolve this problem?

thanks in advance.

Friday, February 24, 2012

Blkby Spid -2

I have an issue with a testing environment between a WebLogic application
and SQL Server where we are trying to push the transaction threshold above 5000
transactions per hour. We have a test that ran yesterday for 12 hours
trying 8000 transactions per hour through WebLogic against SQL Server. When
we came in this morning the connections to SQL Server showed that all
connections from the server we were testing on were blocked by the same
spid, and that spid was in turn blocked by spid
-2. I had a similar issue about 1 year ago but cannot seem to remember what
causes the blocking spid -2 issue. My system is SQL 2000 running SP4. If you
need me to post the specific sp_who2 dump I can, but figured this information
would be sufficient.
Thanks

Hi
In SQL Server 2000 and later, all orphaned distributed transactions are
assigned the session ID value of '-2'. Orphaned distributed transactions are
distributed transactions that are not associated with any session ID.
Stop and restart MSDTC.
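If restarting MSDTC is not possible right away, SQL Server 2000 can also kill the orphaned transaction by its unit-of-work (UOW) GUID. A sketch (the GUID below is a placeholder; use the value the first query returns):

```sql
-- Find the UOW GUID(s) of locks held by the orphaned session -2
SELECT DISTINCT req_transactionUOW
FROM master..syslockinfo
WHERE req_spid = -2

-- Kill the orphaned distributed transaction by its UOW GUID
KILL 'D5499C66-E398-45CA-BF7B-DBF0C442B2CA'
```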
Regards
--
Mike
This posting is provided "AS IS" with no warranties, and confers no rights.
"JosephPruiett" <JosephPruiett@discussions.microsoft.com> wrote in message
news:0083D010-4C3D-4F16-886A-8DED076B43B1@microsoft.com...
>I have an issue with a testing environment between a WebLogic application
> and SQL Server where we are trying to push the transaction threshold above
> 5000 transactions per hour. We have a test that ran yesterday for 12 hours
> trying 8000 transactions per hour through WebLogic against SQL Server.
> When
> we came in this morning the connections to SQL Server showed that all
> connections from the server we were testing on were blocked by the same
> spid, and that spid was in turn blocked by
> spid
> -2. I had a similar issue about 1 year ago but cannot seem to remember
> what
> causes the blocking spid -2 issue. My system is SQL 2000 running SP4. If
> you
> need me to post the specific sp_who2 dump I can, but figured this information
> would be sufficient.
> Thanks
>
>
