Showing posts with label field.

Thursday, March 29, 2012

boolean value in column

I need to do an aggregate query against an integer data type, and want to
return three columns based on the value: 1, 2, or 3. If the field name is
"Action", I should be able to sum (action=1), (action=2), and (action=3).
First I tried the following:
SELECT SU.SProc, SUM(SU.[Action] = 1), SUM(SU.[Action] = 2), SUM(SU.[Action] = 3)
The result of each would be -1 or zero. That doesn't work, I guess because
no boolean data type exists. Using CAST or CONVERT doesn't work either for
the same reason: (SU.[Action] = 1) cannot be interpreted. So thought I would
try IF/ELSE, but cannot get the syntax right. I've used it before but can't
remember how I did it. May have been with DB2. Should go something like:
SELECT SU.SProc, SUM(IF SU.[Action] = 1 BEGIN 1 END ELSE BEGIN 0 END),
SUM(IF SU.[Action] = 2 BEGIN 1 END ELSE BEGIN 0 END), SUM(IF SU.[Action] = 3
BEGIN 1 END ELSE BEGIN 0 END)
If someone would enlighten me I'd appreciate it. Thanks|||SELECT
SUM(CASE WHEN foobar_action = 1 THEN 1 ELSE 0 END) AS total_1,
SUM(CASE WHEN foobar_action = 2 THEN 1 ELSE 0 END) AS total_2,
SUM(CASE WHEN foobar_action = 3 THEN 1 ELSE 0 END) AS total_3
FROM Foobar;|||Try using a CASE statement:
create table #abc (action int)
insert into #abc values (1)
insert into #abc values (2)
insert into #abc values (2)
insert into #abc values (3)
insert into #abc values (3)
insert into #abc values (3)
Select
SUM( CASE action
WHEN 1 THEN 1
ELSE 0
END ) as "Action 1",
SUM( CASE action
WHEN 2 THEN 1
ELSE 0
END ) as "Action 2",
SUM( CASE action
WHEN 3 THEN 1
ELSE 0
END ) as "Action 3"
From #abc
Action 1    Action 2    Action 3
--------    --------    --------
       1           2           3
drop table #abc
David Gugick
Quest Software
www.imceda.com
www.quest.com|||Look up CASE WHEN in BOL
Roji. P. Thomas
Net Asset Management
http://toponewithties.blogspot.com
"David McDivitt" <x12code-del@.del-yahoo.com> wrote in message
news:ve9pg1dr6kvo45sc4s2mmmt003cisrhg6t@.
4ax.com...
>I need to do an aggregate query against an integer data type, and want to
> return three columns based on the value: 1, 2, or 3. If the field name is
> "Action", I should be able to sum (action=1), (action=2), and (action=3).
> First I tried the following:
> SELECT SU.SProc, SUM(SU.[Action] = 1), SUM(SU.[Action] = 2),
> SUM(SU.[Action]
> = 3)
> The result of each would be -1 or zero. That doesn't work, I guess because
> no boolean data type exists. Using CAST or CONVERT doesn't work either for
> the same reason: (SU.[Action] = 1) cannot be interpreted. So thought I
> would
> try IF/ELSE, but cannot get the syntax right. I've used it before but
> can't
> remember how I did it. May have been with DB2. Should go something like:
> SELECT SU.SProc, SUM(IF SU.[Action] = 1 BEGIN 1 END ELSE BEGIN 0 END),
> SUM(IF SU.[Action] = 2 BEGIN 1 END ELSE BEGIN 0 END), SUM(IF SU.[Action] =
> 3
> BEGIN 1 END ELSE BEGIN 0 END)
> If someone would enlighten me I'd appreciate it. Thanks
>|||>From: "David Gugick" <david.gugick-nospam@.quest.com>
>Date: Wed, 24 Aug 2005 13:17:27 -0400
>Lines: 62
>David McDivitt wrote:
>Try using a CASE statement:
>create table #abc (action int)
>insert into #abc values (1)
>insert into #abc values (2)
>insert into #abc values (2)
>insert into #abc values (3)
>insert into #abc values (3)
>insert into #abc values (3)
>Select
> SUM( CASE action
> WHEN 1 THEN 1
> ELSE 0
Thanks guys the CASE statement was what I was looking for.
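
Putting the CASE answer back into the shape of the original query: once SProc is in the select list, a GROUP BY is needed as well. A sketch using the names from the question (SU is assumed to be the table or its alias):

SELECT SU.SProc,
       SUM(CASE WHEN SU.[Action] = 1 THEN 1 ELSE 0 END) AS Action1,
       SUM(CASE WHEN SU.[Action] = 2 THEN 1 ELSE 0 END) AS Action2,
       SUM(CASE WHEN SU.[Action] = 3 THEN 1 ELSE 0 END) AS Action3
FROM SU
GROUP BY SU.SProc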

Boolean logic on binary data

Hello,

I have a 21-byte binary field in a SQL Server table with which I want to do boolean AND logic in a report. (Basically, I want to test whether or not individual bits are turned on.) How can I do this? Is it possible to deal with the binary data without first converting it to a string?

Thanks,
Mark|||You can analyze your bitmap values using RDL expressions which are regular VB.NET.|||How exactly do I refer to the data, however? What I mean is, I need to access the bits of this big blob of binary data, but SSRS is not binary-friendly--it wants a string or an int.

All I want to do is access 1 particular bit of this binary blob. If I have to convert this to a string somewhere I will, but I'd prefer not to if at all possible.

(FYI, the report needs to analyze the logonHours attribute for an Active Directory user. This is stored as a byte array, with each bit representing 1 hour in a given week--it's 1 if the user can logon, or 0 if he cannot. I'm converting this to be a 21-byte binary field in SQL.)
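
One way to do this on the SQL side rather than in the report: SUBSTRING works on binary columns in SQL Server, so a single byte can be pulled out and ANDed against a power of two. A sketch, with LogonHours and Users as illustrative names (not the poster's actual schema), assuming bit 0 is the low bit of the first byte -- the actual AD bit ordering should be verified against known data:

DECLARE @hour int
SET @hour = 37   -- which of the 168 hour-bits to test
SELECT CASE
         WHEN CAST(SUBSTRING(LogonHours, @hour / 8 + 1, 1) AS tinyint)
              & POWER(2, @hour % 8) > 0
         THEN 1 ELSE 0
       END AS CanLogon
FROM Users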

boolean fields displayed as true or false

I have a bit field in my data source that displays as true/false on my
report. I can't see how to set this so that it displays as 1 or 0. In
Visual Studio, can someone help?|||=iif (Fields!yourData.value = "True", 1, 0)
"beeyule" wrote:
> I have a bit field in my data source that displays as true/false on my
> report. I can't see how to set this so that it displays as 1 or 0. In
> Visual Studio, can someone help?

Tuesday, March 27, 2012

Boolean bit field Which is True False

I have a field that I need to query for true and false.
The field is a bit.
Is True 0 or 1.
I can't open Books Online and the download instructions based on my version
SQL 2000, are not clear and I don't know what detailed version I have nor
where to find it.
--
MichaelM|||I guess it depends on how your app inserts the data...
create table foo (hidden bit)
insert into foo values ('Y')
select * from foo
error:
Server: Msg 245, Level 16, State 1, Line 1
Syntax error converting the varchar value 'Y' to a column of data type bit.
Keith Kratochvil
"Michael Miller" <MichaelMiller@.discussions.microsoft.com> wrote in message
news:0CBE02FF-B292-4379-839A-BEF5ADDBE26A@.microsoft.com...
>I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my
> version
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM|||Michael Miller wrote:
> I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my versio
n
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM
A bit column is neither True nor False because bit is a numeric
datatype not a boolean. Usually 1 is understood to mean True and 0 is
understood to mean False. Of course it's always possible that the
person who designed your database may not have respected that
convention at all. One reason I dislike using the bit type is precisely
because of this ambiguity. A meaningful code makes it much clearer just
what the designer's intention is. On the other hand proper
documentation ought to answer your question in any case :-)
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||They are what you want them to be; SQL server only stores the 0 or 1. In
Access, true is numerically represented by -1, and false is 0.
You could say that 1 is true and 0 is false, and extract data from the
column using "true" and "false" with a CASE expression:
SELECT
CASE
WHEN Column1 = 1 THEN 'True'
ELSE 'False'
END
FROM [Your Table]
"Michael Miller" wrote:

> I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my versio
n
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM|||I think .Net automatically converts bit columns to 0=False and 1=True.
Outside of .Net it would be application specific. If you are going to use a
bit column, then I would make it a point to code around values of 0 and 1,
rather than True or False. You also need to take into account NULL values,
if this column allows them. BOL does not have much to say about this
datatype. Bottom line, bit is an integer value, and not a boolean value.
It is up to your code to determine how to use it. Here are some quotes from
BOL.
Converting bit Data
Converting to bit promotes any nonzero value to 1.
bit
Integer data type 1, 0, or NULL.
Remarks
Columns of type bit cannot have indexes on them.
Microsoft SQL Server optimizes the storage used for bit columns. If there
are 8 or fewer bit columns in a table, the columns are stored as 1 byte. If
there are from 9 through 16 bit columns, they are stored as 2 bytes, and so
on.
Special Data
Special data consists of data that does not fit any of the categories of
data such as binary data, character data, Unicode data, date and time data,
numeric data and monetary data.
Microsoft SQL Server 2000 includes four types of special data:
bit
Consists of either a 1 or a 0. Use the bit data type when representing TRUE
or FALSE, or YES or NO. For example, a client questionnaire that asks if
this is the client's first visit can be stored in a bit column.
Constants
A constant, also known as a literal or a scalar value, is a symbol that
represents a specific data value. The format of a constant depends on the
data type of the value it represents.
bit constants
bit constants are represented by the numbers zero or one, and are not
enclosed in quotation marks. If a number larger than one is used, it is
converted to one.
"Michael Miller" <MichaelMiller@.discussions.microsoft.com> wrote in message
news:0CBE02FF-B292-4379-839A-BEF5ADDBE26A@.microsoft.com...
> I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my
version
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM|||bit is an integer data type, and can be used to store 0, 1, or NULL.
You will need to define whether a value of 1 stands for true (or otherwise)
throughout your application.
- Data Types (SS2000 books online)
http://msdn.microsoft.com/library/d...7msw.asp
Martin C K Poon
Senior Analyst Programmer
====================================
"Michael Miller" <MichaelMiller@.discussions.microsoft.com> bl
news:0CBE02FF-B292-4379-839A-BEF5ADDBE26A@.microsoft.com g...
> I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my
version
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM|||Thank you ALL.
I'll investigate and see what the intention of the creator was.
I'll start with 1=true/yes and 0=false/no
--
MichaelM
"Michael Miller" wrote:

> I have a field that I need to query for true and false.
> The field is a bit.
> Is True 0 or 1.
> I can't open Books Online and the download instructions based on my versio
n
> SQL 2000, are not clear and I don't know what detailed version I have nor
> where to find it.
> --
> MichaelM|||>> I have a field [sic] that I need to query for true and false.<<
Let's get back to the basics of an RDBMS. Rows are not records; fields
are not columns; tables are not files. Totally different concepts!
You are missing another basic point here. There is no BOOLEAN data
type in SQL and it is a sign of bad programming to write with lower
level bit flags in a high-level language.
Get a book on RDBMS and read about three-valued logic and NULLs to
learn why this cannot be implemented in the SQL model.
Also, look at C# and VB -- do you use -1, +1, or 0 for BOOLEAN?|||Understood. I am querying and reporting and needed to know how they are
using a particular field, which is bit. I didn't set up the database and
have differed with the contractor on other design matters as well. For
example, we have two types of tables, one called SAG and the other called
TRG. Instead of making a 3 char field, he set up one fieldname that called
TRG, and then he says whether it is true or not (the bit field). Since he
has 0's and 1's, I was trying to figure out which was which.
I would have named the field GridType, and entered either SAG or TRG, for
readability.
I understand about the education. I have no formal education in SQL Server,
other than some OJT working on a VB6 project with SQL, for about 3 years.
Added to that is 5 years in Access and 15 years in Paradox (DOS 1.0 -->Win
8.0).
Thanks for your input.
--
MichaelM
"--CELKO--" wrote:

> Let's get back to the basics of an RDBMS. Rows are not records; fields
> are not columns; tables are not files. Totally different concepts!
>
> You are missing another basic point here. There is no BOOLEAN data
> type in SQL and it is a sign of bad programming to write with lower
> level bit flags in a high-level language.
> Get a book on RDBMS and read about three-valued logic and NULLs to
> learn why this cannot be implemented in the SQL model.
> Also, look at C# and VB -- do you use -1, +1, or 0 for BOOLEAN?
>

Monday, March 19, 2012

blog select with comment count

The following sql works great when the field for my main blog message is type nvarchar but doesn't work for text, which I need to convert to.

select
a.id, b.textField,
count(b.a_id) as myCount
from a left join b on a.id = b.a_id
group by a.id, b.textField

What other methods could i use to get "myCount" within one sql statement?

Thanks in advance, JeffHere is more information:

using sql server 2000 (from asp.net) the following sql works great when the "textForm" field is of type nvarchar but doesn't work for the text type.

I don't believe nvarchar will work for my needs since it is limited to 8000 (4000 due to double storage). I believe I will need to use the text type.

select bm.message_id,
bm.title,
bm.display_date,
bm.message,
count(bc.comment_id) as commentCount
from blog_messages as bm LEFT JOIN blog_comments as bc
on bm.message_id = bc.message_id
where bm.active_flag = 1
group by bm.message_id, bm.title, bm.display_date, bm.message
order by display_date DESC

The purpose of the select statement is to display a list of blog entries that includes the number of comments on each entry. An example of this in cold fusion is found at
http://www.camdenfamily.com/morpheus/blog/

With my limited knowledge of the asp.net repeater control, I'm not sure how to integrate 2 select statements (1st for blog content, 2nd for comment count). It seems the fastest way to get all of the information I want is to use a stored procedure using either a temp table or a cursor to compile all of the data together. However this method may force me to select the count from table b for each record of table a, which could be time consuming.

Is there any way to achieve this goal with one select using the union statement or another type of join?

Although this might be a better sql question, I am also wondering about alternative asp.net solutions including repeaters with 2 or more select statements or using an array or list.|||How about something like:

select a.id,
b.textField,
myCount = (select count(*) from b where a_id = a.id)
from a

You might need to include a COALESCE function around the correlated subquery to check for NULL and send back a zero.|||it worked! thanks!

select bm.message_id,
bm.title,
bm.display_date,
bm.message,
commentCount = (select count('x')
from blog_comments as bc
where bc.message_id = bm.message_id)
from blog_messages as bm
where bm.active_flag = 1
order by bm.display_date DESC
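
Another way to get the same result without running the correlated subquery per row is to aggregate the comments once in a derived table and LEFT JOIN it back; this also avoids grouping on the text column entirely. A sketch using the same table and column names as above (derived tables work fine on SQL Server 2000):

SELECT bm.message_id,
       bm.title,
       bm.display_date,
       bm.message,
       COALESCE(bc.commentCount, 0) AS commentCount
FROM blog_messages AS bm
LEFT JOIN (SELECT message_id, COUNT(*) AS commentCount
           FROM blog_comments
           GROUP BY message_id) AS bc
  ON bm.message_id = bc.message_id
WHERE bm.active_flag = 1
ORDER BY bm.display_date DESC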

Sunday, March 11, 2012

Blocking Problems

We have a stored proc that is causing alot of blocking
problems. Basically, it selects a field based on the PK
and returns the value (0 or 1) as an output parameter:
SELECT @.Anonymous_Bool = Anonymous_Bool
FROM Member_Properties
WHERE GUID = @.GUID
The estimated execution plan shows that it does a
clustered index seek. When we tested it out it was fine,
but when it's in production, it looks like it starts
causing massive blocking problems. I don't see how the
proc itself could be a problem since it is very
efficient.
Has anyone else encountered a similar problem? could it
possibly be something in the front end code? or maybe
connection pooling? any ideas on a direction to look
would be much appreciated...|||Is @.Guid a UniqueIdentifier datatype? If not, and the GUID column is, it may
have an issue actually using the index properly. But I suspect you are
seeing the results of the connection changing the isolation level to
serializable. Or you can be seeing the effects of page splitting due to the
Guid is clustered. Check to see what is going on with profile on the
connections doing the blocking. In my opinion Guid's are a horrible PK and
especially if you cluster on them. There are few if any alternatives that
can perform worse than a Guid in a clustered index on a table with new rows
being inserted.
--
Andrew J. Kelly
SQL Server MVP
"A. Sugrue" <sugruea@.hotmail.com> wrote in message
news:058701c356ec$ec57e640$a401280a@.phx.gbl...
> We have a stored proc that is causing alot of blocking
> problems. Basically, it selects a field based on the PK
> and returns the value (0 or 1) as an output parameter:
> SELECT @.Anonymous_Bool = Anonymous_Bool
> FROM Member_Properties
> WHERE GUID = @.GUID
> The estimated execution plan shows that it does a
> clustered index seek. When we tested it out it was fine,
> but when it's in production, it looks like it starts
> causing massive blocking problems. I don't see how the
> proc itself could be a problem since it is very
> efficient.
> Has anyone else encountered a similar problem? could it
> possibly be something in the front end code? or maybe
> connection pooling? any ideas on a direction to look
> would be much appreciated...|||Yes it is a uniqueidentifier and is the PK with a
clustered index and it is also the rowguidcol for merge
replication.
|||No, but since we already had a uniqueidentifier there as
the PK, there was no need to add another column to be the
rowguidcol.
>--Original Message--
>Having a Guid for merge replication purposes is fine but that does not mean
>it has to be the PK.
>--
>Andrew J. Kelly
>SQL Server MVP
>
>"A. Sugrue" <sugruea@.hotmail.com> wrote in message
>news:053f01c357ae$41b2aaa0$a301280a@.phx.gbl...
>> Yes it is a uniqueidentifier and is the PK with a
>> clustered index and it is also the rowguidcol for merge
>> replication.
>> >--Original Message--
>> >Is @.Guid a UniqueIdentifier datatype? If not and the
>> GUID column is it may
>> >have an issue actually using the index properly. But I
>> suspect you are
>> >seeing the results of the connection changing the
>> isolation level to
>> >serializable. Or you can be seeing the effects of page
>> splitting due to the
>> >Guid is clustered. Check to see what is going on with
>> profile on the
>> >connections doing the blocking. In my opinion Guid's
are
>> a horrible PK and
>> >especially if you cluster on them. There are few if
any
>> alternatives that
>> >can perform worse than a Guid in a clustered index on a
>> table with new rows
>> >being inserted.
>> >
>> >--
>> >
>> >Andrew J. Kelly
>> >SQL Server MVP
>> >
>> >
>> >"A. Sugrue" <sugruea@.hotmail.com> wrote in message
>> >news:058701c356ec$ec57e640$a401280a@.phx.gbl...
>> >> We have a stored proc that is causing alot of
blocking
>> >> problems. Basically, it selects a field based on
the PK
>> >> and returns the value (0 or 1) as an output
parameter:
>> >>
>> >> SELECT @.Anonymous_Bool = Anonymous_Bool
>> >> FROM Member_Properties
>> >> WHERE GUID = @.GUID
>> >>
>> >> The estimated execution plan shows that it does a
>> >> clustered index seek. When we tested it out it was
>> fine,
>> >> but when it's in production, it looks like it starts
>> >> causing massive blocking problems. I don't see how
the
>> >> proc itself could be a problem since it is very
>> >> efficient.
>> >>
>> >> Has anyone else encountered a similar problem?
could it
>> >> possibly be something in the front end code? or
maybe
>> >> connection pooling? any ideas on a direction to look
>> >> would be much appreciated...
>> >
>> >
>> >.
>> >
>
>.
>
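
A sketch of the kind of alternative being suggested: an int identity as the clustered PK, with the guid kept only for replication as a nonclustered unique rowguidcol. The Member_ID column and constraint names are illustrative, not the poster's actual schema:

CREATE TABLE Member_Properties (
    Member_ID int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Member_Properties PRIMARY KEY CLUSTERED,
    [GUID] uniqueidentifier ROWGUIDCOL NOT NULL
        CONSTRAINT DF_Member_Properties_GUID DEFAULT NEWID()
        CONSTRAINT UQ_Member_Properties_GUID UNIQUE NONCLUSTERED,
    Anonymous_Bool bit NOT NULL
)

Lookups by GUID still get an index seek (on the nonclustered unique index), but new rows land at the end of the clustered index instead of splitting pages at random positions.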

blocking items dependent on words within a field

I am sitting here with both my Crystal 8.5 and my SQL guides out and still can not find the answers I am looking for...

I know that if I want to block an item that starts with a certain phrase I can use the code
Code:
------------------------

not({item.itm_desc} startswith ["cement","interpore","collagraft","mesh"])
------------------------
... however, my problem is that the items that I want to block do not START the description... the words are within the description - just not at the beginning of it. I know it is still possible to block items based on those words, but I cannot remember how to code the report to do so...

Also - just to make life a little more interesting - there is 1 item that begins with the word "mesh" that i DO want to show up on this report. is there coding that says something to the effect of "ignore all items beginning with "mesh" EXCEPT item # 54446" ?

Does anyone have any ideas? Any suggestions would be greatly appreciated...|||i just wanted to let yall know someone gave me the following suggestion in regard to the second question and it worked great:
Code:
------------------------

If {Item_no}="54446" then {item.itm_desc}={item.itm_desc} elsenot({item.itm_desc}startswith "mesh")
------------------------

i am still searching for the first answer, but at least the second one has been put to rest...

thanks
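
For the first question, Crystal's like operator takes wildcards (and an array of patterns), so a mid-string match can be written the same way as the startswith test. A sketch, untested against 8.5:
Code:
------------------------

not ({item.itm_desc} like ["*cement*", "*interpore*", "*collagraft*", "*mesh*"])
------------------------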

Saturday, February 25, 2012

BLOB index search

Hi,
I am wondering if I can use SQL server to store Word documents in the BLOB
field and then use Index Server to crawl through those documents (maybe on a
timed basis) to generate a catalog which can be used for web based queries
against. If this can be done then any caveats would also be welcome.
Thanks,|||Yes. You can do that via SQL Full Text Search:
http://www.microsoft.com/sql/evaluation/features/fulltext.asp
--
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://blogs.msdn.com/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"Paul" <ancientsiam@.yahoo.com> wrote in message
news:ebl2DFEMFHA.3448@.TK2MSFTNGP10.phx.gbl...
> Hi,
> I am wondering if I can user SQL server to store Word documents in the
BLOB
> field and then use Index Server to crawl through those documents (maybe on
a
> timed basis) to generate a catalog which can be used for web based queries
> against. If this can be done then any caveats would also be welcome.
> Thanks,
>
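
The rough shape of the full-text approach, with illustrative names (the key detail is the extra column holding the file extension, so full-text knows to use the Word IFilter when indexing the binary column):

-- image column holding the .doc, plus its extension for the filter
CREATE TABLE Docs (
    DocID int IDENTITY(1,1) PRIMARY KEY,
    FileExt char(4) DEFAULT '.doc',   -- tells full-text which IFilter to use
    Content image
)

-- after a full-text catalog and index are built on Content
-- (with TYPE COLUMN FileExt), the documents can be queried:
SELECT DocID FROM Docs WHERE CONTAINS(Content, 'proposal')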

BLOB Field with vbNewLine Query Help

Hello,
I have a Access form that writes a record when anything is changed. This
record is written to a MEMO field with a vbCrLf after each one. This is
done so when the form is opened, the text box will have each change
written as a New Line.
This MEMO Field is written to a Link Table in SQL 2000.
The data is written like so:
Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
If I look at the Data in the table, it returns like so:
| Field 1                                        | Field 2 |
|------------------------------------------------|---------|
| 11/19/1998 CiC No. was changed from 54241      | 53      |
| 9/19/1998 CiC No. was changed from 542418      |         |
| 11/28/2004 Line ID 4 Ad ID was changed from 26 |         |
| 11/29/2004 Agency Disc. was changed from 5     |         |
The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
[Field 2]=53
I've been asked to create a report based on the LAST change date. In
this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
best way to go through this 197, Line Break blob and come up with a
query to return the last date?
- Somehow use RIGHT() ans go backwards to find the date?
- Somehow find the position of, in this case, the 3rd New Line Character
and then SELECT the next LEFT up to the end of the date ?
I've been dying over this for two days now. Can someone give me a
pointer. Thank you much.|||Answered in .programming. Please don't multi-post.
http://www.aspfaq.com/
(Reverse address to reply.)
"Jim Lou" <jim@.no_spam_no.com> wrote in message
news:MPG.1c165b5cff12dc6a9896a6@.msnews.microsoft.com...
> Hello,
> I have a Access form that writes a record when anything is changed. This
> record is written to a MEMO field with a vbCrLf after each one. This is
> done so when the form is opened, the text box will have each change
> written as a New Line.
> This MEMO Field is written to a Link Table in SQL 2000.
> The data is written like so:
> Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
> Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
> If I look at the Data in the table, it returns like so:
> ________________________________________
________________
> | Field 1 |Field 2 |
> |_______________________________________
_______|________|
> 11/19/1998 CiC No. was changed from 54241 | 53 |
> 9/19/1998 CiC No. was changed from 542418 | |
> 11/28/2004 Line ID 4 Ad ID was changed from 26 | |
> 11/29/2004 Agency Disc. was changed from 5 | |
> ________________________________________
_______|________|
> The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
> [Field 2]=53
>
> I've been asked to create a report based on the LAST change date. In
> this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
> best way to go through this 197, Line Break blob and come up with a
> query to return the last date?
> - Somehow use RIGHT() ans go backwards to find the date?
> - Somehow find the position of, in this case, the 3rd New Line Character
> and then SELECT the next LEFT up to the end of the date ?
> I've been dying over this for two days now. Can someone give me a
> pointer. Thank you much.
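
One T-SQL way to get at the last line without walking the blob in client code: cast the memo to varchar, use REVERSE and CHARINDEX to find the final line break, and take the leading token as the date. A sketch against the tblTest names above; it assumes the field fits in 8000 characters and that every line starts with a date:

SELECT CONVERT(datetime,
         LEFT(LastLine, CHARINDEX(' ', LastLine + ' ') - 1)) AS LastChange
FROM (SELECT LTRIM(RIGHT(Txt,
             CHARINDEX(CHAR(10), REVERSE(Txt) + CHAR(10)) - 1)) AS LastLine
      FROM (SELECT CAST([Field 1] AS varchar(8000)) AS Txt
            FROM tblTest
            WHERE [Field 2] = 53) AS T1) AS T2

Appending CHAR(10) before the CHARINDEX keeps the expression from failing when a value contains no line break at all (the whole value is then treated as the last line).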

BLOB Field with vbNewLine Query Help

Hello,
I have a Access form that writes a record when anything is changed. This
record is written to a MEMO field with a vbCrLf after each one. This is
done so when the form is opened, the text box will have each change
written as a New Line.
This MEMO Field is written to a Link Table in SQL 2000.
The data is written like so:
Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
If I look at the Data in the table, it returns like so:
__________________________________________________ ______
|Field 1 |Field 2 |
|______________________________________________|__ ______|
11/19/1998 CiC No. was changed from 54241 | 53 |
9/19/1998 CiC No. was changed from 542418 | |
11/28/2004 Line ID 4 Ad ID was changed from 26 | |
11/29/2004 Agency Disc. was changed from 5 | |
_______________________________________________|__ ______|
The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
[Field 2]=53
I've been asked to create a report based on the LAST change date. In
this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
best way to go through this 197, Line Break blob and come up with a
query to return the last date?
- Somehow use RIGHT() ans go backwards to find the date?
- Somehow find the position of, in this case, the 3rd New Line Character
and then SELECT the next LEFT up to the end of the date ?
I've been dying over this for two days now. Can someone give me a
pointer. Thank you much.
Answered in .programming. Please don't multi-post.
http://www.aspfaq.com/
(Reverse address to reply.)
"Jim Lou" <jim@.no_spam_no.com> wrote in message
news:MPG.1c165b5cff12dc6a9896a6@.msnews.microsoft.c om...
> Hello,
> I have a Access form that writes a record when anything is changed. This
> record is written to a MEMO field with a vbCrLf after each one. This is
> done so when the form is opened, the text box will have each change
> written as a New Line.
> This MEMO Field is written to a Link Table in SQL 2000.
> The data is written like so:
> Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
> Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
> If I look at the Data in the table, it returns like so:
> __________________________________________________ ______
> | Field 1 |Field 2 |
> |______________________________________________|__ ______|
> 11/19/1998 CiC No. was changed from 54241 | 53 |
> 9/19/1998 CiC No. was changed from 542418 | |
> 11/28/2004 Line ID 4 Ad ID was changed from 26 | |
> 11/29/2004 Agency Disc. was changed from 5 | |
> _______________________________________________|__ ______|
> The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
> [Field 2]=53
>
> I've been asked to create a report based on the LAST change date. In
> this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
> best way to go through this 197, Line Break blob and come up with a
> query to return the last date?
> - Somehow use RIGHT() ans go backwards to find the date?
> - Somehow find the position of, in this case, the 3rd New Line Character
> and then SELECT the next LEFT up to the end of the date ?
> I've been dying over this for two days now. Can someone give me a
> pointer. Thank you much.

BLOB Field with vbNewLine Query Help

Hello,
I have an Access form that writes a record when anything is changed. This
record is written to a MEMO field with a vbCrLf after each one. This is
done so when the form is opened, the text box will have each change
written as a New Line.
This MEMO Field is written to a Link Table in SQL 2000.
The data is written like so:
Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
If I look at the Data in the table, it returns like so:
________________________________________________________
| Field 1 |Field 2 |
|______________________________________________|________|
11/19/1998 CiC No. was changed from 54241 | 53 |
9/19/1998 CiC No. was changed from 542418 | |
11/28/2004 Line ID 4 Ad ID was changed from 26 | |
11/29/2004 Agency Disc. was changed from 5 | |
_______________________________________________|________|
The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
[Field 2]=53
I've been asked to create a report based on the LAST change date. In
this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
best way to go through this 197, Line Break blob and come up with a
query to return the last date?
- Somehow use RIGHT() and go backwards to find the date?
- Somehow find the position of, in this case, the 3rd New Line Character
and then SELECT the next LEFT up to the end of the date ?
I've been dying over this for two days now. Can someone give me a
pointer. Thank you much.|||Answered in .programming. Please don't multi-post.
--
http://www.aspfaq.com/
(Reverse address to reply.)
"Jim Lou" <jim@.no_spam_no.com> wrote in message
news:MPG.1c165b5cff12dc6a9896a6@.msnews.microsoft.com...
> Hello,
> I have a Access form that writes a record when anything is changed. This
> record is written to a MEMO field with a vbCrLf after each one. This is
> done so when the form is opened, the text box will have each change
> written as a New Line.
> This MEMO Field is written to a Link Table in SQL 2000.
> The data is written like so:
> Forms!OrderHeader!Changes = Forms!OrderHeader!Changes & vbCrLf & _
> Format(Date) & " Agency Disc. was changed from " & Adisc.OldValue
> If I look at the Data in the table, it returns like so:
> ________________________________________________________
> | Field 1 |Field 2 |
> |______________________________________________|________|
> 11/19/1998 CiC No. was changed from 54241 | 53 |
> 9/19/1998 CiC No. was changed from 542418 | |
> 11/28/2004 Line ID 4 Ad ID was changed from 26 | |
> 11/29/2004 Agency Disc. was changed from 5 | |
> _______________________________________________|________|
> The above is simply executing: SELECT [Field 1] FROM tblTest WHERE
> [Field 2]=53
>
> I've been asked to create a report based on the LAST change date. In
> this case 11/29/2004 (Line 4 in Field 1). My question is: what's the
> best way to go through this 197, Line Break blob and come up with a
> query to return the last date?
> - Somehow use RIGHT() ans go backwards to find the date?
> - Somehow find the position of, in this case, the 3rd New Line Character
> and then SELECT the next LEFT up to the end of the date ?
> I've been dying over this for two days now. Can someone give me a
> pointer. Thank you much.
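Both suggested approaches (working backwards with RIGHT(), or locating the last line break) boil down to the same thing: isolate the final non-empty line of the memo and read its leading date token. That is easiest outside T-SQL, e.g. in the Access/VBA layer or a small script. A minimal Python sketch of the idea, with the sample data copied from the post (the M/D/YYYY format is an assumption):

```python
from datetime import datetime

def last_change_date(memo: str) -> datetime:
    """Return the date at the start of the last non-empty line of a
    CRLF-delimited change-log memo (date format assumed to be M/D/YYYY)."""
    lines = [ln for ln in memo.split("\r\n") if ln.strip()]
    last = lines[-1]                      # e.g. "11/29/2004 Agency Disc. ..."
    date_token = last.split(" ", 1)[0]    # leading date token
    return datetime.strptime(date_token, "%m/%d/%Y")

memo = ("11/19/1998 CiC No. was changed from 54241\r\n"
        "9/19/1998 CiC No. was changed from 542418\r\n"
        "11/28/2004 Line ID 4 Ad ID was changed from 26\r\n"
        "11/29/2004 Agency Disc. was changed from 5")
print(last_change_date(memo).date())  # 2004-11-29
```

Note this finds the last line appended, which in this log is also the latest date; if entries could ever be out of order, you would parse every line and take the maximum instead.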

blob field or memo field cant be used in formula field

I have a field with more than 250 characters. When i add the field as a database field, i can add and see the values. But i want to apply a formula on the field to match its value against another table's field.
When i do so, i am getting "blob field or memo field can't be used in formula".

Can anyone suggest a way to convert the memo field into text/string so i can compare it with the other field?
i tried with: if (Mid({table.field}, 1, 125) = {table2.field})
but i am still getting the error.|||Within the Crystal Reports formula editor, blob and image fields are not available. If you need to manipulate a text field that is more than 255 characters in length, or use the value in a large text field within a formula, you will need to do this outside of Crystal.
I suggest creating a stored procedure in your DB that returns all the fields and/or portions of fields (substrings) you need for your report. Any parameters used by the report should also be included in the procedure. You might want to use the query Crystal has created for your existing report as a guide for creating the stored proc.
Then create a report using the stored procedure as the data source. If stored procedures are not available as a data source, check your database options tab from the Crystal file menu->options
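The key idea in that advice is to do the substring comparison in the database query rather than in a report formula. The sqlite-backed Python sketch below shows the shape of it on made-up tables and names; in T-SQL the equivalent would use SUBSTRING()/CAST on the text column inside the stored procedure:

```python
import sqlite3

# Hypothetical tables: docs.memo is the long text column, codes.short is the
# field it must be matched against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, memo TEXT)")
conn.execute("CREATE TABLE codes (id INTEGER, short TEXT)")
conn.execute("INSERT INTO docs VALUES (1, ?)", ("ABC-123" + "x" * 300,))
conn.execute("INSERT INTO codes VALUES (1, 'ABC-123')")

# Compare only the leading substring, so the 300+ character field never has
# to enter a report formula at all.
rows = conn.execute("""
    SELECT d.id
    FROM docs d JOIN codes c ON substr(d.memo, 1, 7) = c.short
""").fetchall()
print(rows)  # [(1,)]
```

The report then consumes the already-truncated column from the procedure's result set.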

BLOB Field in a table causing huge wastage?

Our backups are extremely slow as the docs table in the site database is
extremely large.
I've tried to shrink the file and shrink the database using DBCC
ShrinkDatabase and then DBCC ShrinkFile with no luck.
On examining the database i found this...
Table_Name data_KB unused_KB unused_pct
-- -- -- --
Docs 16490400 KB 24676360 KB 60%
DocVersions 1957800 KB 3915816 KB 67%
DBCC ShowContig (Docs) shows the following
DBCC SHOWCONTIG scanning 'Docs' table...
Table: 'Docs' (1993058136); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 13643
- Extents Scanned.......................: 1761
- Extent Switches.......................: 1760
- Avg. Pages per Extent..................: 7.7
- Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
- Logical Scan Fragmentation ..............: 0.01%
- Extent Scan Fragmentation ...............: 0.28%
- Avg. Bytes Free per Page................: 210.6
- Avg. Page Density (full)................: 97.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
so as far as i can see the table isn't fragmented but is extremely wasteful.
My thinking is this...
Files in the docs table range widely in size from a few hundred KB to 50MB.
Am i right in thinking that the "content" column in Docs is set to a size
that accommodates the largest file (i.e. 50MB) and therefore up to 49.90MB are
wasted on other rows?
I really need someone's help in reducing the size of the database, i've tried
reindexing the database too, again with no difference other than the extent
scan fragmentation then goes shooting up to 99.97%|||Hi Mauro
> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
There are no files in a table. Do you mean LOB data (text, image or ntext)?
How is the table defined?
How is the LOB data inserted?
Is the LOB data ever modified? If so, how?
What version are you using?
> Am i right in thinking that the "content" column in Docs is set to a size
> that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> are
> wasted on other rows?
This is NOT correct. How would SQL Server know the largest size you are
planning to insert? The largest size a LOB column can hold is 2 GB and it
obviously isn't reserving 2 GB per row.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
> Our backups are extremely slow as the docs table in the site database is
> extremely large.
> I've tried to shrink the file and shrink the database using DBCC
> ShrinkDatabase and then DBCC ShrinkFile with no luck.
> On examining the database i found this...
> Table_Name data_KB unused_KB unused_pct
> -- -- -- --
> Docs 16490400 KB 24676360 KB 60%
> DocVersions 1957800 KB 3915816 KB 67%
> DBCC ShowContig (Docs) shows the following
> DBCC SHOWCONTIG scanning 'Docs' table...
> Table: 'Docs' (1993058136); index ID: 1, database ID: 15
> TABLE level scan performed.
> - Pages Scanned........................: 13643
> - Extents Scanned.......................: 1761
> - Extent Switches.......................: 1760
> - Avg. Pages per Extent..................: 7.7
> - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
> - Logical Scan Fragmentation ..............: 0.01%
> - Extent Scan Fragmentation ...............: 0.28%
> - Avg. Bytes Free per Page................: 210.6
> - Avg. Page Density (full)................: 97.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
> so as far as i can see the table isnt fragmented but is extremely
> wasteful.
> My thinking is this...
> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
> Am i right in thinking that the "content" column in Docs is set to a size
> that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> are
> wasted on other rows?
> I really need someones help in reducing the size of the database, i've
> tried
> reindexing the database too, again with no difference other than the
> extent
> scan fragmentation then goes shooting up to 99.97%
>|||I noticed full text indexing was on in the table, so i disabled it for the
table and across the entire sharepoint site. this again had no effect. For
curiosity's sake, i am going to DTS the entire table row by row to another
table and see if it is 26GB as it reports or 40GB as is shown by the file
size.
"Kalen Delaney" wrote:
> Hi Mauro
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
its defined by Sharepoint Server 2003 (in its unmodified state) there are
three blob fields
> How is the LOB data inserted?
as far as i know its inserted using a normal insert statement, and also
inserted by sharepoint through the sharepoint user interface.
> Is the LOB data ever modified? If so, how?
not as such, it can be overwritten or deleted by a client app. no one can
actually edit the LOB that is inserted directly to the database, however,
sharepoint does allow microsoft office integration which allows us to edit MS
Office documents (and any other documents which have internet explorer
integration).
> What version are you using?
its SQL Server 2000 with SP4.
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
thanks for the help, on closer inspection (looking at the file sizes
themselves) i've found sql server was reporting the wrong size. I've since
forced the db to update the stats and it's now reporting the correct size.
"Kalen Delaney" wrote:
> Hi Mauro
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
> How is the LOB data inserted?
> Is the LOB data ever modified? If so, how?
> What version are you using?
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
> news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
> > Our backups are extremely slow as the docs table in the site database is
> > extremely large.
> >
> > I've tried to shrink the file and shrink the database using DBCC
> > ShrinkDatabase and then DBCC ShrinkFile with no luck.
> >
> > On examining the database i found this...
> > Table_Name data_KB unused_KB unused_pct
> > -- -- -- --
> > Docs 16490400 KB 24676360 KB 60%
> > DocVersions 1957800 KB 3915816 KB 67%
> >
> > DBCC ShowContig (Docs) shows the following
> > DBCC SHOWCONTIG scanning 'Docs' table...
> > Table: 'Docs' (1993058136); index ID: 1, database ID: 15
> > TABLE level scan performed.
> > - Pages Scanned........................: 13643
> > - Extents Scanned.......................: 1761
> > - Extent Switches.......................: 1760
> > - Avg. Pages per Extent..................: 7.7
> > - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
> > - Logical Scan Fragmentation ..............: 0.01%
> > - Extent Scan Fragmentation ...............: 0.28%
> > - Avg. Bytes Free per Page................: 210.6
> > - Avg. Page Density (full)................: 97.40%
> > DBCC execution completed. If DBCC printed error messages, contact your
> > system administrator.
> >
> > so as far as i can see the table isnt fragmented but is extremely
> > wasteful.
> >
> > My thinking is this...
> >
> > Files in the docs table range widely in size from a few hundred KB to
> > 50MB.
> > Am i right in thinking that the "content" column in Docs is set to a size
> > that accomodates the largest file (i.e. 50MB) and therefore up to 49.90MB
> > are
> > wasted on other rows?
> >
> > I really need someones help in reducing the size of the database, i've
> > tried
> > reindexing the database too, again with no difference other than the
> > extent
> > scan fragmentation then goes shooting up to 99.97%
> >
>
>|||Full text indexes are stored completely separately and are not considered
part of the space used by the table.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:BD339FF2-F59C-434D-B270-83746525FD87@.microsoft.com...
>I noticed full text indexing was on in the table, so i disabled it for the
> table and across the entire sharepoint site. this again had no effect.
> For
> curiosity's sake, i am going to DTS the entire table row by row to another
> table and see if it is 26GB as it reports or 40GB as is shown by the file
> size.
> "Kalen Delaney" wrote:
>> Hi Mauro
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> There are no files in a table. Do you mean LOB data (text, image or
>> ntext)?
>> How is the table defined?
> its defined by Sharepoint Server 2003 (in its unmodified state) there are
> three blob fields
>> How is the LOB data inserted?
> as far as i know its inserted using a normal insert statement, and also
> inserted by sharepoint through the sharepoint user interface.
>> Is the LOB data ever modified? If so, how?
> not as such, it can be overwritten or deleted by a client app. no one can
> actually edit the LOB that is inserted directly to the database, however,
> sharepoint does allow microsoft office integration which allows us to edit
> MS
> Office documents (and any other documents which have internet explorer
> integration).
>> What version are you using?
> its SQL Server 2000 with SP4.
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> This is NOT correct. How would SQL Server know the largest size you are
>> planning to insert? The largest size a LOB column can hold is 2 GB and it
>> obviously isn't reserving 2 GB per row.
>> --
>> HTH
>> Kalen Delaney, SQL Server MVP
>>
>|||I'm glad you figured this out.
I usually suggest that running with updateusage should be your first step
if you are concerned about the values returned by sp_spaceused.
--
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:7945DA78-EAA0-4248-8C66-9C33B4BF9BB3@.microsoft.com...
> thanks for the help, on closer inspection (looking at the file size's
> themselves) i've found sql server was reporting the wrong size. Ive since
> forced the db to update the stats and its now reporting the correct size.
> "Kalen Delaney" wrote:
>> Hi Mauro
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> There are no files in a table. Do you mean LOB data (text, image or
>> ntext)?
>> How is the table defined?
>> How is the LOB data inserted?
>> Is the LOB data ever modified? If so, how?
>> What version are you using?
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> This is NOT correct. How would SQL Server know the largest size you are
>> planning to insert? The largest size a LOB column can hold is 2 GB and it
>> obviously isn't reserving 2 GB per row.
>> --
>> HTH
>> Kalen Delaney, SQL Server MVP
>>
>> "Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
>> news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
>> > Our backups are extremely slow as the docs table in the site database
>> > is
>> > extremely large.
>> >
>> > I've tried to shrink the file and shrink the database using DBCC
>> > ShrinkDatabase and then DBCC ShrinkFile with no luck.
>> >
>> > On examining the database i found this...
>> > Table_Name data_KB unused_KB
>> > unused_pct
>> > -- -- -- --
>> > Docs 16490400 KB 24676360 KB 60%
>> > DocVersions 1957800 KB 3915816 KB 67%
>> >
>> > DBCC ShowContig (Docs) shows the following
>> > DBCC SHOWCONTIG scanning 'Docs' table...
>> > Table: 'Docs' (1993058136); index ID: 1, database ID: 15
>> > TABLE level scan performed.
>> > - Pages Scanned........................: 13643
>> > - Extents Scanned.......................: 1761
>> > - Extent Switches.......................: 1760
>> > - Avg. Pages per Extent..................: 7.7
>> > - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
>> > - Logical Scan Fragmentation ..............: 0.01%
>> > - Extent Scan Fragmentation ...............: 0.28%
>> > - Avg. Bytes Free per Page................: 210.6
>> > - Avg. Page Density (full)................: 97.40%
>> > DBCC execution completed. If DBCC printed error messages, contact your
>> > system administrator.
>> >
>> > so as far as i can see the table isnt fragmented but is extremely
>> > wasteful.
>> >
>> > My thinking is this...
>> >
>> > Files in the docs table range widely in size from a few hundred KB to
>> > 50MB.
>> > Am i right in thinking that the "content" column in Docs is set to a
>> > size
>> > that accomodates the largest file (i.e. 50MB) and therefore up to
>> > 49.90MB
>> > are
>> > wasted on other rows?
>> >
>> > I really need someones help in reducing the size of the database, i've
>> > tried
>> > reindexing the database too, again with no difference other than the
>> > extent
>> > scan fragmentation then goes shooting up to 99.97%
>> >
>>

BLOB as primary key

In SQL Server, Can you set a BLOB field as the primary key? Also in mySQL?
Thanks
Jack
BLOBs in SQL Server are usually stored in columns with datatype image, and
you can't create a Primary Key on a column with datatype image. You can use
varbinary as the datatype for your blob column, but you will be limited to
900 bytes as the maximum length.
MySQL might allow it. Data integrity is not a major concern with the
designers of MySQL. Don't be surprised that if MySQL does allow you to
create a Primary Key on a BLOB, it might actually only create a Primary Key
on the first 100 bytes or so of the BLOB, without notifying you that it does
this.
Jacco Schalkwijk
SQL Server MVP
"Jacky Luk" <jl@.knight.com> wrote in message
news:OgmyIMaKEHA.2456@.TK2MSFTNGP12.phx.gbl...
> In SQL Server, Can you set a BLOB field as the primary key? Also in mySQL?
> Thanks
> Jack
>
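Since an index key in SQL Server is capped at 900 bytes, the usual workaround when a BLOB must be unique is to key the table on a fixed-width hash of the BLOB instead of the BLOB itself (rechecking the full value on the rare collision if exact uniqueness matters). A sketch of the idea; SHA-256 is an assumed choice here, and on SQL Server 2000 the hash would have to be computed client-side, since HASHBYTES only arrived in 2005:

```python
import hashlib

def blob_key(blob: bytes) -> bytes:
    """Fixed-width 32-byte digest used as a surrogate key for a BLOB;
    it fits easily under SQL Server's 900-byte index key limit."""
    return hashlib.sha256(blob).digest()

doc = b"\x00\x01" * 1024          # a 2 KB blob, too wide to index directly
print(len(blob_key(doc)))         # 32
```

The hash goes in a varbinary(32) column declared as the primary key, with the BLOB itself in an image column alongside it.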

Friday, February 24, 2012


BLOB and varying length binary data

I'm not sure about this as I don't have SQL Server handy. My question is if
you have already declared some binary data field say 1500 bytes per record.
Can SQL Server extend the length of the field if I suddenly need more like
3000 bytes for example? So is the field length dynamically adjustable? I
want it to be 1500 bytes long unless 3000 bytes data item has been
found/scanned. I don't want to allocate more as it consumes more disk space.
It runs to something like half a billion records, so variable length would be desirable..
Any ideas are welcome.
Thanks
Jack
Jacky,
Up to 8,000 bytes you can use the varbinary data type. After that, you can
use image data type (up to 2GB).
Dejan Sarka, SQL Server MVP
Associate Mentor
Solid Quality Learning
More than just Training
www.SolidQualityLearning.com
"Jacky Luk" <jl@.knight.com> wrote in message
news:uwpMWF4IEHA.2948@.TK2MSFTNGP11.phx.gbl...
> I'm not sure about this as I don't have SQL Server handy. My question is
if
> you have already declared some binary data field say 1500 bytes per
record.
> Can SQL Server extend the length of the field if I suddenly need more like
> 3000 bytes for example? So is the field length dynamically adjustable? I
> want it to be 1500 bytes long unless 3000 bytes data item has been
> found/scanned. I don't want to allocate more as it consumes more disk
space.
> It goes like half of a G records, so variable length would be desirable..
> Any ideas are welcome.
> Thanks
> Jack
>
|||Thank you very much Dejan )
Jacky
"Dejan Sarka" <dejan_please_reply_to_newsgroups.sarka@.avtenta.si > gl
sD:uzutsD6IEHA.3248@.TK2MSFTNGP12.phx.gbl...[vbcol=seagreen]
> Jacky,
> Up to 8,000 bytes you can use the varbinary data type. After that, you can
> use image data type (up to 2GB).
> --
> Dejan Sarka, SQL Server MVP
> Associate Mentor
> Solid Quality Learning
> More than just Training
> www.SolidQualityLearning.com
> "Jacky Luk" <jl@.knight.com> wrote in message
> news:uwpMWF4IEHA.2948@.TK2MSFTNGP11.phx.gbl...
>

Thursday, February 16, 2012

Blank Field List When Using Stored Procedure

I am trying to use SQL Server 2000 stored procedure that does not take
any parameters in the Report Designer. When I run dataset in Data tab
of report designer and try to REFRESH the fields, this doesn't help
to get a list of fields. I get no error but no data. So the field list
is still empty. I tried
manually entering the fields via the Fields tab on the Data Set view
or by right clicking in the Fields window and this didn't help. When I
try to run the
report I get "An Error has occurred during processing. Invalid attempt
to read when no data is present."
This is my stored procedure:
----
CREATE PROC dbo.sp_branch_counts
AS
IF OBJECT_ID('dbo.temp') IS NOT NULL
drop table [temp]
CREATE TABLE [temp] (
[Branch] [nvarchar] (255) NULL,
[InProgress] int NULL,
[Taken] int NULL,
[Dropped] int NULL
)
GO
declare @.loc nvarchar(50)
DECLARE loc_cursor CURSOR LOCAL
FOR
select branch_id from Branch
FOR READ ONLY
OPEN loc_cursor
FETCH NEXT FROM loc_cursor INTO @loc
WHILE @@FETCH_STATUS = 0
BEGIN
insert into [temp] ([Branch],[InProgress],[Taken],[Dropped])
select distinct @.loc, * from
(select count(t.tran_status_id) as InProgress
from MYPELLA_USER_SHORT u INNER JOIN
[TRANSACTION] t ON u.username = t.USER_NAME
INNER JOIN OFFERING o ON t.OFFERING_ID = o.OFFERING_ID
where T.TRAN_CREATE_DATE >='05/01/2004' AND T.TRAN_CREATE_DATE <= dateadd(day,1,'07/01/2004')
and t.tran_status_id=1
and u.branch=@loc) a,
(select count(t.tran_status_id) as Taken
from MYPELLA_USER_SHORT u INNER JOIN
[TRANSACTION] t ON u.username = t.USER_NAME
INNER JOIN OFFERING o ON t.OFFERING_ID = o.OFFERING_ID
where T.TRAN_CREATE_DATE >='05/01/2004' AND T.TRAN_CREATE_DATE <= dateadd(day,1,'07/01/2004')
and t.tran_status_id=2
and u.branch=@loc) b,
(select count(t.tran_status_id) as Dropped
from MYPELLA_USER_SHORT u INNER JOIN
[TRANSACTION] t ON u.username = t.USER_NAME
INNER JOIN OFFERING o ON t.OFFERING_ID = o.OFFERING_ID
where T.TRAN_CREATE_DATE >='05/01/2004' AND T.TRAN_CREATE_DATE <= dateadd(day,1,'07/01/2004')
and t.tran_status_id=3
and u.branch=@loc) c
FETCH NEXT FROM loc_cursor INTO @loc
END
CLOSE loc_cursor
DEALLOCATE loc_cursor
select * from [temp] order by Branch
GO
Please help!!!|||Have you tried using a table variable in your stored procedure?
|||I tried it and it worked. Thank you so much :)
Natasha
"B. Mark McKinney" <B. Mark McKinney@discussions.microsoft.com> wrote in message news:<B724506F-89BA-41B9-B786-9E02FFFA17D3@microsoft.com>...
> Have you tried using a table variable in your stored procedure?
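Not part of the thread, but the cursor-and-temp-table loop above can usually be collapsed into a single set-based query using conditional aggregation (SUM over CASE), which is also what a table-variable version would wrap. A sketch in Python with SQLite, using invented stand-in tables and columns for the thread's schema:

```python
import sqlite3

# Set-based alternative to the cursor loop: one GROUP BY with conditional
# aggregation produces the per-branch InProgress/Taken/Dropped counts in a
# single pass. Table and column names here are invented stand-ins.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (username TEXT, branch TEXT);
    CREATE TABLE trans (user_name TEXT, tran_status_id INTEGER);
    INSERT INTO users VALUES ('alice', 'North'), ('bob', 'South');
    INSERT INTO trans VALUES
        ('alice', 1), ('alice', 2), ('alice', 2),
        ('bob', 3), ('bob', 1);
""")

rows = conn.execute("""
    SELECT u.branch,
           SUM(CASE WHEN t.tran_status_id = 1 THEN 1 ELSE 0 END) AS InProgress,
           SUM(CASE WHEN t.tran_status_id = 2 THEN 1 ELSE 0 END) AS Taken,
           SUM(CASE WHEN t.tran_status_id = 3 THEN 1 ELSE 0 END) AS Dropped
    FROM users u
    JOIN trans t ON t.user_name = u.username
    GROUP BY u.branch
    ORDER BY u.branch
""").fetchall()
print(rows)
```

In T-SQL the same SELECT ... GROUP BY could feed an INSERT into a @table variable, with no cursor and no GO separators inside the procedure body (GO is a batch separator for the client tools, not a T-SQL statement, so it cannot appear inside a CREATE PROC).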

Blank field in column

I am using the following query:
DELETE FROM ORDER_HEADER
WHERE (INVOICE_NUMBER IS NULL) OR (INVOICE_NUMBER = '0')
This works fine but doesn't remove the records where the
INVOICE_NUMBER field is blank.
Is there a way to identify a blank or empty field?
Hi,
Use the query below to delete rows where the invoice number is NULL, '0', or blank:
DELETE FROM ORDER_HEADER
WHERE (INVOICE_NUMBER IS NULL) OR (INVOICE_NUMBER = '0')
OR (DATALENGTH(LTRIM(RTRIM(INVOICE_NUMBER)))=0)
Thanks
Hari
MCDBA
"Bill" <anonymous@discussions.microsoft.com> wrote in message
news:238c201c45ef6$c0af5ba0$a601280a@phx.gbl...
> I am using the following query:
> DELETE FROM ORDER_HEADER
> WHERE (INVOICE_NUMBER IS NULL) OR (INVOICE_NUMBER = '0')
> This works fine but doesn't remove the records where the
> INVOICE_NUMBER field is blank.
> Is there a way to identify a blank or empty field?
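The DATALENGTH(LTRIM(RTRIM(...))) = 0 test above is easy to sanity-check outside SQL Server. A small sketch in Python with SQLite, where TRIM/LENGTH play the role of LTRIM/RTRIM/DATALENGTH (table and rows invented here):

```python
import sqlite3

# Hypothetical miniature ORDER_HEADER covering the cases the thread
# discusses: a real invoice, NULL, the literal '0', spaces-only, and ''.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE order_header (id INTEGER, invoice_number TEXT)")
conn.executemany(
    "INSERT INTO order_header VALUES (?, ?)",
    [(1, "INV-100"), (2, None), (3, "0"), (4, "   "), (5, "")],
)

# Same predicate as the answer: NULL, the literal '0', or whitespace-only.
conn.execute(
    "DELETE FROM order_header "
    "WHERE invoice_number IS NULL "
    "   OR invoice_number = '0' "
    "   OR LENGTH(TRIM(invoice_number)) = 0"
)

remaining = [r[0] for r in conn.execute("SELECT id FROM order_header ORDER BY id")]
print(remaining)  # only the row with a real invoice number survives
```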

Tuesday, February 14, 2012

Blank character in SQL table

Hello,
I have a field in a database table called Person_name (varchar). The
problem is that when the user enters something like "Mr. Sam", where
there is a blank space after the dot, it displays only "Mr." and ignores
any letters after the white space. How can I change this when using an
update statement?
Regards
Diva|||divaof_7@yahoo.com wrote:
> Hello,
> I have a field in database table called Person_name(varchar). The
> Problem is that when the user enters something like this: Mr. Sam,
> where there is a blank space after the dot, it displays only Mr. It
> ignores any letters after white space. How can I change this, when
> using update statement.
> Regards
> Diva
>
I'm not sure this is a SQL problem. You should check the client
application to see if it's doing some sort of parsing that is dropping
anything after the first space. Use SQL Profiler to capture the actual
INSERT statement coming from the app, I'll bet it's inserting "Mr."
instead of "Mr. Sam".
Tracy McKibben
MCDBA
http://www.realsqlguy.com|||Thanks
Can you briefly explain the idea of using SQL Profiler? I've never
worked with it.
Tracy McKibben wrote:
> divaof_7@yahoo.com wrote:
> I'm not sure this is a SQL problem. You should check the client
> application to see if it's doing some sort of parsing that is dropping
> anything after the first space. Use SQL Profiler to capture the actual
> INSERT statement coming from the app, I'll bet it's inserting "Mr."
> instead of "Mr. Sam".
>
> --
> Tracy McKibben
> MCDBA
> http://www.realsqlguy.com|||divaof_7@yahoo.com wrote:
> Thanks
> Can you briefly explain the idea of using SQL profiler, since I've
> never dealt with the same
Profiler is used to trace the activity that is taking place in your
databases, I would HIGHLY recommend familiarizing yourself with it if
you're going to be working with SQL Server.
Essentially what you want to do is capture every INSERT statement that
is being issued against your table that is having problems. You can
either capture everything and sort through it manually, looking for
INSERT statements to that table, or you can use the filtering
capabilities of Profiler to capture only those statements.
Consult Books Online for further details.
Tracy McKibben
MCDBA
http://www.realsqlguy.com
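The diagnosis above points at the client, not the server: if the application builds its INSERT by splitting or truncating on whitespace, the space after "Mr." is lost before SQL Server ever sees it. A parameterized insert passes the string through intact. A minimal sketch in Python with SQLite (table name invented):

```python
import sqlite3

# A parameterized INSERT hands the full string to the database driver
# untouched, so embedded spaces survive; no string concatenation or
# ad-hoc parsing happens on the way in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (person_name TEXT)")

typed = "Mr. Sam"
conn.execute("INSERT INTO person (person_name) VALUES (?)", (typed,))

stored = conn.execute("SELECT person_name FROM person").fetchone()[0]
print(stored == typed)  # the embedded space survives a parameterized insert
```

Capturing the actual statement with Profiler, as suggested above, would show whether the client is sending "Mr. Sam" or just "Mr.".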