Saturday, February 25, 2012

BLOB Field in a table causing huge wastage?

Our backups are extremely slow as the docs table in the site database is
extremely large.
I've tried to shrink the file and shrink the database using DBCC
ShrinkDatabase and then DBCC ShrinkFile with no luck.
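Roughly what I ran, for reference; the database and logical file names below are
placeholders, not the real SharePoint ones:
-- Shrink the whole database, asking to leave about 10% free space
DBCC SHRINKDATABASE (SharePointSite, 10)
-- Shrink one data file down to a target size given in MB
DBCC SHRINKFILE (SharePointSite_Data, 20000)
Neither command can release space that the database does not already consider
free inside the files.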
On examining the database I found this...
Table_Name    data_KB        unused_KB      unused_pct
-----------   ------------   ------------   ----------
Docs          16490400 KB    24676360 KB    60%
DocVersions   1957800 KB     3915816 KB     67%
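Those figures came from space-used queries along these lines (a sketch; the
exact query may have differed):
-- Per-table reserved/data/index/unused breakdown (SQL Server 2000)
EXEC sp_spaceused @objname = 'Docs'
EXEC sp_spaceused @objname = 'DocVersions'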
DBCC ShowContig (Docs) shows the following
DBCC SHOWCONTIG scanning 'Docs' table...
Table: 'Docs' (1993058136); index ID: 1, database ID: 15
TABLE level scan performed.
- Pages Scanned........................: 13643
- Extents Scanned.......................: 1761
- Extent Switches.......................: 1760
- Avg. Pages per Extent..................: 7.7
- Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
- Logical Scan Fragmentation ..............: 0.01%
- Extent Scan Fragmentation ...............: 0.28%
- Avg. Bytes Free per Page................: 210.6
- Avg. Page Density (full)................: 97.40%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
So as far as I can see, the table isn't fragmented but it is extremely wasteful.
My thinking is this...
Files in the docs table range widely in size, from a few hundred KB to 50MB.
Am I right in thinking that the "content" column in Docs is set to a size
that accommodates the largest file (i.e. 50MB), and that therefore up to
49.90MB is wasted on other rows?
I really need someone's help in reducing the size of the database. I've tried
reindexing the database too, again with no difference other than the extent
scan fragmentation then shooting up to 99.97%.
|||Hi Mauro

> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
There are no files in a table. Do you mean LOB data (text, image or ntext)?
How is the table defined?
How is the LOB data inserted?
Is the LOB data ever modified? If so, how?
What version are you using?

> Am I right in thinking that the "content" column in Docs is set to a size
> that accommodates the largest file (i.e. 50MB), and that therefore up to
> 49.90MB is wasted on other rows?
This is NOT correct. How would SQL Server know the largest size you are
planning to insert? The largest size a LOB column can hold is 2 GB and it
obviously isn't reserving 2 GB per row.
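To illustrate: a text or image column is declared with no size at all, and each
row uses only the LOB pages its own value needs. A hypothetical example table,
not your SharePoint schema:
-- 'Content' can hold up to 2 GB per row, but pages are allocated only as data is inserted
CREATE TABLE DocsExample
(
    DocId   int IDENTITY(1,1) PRIMARY KEY,
    Content image NULL
)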
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
> Our backups are extremely slow as the docs table in the site database is
> extremely large.
> I've tried to shrink the file and shrink the database using DBCC
> ShrinkDatabase and then DBCC ShrinkFile with no luck.
> On examining the database i found this...
> Table_Name    data_KB        unused_KB      unused_pct
> -----------   ------------   ------------   ----------
> Docs          16490400 KB    24676360 KB    60%
> DocVersions   1957800 KB     3915816 KB     67%
> DBCC ShowContig (Docs) shows the following
> DBCC SHOWCONTIG scanning 'Docs' table...
> Table: 'Docs' (1993058136); index ID: 1, database ID: 15
> TABLE level scan performed.
> - Pages Scanned........................: 13643
> - Extents Scanned.......................: 1761
> - Extent Switches.......................: 1760
> - Avg. Pages per Extent..................: 7.7
> - Scan Density [Best Count:Actual Count]......: 96.88% [1706:1761]
> - Logical Scan Fragmentation ..............: 0.01%
> - Extent Scan Fragmentation ...............: 0.28%
> - Avg. Bytes Free per Page................: 210.6
> - Avg. Page Density (full)................: 97.40%
> DBCC execution completed. If DBCC printed error messages, contact your
> system administrator.
> so as far as i can see the table isnt fragmented but is extremely
> wasteful.
> My thinking is this...
> Files in the docs table range widely in size from a few hundred KB to
> 50MB.
> Am I right in thinking that the "content" column in Docs is set to a size
> that accommodates the largest file (i.e. 50MB), and that therefore up to
> 49.90MB is wasted on other rows?
> I really need someones help in reducing the size of the database, i've
> tried
> reindexing the database too, again with no difference other than the
> extent
> scan fragmentation then goes shooting up to 99.97%
>|||I noticed full text indexing was on for the table, so I disabled it for the
table and across the entire SharePoint site. This again had no effect. For
curiosity's sake, I am going to DTS the entire table row by row to another
table and see if it is 26GB as it reports, or 40GB as shown by the file
size.
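Something along these lines is what I have in mind, in plain T-SQL rather than
DTS (Docs_Copy is just a scratch table name):
-- Copy every row, then ask how much space the copy really uses
SELECT * INTO Docs_Copy FROM Docs
EXEC sp_spaceused 'Docs_Copy'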
"Kalen Delaney" wrote:

> Hi Mauro
>
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
It's defined by SharePoint Server 2003 (in its unmodified state); there are
three BLOB fields.

> How is the LOB data inserted?
As far as I know it's inserted using a normal INSERT statement, and also
inserted by SharePoint through the SharePoint user interface.

> Is the LOB data ever modified? If so, how?
Not as such; it can be overwritten or deleted by a client app. No one can
actually edit the LOB that is inserted directly into the database. However,
SharePoint does allow Microsoft Office integration, which allows us to edit MS
Office documents (and any other documents which have Internet Explorer
integration).
> What version are you using?
It's SQL Server 2000 with SP4.
>
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>|||Thanks for the help. On closer inspection (looking at the file sizes
themselves) I've found SQL Server was reporting the wrong size. I've since
forced the db to update its usage stats and it's now reporting the correct size.
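For anyone else who hits this, what I ran was along these lines (run in the
SharePoint content database itself):
-- Correct the space-usage figures stored in sysindexes
DBCC UPDATEUSAGE (0)          -- 0 means the current database
DBCC UPDATEUSAGE (0, 'Docs')  -- or limit it to one table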
"Kalen Delaney" wrote:

> Hi Mauro
>
> There are no files in a table. Do you mean LOB data (text, image or ntext)?
> How is the table defined?
> How is the LOB data inserted?
> Is the LOB data ever modified? If so, how?
> What version are you using?
>
> This is NOT correct. How would SQL Server know the largest size you are
> planning to insert? The largest size a LOB column can hold is 2 GB and it
> obviously isn't reserving 2 GB per row.
> --
> HTH
> Kalen Delaney, SQL Server MVP
>
> "Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
> news:16A3A1DE-D023-49B2-9CDF-9D00CFCB6DA8@.microsoft.com...
>
>|||Full text indexes are stored completely separately and are not considered
part of the space used by the table.
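If you want to see how big the full-text catalog itself is, something like this
works; the catalog name below is a placeholder, and the real one is listed in
sysfulltextcatalogs:
-- Size of the full-text catalog in MB, reported separately from the table
SELECT FULLTEXTCATALOGPROPERTY('Docs_Catalog', 'IndexSize') AS IndexSizeMB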
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:BD339FF2-F59C-434D-B270-83746525FD87@.microsoft.com...
>I noticed full text indexing was on in the table, so i disabled it for the
> table and across the entire sharepoint site. this again had no effect.
> For
> curiosity's sake, i am going to DTS the entire table row by row to another
> table and see if it is 26GB as it reports or 40GB as is shown by the file
> size.
> "Kalen Delaney" wrote:
>
> its defined by Sharepoint Server 2003 (in its unmodified state) there are
> three blob fields
>
> as far as i know its inserted using a normal insert statement, and also
> inserted by sharepoint through the sharepoint user interface.
>
> not as such, it can be overwritten or deleted by a client app. no one can
> actually edit the LOB that is inserted directly to the database, however,
> sharepoint does allow microsoft office integration which allows us to edit
> MS
> Office documents (and any other documents which have internet explorer
> integration).
> its SQL Server 2000 with SP4.
>|||I'm glad you figured this out.
I usually suggest that running with updateusage should be your first step
if you are concerned about the values returned by sp_spaceused.
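For example, a minimal sketch:
-- Recalculate usage for the table before reporting, instead of trusting the stored values
EXEC sp_spaceused @objname = 'Docs', @updateusage = 'true'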
HTH
Kalen Delaney, SQL Server MVP
"Mauro Masucci" <MauroMasucci@.discussions.microsoft.com> wrote in message
news:7945DA78-EAA0-4248-8C66-9C33B4BF9BB3@.microsoft.com...
> thanks for the help, on closer inspection (looking at the file size's
> themselves) i've found sql server was reporting the wrong size. Ive since
> forced the db to update the stats and its now reporting the correct size.
> "Kalen Delaney" wrote:
>
