The conventional wisdom on where best to store large binary objects used to
be to keep them on the file system, because the database processed them
slowly. Is this still the case with the latest SQL Server?
We are designing a new system that will store many thousands of TIF files.
We would like to store them in the database so that they are covered by the
same backup, restore, and transaction handling as the rest of the data. How
much slower is the processing of varbinary fields in the database versus the
file system?
Thanks,
T"Tina" <TinaMSeaburn@.nospamexcite.com> wrote in message
news:Oy9qbdKSIHA.6060@.TK2MSFTNGP05.phx.gbl...
The following article discusses file system storage vs SQL Server 2005
BLOBs:
ftp://ftp.research.microsoft.com/pub/tr/TR-2006-45.pdf
In SQL Server 2008 there is a new feature called FILESTREAM that directly
addresses the problem. FILESTREAM permits both SQL and file-system access to
binary data while managing the files as if they were part of the database
for backup and transaction control purposes. From my own experience I have
found that FILESTREAM can outperform file-system access for files larger
than about 2MB (YMMV).
http://msdn2.microsoft.com/en-us/library/bb895234(SQL.100).aspx
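For illustration, a minimal sketch of what the FILESTREAM setup looks like (the database name, table name, and paths below are my own placeholders, not anything from this thread). FILESTREAM must first be enabled for the instance in SQL Server Configuration Manager; then:

-- Enable FILESTREAM access at the instance level
-- (0 = disabled, 1 = T-SQL access only, 2 = T-SQL plus Win32 streaming access):
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;

-- Create a database with a FILESTREAM filegroup alongside the regular data file.
-- The FILESTREAM FILENAME is a directory (the container) that SQL Server creates:
CREATE DATABASE TifStore
ON PRIMARY
    (NAME = TifStore_data, FILENAME = 'C:\Data\TifStore.mdf'),
FILEGROUP TifStoreFS CONTAINS FILESTREAM
    (NAME = TifStore_fs, FILENAME = 'C:\Data\TifStoreFS')
LOG ON
    (NAME = TifStore_log, FILENAME = 'C:\Data\TifStore.ldf');

-- A table with a FILESTREAM column needs a ROWGUIDCOL column with a unique
-- constraint. The varbinary(max) data lives in the NTFS container but is
-- backed up, restored, and logged with the database:
CREATE TABLE dbo.TifFiles
(
    FileId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) FILESTREAM NULL
);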
--
David Portas

> The conventional wisdom on where best to store large binary objects used to
> be to keep them on the file system, because the database processed them
> slowly. Is this still the case with the latest SQL Server?
If you mean SQL Server 2005, then yes; if you mean SQL Server 2008, then
no... as David pointed out, FILESTREAM will address part of this problem
(though I am a little skeptical about backup performance, concurrency, and
the translation layer).
> We are designing a new system that will store many thousands of TIF files.
> We would like to store them in the database so that they are covered by the
> same backup, restore, and transaction handling as the rest of the data. How
> much slower is the processing of varbinary fields in the database versus the
> file system?
Naturally I would expect some overhead in translating the data back and forth
between the database and the application(s), depending on the number and size
of the files you have.
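To make that concrete, here is a minimal sketch of the pure varbinary(max) approach (the table, column, and path names are made up for illustration):

-- A plain varbinary(max) table for the TIFs:
CREATE TABLE dbo.TifImages
(
    ImageId  INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) NOT NULL
);

-- Pull one file off disk into the table. OPENROWSET(BULK ... SINGLE_BLOB)
-- returns the whole file as a single varbinary(max) column named BulkColumn:
INSERT INTO dbo.TifImages (FileName, Content)
SELECT N'scan0001.tif', BulkColumn
FROM OPENROWSET(BULK N'C:\Scans\scan0001.tif', SINGLE_BLOB) AS src;

Every byte makes that round trip through the buffer pool and the transaction
log, which is where the overhead relative to a straight file copy comes from.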
But, more importantly, and again depending on the number and size of files
you have, I am very sensitive to using expensive disk. I would be careful to
place the files in a filegroup that is not on a very expensive SAN (which
your database might be using). High-performance, fault-tolerant disk space is
costly, and it seems a waste to take that space away from your database to
use for, essentially, flat files. If you are using locally attached storage
this isn't really an issue, but if the file volume is going to be large, you
might consider a less expensive array for the files themselves. (You can
always keep backups of the directory structures as well; it's just very
difficult to keep them point-in-time in sync with the database.)

Also keep this in mind if you use a third party for database hosting; they
typically charge a lot more per TB of database storage than for flat file
storage, so in that case weigh the benefits you get from keeping your files
and data synchronized against the hit to your budget.
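If you do go the database route, one way to keep the images off the expensive spindles is a separate filegroup on a cheaper array, roughly along these lines (the TifStore database and TifImages table are the same hypothetical names as above; the drive letter, filegroup name, and sizes are also placeholders):

-- Add a filegroup backed by a file on the cheaper array:
ALTER DATABASE TifStore ADD FILEGROUP CheapDisk;
ALTER DATABASE TifStore ADD FILE
    (NAME = TifStore_blobs,
     FILENAME = 'E:\CheapArray\TifStore_blobs.ndf',
     SIZE = 10GB, FILEGROWTH = 1GB)
TO FILEGROUP CheapDisk;

-- Create the image table on that filegroup. TEXTIMAGE_ON controls where the
-- varbinary(max) (LOB) data is stored:
CREATE TABLE dbo.TifImages
(
    ImageId  INT IDENTITY(1,1) PRIMARY KEY,
    FileName NVARCHAR(260) NOT NULL,
    Content  VARBINARY(MAX) NOT NULL
)
ON CheapDisk TEXTIMAGE_ON CheapDisk;

That way the database still gets the backup/restore and transactional
benefits, but the bulk of the storage sits on less expensive disk.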
A