SQL Server 2000 Maintenance

The database administrator works like a doctor, keeping databases healthy. Learn the routine maintenance tasks that can help your databases live a long and active life.

Integrity

SQL Server 2000 practically eliminates the need to run DBCC CHECKDB or DBCC CHECKALLOC on a routine basis; roughly 99% of database failures and corruption with SQL Server 2000 are due to hardware failures.

It is still a good idea to run DBCC CHECKDB to verify the overall health of your database before backing it up.

DBCC CHECKDB is as resource-intensive as the backup itself and should be done off hours.

This shows the errors without fixing them:

DBCC CHECKDB ('yourdb') WITH NO_INFOMSGS, ALL_ERRORMSGS

Indexing

Information

Clustered vs Heap Performance

Very good read: Clustered vs Heap Performance (clustered wins!): http://msdn.microsoft.com/en-us/library/cc917672.aspx

Here are the final thoughts:

After conducting these tests using our sample database table and sample data, we made the following general observations and recommendations. Please note that these are general recommendations that apply to the test used in this study. They may not be well suited for your particular application. Therefore, we encourage you to use them as base recommendations only and validate the applicability of the results to your particular scenario.

  1. In general, the performance benefits of having a clustered index on a table outweigh the negatives for our sample database table and simulated workload. For the case where the performance was lower (concurrent INSERT test, Test 6), the difference was insignificant. Given this, we recommend creating a clustered index on all SQL Server user tables.
  2. The disk space required by the table with a clustered index was almost 35% smaller than the table with the nonclustered index on the same set of columns (2.24 GB versus 1.46 GB). The additional space utilization was almost entirely attributed to the size of the nonclustered index. Given this, we recommend creating a clustered index on all SQL Server user tables.
  3. Deleting rows from a table that has a clustered index freed up more disk space when compared to a table organized as a heap. This residual disk space could be reclaimed by a DBA by using some maintenance tasks; however, this requires additional work and processing.

What is a clustered Index

Perfect description: a clustered index orders the ACTUAL data by the index key. If you don't have a clustered index, it's a heap table, meaning the data itself is not ordered in any fashion.

Clustered indexes do NOT have to be unique, but generally should be.

So for tableX, if your biggest query filters by username and then a date range, it would be best to have the actual physical data ordered by username/date; then a query of WHERE username = x AND date BETWEEN '2005-01-01' AND '2005-01-02' would be the fastest it could possibly be.
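For example, a composite clustered index matching that access pattern would look like this (tableX and its columns are hypothetical):

CREATE CLUSTERED INDEX IX_tableX_username_date ON tableX (username, date)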

If your table uses an auto-increment primary key, it is most efficient to make that column your clustered index, because a clustered index dictates the order of the actual data. If the auto-increment column is clustered, every new record simply goes to the end; no "reordering" is required. If the clustered index were NOT on the auto-increment column, new rows would have to be slotted into the correct position (causing page splits) so the data stayed physically sorted by that particular column. So if the clustered index is not on the auto-increment column, there may be overhead on inserts, and that needs to be taken into account when deciding what to cluster.
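For example (table and column names are hypothetical), this is the layout where every insert simply appends to the end:

CREATE TABLE tableY (
    id INT IDENTITY(1,1) NOT NULL CONSTRAINT PK_tableY PRIMARY KEY CLUSTERED,
    username VARCHAR(50) NOT NULL
)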


http://msdn.microsoft.com/en-us/library/aa933131(v=sql.80).aspx

A clustered index determines the physical order of data in a table. A clustered index is analogous to a telephone directory, which arranges data by last name. Because the clustered index dictates the physical storage order of the data in the table, a table can contain only one clustered index. However, the index can comprise multiple columns (a composite index), like the way a telephone directory is organized by last name and first name.

A clustered index is particularly efficient on columns that are often searched for ranges of values. After the row with the first value is found using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent. For example, if an application frequently executes a query to retrieve records between a range of dates, a clustered index can quickly locate the row containing the beginning date, and then retrieve all adjacent rows in the table until the last date is reached. This can help increase the performance of this type of query. Also, if there is a column(s) that is used frequently to sort the data retrieved from a table, it can be advantageous to cluster (physically sort) the table on that column(s) to save the cost of a sort each time the column(s) is queried.

Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column.

It is important to define the clustered index key with as few columns as possible. If a large clustered index key is defined, any nonclustered indexes that are defined on the same table will be significantly larger because the nonclustered index entries contain the clustering key.

Consider using a clustered index for:

  • Columns that are frequently accessed by queries involving join or GROUP BY clauses; typically these are foreign key columns.
  • OLTP-type applications where very fast single-row lookup is required, typically by means of the primary key. Create a clustered index on the primary key.

Clustered indexes are not a good choice for columns that undergo frequent changes (i.e., not the primary keys), because every change to a clustered key moves the entire row.

Index Maintenance

Very nice automatic index script based on table fragmentation level, amazing!
http://sqlfool.com/2009/03/automated-index-defrag-script/

Find all heap tables

More heap queries
http://sqlsolace.blogspot.com/2011/04/tsql-tables-without-clustered-indexes.html

This works in SQL Server 2005 and above:

SELECT SCHEMA_NAME(o.schema_id) AS [schema]
    ,object_name(i.object_id ) AS [table]
    ,p.rows
    ,user_seeks
    ,user_scans
    ,user_lookups
    ,user_updates
    ,last_user_seek
    ,last_user_scan
    ,last_user_lookup
FROM sys.indexes i 
    INNER JOIN sys.objects o ON i.object_id = o.object_id
    INNER JOIN sys.partitions p ON i.object_id = p.object_id AND i.index_id = p.index_id
    LEFT OUTER JOIN sys.dm_db_index_usage_stats ius ON i.object_id = ius.object_id AND i.index_id = ius.index_id
WHERE i.type_desc = 'HEAP'
ORDER BY rows desc

This works in SQL Server 2000:

SELECT DISTINCT   
  o.name  AS TableName  
FROM sysindexes i  
INNER JOIN sysobjects o  
ON i.id = o.id   
WHERE indid = 0  
AND o.type = 'U'  
ORDER BY 1 

Index Stats

DBCC SHOWCONTIG

Run on its own, DBCC SHOWCONTIG will show results for all tables in the database, which could take some time. I don't recommend doing that in a production environment, since DBCC SHOWCONTIG has to scan a table/index to determine the fragmentation level and is therefore a time-consuming operation.

You can also capture the results in a table for further script parsing:

DBCC SHOWCONTIG (tabletoshowcontig) WITH TABLERESULTS

Note: WITH TABLERESULTS returns the output as a result set instead of messages, so you can INSERT ... EXEC it into a table of your own (the automation sketch later on this page does exactly that).

To show contig on one table, use DBCC SHOWCONTIG ('table'):

DBCC SHOWCONTIG scanning 'authors' table...
Table: 'authors' (1977058079); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned................................: 1
- Extents Scanned..............................: 1
- Extent Switches..............................: 0
- Avg. Pages per Extent........................: 1.0
- Scan Density [Best Count:Actual Count].......: 100.00% [1:1]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 0.00%
- Avg. Bytes Free per Page.....................: 6009.0
- Avg. Page Density (full).....................: 25.76%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

The most important values are logical fragmentation and scan density. The scan density value shows the ratio of the best count of pages (the ideal scenario) over the actual count of pages in all extents (an extent is a group of eight data or index pages). Therefore, if the scan density returns 100%, there is no fragmentation. The logical fragmentation value, on the other hand, returns the percentage of pages that are out of order in the leaf level of the index. I won't go into the details of what this can mean, but the short version is that if logical fragmentation is anything other than zero or scan density is less than 100%, your index has some level of fragmentation and needs help.

Re-Indexing

As data is added, modified, or removed from the table, indexes get fragmented. As the level of index fragmentation grows, the effectiveness of the index decreases proportionately. To keep your data-retrieval operations as efficient as possible, it's crucial to keep index fragmentation to a minimum. There are a couple of ways of removing fragmentation:

  1. Drop and re-create an index. If multiple indexes are affected on the same table, you can drop and re-create the clustered index, which will cause all non-clustered indexes to be rebuilt as well. You can use DBCC DBREINDEX with a table name to rebuild one or multiple indexes on the specified table. Alternatively, you can use CREATE INDEX ... WITH DROP_EXISTING syntax to rebuild an index. All three of these methods provide comparable performance (see the syntax sketch after this list).
  2. Run the DBCC INDEXDEFRAG statement against the affected index. This statement reorders the leaf-level pages of the index to remove fragmentation. DBCC INDEXDEFRAG is an online operation; in other words, it doesn't prevent users from reading and writing to the table. On the other hand, dropping and re-creating an index locks the table and prevents anyone else from using it. The drawback of DBCC INDEXDEFRAG is that it doesn't do as good a job as a complete rebuild of the index.
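A quick syntax sketch of those options (table, index, and column names here are just placeholders):

-- Option 1 variants (offline):
DBCC DBREINDEX ('tblX')                        -- rebuild every index on tblX
DBCC DBREINDEX ('tblX', 'IX_tblX_col1')        -- rebuild a single index
CREATE INDEX IX_tblX_col1 ON tblX (col1)
    WITH DROP_EXISTING                         -- drop and re-create in one step
-- Option 2 (online):
DBCC INDEXDEFRAG ('yourdb', 'tblX', 'IX_tblX_col1')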

Summary

As I mentioned earlier, there are a few ways to get rid of fragmentation. Some environments allow the luxury of a maintenance window in which you can drop and re-create indexes on a regular basis. In a 24/7 shop you try to avoid such activities, since dropping and re-creating indexes takes a long time and uses a lot of transaction log space.

Instead, it is recommended to re-create only those indexes that have a high level of fragmentation (over 40%), defragment the ones that are moderately fragmented (15-40%), and leave alone those that are less than 15% fragmented.

To automate the process of index maintenance, write a procedure that does the following (a sketch follows the list):

  1. Populate a temporary table with all user table names.
  2. Loop over the temp table with a cursor to get a table name.
  3. Get the names and keys of each index on this table.
  4. Run DBCC SHOWCONTIG ... WITH TABLERESULTS on each index and store the results of this statement in a second temporary table.
  5. Examine the values of scan density and logical fragmentation in the second temporary table.
  6. Apply the logic in the paragraph above to rebuild or defragment each index.
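Here is a minimal sketch of that procedure as a plain script, in SQL Server 2000 syntax. The 40%/15% thresholds mirror the guidance above, the #fraglist column list follows the Books Online TABLERESULTS example, and the cursor/variable names are mine; treat it as a starting point, not a hardened tool.

SET NOCOUNT ON
DECLARE @tablename sysname

-- Step 4's landing table; the column list matches the documented
-- DBCC SHOWCONTIG ... WITH TABLERESULTS result set.
CREATE TABLE #fraglist (
    ObjectName CHAR(255), ObjectId INT, IndexName CHAR(255), IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity INT, ScanDensity DECIMAL(9,2),
    BestCount INT, ActualCount INT, LogicalFrag DECIMAL(9,2), ExtentFrag DECIMAL(9,2))

-- Steps 1-4: walk the user tables and collect fragmentation info.
DECLARE tables_cur CURSOR FOR
    SELECT name FROM sysobjects WHERE type = 'U'
OPEN tables_cur
FETCH NEXT FROM tables_cur INTO @tablename
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO #fraglist
        EXEC ('DBCC SHOWCONTIG (''' + @tablename
              + ''') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')
    FETCH NEXT FROM tables_cur INTO @tablename
END
CLOSE tables_cur
DEALLOCATE tables_cur

-- Steps 5-6: rebuild the heavily fragmented indexes, defrag the moderate ones.
DECLARE @objectname sysname, @indexname sysname, @frag DECIMAL(9,2)
DECLARE frag_cur CURSOR FOR
    SELECT RTRIM(ObjectName), RTRIM(IndexName), LogicalFrag FROM #fraglist
    WHERE IndexId NOT IN (0, 255)   -- skip heaps (0) and text/image data (255)
OPEN frag_cur
FETCH NEXT FROM frag_cur INTO @objectname, @indexname, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @frag > 40
        EXEC ('DBCC DBREINDEX (''' + @objectname + ''', ''' + @indexname + ''')')
    ELSE IF @frag >= 15
        EXEC ('DBCC INDEXDEFRAG (0, ''' + @objectname + ''', ''' + @indexname + ''')')
    FETCH NEXT FROM frag_cur INTO @objectname, @indexname, @frag
END
CLOSE frag_cur
DEALLOCATE frag_cur

DROP TABLE #fraglist

Scheduled as a nightly job during off hours, this keeps fragmentation in check without ever paying the cost of rebuilding every index from scratch.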

This procedure will save tons of transaction log space and much maintenance time versus rebuilding all indexes from scratch.

Updating Statistics

Minimizing index fragmentation will help you to maintain a high level of performance. However, such time-consuming tasks should only be performed during the off-peak hours. In addition, if you're trying to provide a high level of uptime, you probably can't afford to run such jobs more than once a week. Fortunately, you can perform one activity on a regular basis to help your indexes: updating the statistics.

The index statistics inform SQL Server about the distribution of key values in the index. If statistics are outdated, SQL Server might not be able to evaluate index effectiveness appropriately, and might choose to scan a table instead of scanning the index. Up-to-date statistics guarantee effective use of indexes. SQL Server can update statistics automatically, but doing so slows down performance a bit. Alternatively, you can schedule a job that executes the UPDATE STATISTICS statement or executes the system procedure sp_updatestats. The difference between the two is that UPDATE STATISTICS requires the table (view) or index name and performs an update on the specified object. The sp_updatestats procedure simply runs through a cursor containing all user tables and runs UPDATE STATISTICS on each. Updating statistics on large tables with multiple indexes might take a long time, so generally it's better to run UPDATE STATISTICS on such tables separately from the rest of the tables. For instance, run UPDATE STATISTICS on small tables on Monday, Wednesday, and Friday and do the same on large tables on Tuesday, Thursday, Saturday, and Sunday.

Auto Shrink and Auto Grow

CAUTION

You might think that you should keep the AUTOSHRINK option turned on. Please don't try this in a production environment! The AUTOSHRINK option slows down database performance significantly, since SQL Server has to continuously monitor the disk space usage and free up the space every time data is deleted. You're much better off turning off this option and shrinking database files manually as needed.
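To check which databases currently have the option turned on, here is a quick query using the standard DATABASEPROPERTYEX function:

-- list databases with AUTOSHRINK enabled
SELECT name FROM master..sysdatabases
WHERE DATABASEPROPERTYEX(name, 'IsAutoShrink') = 1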

View and Change Settings

You can view the current settings through the database properties in SQL Enterprise Manager (SEM) or you can run this Transact-SQL command:

sp_helpdb [ [ @dbname= ] 'name' ] 
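For example, to see the settings for the database used throughout this page:

EXEC sp_helpdb 'Ebis_Prod'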

Info about autogrow and autoshrink

If you combine the autogrow and autoshrink options, you might create unnecessary overhead. Make sure that the thresholds that trigger the growth and shrink operations will not cause frequent up and down size changes. For example, you may run a transaction that causes the transaction log to grow by 100 MB by the time it commits. Some time after that the autoshrink starts and shrinks the transaction log by 100 MB. Then, you run the same transaction and it causes the transaction log to grow by 100 MB again. In that example, you are creating unnecessary overhead and potentially creating fragmentation of the log file, either of which can negatively affect performance.

Physical fragmentation from changing the size of the data or log files can have a severe effect on your performance. This is true whether you use the automatic settings or whether you manually grow and shrink the files frequently.

If you grow your database by small increments, or if you grow it and then shrink it, you can end up with disk fragmentation. Disk fragmentation can cause performance issues in some circumstances. A scenario of small growth increments can also reduce the performance on your system.

Best Practices (autogrow and autoshrink)

For a managed production system, you must consider autogrow to be merely a contingency for unexpected growth. Do not manage your data and log growth on a day-to-day basis with autogrow.

You can use alerts or monitoring programs to monitor file sizes and grow files proactively. This helps you avoid fragmentation and permits you to shift these maintenance activities to non-peak hours.

AutoShrink and autogrow must be carefully evaluated by a trained Database Administrator (DBA); they must not be left unmanaged.

Your autogrow increment must be large enough to avoid the performance penalties listed in the previous section. The exact value to use in your configuration setting and the choice between a percentage growth and a specific MB size growth depend on many factors in your environment. A general rule of thumb you can use for testing is to set your autogrow setting to about one-eighth the size of the file.

Turn on the <MAXSIZE> setting for each file to prevent any one file from growing to a point where it uses up all available disk space.
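For example (the logical file name and sizes are illustrative, and I change one file property per statement to stay on the safe side with the 2000 syntax):

ALTER DATABASE Ebis_Prod MODIFY FILE (NAME = Ebis_Prod_Data, MAXSIZE = 50000MB)
ALTER DATABASE Ebis_Prod MODIFY FILE (NAME = Ebis_Prod_Data, FILEGROWTH = 500MB)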

Keep the size of your transactions as small as possible to prevent unplanned file growth.

Why do I have to worry about disk space if size settings are automatically controlled?

The autogrow setting cannot grow the database size beyond the limits of the available disk space on the drives for which files are defined. Therefore, if you rely on the autogrow functionality to size your databases, you must still independently check your available hard disk space. The autogrow setting is also limited by the MAXSIZE parameter you select for each file. To reduce the possibility of running out of space, you can monitor the Performance Monitor counter SQL Server: Databases Object : Data File(s) Size (KB) and set up an alert for when the database reaches a certain size.

Unplanned growth of data or log files can take space that other applications expect to be available and might cause those other applications to experience problems.

The growth increment of your transaction log must be large enough to stay ahead of the needs of your transaction units. Even with autogrow turned on, you can receive a message that the transaction log is full, if it cannot grow fast enough to satisfy the needs of your query.

SQL Server does not constantly test for databases that have hit the configured threshold for autoshrink. Instead, it looks at the available databases and finds the first one that is configured to autoshrink. It checks that database and shrinks that database if needed. Then, it waits several minutes before checking the next database that is configured for autoshrink. In other words, SQL Server does not check all databases at once and shrink them all at once. It will work through the databases in a round robin fashion to stagger the load out over a period of time. Therefore, depending on how many databases on a particular SQL Server instance you have configured to autoshrink, it might take several hours from the time the database hits the threshold until it actually shrinks.

Commands

SHRINKDATABASE

http://msdn.microsoft.com/en-us/library/ms190488.aspx

Run with just a target percent, SHRINKDATABASE performs a NOTRUNCATE and then a TRUNCATEONLY. This is what you want in order to move free space around and shrink the physical database size:

DBCC SHRINKDATABASE('Ebis_Prod', 10)

Or run the two phases individually:

DBCC SHRINKDATABASE('Ebis_Prod', 10, NOTRUNCATE)
DBCC SHRINKDATABASE('Ebis_Prod', 10, TRUNCATEONLY)

Note: the 10 represents 10%. So shrink the database, leaving 10% of the total data size free. If your database holds 100 GB of data, the final size will be about 110 GB: 100 GB of actual data and 10 GB of free space (space allocated in the physical file but not occupied by data).

INDEXDEFRAG

This can be done with the database online, and it will not lock the database.
It is not as effective as dropping and rebuilding the index; in fact, I ran this on a table whose scan density was 96.55% and logical scan fragmentation was 0.45%, and it changed nothing (DBREINDEX worked: it set density to 100% and logical fragmentation to 0%).
http://msdn.microsoft.com/en-us/library/aa258286(SQL.80).aspx

DBCC INDEXDEFRAG ('ebis_prod', 'tblApplications')
or
DBCC INDEXDEFRAG ('ebis_prod', 'tblApplications', 'PK_tblApplications')

where PK_tblApplications is the index name

DBREINDEX

This will lock the table, so do it off hours.
This is more effective than INDEXDEFRAG, BUT Microsoft is going to discontinue this command, so use ALTER INDEX instead.

DBCC DBREINDEX ('tblApplications')
or
DBCC DBREINDEX ('tblApplications', 'PK_tblApplications')

ALTER INDEX

The preferred method; this is what I use.
http://msdn.microsoft.com/en-us/library/ms188388.aspx

Defrag (reorganize) an index:
ALTER INDEX ALL ON tba_ReportOptions_OPCbyCategory REORGANIZE

Rebuild an index:
ALTER INDEX ALL ON tba_ReportOptions_OPCbyCategory REBUILD WITH (ONLINE = OFF)

There are tons of options for this (see MSDN), but I just rebuild ALL indexes on a table.

SHRINKFILE

Used to shrink a log file

DBCC SHRINKFILE(Ebis_Prod_log)
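You can also pass a target size in MB, e.g. to shrink the log file to roughly 100 MB:

DBCC SHRINKFILE(Ebis_Prod_log, 100)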

Alter Index Speed Test

Here is a good speed-test example. I have a table with 5 million (5,092,196) records (1,420 MB in table size); here is the initial contig info, which took 31 seconds to generate.
The server is an Intel Core 2 Duo with 1 GB RAM and a SATA II 250 GB HD, running Windows Server 2003 (a standard PC, really).

DBCC SHOWCONTIG scanning 'tblVehicles' table...
Table: 'tblVehicles' (1837965624); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 181393
- Extents Scanned..............................: 22756
- Extent Switches..............................: 23569
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 96.20% [22675:23570]
- Logical Scan Fragmentation ..................: 100.00%
- Extent Scan Fragmentation ...................: 3.65%
- Avg. Bytes Free per Page.....................: 748.0
- Avg. Page Density (full).....................: 90.76%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Then an ALTER INDEX ALL ... REBUILD finished in 6 min 38 sec, and another SHOWCONTIG ran in 40 sec and gave scan density as 100% and logical fragmentation as 0.01%. Good! Not bad for 5 million records.

Stats before Index

SQL1 4:44

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1992244
- Extents Scanned..............................: 251891
- Extent Switches..............................: 282645
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 88.11% [249031:282646]
- Logical Scan Fragmentation ..................: 0.56%
- Extent Scan Fragmentation ...................: 8.56%
- Avg. Bytes Free per Page.....................: 334.5
- Avg. Page Density (full).....................: 95.87%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

SQL2 4:12

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 3004760
- Extents Scanned..............................: 383710
- Extent Switches..............................: 510622
- Avg. Pages per Extent........................: 7.8
- Scan Density [Best Count:Actual Count].......: 73.56% [375595:510623]
- Logical Scan Fragmentation ..................: 0.98%
- Extent Scan Fragmentation ...................: 9.28%
- Avg. Bytes Free per Page.....................: 348.8
- Avg. Page Density (full).....................: 95.69%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

SQL3 2:37

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 2694561
- Extents Scanned..............................: 339026
- Extent Switches..............................: 403431
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 83.49% [336821:403432]
- Logical Scan Fragmentation ..................: 1.86%
- Extent Scan Fragmentation ...................: 6.55%
- Avg. Bytes Free per Page.....................: 342.6
- Avg. Page Density (full).....................: 95.77%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

SQL4 4:54

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 3401664
- Extents Scanned..............................: 429328
- Extent Switches..............................: 568901
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 74.74% [425208:568902]
- Logical Scan Fragmentation ..................: 1.92%
- Extent Scan Fragmentation ...................: 5.80%
- Avg. Bytes Free per Page.....................: 364.2
- Avg. Page Density (full).....................: 95.50%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

SQL5 2:07

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1457586
- Extents Scanned..............................: 183692
- Extent Switches..............................: 288227
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 63.21% [182199:288228]
- Logical Scan Fragmentation ..................: 3.71%
- Extent Scan Fragmentation ...................: 8.67%
- Avg. Bytes Free per Page.....................: 397.2
- Avg. Page Density (full).....................: 95.09%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

SQL6 0:58

DBCC SHOWCONTIG scanning 'tblROTotal' table...
Table: 'tblROTotal' (10483116); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 853350
- Extents Scanned..............................: 107629
- Extent Switches..............................: 121484
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 87.80% [106669:121485]
- Logical Scan Fragmentation ..................: 0.80%
- Extent Scan Fragmentation ...................: 5.30%
- Avg. Bytes Free per Page.....................: 343.6
- Avg. Page Density (full).....................: 95.75%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Fix Consistency Errors

  • First check entire database to see what the actual error is. DBCC CHECKDB ('yourdb') WITH NO_INFOMSGS, ALL_ERRORMSGS
  • Based on the error message you can repair single tables, see Fix Table below
  • If you want to attempt to repair the entire database, you can run CHECKDB in 3 repair modes (http://msdn.microsoft.com/en-us/library/aa258278%28v=sql.80%29.aspx), which must be run in single-user mode. These range from fast/safe to slow/potential data loss. Start with REPAIR_FAST and see if it fixes the issue:
ALTER DATABASE Ebis_Prod set SINGLE_USER
DBCC CHECKDB ('Ebis_Prod', REPAIR_FAST)
ALTER DATABASE Ebis_Prod set MULTI_USER

ALTER DATABASE Ebis_Prod set SINGLE_USER
DBCC CHECKDB ('Ebis_Prod', REPAIR_REBUILD)
ALTER DATABASE Ebis_Prod set MULTI_USER

ALTER DATABASE Ebis_Prod set SINGLE_USER
DBCC CHECKDB ('Ebis_Prod', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE Ebis_Prod set MULTI_USER

Again, this fixes the entire DB; if you know the error is only in one table, see Fix Table below.

Fix Table

We had an error in tblROTotal on server 2 one time. A DBCC CHECKTABLE in the no-data-loss mode (REPAIR_REBUILD) failed to fix the problem; we had to run it in the maximum REPAIR_ALLOW_DATA_LOSS mode, which did fix the issue:

ALTER DATABASE Ebis_Prod set SINGLE_USER
DBCC CHECKTABLE ('tblROTotal', REPAIR_ALLOW_DATA_LOSS) 
ALTER DATABASE Ebis_Prod set MULTI_USER

See http://msdn.microsoft.com/en-us/library/ms174338.aspx for details on CHECKTABLE; it can run in the same 3 repair modes as CHECKDB. Use REPAIR_FAST for quick, safe fixes; if that doesn't do it, use REPAIR_REBUILD; if that doesn't fix it, use REPAIR_ALLOW_DATA_LOSS.

Recover Database from Suspect Mode

This page has the exact instructions, worked great for me http://www.gerixsoft.com/blog/mssql/recovering-mssql-suspect-mode-emergency-mode-error-1813.

Basically, when you see your database is in suspect mode, DO NOT detach it. If you detach it, you will not be able to reattach it without some hackery. The above article tells how to put a suspect database into emergency mode. After that it is read-only, and you can use a DTS transfer to copy its contents to a new database. Note that while in emergency mode, Enterprise Manager will still show nothing in the database, but in Query Analyzer (F8 opens the object browser) you can expand the database and you will see the tables there, read-only!
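From memory, the core of the trick in that article looks like the following sketch (the 32768 status value is the SQL 2000 emergency/bypass-recovery bit; follow the linked article for the authoritative steps):

-- allow direct updates to system tables
EXEC sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE
-- flip the suspect database into emergency (bypass recovery) mode
UPDATE master..sysdatabases SET status = 32768 WHERE name = 'Ebis_Prod'
-- lock system tables back down
EXEC sp_configure 'allow updates', 0
RECONFIGURE WITH OVERRIDE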

To re-attach a detached suspect database, you must move the mdf and ldf files out of the normal MSSQL directory. Go to Enterprise Manager and create a new database with the same name/filenames/data names/log names. For me the database is Ebis_Prod; the data file's logical name is Ebis_Demo_Data but its filename is Ebis_Prod.mdf, and the log's logical name is Ebis_Demo_Log but its filename is Ebis_Prod.ldf.

Now shut down the MSSQLSERVER service and move the original database mdf and ldf back, replacing the blank database files you just made. Fire up the MSSQLSERVER service and you will see the database is back in suspect mode!! Now you can move it into emergency mode and DTS the data out!!

A note about DTS: after I had the database in emergency mode, I tried a DTS transfer of every object and it failed with errors about invalid chained indexes; it seems the index data was corrupt for some tables. So I started over with a fresh database and used DTS to copy over only the table definitions (no data) with their relationships and indexes. That gave me all blank tables with the proper indexes, foreign keys, etc. Then I used DTS to copy only the data, leaving out the indexes: just the data for each table (uncheck Create Destination Objects; check Copy Data in replace mode; only selected tables; do not use default options; do use copy triggers/primary/foreign keys...). Worked great.

Resources