SQL Server 2000 Maintenance
The database administrator works like a doctor, keeping databases healthy. Learn the routine maintenance tasks that can help your databases live a long and active life.
A very nice automatic index defrag script based on table fragmentation level, amazing! http://sqlfool.com/2009/03/automated-index-defrag-script/
Find all heap tables
More heap queries: http://sqlsolace.blogspot.com/2011/04/tsql-tables-without-clustered-indexes.html (this works in MSSQL 2005 and above)
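The linked queries boil down to something like the sketch below. This is my own version, not the article's exact script; it relies on sys.indexes, where index_id = 0 marks a heap, so it needs SQL Server 2005 or later.

```sql
-- List user tables that have no clustered index (heaps), SQL Server 2005+.
SELECT OBJECT_NAME(i.object_id) AS table_name
FROM sys.indexes AS i
WHERE i.index_id = 0                                   -- 0 = heap
  AND OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
ORDER BY table_name
```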
Index Stats
DBCC SHOWCONTIG
DBCC SHOWCONTIG on its own will show results for all tables in the database, which could take some time. I don't recommend doing so in a production environment, since DBCC SHOWCONTIG has to scan a table/index to determine the fragmentation level and is therefore a time-consuming operation. You can also export the results to a database table for further script parsing:
DBCC SHOWCONTIG (tabletoshowcontig) WITH TABLERESULTS
Note: not sure what it names the database; will have to test. To show contig info for one table, use DBCC SHOWCONTIG ('table')
Re-Indexing
As data is added, modified, or removed from a table, its indexes become fragmented. As the level of index fragmentation grows, the effectiveness of the index decreases proportionately. To keep your data-retrieval operations as efficient as possible, it's crucial to keep index fragmentation to a minimum. There are a couple of ways of removing fragmentation.
Summary
As I mentioned earlier, there are a few ways to get rid of fragmentation. Some environments allow the luxury of a maintenance window in which you can drop and re-create indexes on a regular basis. In a 24/7 shop you try to avoid such activities, since dropping and re-creating indexes takes a long time and uses a lot of transaction log space. Instead, it is recommended to re-create only those indexes that have a high level of fragmentation (over 40%), defragment the ones that are moderately fragmented (15-40%), and leave alone those that are less than 15% fragmented. To automate the process of index maintenance, write a procedure that does the following:
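A minimal sketch of such a procedure, assuming SQL Server 2000. The temp-table column list must match the WITH TABLERESULTS output exactly and is written from memory, so verify it; the thresholds follow the text above, and table/index names are taken from the DBCC output at run time.

```sql
-- Capture fragmentation info for all indexes, then rebuild/defrag by threshold.
CREATE TABLE #frag (
    ObjectName SYSNAME, ObjectId INT, IndexName SYSNAME NULL, IndexId INT,
    Lvl INT, CountPages INT, CountRows INT, MinRecSize INT, MaxRecSize INT,
    AvgRecSize INT, ForRecCount INT, Extents INT, ExtentSwitches INT,
    AvgFreeBytes INT, AvgPageDensity DECIMAL(5,2), ScanDensity DECIMAL(5,2),
    BestCount INT, ActualCount INT,
    LogicalFrag DECIMAL(5,2), ExtentFrag DECIMAL(5,2))

INSERT #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES')

DECLARE @table SYSNAME, @index SYSNAME, @frag DECIMAL(5,2)
DECLARE c CURSOR FOR
    SELECT ObjectName, IndexName, LogicalFrag
    FROM #frag
    WHERE IndexId BETWEEN 1 AND 254        -- skip heaps (IndexId 0)

OPEN c
FETCH NEXT FROM c INTO @table, @index, @frag
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @frag > 40.0         -- heavily fragmented: full rebuild
        EXEC ('DBCC DBREINDEX (''' + @table + ''', ''' + @index + ''')')
    ELSE IF @frag >= 15.0   -- moderately fragmented: online defrag
        EXEC ('DBCC INDEXDEFRAG (0, ''' + @table + ''', ''' + @index + ''')')
    -- under 15%: leave the index alone
    FETCH NEXT FROM c INTO @table, @index, @frag
END

CLOSE c
DEALLOCATE c
DROP TABLE #frag
```

Run it from a scheduled job during off-peak hours; the DBREINDEX branch locks each table while it works.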
Updating Statistics
Minimizing index fragmentation will help you maintain a high level of performance. However, such time-consuming tasks should only be performed during off-peak hours. In addition, if you're trying to provide a high level of uptime, you probably can't afford to run such jobs more than once a week. Fortunately, you can perform one activity on a regular basis to help your indexes: updating the statistics.

Index statistics inform SQL Server about the distribution of key values in the index. If statistics are outdated, SQL Server might not be able to evaluate index effectiveness appropriately, and might choose to scan a table instead of using the index. Up-to-date statistics guarantee effective use of indexes. SQL Server can update statistics automatically, but doing so slows down performance a bit. Alternatively, you can schedule a job that executes the UPDATE STATISTICS statement or the system procedure sp_updatestats. The difference between the two is that UPDATE STATISTICS requires a table (view) or index name and updates the specified object, while sp_updatestats simply runs through a cursor containing all user tables and runs UPDATE STATISTICS on each. Updating statistics on large tables with multiple indexes might take a long time, so generally it's better to run UPDATE STATISTICS on such tables separately from the rest. For instance, run UPDATE STATISTICS on small tables on Monday, Wednesday, and Friday, and on large tables on Tuesday, Thursday, Saturday, and Sunday.

Auto Shrink and Auto Grow
CAUTION
You might think that you should keep the AUTOSHRINK option turned on. Please don't try this in a production environment! The AUTOSHRINK option slows down database performance significantly, since SQL Server has to continuously monitor disk space usage and free up space every time data is deleted.
You're much better off turning off this option and shrinking database files manually as needed.

View and Change Settings
You can view the current settings through the database properties in SQL Enterprise Manager (SEM) or with a Transact-SQL command.
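The exact command didn't survive the copy above; likely candidates on SQL Server 2000 (an assumption on my part; 'pubs' and the file names are placeholders) are sp_dboption and DATABASEPROPERTYEX, plus ALTER DATABASE ... MODIFY FILE for the growth limits discussed below:

```sql
-- View and change the autoshrink setting ('pubs' is a placeholder database).
EXEC sp_dboption 'pubs', 'autoshrink'               -- show the current value
SELECT DATABASEPROPERTYEX('pubs', 'IsAutoShrink')   -- 1 = on, 0 = off
EXEC sp_dboption 'pubs', 'autoshrink', 'false'      -- turn it off

-- Cap growth so a runaway file cannot fill the disk (illustrative sizes).
ALTER DATABASE pubs
MODIFY FILE (NAME = pubs_data, FILEGROWTH = 100MB, MAXSIZE = 2000MB)
```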
Info about autogrow and autoshrink
If you combine the autogrow and autoshrink options, you might create unnecessary overhead. Make sure that the thresholds that trigger the growth and shrink operations will not cause frequent up-and-down size changes. For example, you may run a transaction that causes the transaction log to grow by 100 MB by the time it commits. Some time after that, autoshrink starts and shrinks the transaction log by 100 MB. Then you run the same transaction and it causes the transaction log to grow by 100 MB again. In that example, you are creating unnecessary overhead and potentially creating fragmentation of the log file, either of which can negatively affect performance.

Physical fragmentation from changing the size of the data or log files can have a severe effect on your performance. This is true whether you use the automatic settings or whether you manually grow and shrink the files frequently. If you grow your database by small increments, or if you grow it and then shrink it, you can end up with disk fragmentation, which can cause performance issues in some circumstances. A scenario of small growth increments can also reduce the performance of your system.

Best Practices (autogrow and autoshrink)
For a managed production system, you must consider autogrow to be merely a contingency for unexpected growth. Do not manage your data and log growth on a day-to-day basis with autogrow. You can use alerts or monitoring programs to monitor file sizes and grow files proactively; this helps you avoid fragmentation and permits you to shift these maintenance activities to off-peak hours. Autoshrink and autogrow must be carefully evaluated by a trained Database Administrator (DBA); they must not be left unmanaged. Your autogrow increment must be large enough to avoid the performance penalties listed in the previous section.
The exact value to use in your configuration setting, and the choice between a percentage growth and a specific MB-size growth, depends on many factors in your environment. A general rule of thumb you can use for testing is to set your autogrow setting to about one-eighth the size of the file. Turn on the MAXSIZE setting for each file to prevent any one file from growing to a point where it uses up all available disk space. Keep the size of your transactions as small as possible to prevent unplanned file growth.

Why do I have to worry about disk space if size settings are automatically controlled?
The autogrow setting cannot grow the database size beyond the limits of the available disk space on the drives for which files are defined. Therefore, if you rely on the autogrow functionality to size your databases, you must still independently check your available hard disk space. The autogrow setting is also limited by the MAXSIZE parameter you select for each file. To reduce the possibility of running out of space, you can monitor the Performance Monitor counter SQL Server: Databases Object: Data File(s) Size (KB) and set up an alert for when the database reaches a certain size. Unplanned growth of data or log files can take space that other applications expect to be available and might cause those other applications to experience problems. The growth increment of your transaction log must be large enough to stay ahead of the needs of your transaction units. Even with autogrow turned on, you can receive a message that the transaction log is full if it cannot grow fast enough to satisfy the needs of your query.

SQL Server does not constantly test for databases that have hit the configured threshold for autoshrink. Instead, it looks at the available databases and finds the first one that is configured to autoshrink. It checks that database and shrinks it if needed.
Then, it waits several minutes before checking the next database configured for autoshrink. In other words, SQL Server does not check all databases at once and shrink them all at once; it works through the databases in round-robin fashion to stagger the load over a period of time. Therefore, depending on how many databases on a particular SQL Server instance are configured to autoshrink, it might take several hours from the time a database hits the threshold until it actually shrinks.

Commands
SHRINKDATABASE
http://msdn.microsoft.com/en-us/library/ms190488.aspx
This performs a NOTRUNCATE then a TRUNCATEONLY. This is what you want to move around free space and shrink the physical database size.
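The basic form (database name and target percent free are placeholders):

```sql
-- Shrink the whole database, leaving 10 percent free space in each file.
DBCC SHRINKDATABASE (pubs, 10)
```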
INDEXDEFRAG
This can be done with the database online, and will not lock the database. It is not as effective as dropping and rebuilding the index; in fact, I ran this on a table whose scan density was 96.55% and logical scan fragmentation was 0.45%, and it changed nothing (DBREINDEX worked: it set density to 100% and logical fragmentation to 0%).
http://msdn.microsoft.com/en-us/library/aa258286(SQL.80).aspx
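A sketch of the call (0 means the current database; the table and index names are placeholders):

```sql
-- Online defragmentation of a single index.
DBCC INDEXDEFRAG (0, 'Orders', 'IX_Orders_CustomerId')
```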
DBREINDEX
This will lock the table, so do it off-hours. It is more effective than INDEXDEFRAG, BUT Microsoft is going to discontinue this feature, so use ALTER INDEX instead.
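A sketch (the table name is a placeholder; '' means all indexes on the table, 90 is the fill factor):

```sql
-- Offline rebuild of every index on the table.
DBCC DBREINDEX ('dbo.Orders', '', 90)
```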
ALTER INDEX
The preferred method; this is what I use.
http://msdn.microsoft.com/en-us/library/ms188388.aspx
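Note that ALTER INDEX only exists in SQL Server 2005 and later. A sketch of the two main forms (the table name is a placeholder):

```sql
ALTER INDEX ALL ON dbo.Orders REBUILD      -- offline, replaces DBCC DBREINDEX
ALTER INDEX ALL ON dbo.Orders REORGANIZE   -- online, replaces DBCC INDEXDEFRAG
```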
SHRINKFILE
Used to shrink a log file.
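Typical usage (the logical file name and target size in MB are placeholders):

```sql
-- Shrink the transaction log file down to roughly 50 MB.
DBCC SHRINKFILE (pubs_log, 50)
```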
Alter Index Speed Test
Here is a good speed test example. I have a table with 5 million (5,092,196) records (1,420 MB table size); here is the initial contig info (which took 31 seconds to generate). The server is an Intel Core 2 Duo, 1 GB RAM, SATA II 250 GB HD, Windows 2003 Server (a standard PC, really).
Stats before Index
SQL1 4:44
SQL2 4:12
SQL3 2:37
SQL4 4:54
SQL5 2:07
SQL6 0:58
Fix Consistency Errors
Fix Table
We had an error in tblROTotal on server 2 one time. A DBCC CHECKTABLE in no-data-loss mode (REPAIR_REBUILD) failed to fix the problem; we had to run it in maximum REPAIR_ALLOW_DATA_LOSS mode, which did fix the issue.
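A sketch of that repair sequence (the database must be in single-user mode for the repair options to run; the database name is a placeholder):

```sql
ALTER DATABASE pubs SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CHECKTABLE ('tblROTotal', REPAIR_REBUILD)           -- safe mode first
-- if errors remain, the last resort (can delete data!):
DBCC CHECKTABLE ('tblROTotal', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE pubs SET MULTI_USER
```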
Recover Database from Suspect Mode
This page has the exact instructions and worked great for me: http://www.gerixsoft.com/blog/mssql/recovering-mssql-suspect-mode-emergency-mode-error-1813

Basically, when you see your database is in suspect mode, DO NOT detach it. If you detach it, you will not be able to reattach it without some hackery. The above article tells how to put a suspect database into emergency mode. After that it is read-only. You can then use a DTS transfer to copy its contents to a new database. Note that in emergency mode, Enterprise Manager will still show nothing in the database, but in Query Analyzer (F8 opens the object browser) you can expand the database and you will see the tables there, read-only!

To reattach a detached suspect database, you must move the MDF and LDF files out of the normal MSSQL directory. Go to Enterprise Manager and create a new database with the same name/filenames/database names/log names. For me the database is Ebis_Prod: the data file name is Ebis_Demo_Data but the filename is Ebis_Prod.mdf, and the log data file name is Ebis_Demo_Log but the filename is Ebis_Prod.ldf. Now shut down the MSSQLSERVER service and move the original database MDF and LDF back, replacing the blank database files you just made. Fire up the MSSQLSERVER service and you will see the database is back in suspect mode! Now you can put it into emergency mode and DTS it.

Note about DTS: after I had the database in emergency mode, I tried a DTS of every object and it failed with an error about invalid chained indexes; it seems the index data was corrupt for some tables. So I started over with a fresh database and used DTS to copy over only the table definitions (no data) with their relationships and indexes. So I had all blank tables (with proper indexes, foreign keys, etc.).
Then I used DTS to copy only the data, leaving out indexes: just the data for each table (uncheck Create Destination Objects, check Copy Data in replace mode, only selected tables, do not use default options, use copy triggers/primary and foreign keys). Worked great.

Resources