Thursday, January 27, 2011

Clearing the Cache from Physical Memory in Linux


As is well known, the Linux operating system has a mature memory management facility: the system uses as much spare space in memory as possible as cache storage, while that area is supposed to remain available to running processes when they need it. Unfortunately, some Linux distributions do not seem to handle this automatically; on the openSUSE 11.2 systems (32-bit and 64-bit) that I currently use, the developers perhaps consider clearing cached memory less important than providing empty memory space so the system runs optimally.

I currently maintain a server with 8GB of memory. When the server has just started or restarted, running the "top" command in the shell shows plenty of free physical memory; of the 8GB available, only about 500MB is in use. But once the operating system has been running for more than two days with applications on the Java platform and a MySQL database, the server is certain to crash because it runs out of free memory, even though the JVM cache management and the database cache are already configured to their limits so as not to consume too much physical memory. In the end the server had to be restarted manually on a regular basis.

This is no longer a concern, however, since I found a command that clears the cache from memory while searching Google. It is discussed in the following note.

There are three kinds of cache-cleaning methods for memory, distinguished only by the number 1, 2, or 3 that is written:

1 - cleans the pagecache (the pagecache is information kept in memory as a result of reading and writing data to disk);
2 - cleans dentries and inodes (dentries and inodes are part of the Virtual File System in Linux);
3 - cleans the pagecache, dentries, and inodes together:

echo 1 > /proc/sys/vm/drop_caches    # free pagecache only
echo 2 > /proc/sys/vm/drop_caches    # free dentries and inodes only
echo 3 > /proc/sys/vm/drop_caches    # free pagecache, dentries and inodes

But do not forget to run the "sync" command in the shell before any of the three commands above, so that dirty pages are written to disk first. Of the three, I recommend "echo 3 > /proc/sys/vm/drop_caches" for maximum results; afterwards, the "top" command shows a clear drop in the used-memory figure of its memory section.
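As a concrete sketch of the whole sequence: writing to /proc/sys/vm/drop_caches requires root, and on a sudo-based system the redirection itself must run inside a root shell, hence the sh -c form below.

sync                                     # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches        # then drop pagecache, dentries and inodes

# equivalent one-liner on a sudo-enabled system:
sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'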

Now that we know how useful these commands are, we can run them whenever the server's memory fills up, but must this be done manually? The answer is that it can be automated; imagine how the server administrator would suffer if it could not. It is therefore a good idea to use crontab to schedule the commands. Simply insert the lines below into the crontab editor, and the memory cache will be cleared every day at 6 AM and 6 PM.

0 6,18 * * * sync
1 6,18 * * * echo 3 > /proc/sys/vm/drop_caches
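As an alternative (my own variation, not part of the original two-entry setup), both commands can be chained in a single root crontab entry, so that the cache drop only runs once sync has completed:

0 6,18 * * * sync && echo 3 > /proc/sys/vm/drop_caches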

This article does not explain crontab and how to edit it in more detail; more on that can be found in the article http://gienvision-en.blogspot.com/2011/01/periodic-backups-mysql-database-server.html.

That is all the knowledge that can be shared with readers for now. Hopefully it is useful; if anything is missing or could be improved, please feel free to add a comment.



Wednesday, January 19, 2011

Periodic Backups of a MySQL Database Server on Solaris 10


After several years of struggling with Solaris servers, I am grateful for all the new knowledge I was able to dig up during that period, and it makes me want to share it with others. Here I would like to show how to back up a MySQL database periodically into a compressed *.tar.gz file. On Windows this could be done with the Task Scheduler, but what do we use when the server runs in a Solaris environment? That is what we are going to discuss now.

The first thing to know is how the 'dump and restore' process works in a MySQL database. As stated in the manual, a dump can be produced by executing the following command:

mysqldump -u username --password=pass_user --databases schema_db > one_file.sql

The command above backs up the database into a *.sql file. Briefly: 'mysqldump' is the MySQL utility that saves a database to a file; '-u' specifies the username used to access the database, the user authorized to run commands against it (here we will use 'root'); '--password=pass_user' supplies that user's password, the authentication key required to access the database; and 'schema_db' is the name of the database schema whose data is loaded into the SQL dump file.
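For completeness, the 'restore' half of the process replays the dump file through the mysql client. A minimal sketch, using the same placeholder names as above (because the dump was made with --databases, it already contains the CREATE DATABASE and USE statements):

mysql -u username --password=pass_user < one_file.sql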

After the database schema has been dumped into the file 'one_file.sql', the next step is to turn it into a more compact file with the '*.tar.gz' extension, in the following manner:

tar -cf - one_file.sql | gzip -c > /destination_folder/one_file_tar.tar.gz

Now we have the SQL dump file zipped into a tar.gz file, so what is left to do? Next, remove the SQL file from the temporary folder with the 'rm' command, so that only the tar.gz file is left behind:

rm *.sql
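Putting the three steps together, the whole cycle can be sketched as one chained command (same placeholder names as above); thanks to '&&', each step only runs if the previous one succeeded:

mysqldump -u username --password=pass_user --databases schema_db > one_file.sql && \
tar -cf - one_file.sql | gzip -c > /destination_folder/one_file_tar.tar.gz && \
rm one_file.sql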

Now that we know the main commands in the server's auto-backup process, we are going to combine them into an executable 'sh' file. As an example, suppose we have a Solaris database server with a database schema named 'gienet' that must be backed up to the folder '/export/home0/gienet_backup', with backup files named in the format 'gienet-yyyy-MM-dd-HH-mm-ss.sql' and saved in compressed form as 'gienet-yyyy-MM-dd-HH-mm-ss.tar.gz'; this backup process will execute when the system clock shows 00:30.

Let's create a file named backup-script.sh (you are not required to use this name) with the command 'touch backup-script.sh' in the shell, then edit it with 'vi backup-script.sh'. It begins with a first line containing '#!/bin/sh', followed by a few lines of comments, as in the example shown below:

#!/bin/sh
#
# This is example executable file for mysql database backup
#

Continue by declaring a variable for the date format used in the file name (note the backticks around the date command, which execute it and capture its output), plus a folder-name variable for the place where backup files will be stored:

NEW_DATE=`date +%Y-%m-%d-%H-%M-%S`
NEW_DIR='/export/home0/gienet_backup'
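One optional safeguard (my own addition, not part of the original script) is to create the backup folder if it does not exist yet:

[ -d "$NEW_DIR" ] || mkdir -p "$NEW_DIR"    # make sure the target folder exists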

Then proceed with the database backup command itself. Assuming we use 'root' as the user performing the backup and 'admin' as its password, the next line should be written like this:

mysqldump -u root --password=admin --databases gienet > $NEW_DIR/gienet-$NEW_DATE.sql

Once the mysqldump process completes and produces the 'gienet-yyyy-MM-dd-HH-mm-ss.sql' file, continue by compressing it to a smaller size as a tar.gz file with the commands shown below:

cd $NEW_DIR
tar -cf - gienet-$NEW_DATE.sql | gzip -c > $NEW_DIR/gienet-$NEW_DATE.tar.gz

Then delete the SQL file from the folder set in the '$NEW_DIR' variable, so that only the tar.gz file remains in the folder and the disk does not fill up with redundant files, with this line:

rm $NEW_DIR/*.sql
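Putting the fragments above together, the complete 'backup-script.sh' reads as follows:

#!/bin/sh
#
# This is example executable file for mysql database backup
#

NEW_DATE=`date +%Y-%m-%d-%H-%M-%S`
NEW_DIR='/export/home0/gienet_backup'

# dump the gienet schema into a timestamped .sql file
mysqldump -u root --password=admin --databases gienet > $NEW_DIR/gienet-$NEW_DATE.sql

# compress the dump, then remove the uncompressed copy
cd $NEW_DIR
tar -cf - gienet-$NEW_DATE.sql | gzip -c > $NEW_DIR/gienet-$NEW_DATE.tar.gz
rm $NEW_DIR/*.sql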

That completes the automatic MySQL database backup script in 'backup-script.sh'; save it from the active 'vi' editor with the command ':wq!'. The next step is to move the file to the folder '/usr/bin' and give it execute permission with the 'chmod +x' command.
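A minimal sketch of those two steps (assuming the script was created in the current directory):

mv backup-script.sh /usr/bin/
chmod +x /usr/bin/backup-script.sh

Then register the script with the Solaris scheduler using 'crontab'; before editing, set the default editor in the shell with this command sequence: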

#EDITOR=vi; export EDITOR
#crontab -e

The shell will then show output like that below:

#ident  "@(#)root       1.21    04/03/23 SMI"
#
# The root crontab should be used to perform accounting data collection.
#
#
10 3 * * * /usr/sbin/logadm
15 3 * * 0 /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
#
# The rtc command is run to adjust the real time clock if and when
# daylight savings time changes.
#
1 2 * * * [ -x /usr/sbin/rtc ] && /usr/sbin/rtc -c > /dev/null 2>&1
#10 3 * * * /usr/lib/krb5/kprop_script ___slave_kdcs___

Move the cursor to the last row, press Enter to open a new line, and input the lines below so that the backup-script.sh file runs at 00:30 AM every day on the server:

#
# Script for autobackup database
#
30 0 * * * sh /usr/bin/backup-script.sh

To learn more about crontab syntax, see http://adminschoice.com/crontab-quick-reference. To confirm that the script runs as desired, look in the folder '/export/home0/gienet_backup' the next morning: if there are 'gienet-yyyy-MM-dd-HH-mm-ss.tar.gz' files there, the script completed successfully without any error; if not, there is still an error in the script, so check it and repeat the process above.
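A quick way to check, as a sketch (the timestamp in the file name below is only an example; yours will differ):

ls -l /export/home0/gienet_backup
# list the archive's contents without extracting it
gzip -dc /export/home0/gienet_backup/gienet-2011-01-20-00-30-00.tar.gz | tar -tf -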

That is all the knowledge that can be shared with readers for now. Hopefully it is useful; if anything is missing or could be improved, please feel free to add a comment.

