Rsync From One Server to Another


$ rsync [options] source destination

Synchronize Files From Local to Remote (Plesk to cPanel)

$ rsync -avz /var/www/vhosts/ sshuser@

Synchronize Files From Remote to Local (cPanel to Plesk)

$ rsync -avz sshuser@ /var/www/vhosts/

Here is a short summary of the options available in rsync; see the rsync man page for a complete description of each.

-v, --verbose increase verbosity
-q, --quiet suppress non-error messages
--no-motd suppress daemon-mode MOTD (see caveat)
-c, --checksum skip based on checksum, not mod-time & size
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
--no-OPTION turn off an implied OPTION (e.g. --no-D)
-r, --recursive recurse into directories
-R, --relative use relative path names
--no-implied-dirs don't send implied dirs with --relative
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX backup suffix (default ~ w/o --backup-dir)
-u, --update skip files that are newer on the receiver
--inplace update destination files in-place
--append append data onto shorter files
--append-verify --append w/old data in file checksum
-d, --dirs transfer directories without recursing
-l, --links copy symlinks as symlinks
-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the tree
-k, --copy-dirlinks transform symlink to dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir
-H, --hard-links preserve hard links
-p, --perms preserve permissions
-E, --executability preserve executability
--chmod=CHMOD affect file and/or directory permissions
-A, --acls preserve ACLs (implies -p)
-X, --xattrs preserve extended attributes
-o, --owner preserve owner (super-user only)
-g, --group preserve group
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve modification times
-O, --omit-dir-times omit directories from --times
--super receiver attempts super-user activities
--fake-super store/recover privileged attrs using xattrs
-S, --sparse handle sparse files efficiently
-n, --dry-run perform a trial run with no changes made
-W, --whole-file copy files whole (w/o delta-xfer algorithm)
-x, --one-file-system don't cross filesystem boundaries
-B, --block-size=SIZE force a fixed checksum block-size
-e, --rsh=COMMAND specify the remote shell to use
--rsync-path=PROGRAM specify the rsync to run on remote machine
--existing skip creating new files on receiver
--ignore-existing skip updating files that exist on receiver
--remove-source-files sender removes synchronized files (non-dir)
--del an alias for --delete-during
--delete delete extraneous files from dest dirs
--delete-before receiver deletes before transfer (default)
--delete-during receiver deletes during xfer, not before
--delete-delay find deletions during, delete after
--delete-after receiver deletes after transfer, not before
--delete-excluded also delete excluded files from dest dirs
--ignore-errors delete even if there are I/O errors
--force force deletion of dirs even if not empty
--max-delete=NUM don't delete more than NUM files
--max-size=SIZE don't transfer any file larger than SIZE
--min-size=SIZE don't transfer any file smaller than SIZE
--partial keep partially transferred files
--partial-dir=DIR put a partially transferred file into DIR
--delay-updates put all updated files into place at end
-m, --prune-empty-dirs prune empty directory chains from file-list
--numeric-ids don't map uid/gid values by user/group name
--timeout=SECONDS set I/O timeout in seconds
--contimeout=SECONDS set daemon connection timeout in seconds
-I, --ignore-times don't skip files that match size and time
--size-only skip files that match in size
--modify-window=NUM compare mod-times with reduced accuracy
-T, --temp-dir=DIR create temporary files in directory DIR
-y, --fuzzy find similar file for basis if no dest file
--compare-dest=DIR also compare received files relative to DIR
--copy-dest=DIR ... and include copies of unchanged files
--link-dest=DIR hardlink to files in DIR when unchanged
-z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
--skip-compress=LIST skip compressing files with suffix in LIST
-C, --cvs-exclude auto-ignore files in the same way CVS does
-f, --filter=RULE add a file-filtering RULE
-F same as --filter='dir-merge /.rsync-filter'
   repeated: --filter='- .rsync-filter'
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE read include patterns from FILE
--files-from=FILE read list of source-file names from FILE
-0, --from0 all *from/filter files are delimited by 0s
-s, --protect-args no space-splitting; wildcard chars only
--address=ADDRESS bind address for outgoing socket to daemon
--port=PORT specify double-colon alternate port number
--sockopts=OPTIONS specify custom TCP options
--blocking-io use blocking I/O for the remote shell
--stats give some file-transfer stats
-8, --8-bit-output leave high-bit chars unescaped in output
-h, --human-readable output numbers in a human-readable format
--progress show progress during transfer
-P same as --partial --progress
-i, --itemize-changes output a change-summary for all updates
--out-format=FORMAT output updates using the specified FORMAT
--log-file=FILE log what we're doing to the specified FILE
--log-file-format=FMT log updates using the specified FMT
--password-file=FILE read daemon-access password from FILE
--list-only list the files instead of copying them
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
--write-batch=FILE write a batched update to FILE
--only-write-batch=FILE like --write-batch but w/o updating dest
--read-batch=FILE read a batched update from FILE
--protocol=NUM force an older protocol version to be used
--iconv=CONVERT_SPEC request charset conversion of filenames
--checksum-seed=NUM set block/file checksum seed (advanced)
-4, --ipv4 prefer IPv4
-6, --ipv6 prefer IPv6
--version print version number
(-h) --help show this help (see below for -h comment)

Create or Extract a tar.gz File

Make an archive of a Plesk httpdocs directory

tar -cvzf domain.com_backup_7.17.14.tar.gz /var/www/vhosts/


-rw-r--r-- 1 root root 14482441 Jul 17 14:09 domain.com_backup_7.17.14.tar.gz

Extract the same file

tar -xf domain.com_backup_7.17.14.tar.gz
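Before extracting, tar -tzf lists an archive's contents without unpacking it. A self-contained round-trip in a temporary directory (paths are stand-ins for the real httpdocs tree):

```shell
# Build a small directory, archive it, list it, and extract it elsewhere
work=$(mktemp -d)
mkdir -p "$work/httpdocs"
echo "site content" > "$work/httpdocs/index.html"

# -c create, -z gzip, -f archive name; -C changes directory before archiving
tar -czf "$work/backup.tar.gz" -C "$work" httpdocs

# -t lists the contents without extracting
tar -tzf "$work/backup.tar.gz"

# Extract into a separate directory with -C
restore=$(mktemp -d)
tar -xzf "$work/backup.tar.gz" -C "$restore"
cat "$restore/httpdocs/index.html"
```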

Export/Import a MySQL Database

Export a MySQL database

# mysqldump -u username -p database_name > dbname.sql

Plesk Server

# mysqldump  -u admin -p`cat /etc/psa/.psa.shadow` wordpress_database > domain_backup_7.16.14.sql

To export a single table from your database you would use the following command:

# mysqldump -p --user=username database_name tableName > tableName.sql

Import a database or table

# mysql -p -u username database_name < file.sql 

For Plesk server

# mysql -u admin -p`cat /etc/psa/.psa.shadow` database < /tmp/database.sql

To import a single table into an existing database you would use the following command:

# mysql -u username -p -D database_name < tableName.sql
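For large databases it is common to compress the dump in flight. A hedged sketch (user, database, and file names are placeholders, and a running MySQL server is assumed):

```shell
# Dump straight into gzip (placeholder names throughout)
mysqldump -u username -p database_name | gzip > dbname.sql.gz

# Restore by decompressing into the mysql client
gunzip < dbname.sql.gz | mysql -u username -p database_name
```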

Find All The Files Owned By a Particular User / Group

Find file owned by a group

find directory-location -group {group-name} -name {file-name}

directory-location : the directory path to search.
-group {group-name} : match files belonging to the named group.
-name {file-name} : the file name or a search pattern.
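The section title mentions finding by user as well; the -user test works the same way as -group. A self-contained sketch using the current user and a throwaway directory:

```shell
# Throwaway tree owned by the current user
top=$(mktemp -d)
touch "$top/a.txt" "$top/b.txt"
mkdir "$top/sub" && touch "$top/sub/c.txt"

# -type f restricts to regular files; -user filters by owner
find "$top" -type f -user "$(id -un)"

# Count the matches
find "$top" -type f -user "$(id -un)" | wc -l
```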

Issue: a Plesk server will not allow plugin updates while the site runs as FastCGI, which runs as the FTP user (coldriverw:psacln is the user:group for this account).

In this example, locate all files belonging to the group "apache" in the /var/www/vhosts/ directory:

[root@austin plugins]# find /var/www/vhosts/ -group apache

Change the files to the correct user:

 [root@austin plugins]# chown -R domain:psacln /var/www/vhosts/

Retry the WordPress plugin update

Update Plugin
Downloading update from…

Unpacking the update…

Installing the latest version…

Removing the old version of the plugin…

Plugin updated successfully.


Find the Reason for a Linux Server Crash

Check the logs

/var/log/messages, which stores logs from many native CentOS services, such as the kernel logger, the network manager, and many other services that don’t have their own log files. This log file tells you if there are kernel problems (kernel panic messages) or kernel limits violations, such as the number of currently open files, which can cause system problems. You can fix kernel misconfigurations by editing the file /etc/sysctl.conf and changing the value for the corresponding error.

/var/log/dmesg, which contains information about hardware found by the kernel drivers. It can help you troubleshoot hardware problems and missing drivers. You can also use the command /bin/dmesg for similar purposes. /bin/dmesg provides more detailed information in real time, while the log file keeps less information for historical purposes.

/var/log/audit/audit.log, which is the file in which the Linux Auditing System (auditd) writes its logs, including all SELinux information. If auditd is disabled, SELinux sends its logs to /var/log/messages. SELinux is a common suspect for any strange behavior and problems in CentOS. It is enabled by default in CentOS 6 and should not be frivolously disabled, as it is important for security. You can check its status with the command sestatus. A Wazi article about Linux server hardening covers the basics of SELinux, including how to adjust its policies in order to avoid problems.

Service- and application-specific logs – Many applications create logs in other places, and have options that control where and what to log. By default in CentOS the Apache web server logs in the directory /var/log/httpd/, mail servers log in /var/log/maillog, and MySQL logs in /var/log/mysqld.log. However, not all logs are located in the logs directory. Some applications, such as user-space programs, may not have privileges to write there. Others prefer to log inside their own root directory. You may need to consult an application’s manual to learn where it writes its logs.
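When scanning these logs for a crash cause, a few grep patterns cover the usual suspects (kernel panic, the OOM killer, segfaults). The log lines below are fabricated stand-ins so the command can be shown end to end; in practice you would point grep at /var/log/messages:

```shell
# Fabricated sample lines standing in for /var/log/messages
log=$(mktemp)
cat > "$log" <<'EOF'
Jul 17 14:01:02 austin kernel: Out of memory: Kill process 1234 (mysqld) score 900
Jul 17 14:01:03 austin kernel: Killed process 1234 (mysqld)
Jul 17 14:05:10 austin sshd[2211]: Accepted password for root
EOF

# -i case-insensitive, -E extended regex over the common crash markers
grep -iE 'panic|out of memory|oom|segfault' "$log"
```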

If the server went down without logging anything, the cause may be power-related: a sudden power loss leaves no chance to write logs.

Who is Logged-In on Your Linux System

The w command shows logged-in user names and what they are doing. The information is read from the /var/run/utmp file. The output of the w command contains the following columns:

Name of the user
User’s machine number or tty number
Remote machine address
User’s Login time
Idle time (time since the user's last activity)
Time used by all processes attached to the tty (JCPU time)
Time used by the current process (PCPU time)
Command currently getting executed by the users

The following options can be used with the w command:

-h Don't print the header
-u Display the load average (uptime output)
-s Use the short format, omitting the login time, JCPU, and PCPU columns

[root@austin ~]# w
 15:46:21 up 23 days, 1 min,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    66.226.xx.x    15:46    0.00s  0.06s  0.00s w

[root@austin ~]# w -h
root     pts/0      15:46    0.00s  0.06s  0.00s w -h

[root@austin ~]# w -u
 15:47:05 up 23 days, 2 min,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0      15:46    0.00s  0.06s  0.00s w -u

[root@austin ~]# w -s
 15:47:23 up 23 days, 2 min,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM               IDLE WHAT
root     pts/0       0.00s w -s

2. Get the user names and processes of logged-in users using the who and users commands

The who command lists the usernames that are currently logged in. The output of the who command contains the following columns: user name, tty number, date and time, and machine address.

[root@austin ~]# who
root     pts/0        2014-07-11 15:46 (

To get a list of all usernames that are currently logged in, use the following:

[root@austin ~]# who | cut -d' ' -f1 | sort | uniq

Users Command

The users command prints the names of the users currently logged in to the current host. It takes no options other than --help and --version. If a user has 'n' terminals open, that user name appears 'n' times in the output.

[root@austin ~]# users

3. Get the username you are currently logged in as using whoami. The whoami command prints the logged-in user name.

[root@austin ~]# whoami

The whoami command gives the same output as id -un, as shown below:

[root@austin ~]# id -un

The who am i command displays the logged-in user name and current tty details. The output of this command contains the following columns: logged-in user name, tty name, current time with date, and the IP address from which the user initiated the connection.

[root@austin ~]# who am i
root     pts/0        2014-07-11 15:46 (

[root@austin ~]# who mom likes
root     pts/0        2014-07-11 15:46 (

Warning: Don't try the "who mom hates" command.

Also, if you su to another user, this command still reports the details of the originally logged-in user.

4. Get the user login history at any time

The last command gives the login history for a specific username. Without an argument, it lists the login history for all users. By default this information is read from the /var/log/wtmp file. The output of this command contains the following columns:

User name
Tty device number
Login date and time
Logout time
Total working time

[root@austin ~]# last
root     pts/0        10.1.xx.x    Sat Aug  3 06:49 - down   (00:01)

Jitsi Install

Instructions for 32 bit systems:

$ wget -c
$ sudo rpm -i jitsi-2.2-latest.i386.rpm

Instructions for 64 bit systems:

$ wget -c
$ sudo rpm -i jitsi-2.2-latest.x86_64.rpm

Find All Files and Directories With 777 Permissions

For directories

[root@server]# find /var/www/vhosts/ -type d -perm 777 -print

Set to 755:

[root@server]# find /var/www/vhosts/ -type d -perm 777 -exec chmod 755 {} \;

For Files

[root@server]# find /var/www/vhosts/ -type f -perm 777 -print

Set to 644:

[root@server]# find /var/www/vhosts/ -type f -perm 777 -exec chmod 644 {} \;
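The same find/chmod pipeline can be verified on a throwaway directory before running it against the vhosts tree (paths below are stand-ins; stat -c assumes GNU coreutils):

```shell
# Throwaway file with deliberately loose permissions
top=$(mktemp -d)
touch "$top/page.php"
chmod 777 "$top/page.php"

# Same pipeline as above, scoped to the test directory
find "$top" -type f -perm 777 -exec chmod 644 {} \;

# stat -c '%a' prints the octal mode (GNU coreutils)
stat -c '%a' "$top/page.php"
```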


Root Cause Analysis

Root cause analysis (RCA) is a method of problem solving that tries to identify the root causes of faults or problems.

RCA practice tries to solve problems by attempting to identify and correct the root causes of events, as opposed to simply addressing their symptoms. Focusing correction on root causes has the goal of preventing problem recurrence. RCFA (Root Cause Failure Analysis) recognizes that complete prevention of recurrence by one corrective action is not always possible.