Suppose you have Windows 2000 clients that need an older version of the Avamar client.
If you are like me and want to keep the files easily accessible, you can simply add them to the "Documents and Downloads" page (accessible from http://utility_node_ip) by uploading the client to the utility node.
Simply use WinSCP or any other tool to transfer the file, and upload it to:
/data01/avamar/src/downloads/
For example, if I want to add the client under Windows 32-bit, I'll place the installer here:
/data01/avamar/src/downloads/WIN32/AvamarClient-windows-x86-4.1.106-27.msi
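If you prefer the command line over WinSCP, an scp upload along these lines should work (replace utility_node_ip, and use whichever account you normally log in with):
# Upload the 32-bit Windows client package to the downloads tree on the utility node
scp AvamarClient-windows-x86-4.1.106-27.msi root@utility_node_ip:/data01/avamar/src/downloads/WIN32/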
That's it, simply reload the page and you will see the client appear.
Monday, 18 November 2013
Avamar - How to query the PostgreSQL MCS database
Here is how to connect to the DB if you need custom reports and want to create custom queries:
1.) SSH to the Utility node
root@dpn:~/#: su - admin
admin@dpn:~/>: psql -p 5555 -U viewuser mcdb
Welcome to psql 8.2.3, the PostgreSQL interactive terminal.
mcdb=>
Note: please use the "viewuser" (read-only) account for your queries!
2.) Here are a few example SQL queries; you can build your own and export the results as plain text.
SELECT * FROM "public"."v_clients";
SELECT v_clients.cid, v_clients.client_name, v_clients.client_addr,v_clients.os_type FROM "public"."v_clients";
select client_name,status_code,domain FROM v_activities_2 WHERE (status_code=30901 or status_code=30915) and recorded_date_time > current_timestamp - interval '24 hours';
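If you want to dump a query's output to a file for a custom report, psql can do that directly from the admin shell; a minimal sketch (output path and column list are just examples):
# Export a comma-separated client list to a file
psql -p 5555 -U viewuser mcdb -A -F ',' \
  -c "SELECT client_name, client_addr, os_type FROM v_clients;" \
  -o /tmp/clients.csv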
Wednesday, 13 November 2013
avagent Error 5320- The client named: client1 is not registered as CID: NO LOCATION - RECORD MAY BE CORRUPT
While I was trying to fix a corrupted install, I got the following error:
2013-11-13 09:40:40 avagent Error <5320>: Received from MCS#103:The client named: client1 is not registered as CID: NO LOCATION - RECORD MAY BE CORRUPT
2013-11-13 09:40:40 avagent Error <6210>: Unable to register 'client1' with MCS at utilitynode_addr:28001
Fix:
Delete the client ID file (cid.bin) and re-register the client.
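For a Linux client, the whole procedure looks roughly like this (default install paths shown; this is a sketch, adjust to your environment):
# Stop the agent, remove the stale client ID, then re-register
/usr/local/avamar/etc/avagent.d stop
rm /usr/local/avamar/var/cid.bin
/usr/local/avamar/bin/avregister    # prompts for the MCS address and domain, then restarts the agent
On Windows the idea is the same: stop the backup agent service, delete cid.bin from the client's var directory, and re-register.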
Monday, 11 November 2013
Avamar - How to get around the 1 schedule repetition per hour maximum
I recently stumbled on the need to take SQL transactional backups every 15 minutes for a specific project (RAID protection on an active-active array was not enough, it seems), and I found a solution using crontab on the EMC community forums.
1. Create a Dataset such as SQL1hrinc
a. Select the database to backup
b. Choose incremental as backup type
2. Under Policy, create a New Group i.e., SQL1hrinc
a. Add Dataset SQL1hrinc to the group
b. Select Retention, client etc
3. Under Policy, click on Group and run backup to make sure group meets the requirements
Now log in to the Avamar utility node via PuTTY as the root user:
1. Create a script in /usr/local/avamar/bin, such as sqlinc, and add the following:
/usr/local/avamar/bin/mccli client backup-group-dataset --xml --name=/DOMAIN_NAME/CLIENT_NAME --group-name=/SQL1hrinc
2. Save the file and set the execute permissions, i.e., chmod 755 sqlinc
3. Run the file and make sure that the backup is performed.
4. As root, type crontab -e and add the following:
*/60 7-19 * * 1-5 /usr/local/avamar/bin/sqlinc >/dev/null 2>&1
5. Save and exit
(As seen on the Avamar forums, thanks Sandeep Sinha)
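For my original 15-minute requirement, the wrapper script and crontab entry end up looking like this (domain, client and group names are placeholders):
#!/bin/bash
# /usr/local/avamar/bin/sqlinc -- kick off an on-demand group backup
/usr/local/avamar/bin/mccli client backup-group-dataset --xml \
    --name=/DOMAIN_NAME/CLIENT_NAME --group-name=/SQL1hrinc

# crontab entry: every 15 minutes from 07:00 to 19:45, Monday to Friday
*/15 7-19 * * 1-5 /usr/local/avamar/bin/sqlinc >/dev/null 2>&1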
Thursday, 17 October 2013
Avamar - How to verify your backup nodes connection speed
I recently suspected one of my storage nodes was not connected at 1 Gbps; this can be verified quickly from the utility node using the "mapall" and "ethtool" commands.
mapall sends a command to all the storage nodes, and ethtool is a standard Linux command.
Here you go:
su - admin
ssh-agent bash
ssh-add ~/.ssh/dpnid
mapall --noerror --user=root 'ethtool eth0' | grep "Speed"
mapall --noerror --user=root 'ethtool eth2' | grep "Speed"
These are the NICs that face your network
To check the internal NICs:
mapall --noerror --user=root 'ethtool eth1' | grep "Speed"
mapall --noerror --user=root 'ethtool eth3' | grep "Speed"
Now you can verify that the "Speed" fields all say 1000Mb/s.
Mine did not:
(0.0) ssh -x -o GSSAPIAuthentication=no root@192.168.255.2 'ethtool eth2'
Speed: 1000Mb/s
(0.1) ssh -x -o GSSAPIAuthentication=no root@192.168.255.3 'ethtool eth2'
Speed: 100Mb/s
(0.2) ssh -x -o GSSAPIAuthentication=no root@192.168.255.4 'ethtool eth2'
Speed: 100Mb/s
(0.3) ssh -x -o GSSAPIAuthentication=no root@192.168.255.5 'ethtool eth2'
Speed: 1000Mb/s
(0.4) ssh -x -o GSSAPIAuthentication=no root@192.168.255.6 'ethtool eth2'
Speed: 1000Mb/s
(0.5) ssh -x -o GSSAPIAuthentication=no root@192.168.255.7 'ethtool eth2'
Speed: 1000Mb/s
(0.6) ssh -x -o GSSAPIAuthentication=no root@192.168.255.8 'ethtool eth2'
Speed: 100Mb/s
(0.7) ssh -x -o GSSAPIAuthentication=no root@192.168.255.9 'ethtool eth2'
Speed: 1000Mb/s
(0.8) ssh -x -o GSSAPIAuthentication=no root@192.168.255.10 'ethtool eth2'
Speed: 1000Mb/s
(0.9) ssh -x -o GSSAPIAuthentication=no root@192.168.255.11 'ethtool eth2'
Speed: 1000Mb/s
You can then work with support to solve the issue, or check your cables / switches, etc.
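To spot only the slow links at a glance, you can filter the mapall output a bit further; a quick sketch (checks eth2 only, and only catches links that negotiated 100Mb/s):
# Print the node line followed by any "Speed: 100Mb/s" it reported
mapall --noerror --user=root 'ethtool eth2' | egrep "ssh -x|Speed:" | grep -B1 "Speed: 100Mb/s"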
Tuesday, 27 August 2013
Avamar and large datasets with many files
Here is a gem I found on the Avamar forums, from an "Ask the Expert" session answered by Ian Anderson, concerning large file systems and Avamar. The original question (posted by lmorris99 in the thread "Garbage collection does not reclaim expected amount of space") was: "We have one server with 25 million files, scattered through directories six levels deep. We'd like to throw it at our test Avamar grid; any tuning I should look at on the client (or server) side before we set it up for its first backup?" Please share your experiences if you have any.
The most important thing to do on a client with so many files is to make sure that the file cache is sized appropriately. The file cache is responsible for the vast majority (>90%) of the performance of the Avamar client. If there's a file cache miss, the client has to go and thrash your disk for a while chunking up a file that may already be on the server.
So how to tune the file cache size?
The file cache starts at 22MB in size and doubles in size each time it grows. Each file on a client will use 44 bytes of space in the file cache (two SHA-1 hashes consuming 20 bytes each and 4 bytes of metadata). For 25 million files, the client will generate just over 1GB of cache data.
Doubling from 22MB, we get a minimum required cache size of:
22MB => 44MB => 88MB => 176MB => 352MB => 704MB => 1408MB
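As a sanity check, here is a quick sketch of that calculation for an arbitrary file count (44 bytes per file, cache doubling from 22MB):
# Rough minimum avtar file cache size for a given number of files
FILES=25000000
NEEDED_MB=$(( FILES * 44 / 1024 / 1024 ))   # ~1048 MB for 25 million files
CACHE_MB=22
while [ $CACHE_MB -lt $NEEDED_MB ]; do CACHE_MB=$(( CACHE_MB * 2 )); done
echo "Need ${NEEDED_MB}MB of cache data -> minimum cache size of ${CACHE_MB}MB"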
The naive approach would be to set the filecachemax in the dataset to 1500. However, unless you have an awful lot of memory, you probably don't want to do that since the file cache must stay loaded in memory for the entire run of the backup.
Fortunately there is a feature called "cache prefixing" that can be used to set up a unique pair of cache files for a specific dataset. Since there are so many files, you will likely want to work with support to set up cache prefixing for this client and break the dataset up into more manageable pieces.
One quick word of warning -- as the saying goes, if you have a hammer, everything starts to look like a nail. Cache prefixing is the right tool for this job because of the large dataset but it shouldn't be the first thing you reach for whenever there is client performance tuning to be done.
On to the initial backup.
If you plan to have this client run overtime during its initial backup, you will have to make sure that there is enough free capacity on the server to allow garbage collection to be skipped for a few days while the initial backup completes.
If there is not enough free space on the server, the client will have to be allowed to time out each day and create partials. Make sure the backup schedule associated with the client is configured to end no later than the start of the blackout window. If a running backup is killed by garbage collection, no partial will be created.
You will probably want to start with a small dataset (one that will complete within a few days) and gradually increase the size of the dataset (or add more datasets if using cache prefixing) to get more new data written to the server each day. The reason for this is that partial backups are only retained on the server for 7 days. Unless a backup completes successfully within 7 days of the first partial, any progress made by the backup will be lost when the first partial expires.
After the initial backup completes, typical filesystem backup performance for an Avamar client is about 1 million files per hour. You will likely have to do some tuning to get this client to complete on a regular basis, even doing incrementals. The speed of an incremental Avamar backup is generally limited by the disk performance of the client itself, but it's important to run some performance testing to isolate the bottleneck before taking corrective action. If we're being limited by network performance, obviously we don't want to try to tweak disk performance first.
The support team L2s from the client teams have a good deal of experience with performance tuning and can work with you to run some testing. The tests that are normally run are:
- An iperf test to measure raw network throughput between client and server
- A "randchunk" test, which generates a set of random chunks and sends them to the grid in order to test network backup performance
- A "degenerate" test which, as I mentioned previously, processes the filesystem and discards the results in order to measure disk I/O performance
- OS performance monitoring to ensure we are not being bottlenecked by system resource availability (CPU cycles, memory, etc.)
Friday, 26 July 2013
1,23996,CLI failed to connect to MCS
I tried to install a remote MCCLI instance, as I do not like to work directly from the utility node, but I kept getting this error and could not find the cause.
Support told me running MCCLI from a remote host was not supported (of course it is).
When I finally looked at the config file "/root/.avamardata/6.1.1-87/var/mc/cli_data/prefs/mcclimcs.xml", I noticed I had the "mcsaddr" field wrong.
Make sure you don't have any configuration problems...
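Two quick sanity checks from the remote host saved me a lot of time; the first confirms which address MCCLI thinks it should talk to, the second just exercises the connection (the path is from my 6.1.1-87 install, yours may differ):
# What MCS address is the remote MCCLI instance configured with?
grep -i mcsaddr /root/.avamardata/6.1.1-87/var/mc/cli_data/prefs/mcclimcs.xml
# Any simple mccli call will confirm the CLI can reach the MCS
mccli event show --unack=true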
Symcli on Linux
Having problems making symcli work in a Server/Client (remote server) model? Try the following:
- Remove the existing packages for a clean reinstall: rpm -e symcli-data-V7.5.0-0.i386 symcli-master-V7.5.0-0.i386 symcli-symcli-V7.5.0-0.i386 symcli-thincore-V7.5.0-0.i386 symcli-64bit-V7.5.0-0.x86_64 symcli-cert-V7.5.0-0.i386 symcli-base-V7.5.0-0.i386 symcli-symrecover-V7.5.0-0.i386
- Reinstall: #> tar xvzf se7500-Linux-i386-ni.tar.gz
- Then use the supplied install script: #> ./se7500_install.sh
- Make sure /usr/emc/API/symapi/config/netcnfg is properly configured
- Make sure the connection name defined in the above file is exported as an environment variable in your shell: #> export SYMCLI_CONNECT=SYMAPI_SERVER (in this case "SYMAPI_SERVER" is the name in the first field of the netcnfg file; see the example entry after this list).
- Restart the daemon: #> stordaemon shutdown storsrvd && stordaemon start storsrvd
- Make sure it's running: #> /opt/emc/SYMCLI/bin/symcfg list -service
- Check that the logs are clean: #> /opt/emc/SYMCLI/bin/stordaemon showlog storsrvd
- Check the currently running daemons: #> /opt/emc/SYMCLI/bin/stordaemon list (storapid, storwatchd and stordrvd should be running)
- Export your path: #>export PATH=$PATH:/opt/emc/SYMCLI
- Test your config: #> symcfg list
- As a last resort, read the documentation found on Powerlink (when all else fails, read the instructions): the "Solutions Enabler 7.x Installation Guide".
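As promised above, here is roughly what a working remote configuration looks like; the hostname, IP and port are placeholders (2707 is the usual storsrvd default), so check them against your own environment:
# /usr/emc/API/symapi/config/netcnfg -- one line per remote SYMAPI server:
#   SYMAPI_SERVER - TCPIP  symapi-host  192.168.10.50  2707  ANY
#
# Then point the client at that connection and test:
export SYMCLI_CONNECT=SYMAPI_SERVER
export SYMCLI_CONNECT_TYPE=REMOTE
symcfg list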
Tuesday, 23 July 2013
VMAX Device types
TDAT:
TDAT, or thin data device, is an internal LUN that is assigned to a thin pool. Each TDAT LUN is comprised of multiple physical drives configured to provide a specific data protection type. An example of a TDAT device might be a RAID-5 (3+P) LUN.
Another common example would be a RAID-1 mirrored LUN. Multiple TDAT devices are assigned to a pool. When thin pool LUNs are created for presentation to a host, the data is striped across all of the TDAT devices in the pool. The pool can be enlarged by adding devices and rebalancing data placement (a background operation with no impact to the host).
Thin Devices (TDEVs):
They consume no disk space; they are only pointers residing in memory. Allocation is done in 768 KB increments (12 tracks) when they are "bound". A TDEV is a host-accessible device (redundantly presented to an FA port) that is "bound" to a thin pool (of TDATs) for its capacity needs.
As stated above, a TDEV is a host-presentable device that is striped across the back-end TDAT devices. The stripe size is 768 KB.
Each TDEV is presented to an FA port for host server allocation. When utilizing thin provisioning, thin pool LUNs are employed. The use of TDEVs is required for the EMC Fully Automated Storage Tiering for Virtual Provisioning (FAST VP) features.
Meta Devices (a.k.a. Meta Volumes):
They allow you to increase the size of a device presented to a host; the maximum size of a single Symmetrix device is 240 GB.
Data devices:
Non-addressable Symmetrix private device (cannot be mapped to a front-end port).
Provide space to thin devices.
You "add" data devices to thin pool, but you "bind" thin devices to a thin pool.
Avamar support
Here is a list of commands Avamar support use to diagnose a problem on the utility node:
ssh to a storage node:
ssn 0.8
Using /usr/local/avamar/var/probe.xml
ssh -x admin@192.168.255.10 ''
su - admin
ssh-agent bash
ssh-add .ssh/dpnid
Send a command to each of the storage nodes (mapall):
mapall --noerror 'grep -i "error" /var/log/messages*'
cd proactive_check/
head hc_results.txt
chmod +x proactive_check.pl
./proactive_check.pl
mccli event show --unack=true| grep Module
status.dpn
BIOS version:
mapall --noerror --all 'omreport system summary |grep -A4 BIOS'
Wednesday, 15 May 2013
Avamar Gen4S node hardware
EMC recently introduced the Gen4S line to its hardware-based Avamar grid offering.
Here is the information I was able to gather so far regarding the Gen4S hardware:
- The 1.3 TB and 2.6 TB nodes have been discontinued; only 2.0 TB, 3.9 TB and 7.8 TB nodes are now made. The nodes are now manufactured by Intel, not Dell.
- All Gen4S nodes require Avamar software version 6.1 SP1 or later.
- Gen4S nodes are compatible with existing Gen4 systems (provided the nodes are running a compatible Avamar software version).
Hardware Specifications
M600 (2.0 TB licensed capacity)
Six 3.5” hard drives
Dual 750W power supplies
Eight 10/100/1000baseT GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
M1200 (3.9 TB licensed capacity)
Six 3.5” hard drives
Dual 750W power supplies
Eight 10/100/1000baseT GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
M2400 (7.8 TB licensed capacity)
Twelve 3.5” hard drives
One 2.5” SSD drive (internal mounting)
Dual 750W power supplies
Eight 10/100/1000baseT GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
Avamar Business Edition/S2400 node (7.8 TB licensed capacity)
Available as single node server only
Eight 3.5” hard drives
One 2.5” SSD drive (internal mounting)
Dual 750W power supplies
Eight 10/100/1000baseT GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
No replication required
Extended Retention ADS Gen4S Media Access node
Twelve 3.5” hard drives
One 2.5” SSD drive (internal mounting)
Dual 750W power supplies
Eight 10/100/1000baseT GbE ports
Two 8 Gbps Fibre Channel ports
Two 10 GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
Note: For installation instructions and hardware specifications for the Gen4S Media Access node, see the Avamar 6.1 Extended Retention Media Access Node Customer Hardware Installation Guide (P/N 300-013-367).
ADS Accelerator node
Two 2.5” hard drives
Dual 750W power supplies
Four 10/100/1000baseT GbE ports
RMM4 management port
Avamar service port (hardware: NIC8, software: eth7)
SuSE Linux Enterprise Server v11 sp1 operating system
Source: "EMC Avamar Datastore Gen4S Single Node Customer Installation Guide"
P/N 300-999-651
Sunday, 28 April 2013
VMAX Links
Eric Stephani's VMAX Training summary:
http://ericstephani.com/?p=230
IBM SVC manual explanation regarding VMAX devices type:
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_symmetrixcontlucreation_1ev5ds.html
VMAX architecture explained:
http://www.emcsaninfo.com/2012/11/emc-vmax-architecture-detailed-explanation.html
Thursday, 31 January 2013
Avamar Capacity limits and thresholds
80% — capacity warning issued
- When server utilization reaches 80%, a pop-up notification informs you that the server has consumed 80% of its available storage capacity. Avamar Enterprise Manager capacity state icons are yellow.
95% — the “health check limit”
- This is the amount of storage capacity that can be consumed and still have a “healthy” server. Backups that are in progress are allowed to complete, but all new backup activity is suspended. A notification is sent in the form of a pop-up alert when you log in to Avamar Administrator. That system event must be acknowledged before future backup activity can resume.
100% — the “server read-only limit”
- When server utilization reaches 100% of total storage capacity, it automatically becomes read-only.
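To see where you stand against these thresholds, the utilization reported by status.dpn on the utility node is a quick check (the exact wording of the output varies by version, so the grep below is only approximate):
su - admin
status.dpn | grep -i util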
Thursday, 24 January 2013
Avamar Lexicon
dataset: Avamar datasets are a list of directories and files to back up from a client. Assigning a dataset to a client or group enables you to save backup selections.
group policy: Group policy controls backup behavior for all members of the group unless you override these settings at the client level. It is comprised of: Datasets, Schedules and Retention Policies.
Tuesday, 22 January 2013
Avamar client files f_cache.dat and p_cache.dat
At the beginning of a backup, the Avamar client process, avtar, loads two cache files from the var directory into RAM:
- f_cache.dat (a.k.a "File Cache")
- p_cache.dat (a.k.a "Hash Cache")
The f_cache.dat cache file stores a 20-byte SHA-1 hash of the file attributes, and is used to quickly identify which files have previously been backed up to the Avamar server.
The p_cache.dat hash cache stores the hashes of the chunks and composites that have been sent to the Avamar server. The hash cache is used to quickly identify which chunks or composites have previously been backed up to the Avamar server. The hash cache is very important when backing up databases.
By default, the maximum File Cache size is 1/8 of the client's RAM and the maximum Hash Cache size is 1/16 of the RAM.
Because the avtar program is a 32-bit application, the maximum file cache size that avtar can use is limited to less than 2 GB. In an example where a client has 4 GB of RAM, the maximum size of the file cache is 352 MB.
Each entry in a file cache comprises a 4-byte header plus two 20-byte SHA-1 hashes (44 bytes total):
- SHA-1 hash entry of the file attributes. The file attributes include: file name, file path, modification time, file size, owner, group, and permissions.
- SHA-1 hash entry for the hash of the actual file content, independent of the file attributes.
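If you ever need to cap these caches explicitly (for example together with cache prefixing), the limits can be set through avtar options in the dataset or in the client's avtar.cmd file; the values below are purely illustrative, in MB:
# Append illustrative cache limits to a Linux client's avtar.cmd
echo "--filecachemax=352" >> /usr/local/avamar/var/avtar.cmd
echo "--hashcachemax=96"  >> /usr/local/avamar/var/avtar.cmd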
Wednesday, 16 January 2013
Avamar Gen4 nodes cabling
Utility node Cabling:
Storage Node Cabling:
GB1==eth0
GB2==eth1
GB3==eth2
GB4==eth3
GB1==eth0------>Customer_Switch (BOND1)
GB2==eth1------>Switch_A (BOND0)
GB3==eth2------>Customer_Switch (BOND1)
GB4==eth3------>Switch_B (BOND0)
Basic networking configuration
ADS Gen4 systems follow these configuration principles:
• All nodes are plugged in to internal dedicated ADS switches through Gb2 (eth1) and Gb4 (eth3) as primary and secondary interfaces of bond1.
• The internal network is a redundant, high availability, fault-tolerant network connecting all nodes in the cluster for RAIN, rebuilding, and maintenance functions. It carries all Avamar internal operations and data management traffic.
• All nodes are also plugged in to an external customer switch through Gb1 (eth0). If high availability is desired, all nodes can also be connected to a different customer switch through Gb3 (eth2), in which case eth0 and eth2 would be primary and secondary interfaces of bond0.
• All networking ports on all nodes are bonded in pairs, by default.
• Gb1 (eth0) and Gb3 (eth2) port bonding on storage nodes can be broken to facilitate incoming replication, which is delivered directly to storage nodes, or for node management. If this bond is broken, high availability backup capability is not possible. Refer to ADS Gen4 replication (page 83)
• The utility node can also be plugged in to an external customer switch for optional outgoing replication and node management through its additional four network interfaces, Gb5 (eth4), Gb6 (eth5), Gb7 (eth6), and Gb8 (eth7). These NICs are also bonded in pairs for high availability configuration.
• The internal dedicated ADS switches are not connected to the customer network. They have redundant power to support fault tolerance.
• If the customer’s network environment is segregated using VLANs, corresponding VLANs must be configured on the storage nodes. Consult with the customer’s network administrator to obtain a list of VLAN IDs to configure when running the dpnnetutil utility.
• For ADS networks that do not require advanced configuration like VLAN support, the network configuration workflow included in the standard software installation process is used. There is no need to define IP addresses in advance except for the utility node. See Installing Avamar server software (page 98). In advanced configuration scenarios (running dpnnetutil), all nodes must have an initial IP address that can be accessed from the utility node by SSH command. That means a combination of the following conditions: being properly kickstarted and having SSH connectivity between each other.
• All node ports autonegotiate to 1 Gb Ethernet (full duplex) on the external customer network.
• These networking principles end previous ADS system requirements for port trunking, spanning tree or bonding between the ADS and customer switches.
• All connections to the customer network are standard leaf connections directly with individual nodes.
• The external customer network switch can be shared with other applications. See ADS Gen4 hardware (page 180) for descriptions and images related to node and switch networking components.
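To confirm how the bonds are actually laid out on a given node, the standard Linux bonding status files are a quick check (interface names follow the mapping above):
# On any node: which NICs are enslaved to each bond, and their link status
cat /proc/net/bonding/bond0
cat /proc/net/bonding/bond1
# Or from the utility node, across all storage nodes:
mapall --noerror --user=root 'grep -E "Interface|Status" /proc/net/bonding/bond0'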