
Interview Preparation : Solaris Administration Roles


Solaris patching with an alternate boot environment (Live Upgrade)

# lucreate -n SOL_2012Q1
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ------
s10x_u8wos_08a             yes      yes    yes       no     -
SOL_2012Q1                 yes      no     no        yes    -
# lumount SOL_2012Q1 <-- mount the ABE for inspection
/.alt.SOL_2012Q1
# cat /.alt.SOL_2012Q1/etc/zones/index <-- verify the global zone is in the installed state
# luumount SOL_2012Q1 <-- unmount the ABE
# ./installpatchset --s10patchset -B SOL_2012Q1
# luactivate SOL_2012Q1
# init 6
To roll back:
# luactivate s10x_u8wos_08a
# init 6

Solaris Zones and ZFS

Checking disk errors before replacing a faulted ZFS disk:

Before working with ZFS, it is worth checking whether the disks themselves are reporting errors. smartctl reports this in the "Reallocated_Sector_Ct" line, which corresponds to the number of reallocated (failed) disk sectors. With several disks, parallel queries them all at once; its -k option keeps the output in order, and nl adds line numbers so each output line maps to a disk. In the output below, disk 2 has 2 reallocated sectors.
% parallel -k smartctl -a ::: /dev/rdsk/c0t*d0s0 | grep _Ct | nl
1 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always – 0
2 5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always – 2
3 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0
4 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0
5 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0
6 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0
7 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0
8 5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always – 0

Installing boot blocks on a ZFS rpool:

# zpool status rpool <-- let the disk finish resilvering before installing the boot blocks
On SPARC systems:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0
On x86 systems:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

Testing booting from the second disk of rpool

# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 25.47% done, 0h4m to go
config:
        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0
errors: No known data errors
Once the resilver completes, install the boot blocks on the second disk.
For SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0
For x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
Then boot from the OK prompt:
ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0

Procedures to add VxFS, ZFS, SVM, UFS, lofs, raw volumes, and disk devices to non-global zones.

Raw disk device:

  • zonecfg ->

  • add device ->

  • set match=<rdskpath> ->

  • end, verify, commit, exit
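
Assuming a zone named zone01 and a disk path /dev/rdsk/c1t2d0s0 (both hypothetical), the steps above look like this as an interactive zonecfg session:
# zonecfg -z zone01
zonecfg:zone01> add device
zonecfg:zone01:device> set match=/dev/rdsk/c1t2d0s0
zonecfg:zone01:device> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
The same add/set/end/verify/commit pattern applies to the resource types below.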

VxFS filesystems

  • zonecfg ->

  • add fs ->

  • set type=vxfs ->

  • set special=<vx-volume-path> ->

  • set raw=<vx-raw-volume-path> ->

  • set dir=<mountpoint> ->

  • end, verify, commit, exit

VxVM raw volume

  • zonecfg ->

  • add device ->

  • set match=<vx-raw-volume-path> ->

  • end, verify, commit, exit

UFS filesystems

  • zonecfg ->

  • add fs ->

  • set type=ufs ->

  • set special=<ufs-device-path> ->

  • set raw=<ufs-raw-device-path> ->

  • set dir=<mountpoint> ->

  • end, verify, commit, exit

ZFS filesystem (via lofs)

  • zonecfg ->

  • add fs ->

  • set special=<zpool fs path> ->

  • set dir=<mountpoint> ->

  • set type=lofs ->

  • end,verify, commit, exit

Delegating a ZFS dataset

  • zonecfg ->

  • add dataset ->

  • set name=<datasetname> ->

  • end,verify, commit, exit
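
For example, assuming a dataset tank/apps and a zone named zone01 (hypothetical names), delegation hands the dataset over to the zone administrator, who can then create child datasets from inside the zone (the zone must be rebooted for the dataset to become visible):
# zfs create tank/apps
# zonecfg -z zone01
zonecfg:zone01> add dataset
zonecfg:zone01:dataset> set name=tank/apps
zonecfg:zone01:dataset> end
zonecfg:zone01> verify
zonecfg:zone01> commit
zonecfg:zone01> exit
# zlogin zone01 zfs create tank/apps/db <-- zone admin now manages the dataset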

ZFS volume

  • zonecfg ->

  • add device ->

  • set match=<Zpool volume path> ->

  • end, verify, commit, exit

CD-ROM (via lofs)

  • zonecfg ->

  • add fs ->

  • set dir=/cdrom ->

  • set special=/cdrom ->

  • set type=lofs ->

  • end,verify, commit, exit

Creating CPU resource pool and assigning it to Zone:

# svcadm enable system/pools/dynamic <-- enable the dynamic pools service
# pooladm -e <-- start poold
# pooladm <-- verify the default pool info
# pooladm -s <-- save the current pool configuration to /etc/pooladm.conf
# cat /var/tmp/newpool.conf <-- pool configuration file describing the desired setup (below)
——————————-
create pset pset01 (uint pset.min = 1; uint pset.max = 1)
create pool pool01
associate pool pool01 (pset pset01)
create pset pset02 (uint pset.min = 1; uint pset.max = 2)
create pool pool02
associate pool pool02 (pset pset02)
——————————–
Expected Configuration:
Pool name – pool01
processor set name – pset01
Min. Processors – 1
Max Processors – 1
Pool name – pool02
processor set name – pset02
Min. Processors – 1
Max Processors – 2
—————–
# pooladm -x <-- flush the in-memory configuration
# poolcfg -f /var/tmp/newpool.conf <-- apply the commands in newpool.conf to the configuration
# pooladm -c <-- activate the new configuration
# pooladm <-- verify the new processor sets pset01 and pset02
# zonecfg -z zone01 -> set pool=pool01 -> verify, commit, exit <-- bind the pool to the zone, effective after zone reboot
# poolbind -p pool01 -i zoneid zone01 <-- activate the pool binding without a zone reboot
# zlogin zone01 psrinfo -p <-- verify the number of processors allocated to the zone

Dry run before actually creating a ZFS pool using “-n” option

# zpool create -n geekpool mirror c1t0d0 c1t1d0
would create 'geekpool' with the following layout:
geekpool
  mirror
    c1t0d0
    c1t1d0

Resizing a ZFS volume and setting a reservation

# zfs set volsize=2g fort/geekvol <-- resize the volume
# zfs set reservation=10g tank/geek <-- guarantee 10 GB of pool space to the dataset

ZFS volume as Swap

# zfs create -V 1g rpool/swapvol
# swap -a /dev/zvol/dsk/rpool/swapvol
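
To confirm the zvol is active as swap after the commands above:
# swap -l <-- list active swap devices; /dev/zvol/dsk/rpool/swapvol should appear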

Difference between ZFS quota and reservation

A quota limits the amount of space a dataset and all its children can consume.
A reservation sets the minimum amount of space guaranteed to a dataset and all its children.
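
As a sketch, assuming a pool named tank (hypothetical):
# zfs set quota=10g tank/home <-- tank/home and all its children together cannot exceed 10 GB
# zfs set reservation=5g tank/home/geek <-- 5 GB of pool space is guaranteed to this dataset
# zfs get quota,reservation tank/home tank/home/geek <-- verify both properties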

What happens if the mountpoint property of a ZFS dataset is set to legacy?

The dataset is no longer mounted automatically at boot or by the zfs mount command; it must be mounted with the legacy mount command (or an /etc/vfstab entry) instead.
# zfs set mountpoint=legacy tank/home/geek
# mount -F zfs tank/home/geek /geek
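
To make the legacy mount persistent across reboots, an /etc/vfstab entry can be added, e.g. (fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options):
tank/home/geek  -  /geek  zfs  -  yes  -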
 

Sharing a filesystem the ZFS way

# zfs set sharenfs=on tank/home/geek <-- filesystem is shared read/write to all hosts

Using scrub to check the integrity of a zpool

# zpool scrub mypool <-- start a scrub
# zpool scrub -s mypool <-- stop a scrub, if required in exceptional cases
 
 

August 13, 2015

