Wednesday, May 4, 2011

How to Create Zones and Allocate Resources in Solaris 10





root@sqazone01 # zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=true

Use the following command if you want the zone to have its own /usr, /lib, /platform or /sbin (a whole-root rather than a sparse-root zone):
zonecfg:zone1> remove inherit-pkg-dir dir=/usr

zonecfg:zone1> add net
zonecfg:zone1:net> set address=
zonecfg:zone1:net> set physical=ipge0:01
zonecfg:zone1:net> end
zonecfg:zone1> verify
zonecfg:zone1> exit


root@sqazone01 # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
- zone1 configured /zones/zone1
- my-zone incomplete /export/home/my-zone

root@sqazone01 # zoneadm -z zone1 install        (this will take 10-15 minutes)

root@sqazone01 # zoneadm -z zone1 boot


The following command attaches to the zone's console. On the zone's first boot it will ask a few system identification questions (locale, terminal type, name service, time zone, root password and so on) and complete the OS configuration:
root@sqazone01 # zlogin -C zone1
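If you would rather not answer the questions interactively, one common approach (shown here only as a sketch; the values are placeholders) is to drop a sysidcfg file into the zone before its first boot:

root@sqazone01 # cat > /zones/zone1/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
network_interface=NONE { hostname=zone1 }
security_policy=NONE
name_service=NONE
timezone=US/Pacific
root_password=<encrypted password hash, as in /etc/shadow>
EOF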


root@sqazone01 # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
3 zone1 running /zones/zone1
- my-zone incomplete /export/home/my-zone



How to create resource pools for the zones configured above:


root@sqazone01 # pgrep -l poold
root@sqazone01 # pooladm
pooladm: couldn't open pools state file: Facility is not active

root@sqazone01 # pooladm -e

root@sqazone01 # pgrep -l poold
1429 poold


root@sqazone01 #
root@sqazone01 # pooladm -x                  (remove the current dynamic configuration)
root@sqazone01 # pooladm -s                  (save the dynamic configuration to /etc/pooladm.conf)
root@sqazone01 # poolcfg -f pool-script      (apply the commands in the file pool-script)
where pool-script contains:

create pset zone1-pset ( uint pset.min = 2; uint pset.max = 2 )
create pool zone1-pool
associate pool zone1-pool ( pset zone1-pset )

root@sqazone01 # pooladm -c                  (activate the configuration in /etc/pooladm.conf)
root@sqazone01 # psrset
user processor set 1: processors 0 1
root@sqazone01 # pooladm

system sqazone01
string system.comment
int system.version 1
boolean system.bind-default true
int system.poold.pid 6862

pool zone1-pool
int pool.sys_id 1
boolean pool.active true
boolean pool.default false
int pool.importance 1
string pool.comment
pset zone1-pset

pool pool_default
int pool.sys_id 0
boolean pool.active true
boolean pool.default true
int pool.importance 1
string pool.comment
pset pset_default

pset zone1-pset
int pset.sys_id 1
boolean pset.default false
uint pset.min 2
uint pset.max 2
string pset.units population
uint pset.load 0
uint pset.size 2
string pset.comment

cpu
int cpu.sys_id 1
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line

pset pset_default
int pset.sys_id -1
boolean pset.default true
uint pset.min 1
uint pset.max 65536
string pset.units population
uint pset.load 30
uint pset.size 30
string pset.comment

cpu
int cpu.sys_id 21
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 20
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 23
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 22
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 17
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 16
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 19
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 18
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 29
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 28
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 31
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 30
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 25
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 24
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 27
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 26
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 5
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 4
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 7
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 6
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 3
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 2
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 13
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 12
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 15
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 14
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 9
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 8
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 11
string cpu.comment
string cpu.status on-line

cpu
int cpu.sys_id 10
string cpu.comment
string cpu.status on-line
root@sqazone01 # zonecfg -z zone1
zonecfg:zone1> set pool=zone1-pool
zonecfg:zone1> verify
zonecfg:zone1> exit

root@sqazone01 # zlogin zone1 init 6         (reboot the zone so the new pool binding takes effect)
root@sqazone01 #
root@sqazone01 #
root@sqazone01 # zoneadm list -cv
ID NAME STATUS PATH
0 global running /
3 zone1 running /zones/zone1
- my-zone incomplete /export/home/my-zone
root@sqazone01 # zlogin -C zone1
[Connected to zone 'zone1' console]

sqazone02 console login: root
Password:
Sep 28 06:02:57 sqazone02 login: ROOT LOGIN /dev/console

Last login: Thu Sep 28 03:42:07 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005

# psrinfo
0 on-line since 09/27/2006 21:07:16
1 on-line since 09/27/2006 21:07:17
#


How to allocate memory to zone1:

root@sqazone01 # zlogin -C zone1
[Connected to zone 'zone1' console]

sqazone02 console login: root
Password:
Sep 28 06:33:36 sqazone02 login: ROOT LOGIN /dev/console
Last login: Thu Sep 28 06:02:57 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# cp /etc/nsswitch.files /etc/nsswitch.conf
# ps -ef | grep rcapd
root 7565 7521 0 06:34:47 console 0:00 grep rcapd
# rcapadm -E
# ps -ef | grep rcapd
daemon 7573 7086 0 06:34:52 ? 0:00 /usr/lib/rcap/rcapd
root 7574 7521 0 06:34:55 console 0:00 grep rcapd
# projmod -K 'rcap.max-rss=4000000000' system     (cap the system project's resident set size at 4,000,000,000 bytes, which rcapstat reports below as a 3815M cap)
# cat /etc/project
system:0::::rcap.max-rss=4000000000
user.root:1::::
noproject:2::::
default:3::::
group.staff:10::::
# exit

# rcapadm
state: enabled

memory cap enforcement threshold: 0%
process scan rate (sec): 15
reconfiguration rate (sec): 60
report rate (sec): 5
RSS sampling rate (sec): 5

# rcapadm -c 50
# rcapadm
state: enabled
memory cap enforcement threshold: 50%
process scan rate (sec): 15
reconfiguration rate (sec): 60
report rate (sec): 5
RSS sampling rate (sec): 5
#
# rcapstat -g
id project nproc vm rss cap at avgat pg avgpg
0 system 30 112M 82M 3815M 0K 0K 0K 0K
physical memory utilization: 6% cap enforcement threshold: 50%
0 system 30 112M 82M 3815M 0K 0K 0K 0K
physical memory utilization: 6% cap enforcement threshold: 50%
0 system 30 112M 82M 3815M 0K 0K 0K 0K
physical memory utilization: 6% cap enforcement threshold: 50%

How do I specify a temporary resource cap for a zone in Solaris 10?


# rcapadm -z zone1 -m 512M

This specifies the maximum amount of memory that zone1 can consume. This change will be lost at the next reboot. To make it permanent, use zonecfg.
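For example, on Solaris 10 8/07 or later you can use the capped-memory resource described later in this post:

# zonecfg -z zone1
zonecfg:zone1> add capped-memory
zonecfg:zone1:capped-memory> set physical=512m
zonecfg:zone1:capped-memory> end
zonecfg:zone1> exit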

Zones Parallel Patching versus Update On Attach: When to use which one ?


The Zones Parallel Patching enhancement for the Solaris 10 patch utilities was released this week, giving customers a choice of how to improve zones patching performance.
In the Zones "Update On Attach" section of a previous blog posting, I mentioned that the Zones "Update On Attach" feature could also be used to improve zones patching performance.
Zones Parallel Patching is a true patching solution utilizing the 'patchadd' utility, whereas Zones "Update On Attach" uses zones functionality similar to that used during zone creation to provide a pseudo-patching solution that does not utilize 'patchadd'.
So which one should you choose?
Let's look at the two options in more detail:

Zones Parallel Patching

Zones Parallel Patching is an enhancement to the standard Solaris 10 patch utilities and is delivered in the patch utilities patch, 119254-66 (SPARC) and 119255-66 (x86).
Simply install this patch, set the maximum number of non-global zones to be patched in parallel in the config file /etc/patch/pdo.conf, and away you go.
It works for all Solaris 10 systems. 
It also works well in conjunction with higher level patch automation tools such as xVM Ops Center. 
It can dramatically improve zones patching performance by patching non-global zones in parallel.  The global zone is still patched first.
While the performance gain is dependent on a number of factors, including the number of non-global zones, the number of on-line CPUs, the speed of the system, the I/O configuration of the system, etc., a performance gain of ca. 300% can typically be expected for patching the non-global zones - for example, on a T2000 with 5 sparse-root non-global zones.
See my previous Zones Parallel Patching blog entry for further information.
Since it's a pure enhancement to 'patchadd', normal 'patchadd' functionality is preserved.  You can subsequently remove patches using 'patchrm', etc.  Nothing has changed except that it's now much faster to patch non-global zones with Zones Parallel Patching enabled.
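As a quick sketch of the setup (num_proc is the pdo.conf tunable as I understand it, and 123456-01 is only a placeholder patch ID - check the patch README for your release):

global# patchadd /var/tmp/119254-66     (install the updated patch utilities first; SPARC version)
global# vi /etc/patch/pdo.conf          (set num_proc=4 to patch up to 4 non-global zones in parallel)
global# patchadd /var/tmp/123456-01     (subsequent patches are now applied to the non-global zones in parallel)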

Zones "Update On Attach"

The primary purpose of Zones "Update on Attach" is Zones migration from one server to another.  
For example, a database instance in a non-global zone hosted on a server has grown to the extent that the Sys Admin wants to transfer it to a better spec'd server which can better handle the workload.   The Sys Admin can detach it from the old server (e.g. a Sun4u) and reattach it to the new server (e.g. a Sun4v) using Zones "Update On Attach".   This will bring the OS Software level on the non-global zone up to the same level as the new server's global zone.
Zones "Update On Attach" can certainly be used for patching but there are limitations you need to be aware of as outlined below.
For example, detach the non-global zones from a system, apply a bunch of patches to the global zone, reattach the non-global zones using "Update On Attach" and, voilà, the non-global zones will be brought up to the same software level as the global zone (for OS type packages), effectively patching the non-global zones without using 'patchadd' at all.   This is typically even faster than using Zones Parallel Patching.  But there are limitations to this approach which users must be aware of (see below).
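As a rough sketch of the migration flow (zone, host and path names are placeholders):

oldhost# zoneadm -z dbzone detach
        (copy /zones/dbzone from oldhost to newhost, e.g. with tar, cpio or zfs send/recv)
newhost# zonecfg -z dbzone
zonecfg:dbzone> create -a /zones/dbzone
zonecfg:dbzone> exit
newhost# zoneadm -z dbzone attach -u     (-u updates the zone to the new host's package and patch level)
newhost# zoneadm -z dbzone boot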
My senior engineer, Enda O'Connor, has just published an interesting article on The Zones Update on Attach Feature and Patching in the Solaris 10 OS.

Zones "Update On Attach" limitations as a patching aid

Zones "Update On Attach" only works for packages which are SUNW_PKG_ALLZONES=true - i.e. typically OS level packages, and not application packages.
So when to use Zones Parallel Patching in 'patchadd' and when to use Zones "Update On Attach" ?
Here's what my senior engineer, Enda O'Connor, says:
"The Zones Update on Attach Feature and Patching in the Solaris 10 OS document may help customers understand how the technology works, applying a cluster via patching and via zones Update On Attach is not quite the same really.
It really depends on the patches being applied, i.e. applying a firefox patch via Update On Attach would not work if you wanted it to apply to the global zone and all non-global zones as well.
One has to understand how Update On Attach works and then apply that to the list of patches to see if it gets them to a desirable state.
There is no black or white answer here.
I'd recommend Zones Parallel Patching using 'patchadd' as it has a known outcome all the time, whereas Update On Attach makes its own internal determination based on a number of things that can vary from system to system (e.g. inherited directories).
But if time to patch is critical then if the customer does proper testing to validate things, and are happy with the results, then by all means use Update On Attach.
But using Update On Attach without:
1. Understanding how it determines what packages to update
2. Inspecting the patches being applied
...will most likely lead to grief at some point."
And my other senior engineer, Ed Clark, says:
"In terms of giving guidance on which technology to use, there are a number of considerations -- two of these considerations are:
1. Using Update On Attach to update sparse zones can require significantly more disk storage space than would be needed by applying patches with 'patchadd' (3-4 times as much space would not be uncommon, I think), due to Update On Attach copying fully populated global zone 'undo' files into the non-global zones, as opposed to having patchadd build sparsely populated 'undo' files in the non-global zones.
2. If a customer is really concerned about the ability to back out patches reliably, then 'patchadd' is a lower risk option than Update On Attach -- 'patchrm' of a patch from a non-global zone that has a copy of the global zone's 'undo' pkg data (as is the case after Update On Attach) may potentially have unexpected side effects." [although we have yet to see any actual cases of negative results from this.]

Conclusion

In general, we recommend using the Zones Parallel Patching enhancement in the patch utilities rather than the Zones "Update On Attach" feature, as Zones Parallel Patching is standard patching functionality, only faster, whereas Zones "Update On Attach" is really designed for migrating zones from one server to another and was not primarily designed to speed up patching.
Because Zones "Update On Attach" uses Zones functionality similar to the zone creation functionality, rather than 'patchadd' functionality, limitations exist on what will be patched (typically the OS but not applications) and there's the potential for anomalies around things like the "undo" files which would be used by 'patchrm' if patches applied using Zones "Update On Attach" were subsequently removed from the non-global zones using 'patchrm' (although we have yet to see any actual cases of serious issues resulting from this).
So in patching situations where time is absolutely critical, Zones "Update On Attach" may provide a good option, as long as it's well tested in the customer environment prior to deployment on production systems.
Remember too, Live Upgrade is also your friend in such situations, enabling you to patch an inactive boot environment while the system is still in production.   So a combination of Live Upgrade and Zones Parallel Patching would be ideal.

Describe and implement use of the Fair Share Scheduler class

You can use the fair share scheduler (FSS) to control the allocation of available CPU resources among zones, based on the importance of the workloads in the zone. This workload importance is expressed by the number of shares of CPU resources that you assign to each zone. Even if you are not using FSS to manage CPU resource allocation between zones, you can set the zone's scheduling-class to use FSS so that you can set shares on projects within the zone.
When you explicitly set the cpu-shares property, the fair share scheduler (FSS) will be used as the scheduling class for that zone. However, the preferred way to use FSS in this case is to set FSS to be the system default scheduling class with the dispadmin command. That way, all zones will benefit from getting a fair share of the system CPU resources. If cpu-shares is not set for a zone, the zone will use the system default scheduling class. The following actions set the scheduling class for a zone:
  • In the Solaris 10 8/07 release, you can use the scheduling-class property in zonecfg to set the scheduling class for the zone.
  • You can set the scheduling class for a zone through the resource pools facility. If the zone is associated with a pool that has its pool.scheduler property set to a valid scheduling class, then processes running in the zone run in that scheduling class by default. If the cpu-shares rctl is set and FSS has not been set as the scheduling class for the zone through another action, zoneadmd sets the scheduling class to FSS when the zone boots.
  • If the scheduling class is not set through any other action, the zone inherits the system default scheduling class.
Note that you can use the priocntl command to move running processes into a different scheduling class without changing the default scheduling class and rebooting.
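For example, to make FSS the default and move already-running processes into it (a minimal sketch; the zone-level cpu-shares commands appear later in this post):

# dispadmin -d FSS              (make FSS the system default scheduling class at the next boot)
# priocntl -s -c FSS -i all     (move the processes that are already running into FSS now)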

How to Associate a Pool with a Scheduling Class

You can associate a pool with a scheduling class so that all processes bound to the pool use this scheduler. To do this, set the pool.scheduler property to the name of the scheduler. This
example associates the pool pool_batch with the fair share scheduler (FSS).

1.      Become superuser, or assume a role that includes the Process Management profile.
2.      Modify pool pool_batch to be associated with the FSS.

# poolcfg -c 'modify pool pool_batch (string pool.scheduler="FSS")'
3.      Display the edited configuration.

# poolcfg -c info
system tester
        string  system.comment
        int     system.version 1
        boolean system.bind-default true
        int     system.poold.pid 177916
 
        pool pool_default
                int     pool.sys_id 0
                boolean pool.active true
                boolean pool.default true
                int     pool.importance 1
                string  pool.comment 
                pset    pset_default
 
        pset pset_default
                int     pset.sys_id -1
                boolean pset.default true
                uint    pset.min 1
                uint    pset.max 65536
                string  pset.units population
                uint    pset.load 10
                uint    pset.size 4
                string  pset.comment 
                boolean testnullchanged true
 
                cpu
                        int     cpu.sys_id 3
                        string  cpu.comment 
                        string  cpu.status on-line
 
                cpu
                        int     cpu.sys_id 2
                        string  cpu.comment 
                        string  cpu.status on-line
 
                cpu
                        int     cpu.sys_id 1
                        string  cpu.comment 
                        string  cpu.status on-line
 
                cpu
                        int     cpu.sys_id 0
                        string  cpu.comment 
                        string  cpu.status on-line
 
        pool pool_batch
                boolean pool.default false
                boolean pool.active true
                int pool.importance 1
                string pool.comment
                string pool.scheduler FSS
                pset batch
 
        pset pset_batch
                int pset.sys_id -2
                string pset.units population
                boolean pset.default true
                uint pset.max 10
                uint pset.min 2
                string pset.comment
                boolean pset.escapable false
                uint pset.load 0
                uint pset.size 0
 
                cpu
                        int     cpu.sys_id 5
                        string  cpu.comment
                        string  cpu.status on-line
 
                cpu
                        int     cpu.sys_id 4
                        string  cpu.comment
                        string  cpu.status on-line
4.      Commit the configuration at /etc/pooladm.conf:

# pooladm -c
5.      (Optional) To copy the dynamic configuration to a static configuration file called /tmp/backup, type the following:

# pooladm -s /tmp/backup
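To have a zone actually run its processes in this pool (and therefore under FSS), bind the zone to it just as zone1 was bound to zone1-pool earlier (myzone is only a placeholder name):

# zonecfg -z myzone
zonecfg:myzone> set pool=pool_batch
zonecfg:myzone> verify
zonecfg:myzone> exit
# zlogin myzone init 6     (reboot the zone so the pool binding takes effect)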

Describe new zone resource management features in the Solaris 10 OS 8/07 release

New Zones Features
On September 4, 2007, Solaris 10 8/07 became available.
This update to Solaris 10 has many new features. Of those, many enhance Solaris Containers either directly or indirectly. This update brings the most important changes to Containers since they were introduced in March of 2005. A brief introduction to them seems appropriate, but first a review of the previous update.
Solaris 10 11/06 added four features to Containers. One of them is called "configurable privileges" and allows the platform administrator to tailor the abilities of a Container to the needs of its application. 
 At least as important as that feature was the new ability to move (also called 'migrate') a Container from one Solaris 10 computer to another. This uses the 'detach' and 'attach' sub-commands to zoneadm(1M).
Other minor new features included:
  • rename a zone (i.e. Container)
  • move a zone to a different place in the file system on the same computer

New Features in Solaris 10 8/07 that Enhance Containers

New Resource Management Features

Solaris 10 8/07 has improved the resource management features of Containers. Some of these are new resource management features and some are improvements to the user interface. First I will describe three new "RM" features.
Earlier releases of Solaris 10 included the Resource Capping Daemon. This tool enabled you to place a 'soft cap' on the amount of RAM (physical memory) that an application, user or group of users could use. Excess usage would be detected by rcapd. When it did, physical memory pages owned by that entity would be paged out until the memory usage decreased below the cap.
Although it was possible to apply this tool to a zone, it was cumbersome and required cooperation from the administrator of the Container. In other words, the root user of a capped Container could change the cap. This made it inappropriate for potentially hostile environments, including service providers.
Solaris 10 8/07 enables the platform administrator to set a physical memory cap on a Container using an enhanced version of rcapd. Cooperation of the Container's administrator is not necessary - only the platform administrator can enable or disable this service or modify the caps. Further, usage has been greatly simplified to the following syntax:
global# zonecfg -z myzone
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=500m
zonecfg:myzone:capped-memory> end
zonecfg:myzone> exit
The next time the Container boots, this cap (500MB of RAM) will be applied to it. The cap can also be modified while the Container is running, with:
global# rcapadm -z myzone -m 600m
Because this cap does not reserve RAM, you can over-subscribe RAM usage. The only drawback is the possibility of paging.
For more details, see the online documentation.
Virtual memory (i.e. swap space) can also be capped. This is a 'hard cap.' In a Container which has a swap cap, an attempt by a process to allocate more VM than is allowed will fail. (If you are familiar with system calls: malloc() will fail with ENOMEM.)
The syntax is very similar to the physical memory cap:
global# zonecfg -z myzone
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set swap=1g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> exit
This limit can also be changed for a running Container:
global# prctl -n zone.max-swap -v 2g -t privileged -r -e deny -i zone myzone
Just as with the physical memory cap, if you want to change the setting both for a running Container and for the next time it boots, you must use zonecfg (for the persistent setting) as well as prctl or rcapadm (for the running Container).
The third new memory cap is locked memory. This is the amount of physical memory that a Container can lock down, i.e. prevent from being paged out. By default a Container now has the proc_lock_memory privilege, so it is wise to set this cap for all Containers.
Here is an example:
global# zonecfg -z myzone
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set locked=100m
zonecfg:myzone:capped-memory> end
zonecfg:myzone> exit

Simplified Resource Management Features

Dedicated CPUs
Many existing resource management features have a new, simplified user interface. For example, "dedicated-cpus" re-use the existing Dynamic Resource Pools features. But instead of needing many commands to configure them, configuration can be as simple as:
global# zonecfg -z myzone
zonecfg:myzone> add dedicated-cpu
zonecfg:myzone:dedicated-cpu> set ncpus=1-3
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> exit
After using that command, when that Container boots, Solaris:
  1. removes a CPU from the default pool
  2. assigns that CPU to a newly created temporary pool
  3. associates that Container with that pool, i.e. only schedules that Container's processes on that CPU
Further, if the load on that CPU exceeds a default threshold and another CPU can be moved from another pool, Solaris will do that, up to the maximum configured amount of three CPUs. Finally, when the Container is stopped, the temporary pool is destroyed and its CPU(s) are placed back in the default pool.
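You can watch this from the global zone: the temporary pool created for the Container typically shows up with a SUNWtmp_ prefix (an assumption based on the default naming; myzone is a placeholder):

global# zoneadm -z myzone boot
global# pooladm     (the dynamic configuration now lists a pool and pset named SUNWtmp_myzone)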
Also, four existing project resource controls were applied to Containers:
global# zonecfg -z myzone
zonecfg:myzone> set max-shm-memory=100m
zonecfg:myzone> set max-shm-ids=100
zonecfg:myzone> set max-msg-ids=100
zonecfg:myzone> set max-sem-ids=100
zonecfg:myzone> exit
Fair Share Scheduler
A commonly used method to prevent "CPU hogs" from impacting other workloads is to assign a number of CPU shares to each workload, or to each zone. The relative number of shares assigned per zone guarantees a relative minimum amount of CPU power. This is less wasteful than dedicating a CPU to a Container that will not completely utilize the dedicated CPU(s).
Several steps were needed to configure this in the past. Solaris 10 8/07 simplifies this greatly: now just two steps are needed. First, the system must use FSS as the default scheduler; the following command tells the system to use FSS as the default scheduler the next time it boots:
global# dispadmin -d FSS
Second, the Container must be assigned some shares:
global# zonecfg -z myzone
zonecfg:myzone> set cpu-shares=100
zonecfg:myzone> exit
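Just as with the memory caps, the shares of a running Container can be adjusted on the fly (the value here is only an example):

global# prctl -n zone.cpu-shares -v 50 -r -i zone myzone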
Shared Memory Accounting
One feature simplification is not a reduced number of commands, but reduced complexity in resource monitoring. Prior to Solaris 10 8/07, the accounting of shared memory pages had an unfortunate subtlety. If two processes in a Container shared some memory, per-Container summaries counted the shared memory usage once for every process that was sharing the memory. It would appear that a Container was using more memory than it really was.
This was changed in 8/07. Now, in the per-Container usage section of prstat and similar tools, shared memory pages are only counted once per Container.
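You can see the corrected per-Container accounting with prstat's zone options:

global# prstat -Z          (adds a per-zone summary to the bottom of the display)
global# prstat -z myzone   (shows only the processes running in myzone; the zone name is a placeholder)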

Global Zone Resource Management

Solaris 10 8/07 adds the ability to persistently assign resource controls to the global zone and its processes. The following controls can be applied:
  • pool
  • cpu-shares
  • capped-memory: physical, swap, locked
  • dedicated-cpu: ncpus, importance
Example:
global# zonecfg -z global
zonecfg:global> set cpu-shares=100
zonecfg:global> set scheduling-class=FSS
zonecfg:global> exit
Use those features with caution. For example, assigning a physical memory cap of 100MB to the global zone will surely cause problems...

New Boot Arguments

The following boot arguments can now be used:
  • -s : boot to the single-user milestone
  • -m : boot to the specified milestone (e.g. -m verbose)
  • -i : boot the specified program as 'init' (this is only useful with branded zones)
Allowed syntaxes include:
global# zoneadm -z myzone boot -- -s
global# zoneadm -z yourzone reboot -- -i /sbin/myinit
ozone# reboot -- -m verbose
In addition, these boot arguments can be stored with zonecfg, for later boots.
global# zonecfg -z myzone
zonecfg:myzone> set bootargs="-m verbose"
zonecfg:myzone> exit

Configurable Privileges

Of the existing three DTrace privileges, dtrace_proc and dtrace_user can now be assigned to a Container. This allows the use of DTrace from within a Container. Of course, even the root user in a Container is still not allowed to view or modify kernel data, but DTrace can be used in a Container to look at system call information and profiling data for user processes.
Also, the privilege proc_priocntl can be added to a Container to enable the root user of that Container to change the scheduling class of its processes.
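All of these are granted through the zone's limitpriv property (myzone is just a placeholder name):

global# zonecfg -z myzone
zonecfg:myzone> set limitpriv=default,dtrace_proc,dtrace_user,proc_priocntl
zonecfg:myzone> exit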

IP Instances

This is a new feature that allows a Container to have exclusive access to one or more network interfaces. No other Container, even the global zone, can send or receive packets on that NIC.
This also allows a Container to control its own network configuration, including routing, IP Filter, the ability to be a DHCP client, and others. The syntax is simple:
global# zonecfg -z myzone
zonecfg:myzone> set ip-type=exclusive
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=bge1
zonecfg:myzone:net> end
zonecfg:myzone> exit

IP Filter Improvements

Some network architectures call for two systems to communicate via a firewall box or other piece of network equipment. It is often desirable to create two Containers that communicate via an external device, for similar reasons. Unfortunately, prior to Solaris 10 8/07 that was not possible. In 8/07 the global zone administrator can configure such a network architecture with the existing IP Filter commands.

Upgrading and Patching Containers with Live Upgrade

Solaris 10 8/07 adds the ability to use Live Upgrade tools on a system with Containers. This makes it possible to apply an update to a zoned system, e.g. updating from Solaris 10 11/06 to Solaris 10 8/07. It also drastically reduces the downtime necessary to apply some patches.
The latter ability requires more explanation. An existing challenge in the maintenance of zones is patching - each zone must be patched when a patch is applied. If the patch must be applied while the system is down, the downtime can be significant.
Fortunately, Live Upgrade can create an Alternate Boot Environment (ABE) and the ABE can be patched while the Original Boot Environment (OBE) is still running its Containers and their applications. After the patches have been applied, the system can be re-booted into the ABE. Downtime is limited to the time it takes to re-boot the system.
An additional benefit can be seen if there is a problem with the patch and that particular application environment. Instead of backing out the patch, the system can be re-booted into the OBE while the problem is investigated.
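A minimal sketch of that flow with the Live Upgrade commands (the boot environment name, patch directory and 123456-01 patch ID are placeholders; on a UFS root, lucreate would also need -m options to name the target slices):

global# lucreate -n patchedBE                                   (create the Alternate Boot Environment)
global# luupgrade -t -n patchedBE -s /var/tmp/patches 119254-66 123456-01
                                                                (apply the listed patches to the ABE)
global# luactivate patchedBE                                    (make the ABE the environment booted next)
global# init 6                                                  (re-boot into the patched environment)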

Branded Zones (Branded Containers)

Sometimes it would be useful to run an application in a Container, but the application is not yet available for Solaris, or is not available for the version of Solaris that is being run. To run an application like that, perhaps a special Solaris environment could be created that only runs applications for that version of Solaris, or for that operating system.
Solaris 10 8/07 contains a new framework called Branded Zones. This framework enables the creation and installation of Containers that are not the default 'native' type of Containers, but have been tailored to run 'non-native' applications.

Solaris Containers for Linux Applications

The first brand to be integrated into Solaris 10 is the brand called 'lx'. This brand is intended for x86 appplications which run well on CentOS 3 or Red Hat Linux 3. This brand is specific to x86 computers. The name of this feature isSolaris Containers for Linux Applications.