sandbox.hortonworks.com


Statistics generated on: Mon Jul 27 15:36:39 UTC 2015 from: /usr/local/amm by: uid=0(root) gid=0(root) groups=0(root) format: detailed

General Unix Schema: ux2html.sh v. 1.3.7 + Custom Configuration
This software is released under the GNU General Public License by Meo Bogliolo. See below for more information


System


sandbox.hortonworks.com

System evaluated as: Linux / GNU

Linux sandbox.hortonworks.com 2.6.32-504.30.3.el6.x86_64 #1 SMP Wed Jul 15 10:13:09 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
	Vendor: innotek GmbH
	Manufacturer: innotek GmbH
	Product Name: VirtualBox



System Description




Users


root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
gopher:x:13:30:gopher:/var/gopher:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
saslauth:x:499:76:Saslauthd user:/var/empty/saslauth:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
vagrant:x:500:500:vagrant:/home/vagrant:/bin/bash
dbus:x:81:81:System message bus:/:/sbin/nologin
HDP:x:501:0::/home/HDP:/bin/bash
puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin
vboxadd:x:498:1::/var/run/vboxadd:/bin/false
postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash
oozie:x:502:501::/home/oozie:/bin/bash
hive:x:503:501::/home/hive:/bin/bash
ambari-qa:x:1001:501::/home/ambari-qa:/bin/bash
flume:x:505:501::/home/flume:/bin/bash
hdfs:x:506:501::/home/hdfs:/bin/bash
knox:x:507:501::/home/knox:/bin/bash
storm:x:508:501::/home/storm:/bin/bash
spark:x:509:501::/home/spark:/bin/bash
mapred:x:510:501::/home/mapred:/bin/bash
hbase:x:1002:501::/home/hbase:/bin/bash
tez:x:512:501::/home/tez:/bin/bash
zookeeper:x:513:501::/home/zookeeper:/bin/bash
kafka:x:514:501::/home/kafka:/bin/bash
falcon:x:515:501::/home/falcon:/bin/bash
sqoop:x:516:501::/home/sqoop:/bin/bash
yarn:x:517:501::/home/yarn:/bin/bash
hcat:x:518:501::/home/hcat:/bin/bash
ams:x:519:501::/home/ams:/bin/bash
atlas:x:520:501::/home/atlas:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
hue:x:1003:490:Hue:/usr/lib/hue:/bin/bash
solr:x:1004:505::/home/solr:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
admin:x:1005:1005::/home/admin:/bin/bash
kms:x:1006:489:KMS:/var/lib/ranger:/bin/bash
ranger:x:1007:488:Ranger:/var/lib/ranger:/bin/bash
xapolicymgr:x:1008:1008::/home/xapolicymgr:/bin/bash
it1:x:1009:1009::/home/it1:/bin/bash
legal1:x:1010:1010::/home/legal1:/bin/bash
mktg1:x:1011:1011::/home/mktg1:/bin/bash
network1:x:1012:1012::/home/network1:/bin/bash
it2:x:1013:1009::/home/it2:/bin/bash
legal2:x:1014:1010::/home/legal2:/bin/bash
mktg2:x:1015:1011::/home/mktg2:/bin/bash
network2:x:1016:1012::/home/network2:/bin/bash
it3:x:1017:1009::/home/it3:/bin/bash
legal3:x:1018:1010::/home/legal3:/bin/bash
mktg3:x:1019:1011::/home/mktg3:/bin/bash
network3:x:1020:1012::/home/network3:/bin/bash
guest:x:1021:1013::/home/guest:/bin/bash
shellinabox:x:497:487:Shellinabox:/var/lib/shellinabox:/sbin/nologin
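
Each line above follows the standard seven-field /etc/passwd format (name:password:UID:GID:GECOS:home:shell). A minimal Python sketch, not part of the original report, that lists the accounts capable of an interactive login:

# Minimal sketch: list passwd entries whose login shell is interactive.
# Assumes the seven-field format shown above.

NON_INTERACTIVE = {"/sbin/nologin", "/usr/sbin/nologin", "/bin/false"}

with open("/etc/passwd") as f:
    for line in f:
        if not line.strip():
            continue
        name, _pw, uid, gid, _gecos, _home, shell = line.rstrip("\n").split(":")
        if shell not in NON_INTERACTIVE:
            print("%-12s uid=%-6s gid=%-6s shell=%s" % (name, uid, gid, shell))

On this sandbox that would flag root, vagrant, and the many Hadoop service accounts (hdfs, hive, yarn, ...) created with /bin/bash, which is typical for a demo VM.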



Groups


root:x:0:
bin:x:1:bin,daemon
daemon:x:2:bin,daemon
sys:x:3:bin,adm
adm:x:4:adm,daemon
tty:x:5:
disk:x:6:
lp:x:7:daemon
mem:x:8:
kmem:x:9:
wheel:x:10:
mail:x:12:mail,postfix
uucp:x:14:
man:x:15:
games:x:20:
gopher:x:30:
video:x:39:
dip:x:40:
ftp:x:50:
lock:x:54:
audio:x:63:
nobody:x:99:
users:x:100:oozie,ambari-qa,tez,falcon,hue,guest
floppy:x:19:
vcsa:x:69:
utmp:x:22:
utempter:x:35:
cdrom:x:11:
tape:x:33:
dialout:x:18:
saslauth:x:76:
postdrop:x:90:
postfix:x:89:
fuse:x:499:
sshd:x:74:
vagrant:x:500:vagrant
dbus:x:81:
puppet:x:52:
vboxsf:x:498:
postgres:x:26:
hadoop:x:501:hive,flume,hdfs,knox,storm,spark,mapred,hbase,zookeeper,kafka,sqoop,yarn,hcat,ams,atlas,hue,admin
knox:x:502:
spark:x:503:
hdfs:x:504:hdfs,mapred
rpc:x:32:
storm:x:497:
falcon:x:496:
flume:x:495:
hbase:x:494:
hive:x:493:
kafka:x:492:
mysql:x:27:
oozie:x:491:
rpcuser:x:29:
nfsnobody:x:65534:
hue:x:490:
solr:x:505:
apache:x:48:
admin:x:1005:
kms:x:489:
ranger:x:488:
xapolicymgr:x:1008:
IT:x:1009:
Legal:x:1010:
Marketing:x:1011:
Network:x:1012:
guest:x:1013:
shellinabox:x:487:



System Security Files

-rw-r--r--  1 root root 1057 2015-07-21 16:45 /etc/group
-rw-r--r--. 1 root root  370 2010-01-12 13:28 /etc/hosts.allow
-rw-r--r--  1 root root 3058 2015-07-21 16:45 /etc/passwd
----------  1 root root 2163 2015-07-21 16:45 /etc/shadow


MD5(/etc/passwd)= 43d3bbb712a1f360055814cc77ac80e6
MD5(/etc/group)= d15e6e756491a478bd31a96af0eee33e
MD5(/etc/shadow)= 33294503698a0a48005530b7a4aa9f9a
MD5(/etc/hosts.allow)= 3fb7d181e3e605ca91541c0d82753616
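
The fingerprints above use the MD5(file)= digest format produced by openssl md5; recomputing the same hashes on a later run makes changes to these files easy to spot. A minimal Python sketch, assuming only the standard hashlib module (illustrative, not part of ux2html.sh itself):

# Minimal sketch: recompute the MD5 fingerprints of the security files
# in the same "MD5(path)= digest" format as above. /etc/shadow is only
# readable by root, matching how this report was generated.
import hashlib

for path in ("/etc/passwd", "/etc/group", "/etc/shadow", "/etc/hosts.allow"):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    print("MD5(%s)= %s" % (path, h.hexdigest()))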

::::::::::::::
/etc/passwd
::::::::::::::
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
uucp:x:10:14:uucp:/var/spool/uucp:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
gopher:x:13:30:gopher:/var/gopher:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
saslauth:x:499:76:Saslauthd user:/var/empty/saslauth:/sbin/nologin
postfix:x:89:89::/var/spool/postfix:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
vagrant:x:500:500:vagrant:/home/vagrant:/bin/bash
dbus:x:81:81:System message bus:/:/sbin/nologin
HDP:x:501:0::/home/HDP:/bin/bash
puppet:x:52:52:Puppet:/var/lib/puppet:/sbin/nologin
vboxadd:x:498:1::/var/run/vboxadd:/bin/false
postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash
oozie:x:502:501::/home/oozie:/bin/bash
hive:x:503:501::/home/hive:/bin/bash
ambari-qa:x:1001:501::/home/ambari-qa:/bin/bash
flume:x:505:501::/home/flume:/bin/bash
hdfs:x:506:501::/home/hdfs:/bin/bash
knox:x:507:501::/home/knox:/bin/bash
storm:x:508:501::/home/storm:/bin/bash
spark:x:509:501::/home/spark:/bin/bash
mapred:x:510:501::/home/mapred:/bin/bash
hbase:x:1002:501::/home/hbase:/bin/bash
tez:x:512:501::/home/tez:/bin/bash
zookeeper:x:513:501::/home/zookeeper:/bin/bash
kafka:x:514:501::/home/kafka:/bin/bash
falcon:x:515:501::/home/falcon:/bin/bash
sqoop:x:516:501::/home/sqoop:/bin/bash
yarn:x:517:501::/home/yarn:/bin/bash
hcat:x:518:501::/home/hcat:/bin/bash
ams:x:519:501::/home/ams:/bin/bash
atlas:x:520:501::/home/atlas:/bin/bash
rpc:x:32:32:Rpcbind Daemon:/var/cache/rpcbind:/sbin/nologin
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
hue:x:1003:490:Hue:/usr/lib/hue:/bin/bash
solr:x:1004:505::/home/solr:/bin/bash
apache:x:48:48:Apache:/var/www:/sbin/nologin
admin:x:1005:1005::/home/admin:/bin/bash
kms:x:1006:489:KMS:/var/lib/ranger:/bin/bash
ranger:x:1007:488:Ranger:/var/lib/ranger:/bin/bash
xapolicymgr:x:1008:1008::/home/xapolicymgr:/bin/bash
it1:x:1009:1009::/home/it1:/bin/bash
legal1:x:1010:1010::/home/legal1:/bin/bash
mktg1:x:1011:1011::/home/mktg1:/bin/bash
network1:x:1012:1012::/home/network1:/bin/bash
it2:x:1013:1009::/home/it2:/bin/bash
legal2:x:1014:1010::/home/legal2:/bin/bash
mktg2:x:1015:1011::/home/mktg2:/bin/bash
network2:x:1016:1012::/home/network2:/bin/bash
it3:x:1017:1009::/home/it3:/bin/bash
legal3:x:1018:1010::/home/legal3:/bin/bash
mktg3:x:1019:1011::/home/mktg3:/bin/bash
network3:x:1020:1012::/home/network3:/bin/bash
guest:x:1021:1013::/home/guest:/bin/bash
shellinabox:x:497:487:Shellinabox:/var/lib/shellinabox:/sbin/nologin
::::::::::::::
/etc/group
::::::::::::::
root:x:0:
bin:x:1:bin,daemon
daemon:x:2:bin,daemon
sys:x:3:bin,adm
adm:x:4:adm,daemon
tty:x:5:
disk:x:6:
lp:x:7:daemon
mem:x:8:
kmem:x:9:
wheel:x:10:
mail:x:12:mail,postfix
uucp:x:14:
man:x:15:
games:x:20:
gopher:x:30:
video:x:39:
dip:x:40:
ftp:x:50:
lock:x:54:
audio:x:63:
nobody:x:99:
users:x:100:oozie,ambari-qa,tez,falcon,hue,guest
floppy:x:19:
vcsa:x:69:
utmp:x:22:
utempter:x:35:
cdrom:x:11:
tape:x:33:
dialout:x:18:
saslauth:x:76:
postdrop:x:90:
postfix:x:89:
fuse:x:499:
sshd:x:74:
vagrant:x:500:vagrant
dbus:x:81:
puppet:x:52:
vboxsf:x:498:
postgres:x:26:
hadoop:x:501:hive,flume,hdfs,knox,storm,spark,mapred,hbase,zookeeper,kafka,sqoop,yarn,hcat,ams,atlas,hue,admin
knox:x:502:
spark:x:503:
hdfs:x:504:hdfs,mapred
rpc:x:32:
storm:x:497:
falcon:x:496:
flume:x:495:
hbase:x:494:
hive:x:493:
kafka:x:492:
mysql:x:27:
oozie:x:491:
rpcuser:x:29:
nfsnobody:x:65534:
hue:x:490:
solr:x:505:
apache:x:48:
admin:x:1005:
kms:x:489:
ranger:x:488:
xapolicymgr:x:1008:
IT:x:1009:
Legal:x:1010:
Marketing:x:1011:
Network:x:1012:
guest:x:1013:
shellinabox:x:487:
::::::::::::::
/etc/shadow
::::::::::::::
root:$6$x1QOu1PZxduvJOBU$AS2cVUYy95hOtS7bxQ5cv28shUOWiL2Ua8AoXF3SbDux0ijBWvVNIxterUP7hi6JDAyyvwQI2aScP58zCT7Br.:16637:0:99999:7:::
bin:*:15980:0:99999:7:::
daemon:*:15980:0:99999:7:::
adm:*:15980:0:99999:7:::
lp:*:15980:0:99999:7:::
sync:*:15980:0:99999:7:::
shutdown:*:15980:0:99999:7:::
halt:*:15980:0:99999:7:::
mail:*:15980:0:99999:7:::
uucp:*:15980:0:99999:7:::
operator:*:15980:0:99999:7:::
games:*:15980:0:99999:7:::
gopher:*:15980:0:99999:7:::
ftp:*:15980:0:99999:7:::
nobody:*:15980:0:99999:7:::
vcsa:!!:16637::::::
saslauth:!!:16637::::::
postfix:!!:16637::::::
sshd:!!:16637::::::
vagrant:$6$1FDlvfDbt79Ker47$xmbSybYNBX2HDr6fyz/y2xc3cwwQDM72d4PBGtbU6D9ra6HqJWb9CuUR.DvZ1HBpQ7HiRwlb2C7luyjetJlpD.:16637:0:99999:7:::
dbus:!!:16637::::::
HDP:!!:16637:0:99999:7:::
puppet:!!:16637::::::
vboxadd:!!:16637::::::
postgres:!!:16637::::::
oozie:!!:16637:0:99999:7:::
hive:!!:16637:0:99999:7:::
ambari-qa:!!:16637:0:99999:7:::
flume:!!:16637:0:99999:7:::
hdfs:!!:16637:0:99999:7:::
knox:!!:16637:0:99999:7:::
storm:!!:16637:0:99999:7:::
spark:!!:16637:0:99999:7:::
mapred:!!:16637:0:99999:7:::
hbase:!!:16637:0:99999:7:::
tez:!!:16637:0:99999:7:::
zookeeper:!!:16637:0:99999:7:::
kafka:!!:16637:0:99999:7:::
falcon:!!:16637:0:99999:7:::
sqoop:!!:16637:0:99999:7:::
yarn:!!:16637:0:99999:7:::
hcat:!!:16637:0:99999:7:::
ams:!!:16637:0:99999:7:::
atlas:!!:16637:0:99999:7:::
rpc:!!:16637:0:99999:7:::
mysql:!!:16637::::::
rpcuser:!!:16637::::::
nfsnobody:!!:16637::::::
hue:$6$RC08pRlZ$/VZUqTYOqvm6nLkHUIBd4GNM5G4Tu3krfNCd0f7ANpjOyMuEn2hxEc//QhWOhuOLRFCCdqmxApoKPrsWP.Mek0:16637:0:99999:7:::
solr:!!:16637:0:99999:7:::
apache:!!:16637::::::
admin:!!:16637:0:99999:7:::
kms:!!:16637:0:99999:7:::
ranger:!!:16637:0:99999:7:::
xapolicymgr:xapolicymgr:16637:0:99999:7:::
it1:!!:16637:0:99999:7:::
legal1:!!:16637:0:99999:7:::
mktg1:!!:16637:0:99999:7:::
network1:!!:16637:0:99999:7:::
it2:!!:16637:0:99999:7:::
legal2:!!:16637:0:99999:7:::
mktg2:!!:16637:0:99999:7:::
network2:!!:16637:0:99999:7:::
it3:!!:16637:0:99999:7:::
legal3:!!:16637:0:99999:7:::
mktg3:!!:16637:0:99999:7:::
network3:!!:16637:0:99999:7:::
guest:!!:16637:0:99999:7:::
shellinabox:!!:16637::::::
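
In the shadow entries above, the second field is the password hash: a value starting with $6$ is a SHA-512 crypt hash, while *, !! or any other non-crypt string (such as the literal xapolicymgr seen above) means password authentication is effectively disabled. A minimal sketch, assuming root privileges, that flags accounts holding a usable hash:

# Minimal sketch: flag shadow entries with a usable password hash.
# Modular crypt hashes start with "$"; "*", "!" and "!!" mark accounts
# that cannot authenticate with a password.
ALGOS = {"1": "MD5", "5": "SHA-256", "6": "SHA-512"}

with open("/etc/shadow") as f:
    for line in f:
        name, pwhash = line.split(":")[:2]
        if pwhash.startswith("$"):
            algo = ALGOS.get(pwhash.split("$")[1], "unknown")
            print("%s: password set (%s crypt)" % (name, algo))

On this run only root, vagrant and hue would be reported, all with SHA-512 hashes.
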
::::::::::::::
/etc/hosts.allow
::::::::::::::
#
# hosts.allow	This file contains access rules which are used to
#		allow or deny connections to network services that
#		either use the tcp_wrappers library or that have been
#		started through a tcp_wrappers-enabled xinetd.
#
#		See 'man 5 hosts_options' and 'man 5 hosts_access'
#		for information on rule syntax.
#		See 'man tcpd' for information on tcp_wrappers
#



Space Configuration


File Systems

Filesystem                     1024-blocks    Used  Available Capacity Mounted_on
/dev/mapper/vg_sandbox-lv_root    44717136 9738688   32700256      23% /
tmpfs                              4029672       8    4029664       1% /dev/shm
/dev/sda1                           487652   25643     436409       6% /boot

Mount Options

/dev/mapper/vg_sandbox-lv_root /                       ext4    defaults        1 1
UUID=78d18683-171e-426e-91c9-8273b2b5726f /boot                   ext4    defaults        1 2
/dev/mapper/vg_sandbox-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0



Volumes Summary

VxVM Volume Summary

Disk_Group Total_MB Used_MB Visible_MB Avail_MB Used%
---------- ---------- ---------- ---------- ---------- ----------
---------- ---------- ---------- ---------- ----------
Total 0 0 0 0
Physical_Disks: 0
Disk_Groups: 0
Volumes: 0
Plexes: 0



Logical Volumes


  --- Volume group ---
  VG Name               vg_sandbox
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               48.34 GiB
  PE Size               4.00 MiB
  Total PE              12374
  Alloc PE / Size       12374 / 48.34 GiB
  Free  PE / Size       0 / 0   
  VG UUID               05zdc6-ex9m-pMEo-nGMX-HKoI-ImER-UDYrXy
   
  --- Logical volume ---
  LV Path                /dev/vg_sandbox/lv_root
  LV Name                lv_root
  VG Name                vg_sandbox
  LV UUID                H1Nwt2-K7xf-a9UA-zhBE-vJL8-8eQg-d4wV3x
  LV Write Access        read/write
  LV Creation host, time sandbox.hortonworks.com, 2015-07-21 15:18:25 +0000
  LV Status              available
  # open                 1
  LV Size                43.45 GiB
  Current LE             11124
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg_sandbox/lv_swap
  LV Name                lv_swap
  VG Name                vg_sandbox
  LV UUID                Xr2U3Q-Lo2D-2g1o-UvxA-thHY-FnNu-W6Ucg8
  LV Write Access        read/write
  LV Creation host, time sandbox.hortonworks.com, 2015-07-21 15:18:43 +0000
  LV Status              available
  # open                 1
  LV Size                4.88 GiB
  Current LE             1250
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
   
  --- Physical volumes ---
  PV Name               /dev/sda2     
  PV UUID               v2bkpm-Lczc-kPw9-xcwT-cMjb-r9CS-BBMgKE
  PV Status             allocatable
  Total PE / Free PE    12374 / 0
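
The extent accounting above is internally consistent: lv_root's 11124 logical extents plus lv_swap's 1250 equal the volume group's 12374 allocated physical extents, and at 4.00 MiB per extent that yields the reported sizes. A quick check in Python:

# Minimal sketch: cross-check the LVM extent arithmetic reported above.
PE_MIB = 4.0                        # PE Size: 4.00 MiB
le_root, le_swap = 11124, 1250      # Current LE of lv_root and lv_swap

assert le_root + le_swap == 12374   # Total PE, with Free PE = 0
print("VG:      %.2f GiB" % ((le_root + le_swap) * PE_MIB / 1024))  # 48.34
print("lv_root: %.2f GiB" % (le_root * PE_MIB / 1024))              # 43.45
print("lv_swap: %.2f GiB" % (le_swap * PE_MIB / 1024))              # 4.88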
   

config {
	checks=1
	abort_on_errors=0
	profile_dir="/etc/lvm/profile"
}
dmeventd {
	mirror_library="libdevmapper-event-lvm2mirror.so"
	snapshot_library="libdevmapper-event-lvm2snapshot.so"
	thin_library="libdevmapper-event-lvm2thin.so"
}
activation {
	checks=0
	udev_sync=1
	udev_rules=1
	verify_udev_operations=0
	retry_deactivation=1
	missing_stripe_filler="error"
	use_linear_target=1
	reserved_stack=64
	reserved_memory=8192
	process_priority=-18
	raid_region_size=512
	readahead="auto"
	raid_fault_policy="warn"
	mirror_log_fault_policy="allocate"
	mirror_image_fault_policy="remove"
	snapshot_autoextend_threshold=100
	snapshot_autoextend_percent=20
	thin_pool_autoextend_threshold=100
	thin_pool_autoextend_percent=20
	use_mlockall=0
	monitoring=1
	polling_interval=15
	activation_mode="degraded"
}
global {
	umask=63
	test=0
	units="h"
	si_unit_consistency=1
	suffix=1
	activation=1
	proc="/proc"
	locking_type=1
	wait_for_locks=1
	fallback_to_clustered_locking=1
	fallback_to_local_locking=1
	locking_dir="/var/lock/lvm"
	prioritise_write_locks=1
	abort_on_internal_errors=0
	detect_internal_vg_cache_corruption=0
	metadata_read_only=0
	mirror_segtype_default="mirror"
	raid10_segtype_default="mirror"
	use_lvmetad=0
}
shell {
	history_size=100
}
backup {
	backup=1
	backup_dir="/etc/lvm/backup"
	archive=1
	archive_dir="/etc/lvm/archive"
	retain_min=10
	retain_days=30
}
log {
	verbose=0
	silent=0
	syslog=1
	overwrite=0
	level=0
	indent=1
	command_names=0
	prefix="  "
	debug_classes=["memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking"]
}
allocation {
	maximise_cling=1
	use_blkid_wiping=1
	wipe_signatures_when_zeroing_new_lvs=1
	mirror_logs_require_separate_pvs=0
	cache_pool_metadata_require_separate_pvs=0
	thin_pool_metadata_require_separate_pvs=0
}
devices {
	dir="/dev"
	scan="/dev"
	obtain_device_list_from_udev=0
	preferred_names=["^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d"]
	cache_dir="/etc/lvm/cache"
	cache_file_prefix=""
	write_cache_state=1
	sysfs_scan=1
	multipath_component_detection=1
	md_component_detection=1
	md_chunk_alignment=1
	data_alignment_detection=1
	data_alignment=0
	data_alignment_offset_detection=1
	ignore_suspended_devices=0
	ignore_lvm_mirrors=1
	disable_after_error_count=0
	require_restorefile_with_uuid=1
	pv_min_size=2048
	issue_discards=0
}

VxVM Volume Details



Network Configuration

10.0.2.15	sandbox.hortonworks.com sandbox ambari.hortonworks.com

Network Adapters

eth0      Link encap:Ethernet  HWaddr 08:00:27:CA:F6:76  
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:227803 errors:1 dropped:0 overruns:0 frame:0
          TX packets:202626 errors:4 dropped:0 overruns:0 carrier:4
          collisions:0 txqueuelen:1000 
          RX bytes:95648453 (91.2 MiB)  TX bytes:160126443 (152.7 MiB)
          Interrupt:19 Base address:0xd020 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:5755260 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5755260 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:10084646574 (9.3 GiB)  TX bytes:10084646574 (9.3 GiB)


Settings for eth0:
	Supported ports: [ TP MII ]
	Supported link modes:   10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	Supported pause frame use: No
	Supports auto-negotiation: Yes
	Advertised link modes:  10baseT/Half 10baseT/Full 
	                        100baseT/Half 100baseT/Full 
	Advertised pause frame use: No
	Advertised auto-negotiation: Yes
	Link partner advertised link modes:  10baseT/Half 10baseT/Full 
	                                     100baseT/Half 100baseT/Full 
	Link partner advertised pause frame use: Symmetric
	Link partner advertised auto-negotiation: Yes
	Speed: 100Mb/s
	Duplex: Full
	Port: MII
	PHYAD: 0
	Transceiver: internal
	Auto-negotiation: on
	Current message level: 0x00000007 (7)
			       drv probe link
	Link detected: yes



bridge name	bridge id		STP enabled	interfaces

Host file

# File is generated from /usr/lib/hue/tools/start_scripts/gen_hosts.sh
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1		localhost.localdomain localhost
10.0.2.15	sandbox.hortonworks.com sandbox ambari.hortonworks.com

DNS Client


nameserver 8.8.8.8

DNS Server

The system is not a DNS Server

Routing

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 eth0
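
Reading the table: traffic to 10.0.2.0/24 and to the link-local 169.254.0.0/16 range leaves eth0 directly, and everything else follows the default route (destination 0.0.0.0, flag G) through gateway 10.0.2.2. A minimal sketch of the longest-prefix match the kernel applies, with the routes hard-coded from the table above (illustrative only):

# Minimal sketch: longest-prefix-match lookup over the routing table above.
import ipaddress

ROUTES = [  # (destination, genmask, gateway, iface)
    ("10.0.2.0",    "255.255.255.0", None,       "eth0"),
    ("169.254.0.0", "255.255.0.0",   None,       "eth0"),
    ("0.0.0.0",     "0.0.0.0",       "10.0.2.2", "eth0"),
]

def lookup(dst):
    dst = ipaddress.ip_address(dst)
    nets = [(ipaddress.ip_network("%s/%s" % (d, m)), gw, dev)
            for d, m, gw, dev in ROUTES]
    return max((r for r in nets if dst in r[0]), key=lambda r: r[0].prefixlen)

print(lookup("10.0.2.2"))   # on-link via eth0
print(lookup("8.8.8.8"))    # default route via 10.0.2.2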

ARP table
? (10.0.2.2) at 52:54:00:12:35:02 [ether] on eth0
? (169.254.169.254) at <incomplete> on eth0

IP table
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

NFS


All mount points on sandbox.hortonworks.com:

NTP




HW Configuration

Processors

x86_64

processor	: 0
cpu family	: 6
model name	: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
cpu MHz		: 2493.949
cache size	: 6144 KB
cpu cores	: 4
cpuid level	: 13
bogomips	: 4987.89
cache_alignment	: 64
processor	: 1
cpu family	: 6
model name	: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
cpu MHz		: 2493.949
cache size	: 6144 KB
cpu cores	: 4
cpuid level	: 13
bogomips	: 4987.89
cache_alignment	: 64
processor	: 2
cpu family	: 6
model name	: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
cpu MHz		: 2493.949
cache size	: 6144 KB
cpu cores	: 4
cpuid level	: 13
bogomips	: 4987.89
cache_alignment	: 64
processor	: 3
cpu family	: 6
model name	: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
cpu MHz		: 2493.949
cache size	: 6144 KB
cpu cores	: 4
cpuid level	: 13
bogomips	: 4987.89
cache_alignment	: 64


Memory

             total       used       free     shared    buffers     cached
Mem:       8059344    5904560    2154784      10852     163752     696076
-/+ buffers/cache:    5044732    3014612
Swap:      5119996      76096    5043900
Total:    13179340    5980656    7198684
Meminfo
MemTotal:        8059344 kB
MemFree:         2154784 kB
Buffers:          163752 kB
Cached:           696076 kB
SwapCached:        18748 kB
Active:          4203256 kB
Inactive:        1383608 kB
Active(anon):    3613004 kB
Inactive(anon):  1124816 kB
Active(file):     590252 kB
Inactive(file):   258792 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:       5119996 kB
SwapFree:        5043900 kB
Dirty:               732 kB
Writeback:             0 kB
AnonPages:       4708748 kB
Mapped:            78788 kB
Shmem:             10852 kB
Slab:             208536 kB
SReclaimable:     171052 kB
SUnreclaim:        37484 kB
KernelStack:       11264 kB
PageTables:        27804 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     9149668 kB
Committed_AS:    7416968 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       31300 kB
VmallocChunk:   34359697404 kB
HardwareCorrupted:     0 kB
AnonHugePages:   3932160 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:        8128 kB
DirectMap2M:     8380416 kB
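
The -/+ buffers/cache row in the free output above is derived from these counters: used minus Buffers and Cached, i.e. memory the kernel could reclaim for applications. A minimal sketch reproducing it from /proc/meminfo:

# Minimal sketch: reproduce free(1)'s "-/+ buffers/cache" row from
# /proc/meminfo. All values are in kB.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # drop the "kB" suffix
    return info

m = meminfo()
used = m["MemTotal"] - m["MemFree"]             # 5904560 on this run
reclaimable = m["Buffers"] + m["Cached"]
print("-/+ buffers/cache: %d used, %d free"
      % (used - reclaimable, m["MemFree"] + reclaimable))  # 5044732 / 3014612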

NUMA



Devices

PCI
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:02.0 VGA compatible controller: InnoTek Systemberatung GmbH VirtualBox Graphics Adapter
00:03.0 Ethernet controller: Advanced Micro Devices, Inc. [AMD] 79c970 [PCnet32 LANCE] (rev 40)
00:04.0 System peripheral: InnoTek Systemberatung GmbH VirtualBox Guest Service
00:06.0 USB controller: Apple Inc. KeyLargo/Intrepid USB
00:07.0 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)

USB
Plug&Play
BIOS

# dmidecode 2.12
SMBIOS 2.5 present.
10 structures occupying 449 bytes.
Table at 0x000E1000.

Handle 0x0000, DMI type 0, 20 bytes
BIOS Information
	Vendor: innotek GmbH
	Version: VirtualBox
	Release Date: 12/01/2006
	Address: 0xE0000
	Runtime Size: 128 kB
	ROM Size: 128 kB
	Characteristics:
		ISA is supported
		PCI is supported
		Boot from CD is supported
		Selectable boot is supported
		8042 keyboard services are supported (int 9h)
		CGA/mono video services are supported (int 10h)
		ACPI is supported

Handle 0x0001, DMI type 1, 27 bytes
System Information
	Manufacturer: innotek GmbH
	Product Name: VirtualBox
	Version: 1.2
	Serial Number: 0
	UUID: E86E9793-5627-4420-852A-98A14B937070
	Wake-up Type: Power Switch
	SKU Number: Not Specified
	Family: Virtual Machine

Handle 0x0008, DMI type 2, 15 bytes
Base Board Information
	Manufacturer: Oracle Corporation
	Product Name: VirtualBox
	Version: 1.2
	Serial Number: 0
	Asset Tag: Not Specified
	Features:
		Board is a hosting board
	Location In Chassis: Not Specified
	Chassis Handle: 0x0003
	Type: Motherboard
	Contained Object Handles: 0

Handle 0x0003, DMI type 3, 13 bytes
Chassis Information
	Manufacturer: Oracle Corporation
	Type: Other
	Lock: Not Present
	Version: Not Specified
	Serial Number: Not Specified
	Asset Tag: Not Specified
	Boot-up State: Safe
	Power Supply State: Safe
	Thermal State: Safe
	Security Status: None

Handle 0x0007, DMI type 126, 42 bytes
Inactive

Handle 0x0005, DMI type 126, 15 bytes
Inactive

Handle 0x0006, DMI type 126, 28 bytes
Inactive

Handle 0x0002, DMI type 11, 7 bytes
OEM Strings
	String 1: vboxVer_5.0.0
	String 2: vboxRev_101573

Handle 0x0008, DMI type 128, 8 bytes
OEM-specific Type
	Header and Data:
		80 08 08 00 12 0F 26 00

Handle 0xFEFF, DMI type 127, 4 bytes
End Of Table



Disks


Disk /dev/sda: 52.4 GB, 52428800000 bytes
255 heads, 63 sectors/track, 6374 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a8e3d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        6375    50686976   8e  Linux LVM

Disk /dev/mapper/vg_sandbox-lv_root: 46.7 GB, 46657437696 bytes
255 heads, 63 sectors/track, 5672 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/vg_sandbox-lv_swap: 5242 MB, 5242880000 bytes
255 heads, 63 sectors/track, 637 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
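
The geometry figures above are self-consistent: with the classic 255-head, 63-sector translation one cylinder is 16065 x 512 = 8225280 bytes, so the 52428800000-byte disk spans the reported 6374 cylinders, and sda2's 50686976 one-KiB blocks are the ~48.3 GiB that reappear as the LVM volume group size. A worked check:

# Minimal sketch: verify the fdisk geometry arithmetic shown above.
CYL_BYTES = 255 * 63 * 512            # 8225280 bytes per cylinder

print(52428800000 / CYL_BYTES)        # ~6374 cylinders on /dev/sda
print(512000 * 1024 / 2**20)          # /dev/sda1: 500 MiB (/boot)
print(50686976 * 1024 / 2**30)        # /dev/sda2: ~48.34 GiB (LVM PV)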


0
max_active_device=1(origin 1)
 def_reserved_size=32768
32768
host	chan	id	lun	type	opens	qdepth	busy	online
0	0	0	0	0	1	1	0	1
ATA     	VBOX HARDDISK   	1.0 
30534	3.5.34 [20061027]

Jul 27 15:36:39 | DM multipath kernel driver not loaded
Jul 27 15:36:39 | /etc/multipath.conf does not exist, blacklisting all devices.
Jul 27 15:36:39 | A sample multipath.conf file is located at
Jul 27 15:36:39 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Jul 27 15:36:39 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf
Jul 27 15:36:39 | DM multipath kernel driver not loaded

Storage Extended Infos



Fiber Channel Adapters

WWNs



Disks User Info



System Partitioning

Partitioning Information NOT FOUND



SW Configuration


Swap Space

Filename				Type		Size	Used	Priority
/dev/dm-1                               partition	5119996	76096	-1



Directories Usage

4179408	/usr/hdp
938212	/usr/lib
324576	/usr/lib64
270788	/usr/share
56368	/usr/bin
30596	/usr/libexec
28500	/usr/sbin
8328	/usr/include
1200	/usr/local
12	/usr/src
4	/usr/games
4	/usr/etc
0	/usr/tmp
1407368	/var/lib
205384	/var/log
80904	/var/cache
1172	/var/www
344	/var/run
120	/var/spool
48	/var/tmp
20	/var/lock
8	/var/empty
8	/var/db
4	/var/yp
4	/var/preserve
4	/var/opt
4	/var/nis
4	/var/local
4	/var/games
4	/var/cvs
0	/var/mail



Printers

scheduler is not running
no system default destination



SW Packages

ranger_2_3_0_0_2557-hive-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
freetype-2.3.11-15.el6_6.1 x86_64
basesystem-10.0-4.el6 noarch
hive_2_3_0_0_2557-hcatalog-1.2.1.2.3.0.0-2557.el6 noarch
libICE-1.0.6-1.el6 x86_64
hive_2_3_0_0_2557-server-1.2.1.2.3.0.0-2557.el6 noarch
alsa-lib-1.0.22-3.el6 x86_64
bash-4.1.2-29.el6 x86_64
kafka_2_3_0_0_2557-0.8.2.2.3.0.0-2557.el6 noarch
jasper-libs-1.900.1-16.el6_6.3 x86_64
info-4.13a-8.el6 x86_64
python-devel-2.6.6-52.el6 x86_64
atk-1.30.0-1.el6 x86_64
libcom_err-1.41.12-21.el6 x86_64
ambari-metrics-hadoop-sink-2.1.0-1470 x86_64
cups-libs-1.4.2-67.el6_6.1 x86_64
mysql-server-5.1.73-5.el6_6 x86_64
libtheora-1.1.0-2.el6 x86_64
sed-4.2.1-10.el6 x86_64
oozie_2_3_0_0_2557-4.2.0.2.3.0.0-2557.el6 noarch
libstdc++-4.4.7-11.el6 x86_64
spark_2_3_0_0_2557-python-1.3.1.2.3.0.0-2557.el6 noarch
libgpg-error-1.7-4.el6 x86_64
sqoop_2_3_0_0_2557-metastore-1.4.6.2.3.0.0-2557.el6 noarch
libX11-common-1.6.0-2.2.el6 noarch
grep-2.6.3-6.el6 x86_64
keyutils-1.4-5.el6 x86_64
libvisual-0.4.0-10.el6 x86_64
libidn-1.18-2.el6 x86_64
nfs-utils-lib-devel-1.1.5-9.el6_6 x86_64
gstreamer-tools-0.10.29-1.el6 x86_64
hue-common-2.6.1.2.3.0.0-2557.el6 x86_64
mesa-private-llvm-3.4-3.el6 x86_64
libselinux-utils-2.0.94-5.8.el6 x86_64
hue-oozie-2.6.1.2.3.0.0-2557.el6 x86_64
pax-3.4-10.1.el6 x86_64
lucidworks-hdpsearch-2.3- noarch
libX11-1.6.0-2.2.el6 x86_64
libtasn1-2.3-6.el6_5 x86_64
httpd-tools-2.2.15-39.el6.centos x86_64
libXfixes-5.0.1-2.1.el6 x86_64
device-mapper-persistent-data-0.3.2-1.el6 x86_64
hue-sandbox-1.2.1-88 noarch
libXinerama-1.1.3-2.1.el6 x86_64
gmp-4.3.1-7.el6_2.2 x86_64
git-1.7.1-3.el6_4.1 x86_64
qt3-3.3.8b-30.el6 x86_64
psmisc-22.6-19.el6_5 x86_64
ranger_2_3_0_0_2557-debuginfo-0.5.0.2.3.0.0-2557.el6 x86_64
mesa-libGL-10.1.2-2.el6 x86_64
procps-3.2.8-30.el6 x86_64
shellinabox-2.14-27.git88822c1.el6 x86_64
libXcomposite-0.4.3-4.el6 x86_64
pinentry-0.7.6-6.el6 x86_64
poppler-0.12.4-4.el6_6.1 x86_64
make-3.81-20.el6 x86_64
foomatic-db-4.0-7.20091126.el6 noarch
less-436-13.el6 x86_64
xz-lzma-compat-4.999.9-0.5.beta.20091007git.el6 x86_64
cracklib-dicts-2.8.16-4.el6 x86_64
pango-1.28.1-10.el6 x86_64
hwdata-0.233-11.1.el6 noarch
ed-1.1-3.3.el6 x86_64
qt-x11-4.6.2-28.el6_5 x86_64
libedit-2.11-4.20080712cvs.1.el6 x86_64
hicolor-icon-theme-0.11-1.1.el6 noarch
gnupg2-2.0.14-8.el6 x86_64
puppetlabs-release-6-7 noarch
libcap-ng-0.6.4-3.el6_0.1 x86_64
libselinux-ruby-2.0.94-5.8.el6 x86_64
python-pycurl-7.19.0-8.el6 x86_64
ruby-1.8.7.374-4.el6_6 x86_64
python-iniparse-0.3.1-2.1.el6 noarch
rubygems-1.3.7-5.el6 noarch
ustr-1.0.4-9.1.el6 x86_64
augeas-libs-1.0.0-7.el6_6.1 x86_64
gamin-0.1.10-9.el6 x86_64
cloog-ppl-0.15.7-1.2.el6 x86_64
grubby-7.0.15-7.el6 x86_64
dbus-glib-0.86-6.el6 x86_64
libasyncns-0.8-1.1.el6 x86_64
iptables-1.4.7-14.el6 x86_64
tzdata-java-2015e-1.el6 noarch
pulseaudio-libs-0.9.21-17.el6 x86_64
GConf2-2.28.0-6.el6 x86_64
cryptsetup-luks-libs-1.2.0-11.el6 x86_64
java-1.7.0-openjdk-1.7.0.85-2.6.1.3.el6_6 x86_64
plymouth-0.8.3-27.el6.centos.1 x86_64
postgresql-server-8.4.20-3.el6_6 x86_64
cronie-anacron-1.4.4-12.el6 x86_64
hdp-select-2.3.0.0-2557.el6 noarch
perl-Module-Pluggable-3.90-136.el6_6.1 x86_64
perl-CGI-3.51-136.el6_6.1 x86_64
zookeeper_2_3_0_0_2557-3.4.6.2.3.0.0-2557.el6 noarch
glibc-headers-2.12-1.149.el6_6.9 x86_64
iscsi-initiator-utils-6.2.0.873-13.el6 x86_64
perl-ExtUtils-MakeMaker-6.55-136.el6_6.1 x86_64
efibootmgr-0.5.4-12.el6 x86_64
redhat-lsb-graphics-4.0-7.el6.centos x86_64
xfsprogs-3.1.1-16.el6 x86_64
hadoop_2_3_0_0_2557-2.7.1.2.3.0.0-2557.el6 x86_64
rootfiles-8.1-6.1.el6 noarch
hadoop_2_3_0_0_2557-hdfs-2.7.1.2.3.0.0-2557.el6 x86_64
glibc-common-2.12-1.149.el6_6.9 x86_64
libtirpc-0.2.1-10.el6 x86_64
device-mapper-libs-1.02.90-2.el6_6.3 x86_64
hadoop_2_3_0_0_2557-libhdfs-2.7.1.2.3.0.0-2557.el6 x86_64
nss-util-3.19.1-1.el6_6 x86_64
storm_2_3_0_0_2557-slider-client-0.10.0.2.3.0.0-2557.el6 x86_64
nss-sysinit-3.19.1-3.el6_6 x86_64
flume_2_3_0_0_2557-1.5.2.2.3.0.0-2557.el6 noarch
iproute-2.6.32-33.el6_6 x86_64
hbase_2_3_0_0_2557-thrift2-1.1.1.2.3.0.0-2557.el6 noarch
cyrus-sasl-lib-2.1.23-15.el6_6.2 x86_64
hbase_2_3_0_0_2557-master-1.1.1.2.3.0.0-2557.el6 noarch
dbus-libs-1.2.24-8.el6_6 x86_64
jakarta-commons-discovery-0.4-5.4.el6 noarch
libssh2-1.4.2-1.el6_6.1 x86_64
libgcj-4.4.7-11.el6 x86_64
rpm-libs-4.8.0-38.el6_6 x86_64
classpathx-jaf-1.0-15.4.el6 x86_64
selinux-policy-targeted-3.7.19-260.el6_6.5 noarch
xml-commons-resolver-1.1-4.18.el6 x86_64
cyrus-sasl-2.1.23-15.el6_6.2 x86_64
wsdl4j-1.5.2-7.8.el6 noarch
mdadm-3.3-6.el6_6.1 x86_64
geronimo-specs-compat-1.0-3.5.M2.el6 noarch
nss-tools-3.19.1-3.el6_6 x86_64
libgcc-4.4.7-11.el6 x86_64
system-config-firewall-base-1.2.27-7.2.el6_6 noarch
filesystem-2.4.30-3.el6 x86_64
fontconfig-2.8.0-5.el6 x86_64
ncurses-base-5.7-3.20090208.el6 x86_64
libpng-1.2.49-1.el6_2 x86_64
libSM-1.2.1-2.el6 x86_64
libtiff-3.9.4-10.el6_5 x86_64
ncurses-libs-5.7-3.20090208.el6 x86_64
libogg-1.1.4-2.1.el6 x86_64
libattr-2.4.44-7.el6 x86_64
libmng-1.0.10-4.1.el6 x86_64
zlib-1.2.3-29.el6 x86_64
avahi-libs-0.6.25-15.el6 x86_64
popt-1.13-7.el6 x86_64
mesa-dri-filesystem-10.1.2-2.el6 x86_64
audit-libs-2.3.7-5.el6 x86_64
foomatic-db-filesystem-4.0-7.20091126.el6 noarch
libacl-2.2.49-6.el6 x86_64
gnutls-2.8.5-14.el6_5 x86_64
libXfont-1.4.5-4.el6_6 x86_64
readline-6.0-4.el6 x86_64
ghostscript-fonts-5.50-23.2.el6 noarch
libselinux-2.0.94-5.8.el6 x86_64
libvorbis-1.2.3-4.el6_2.1 x86_64
urw-fonts-2.4-10.el6 noarch
libuuid-2.17.2-12.18.el6 x86_64
libblkid-2.17.2-12.18.el6 x86_64
file-libs-5.04-21.el6 x86_64
pcre-7.8-6.el6 x86_64
libthai-0.1.12-3.el6 x86_64
lua-5.1.4-4.1.el6 x86_64
dbus-1.2.24-8.el6_6 x86_64
bc-1.06.95-1.el6 x86_64
expat-2.0.1-11.el6_2 x86_64
iso-codes-3.16-2.el6 noarch
elfutils-libelf-0.158-3.2.el6 x86_64
gstreamer-0.10.29-1.el6 x86_64
libgcrypt-1.4.5-11.el6_4 x86_64
at-3.1.10-44.el6_6.2 x86_64
findutils-4.4.2-6.el6 x86_64
libgudev1-147-2.57.el6 x86_64
checkpolicy-2.0.22-1.el6 x86_64
which-2.19-6.el6 x86_64
tmpwatch-2.9.16-4.el6 x86_64
pth-2.0.7-9.3.el6 x86_64
libxcb-1.9.1-2.el6 x86_64
sysvinit-tools-2.87-5.dsf.el6 x86_64
libXext-1.3.2-2.1.el6 x86_64
p11-kit-0.18.5-2.el6_5.2 x86_64
libXi-1.7.2-2.2.el6 x86_64
libXcursor-1.1.14-2.1.el6 x86_64
libnih-1.0.1-7.el6 x86_64
libXft-2.3.1-2.el6 x86_64
file-5.04-21.el6 x86_64
libXdamage-1.1.3-4.el6 x86_64
libusb-0.1.12-23.el6 x86_64
gdk-pixbuf2-2.24.1-5.el6 x86_64
libutempter-1.1.5-4.1.el6 x86_64
libXtst-1.2.2-2.1.el6 x86_64
net-tools-1.60-110.el6_2 x86_64
mesa-dri-drivers-10.1.2-2.el6 x86_64
tar-1.23-11.el6 x86_64
mesa-dri1-drivers-7.11-8.el6 x86_64
libXv-1.0.9-2.1.el6 x86_64
libss-1.41.12-21.el6 x86_64
portreserve-0.0.4-9.el6 x86_64
binutils-2.20.51.0.2-5.42.el6 x86_64
poppler-data-0.4.0-1.el6 noarch
diffutils-2.8.1-28.el6 x86_64
poppler-utils-0.12.4-4.el6_6.1 x86_64
dash-0.5.5.1-4.el6 x86_64
foomatic-db-ppds-4.0-7.20091126.el6 noarch
groff-1.18.1.4-21.el6 x86_64
db4-cxx-4.7.25-19.el6_6 x86_64
coreutils-libs-8.4-37.el6 x86_64
xz-4.999.9-0.5.beta.20091007git.el6 x86_64
cracklib-2.8.16-4.el6 x86_64
man-1.6f-32.el6 x86_64
coreutils-8.4-37.el6 x86_64
cairo-1.8.8-6.el6_6 x86_64
module-init-tools-3.9-24.el6 x86_64
ghostscript-8.70-19.el6 x86_64
redhat-logos-60.0.14-12.el6.centos noarch
liboil-0.3.16-4.1.el6 x86_64
libpciaccess-0.13.3-0.1.el6 x86_64
cdparanoia-libs-10.2-5.1.el6 x86_64
phonon-backend-gstreamer-4.6.2-28.el6_5 x86_64
logrotate-3.7.8-17.el6 x86_64
gdbm-1.8.0-36.el6 x86_64
keyutils-libs-1.4-5.el6 x86_64
time-1.7-37.1.el6 x86_64
openldap-2.4.39-8.el6 x86_64
gtk2-2.24.23-6.el6 x86_64
gpgme-1.1.8-3.el6 x86_64
fipscheck-1.2.0-7.el6 x86_64
wget-1.12-5.el6_6.1 x86_64
ethtool-3.5-5.el6 x86_64
yum-utils-1.1.30-30.el6 noarch
plymouth-core-libs-0.8.3-27.el6.centos.1 x86_64
gpg-pubkey-4bd6ec30-4ff1e4fa (none)
libffi-3.0.5-3.2.el6 x86_64
virt-what-1.11-1.2.el6 x86_64
python-libs-2.6.6-52.el6 x86_64
pciutils-3.1.10-4.el6 x86_64
python-urlgrabber-3.9.1-9.el6 noarch
ruby-libs-1.8.7.374-4.el6_6 x86_64
facter-2.4.4-1.el6 x86_64
slang-2.2.1-1.el6 x86_64
ruby-rdoc-1.8.7.374-4.el6_6 x86_64
newt-python-0.52.11-3.el6 x86_64
rubygem-json-1.5.5-3.el6 x86_64
libsemanage-2.0.43-4.2.el6 x86_64
ruby-shadow-2.2.0-2.el6 x86_64
pkgconfig-0.23-9.1.el6 x86_64
ruby-augeas-0.4.1-3.el6 x86_64
glib2-2.28.8-4.el6 x86_64
ppl-0.10.2-11.el6 x86_64
libuser-0.56.13-5.el6 x86_64
mpfr-2.4.1-6.el6 x86_64
yum-metadata-parser-1.1.2-16.el6 x86_64
yum-3.2.29-60.el6.centos noarch
eggdbus-0.6-3.el6 x86_64
giflib-4.1.6-3.1.el6 x86_64
libIDL-0.8.13-2.1.el6 x86_64
pcsc-lite-libs-1.5.2-14.el6 x86_64
util-linux-ng-2.17.2-12.18.el6 x86_64
sgml-common-0.6.3-33.el6 noarch
udev-147-2.57.el6 x86_64
libsndfile-1.0.20-5.el6 x86_64
ConsoleKit-libs-0.4.1-3.el6 x86_64
ConsoleKit-0.4.1-3.el6 x86_64
ttmkfdir-3.0.9-32.1.el6 x86_64
java-1.7.0-openjdk-devel-1.7.0.85-2.6.1.3.el6_6 x86_64
libdrm-2.4.52-4.el6 x86_64
gpg-pubkey-07513cad-4fe4cf94 (none)
postgresql-8.4.20-3.el6_6 x86_64
postfix-2.6.6-6.el6_5 x86_64
ambari-server-2.1.0-1470 x86_64
cronie-1.4.4-12.el6 x86_64
unzip-6.0-2.el6_6 x86_64
iptables-ipv6-1.4.7-14.el6 x86_64
perl-Pod-Escapes-1.04-136.el6_6.1 x86_64
kbd-misc-1.15-11.el6 noarch
perl-libs-5.10.1-136.el6_6.1 x86_64
perl-Pod-Simple-3.13-136.el6_6.1 x86_64
foomatic-4.0.4-3.el6 x86_64
cvs-1.11.23-16.el6 x86_64
ranger_2_3_0_0_2557-yarn-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
nc-1.84-22.el6 x86_64
kernel-headers-2.6.32-504.30.3.el6 x86_64
glibc-devel-2.12-1.149.el6_6.9 x86_64
passwd-0.77-4.el6_2.2 x86_64
perl-ExtUtils-ParseXS-2.2003.0-136.el6_6.1 x86_64
grub-0.97-93.el6 x86_64
perl-devel-5.10.1-136.el6_6.1 x86_64
sudo-1.8.6p3-15.el6 x86_64
redhat-lsb-core-4.0-7.el6.centos x86_64
e2fsprogs-1.41.12-21.el6 x86_64
redhat-lsb-printing-4.0-7.el6.centos x86_64
acl-2.2.49-6.el6 x86_64
redhat-lsb-4.0-7.el6.centos x86_64
bridge-utils-1.2-10.el6 x86_64
hadoop_2_3_0_0_2557-yarn-2.7.1.2.3.0.0-2557.el6 x86_64
gpg-pubkey-c105b9de-4e0fd3a3 (none)
bigtop-jsvc-1.0.10.2.3.0.0-2557.el6 x86_64
tzdata-2015e-1.el6 noarch
atlas-metadata_2_3_0_0_2557-hive-plugin-0.5.0.2.3.0.0-2557.el6 noarch
nss-softokn-freebl-3.14.3-22.el6_6 x86_64
nspr-4.10.8-1.el6_6 x86_64
device-mapper-1.02.90-2.el6_6.3 x86_64
krb5-libs-1.10.3-37.el6_6 x86_64
device-mapper-event-libs-1.02.90-2.el6_6.3 x86_64
nss-softokn-3.14.3-22.el6_6 x86_64
nss-3.19.1-3.el6_6 x86_64
lvm2-libs-2.02.111-2.el6_6.3 x86_64
initscripts-9.03.46-1.el6.centos.1 x86_64
dracut-kernel-004-356.el6_6.3 noarch
db4-utils-4.7.25-19.el6_6 x86_64
kpartx-0.4.9-80.el6_6.3 x86_64
kernel-firmware-2.6.32-504.30.3.el6 noarch
openssl-1.0.1e-30.el6.11 x86_64
openssh-5.3p1-104.el6_6.1 x86_64
curl-7.19.7-40.el6_6.4 x86_64
rpm-4.8.0-38.el6_6 x86_64
selinux-policy-3.7.19-260.el6_6.5 noarch
rpm-python-4.8.0-38.el6_6 x86_64
openssh-clients-5.3p1-104.el6_6.1 x86_64
mysql-libs-5.1.73-5.el6_6 x86_64
device-mapper-multipath-0.4.9-80.el6_6.3 x86_64
rsyslog-5.8.10-10.el6_6 x86_64
lvm2-2.02.111-2.el6_6.3 x86_64
mysql-connector-java-5.1.17-6.el6 noarch
tez_2_3_0_0_2557-0.7.0.2.3.0.0-2557.el6 noarch
hive_2_3_0_0_2557-1.2.1.2.3.0.0-2557.el6 noarch
hive_2_3_0_0_2557-webhcat-1.2.1.2.3.0.0-2557.el6 noarch
hive_2_3_0_0_2557-hcatalog-server-1.2.1.2.3.0.0-2557.el6 noarch
hive_2_3_0_0_2557-server2-1.2.1.2.3.0.0-2557.el6 noarch
ranger_2_3_0_0_2557-kafka-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
ranger_2_3_0_0_2557-knox-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
ambari-metrics-collector-2.1.0-1470 x86_64
cpp-4.4.7-11.el6 x86_64
ambari-metrics-monitor-2.1.0-1470 x86_64
perl-DBI-1.609-4.el6 x86_64
mysql-5.1.73-5.el6_6 x86_64
extjs-2.2-1 noarch
bigtop-tomcat-6.0.44-1.el6 noarch
pig_2_3_0_0_2557-0.15.0.2.3.0.0-2557.el6 noarch
spark_2_3_0_0_2557-1.3.1.2.3.0.0-2557.el6 noarch
spark_2_3_0_0_2557-worker-1.3.1.2.3.0.0-2557.el6 noarch
sqoop_2_3_0_0_2557-1.4.6.2.3.0.0-2557.el6 noarch
yum-plugin-priorities-1.1.30-30.el6 noarch
libgssglue-devel-0.1-11.el6 x86_64
libevent-1.4.13-4.el6 x86_64
nfs-utils-1.2.3-54.el6 x86_64
nfs4-acl-tools-0.3.3-6.el6 x86_64
cyrus-sasl-gssapi-2.1.23-15.el6_6.2 x86_64
hue-beeswax-2.6.1.2.3.0.0-2557.el6 x86_64
hue-pig-2.6.1.2.3.0.0-2557.el6 x86_64
hue-server-2.6.1.2.3.0.0-2557.el6 x86_64
epel-release-6-8 noarch
python-lxml-2.2.3-1.1.el6 x86_64
apr-util-1.3.9-3.el6_0.1 x86_64
apr-util-ldap-1.3.9-3.el6_0.1 x86_64
httpd-2.2.15-39.el6.centos x86_64
perl-Error-0.17015-4.el6 noarch
perl-Git-1.7.1-3.el6_4.1 noarch
hue-tutorials-1.2.1-88 noarch
ranger_2_3_0_0_2557-usersync-0.5.0.2.3.0.0-2557.el6 x86_64
ranger_2_3_0_0_2557-admin-0.5.0.2.3.0.0-2557.el6 x86_64
gpg-pubkey-0608b895-4bd22942 (none)
libgssglue-0.1-11.el6 x86_64
rpcbind-0.2.0-11.el6 x86_64
snappy-devel-1.1.0-1.el6 x86_64
slider_2_3_0_0_2557-0.80.0.2.3.0.0-2557.el6 noarch
storm_2_3_0_0_2557-0.10.0.2.3.0.0-2557.el6 x86_64
hadoop_2_3_0_0_2557-client-2.7.1.2.3.0.0-2557.el6 x86_64
falcon_2_3_0_0_2557-doc-0.6.1.2.3.0.0-2557.el6 noarch
flume_2_3_0_0_2557-agent-1.5.2.2.3.0.0-2557.el6 noarch
hbase_2_3_0_0_2557-1.1.1.2.3.0.0-2557.el6 noarch
hbase_2_3_0_0_2557-rest-1.1.1.2.3.0.0-2557.el6 noarch
hbase_2_3_0_0_2557-thrift-1.1.1.2.3.0.0-2557.el6 noarch
hbase_2_3_0_0_2557-doc-1.1.1.2.3.0.0-2557.el6 noarch
jakarta-commons-logging-1.0.4-10.el6 noarch
apache-tomcat-apis-0.1-1.el6 noarch
libart_lgpl-2.3.20-5.1.el6 x86_64
java-1.5.0-gcj-1.5.0.0-29.1.el6 x86_64
sinjdoc-0.5-9.1.el6 x86_64
xml-commons-apis-1.3.04-3.6.el6 x86_64
classpathx-mail-1.1.1-9.4.el6 noarch
jakarta-commons-httpclient-3.1-0.9.el6_5 x86_64
bcel-5.2-7.2.el6 x86_64
axis-1.2.1-7.5.el6_5 noarch
geronimo-specs-1.0-3.5.M2.el6 noarch
slf4j-1.5.8-8.el6 noarch
libxml2-2.7.6-17.el6_6.1 x86_64
setup-2.8.14-20.el6_4.1 noarch
hive_2_3_0_0_2557-jdbc-1.2.1.2.3.0.0-2557.el6 noarch
libjpeg-turbo-1.2.1-3.el6_5 x86_64
hive_2_3_0_0_2557-webhcat-server-1.2.1.2.3.0.0-2557.el6 noarch
qt-4.6.2-28.el6_5 x86_64
hive_2_3_0_0_2557-metastore-1.2.1.2.3.0.0-2557.el6 noarch
lcms-libs-1.19-1.el6 x86_64
libcap-2.16-5.5.el6 x86_64
knox_2_3_0_0_2557-0.6.0.2.3.0.0-2557.el6 noarch
libfontenc-1.0.5-2.el6 x86_64
chkconfig-1.3.49.3-2.el6_4.1 x86_64
gcc-4.4.7-11.el6 x86_64
openjpeg-libs-1.3-10.el6_5 x86_64
perl-DBD-MySQL-4.013-3.el6 x86_64
xorg-x11-font-utils-7.2-11.el6 x86_64
libsepol-2.0.41-4.el6 x86_64
oozie_2_3_0_0_2557-client-4.2.0.2.3.0.0-2557.el6 noarch
qt-sqlite-4.6.2-28.el6_5 x86_64
bzip2-libs-1.0.5-7.el6_0 x86_64
datafu_2_3_0_0_2557-1.3.0.2.3.0.0-2557.el6 noarch
gawk-3.1.7-10.el6 x86_64
spark_2_3_0_0_2557-master-1.3.1.2.3.0.0-2557.el6 noarch
libudev-147-2.57.el6 x86_64
libxslt-1.1.26-2.el6_3.1 x86_64
mailx-12.4-8.el6_6 x86_64
sqlite-3.6.20-1.el6 x86_64
nfs-utils-lib-1.1.5-9.el6_6 x86_64
xml-common-0.6.3-33.el6 noarch
xz-libs-4.999.9-0.5.beta.20091007git.el6 x86_64
cyrus-sasl-plain-2.1.23-15.el6_6.2 x86_64
patch-2.6-6.el6 x86_64
bzip2-1.0.5-7.el6_0 x86_64
hue-hcatalog-2.6.1.2.3.0.0-2557.el6 x86_64
libgomp-4.4.7-11.el6 x86_64
cpio-2.10-12.el6_5 x86_64
hue-2.6.1.2.3.0.0-2557.el6 x86_64
libXau-1.0.6-4.el6 x86_64
tcp_wrappers-libs-7.6-57.el6 x86_64
apr-1.3.9-5.el6_2 x86_64
libXrender-0.9.8-2.1.el6 x86_64
p11-kit-trust-0.18.5-2.el6_5.2 x86_64
mailcap-2.1.31-2.el6 noarch
libXrandr-1.4.1-2.1.el6 x86_64
upstart-0.6.5-13.el6_5.3 x86_64
rsync-3.0.6-12.el6 x86_64
libXt-1.1.4-6.1.el6 x86_64
MAKEDEV-3.24-6.el6 x86_64
ranger_2_3_0_0_2557-kms-0.5.0.2.3.0.0-2557.el6 x86_64
libXxf86vm-1.1.3-2.1.el6 x86_64
vim-minimal-7.2.411-1.8.el6 x86_64
ranger_2_3_0_0_2557-solr-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
mesa-libGLU-10.1.2-2.el6 x86_64
e2fsprogs-libs-1.41.12-21.el6 x86_64
gdbm-devel-1.8.0-36.el6 x86_64
m4-1.4.13-5.el6 x86_64
cups-1.4.2-67.el6_6.1 x86_64
ncurses-5.7-3.20090208.el6 x86_64
db4-devel-4.7.25-19.el6_6 x86_64
gzip-1.3.12-22.el6 x86_64
pixman-0.32.4-4.el6 x86_64
pam-1.1.1-20.el6 x86_64
plymouth-scripts-0.8.3-27.el6.centos.1 x86_64
gstreamer-plugins-base-0.10.29-2.el6 x86_64
mingetty-1.08-5.el6 x86_64
fipscheck-lib-1.2.0-7.el6 x86_64
strace-4.5.19-1.19.el6 x86_64
pciutils-libs-3.1.10-4.el6 x86_64
dmidecode-2.12-5.el6_6.1 x86_64
python-2.6.6-52.el6 x86_64
compat-readline5-5.2-17.1.el6 x86_64
pygpgme-0.1-18.20090824bzr68.el6 x86_64
ruby-irb-1.8.7.374-4.el6_6 x86_64
newt-0.52.11-3.el6 x86_64
hiera-1.3.4-1.el6 noarch
libaio-0.3.107-10.el6 x86_64
puppet-3.8.1-1.el6 noarch
shared-mime-info-0.70-6.el6 x86_64
yum-plugin-fastestmirror-1.1.30-30.el6 noarch
jpackage-utils-1.7.5-3.12.el6 noarch
centos-release-6-6.el6.centos.12.2 x86_64
ORBit2-2.14.17-5.el6 x86_64
iputils-20071127-17.el6_4.2 x86_64
flac-1.2.1-7.el6_6 x86_64
polkit-0.96-7.el6 x86_64
xorg-x11-fonts-Type1-7.2-9.1.el6 noarch
postgresql-libs-8.4.20-3.el6_6 x86_64
ambari-agent-2.1.0-1470 x86_64
crontabs-1.10-33.el6 noarch
perl-version-0.77-136.el6_6.1 x86_64
kbd-1.15-11.el6 x86_64
perl-5.10.1-136.el6_6.1 x86_64
fuse-2.8.3-4.el6 x86_64
gettext-0.17-18.el6 x86_64
cryptsetup-luks-1.2.0-11.el6 x86_64
ranger_2_3_0_0_2557-hdfs-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
perl-Test-Harness-3.17-136.el6_6.1 x86_64
authconfig-6.1.12-19.el6 x86_64
perl-Test-Simple-0.92-136.el6_6.1 x86_64
audit-2.3.7-5.el6 x86_64
redhat-lsb-compat-4.0-7.el6.centos x86_64
attr-2.4.44-7.el6 x86_64
hadoop_2_3_0_0_2557-mapreduce-2.7.1.2.3.0.0-2557.el6 x86_64
dhcp-common-4.1.1-43.P1.el6.centos.1 x86_64
atlas-metadata_2_3_0_0_2557-0.5.0.2.3.0.0-2557.el6 noarch
glibc-2.12-1.149.el6_6.9 x86_64
snappy-1.1.0-1.el6 x86_64
db4-4.7.25-19.el6_6 x86_64
ranger_2_3_0_0_2557-storm-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
shadow-utils-4.1.4.2-19.el6_6.1 x86_64
falcon_2_3_0_0_2557-0.6.1.2.3.0.0-2557.el6 noarch
device-mapper-event-1.02.90-2.el6_6.3 x86_64
ranger_2_3_0_0_2557-hbase-plugin-0.5.0.2.3.0.0-2557.el6 x86_64
dracut-004-356.el6_6.3 noarch
hbase_2_3_0_0_2557-regionserver-1.1.1.2.3.0.0-2557.el6 noarch
device-mapper-multipath-libs-0.4.9-80.el6_6.3 x86_64
phoenix_2_3_0_0_2557-4.4.0.2.3.0.0-2557.el6 noarch
ca-certificates-2015.2.4-65.0.1.el6_6 noarch
zip-3.0-1.el6 x86_64
libcurl-7.19.7-40.el6_6.4 x86_64
java_cup-0.10k-5.el6 x86_64
policycoreutils-2.0.83-19.47.el6_6.1 x86_64
log4j-1.2.14-6.4.el6 x86_64
openssh-server-5.3p1-104.el6_6.1 x86_64
regexp-1.5-4.4.el6 x86_64
kernel-2.6.32-504.30.3.el6 x86_64
mx4j-3.0.1-9.13.el6 noarch
dhclient-4.1.1-43.P1.el6.centos.1 x86_64

Loaded plugins: fastestmirror, priorities
Loading mirror speeds from cached hostfile
 * base: mirrors.prometeus.net
 * epel: mirror.imt-systems.com
 * extras: mirrors.prometeus.net
 * updates: mirrors.prometeus.net
repo id                 repo name                                         status
HDP-2.3                 HDP-2.3                                              175
HDP-UTILS-1.1.0.20      HDP-UTILS-1.1.0.20                                    42
Updates-ambari-2.1.0    ambari-2.1.0 - Updates                                 8
base                    CentOS-6 - Base                                    6,518
epel                    Extra Packages for Enterprise Linux 6 - x86_64    11,753
extras                  CentOS-6 - Extras                                     38
puppetlabs-deps         Puppet Labs Dependencies El 6 - x86_64                77
puppetlabs-products     Puppet Labs Products El 6 - x86_64                   519
sandbox                 Sandbox repository (tutorials)                         2
updates                 CentOS-6 - Updates                                 1,370
repolist: 20,502
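
As a sanity check, the per-repository package counts in the status column add up to the reported total:

# Minimal sketch: the per-repository counts above sum to "repolist: 20,502".
counts = [175, 42, 8, 6518, 11753, 38, 77, 519, 2, 1370]
print(sum(counts))   # 20502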



Licenses

No Licenses, this is a Linux box!



Kernel Parameters


kernel.sched_child_runs_first = 0
kernel.sched_min_granularity_ns = 3000000
kernel.sched_latency_ns = 15000000
kernel.sched_wakeup_granularity_ns = 3000000
kernel.sched_tunable_scaling = 1
kernel.sched_features = 3183
kernel.sched_migration_cost = 500000
kernel.sched_nr_migrate = 32
kernel.sched_time_avg = 1000
kernel.sched_shares_window = 10000000
kernel.timer_migration = 1
kernel.sched_rt_period_us = 1000000
kernel.sched_rt_runtime_us = 950000
kernel.sched_compat_yield = 0
kernel.sched_rr_timeslice_ms = 100
kernel.sched_autogroup_enabled = 0
kernel.sched_cfs_bandwidth_slice_us = 5000
kernel.panic = 0
kernel.exec-shield = 1
kernel.core_uses_pid = 1
kernel.core_pattern = core
kernel.core_pipe_limit = 0
kernel.tainted = 0
kernel.real-root-dev = 0
kernel.print-fatal-signals = 0
kernel.ctrl-alt-del = 0
kernel.ftrace_enabled = 1
kernel.stack_tracer_enabled = 0
kernel.ftrace_dump_on_oops = 0
kernel.modprobe = /sbin/modprobe
kernel.modules_disabled = 0
kernel.kexec_load_disabled = 0
kernel.hotplug = 
kernel.acct = 4	2	30
kernel.sysrq = 0
kernel.cad_pid = 1
kernel.threads-max = 125620
kernel.random.poolsize = 4096
kernel.random.entropy_avail = 228
kernel.random.read_wakeup_threshold = 64
kernel.random.write_wakeup_threshold = 128
kernel.random.boot_id = e3b0a58d-eb77-4359-b3db-ed00914b4f79
kernel.random.uuid = 8bfcffa9-a996-48b9-a254-fde73ab376a4
kernel.usermodehelper.bset = 4294967295	4294967295
kernel.usermodehelper.inheritable = 4294967295	4294967295
kernel.overflowuid = 65534
kernel.overflowgid = 65534
kernel.pid_max = 32768
kernel.panic_on_oops = 1
kernel.printk = 4	4	1	7
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.printk_delay = 0
kernel.dmesg_restrict = 0
kernel.kptr_restrict = 1
kernel.ngroups_max = 65536
kernel.watchdog = 1
kernel.watchdog_thresh = 60
kernel.softlockup_panic = 0
kernel.nmi_watchdog = 1
kernel.unknown_nmi_panic = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.panic_on_io_nmi = 0
kernel.bootloader_type = 113
kernel.bootloader_version = 1
kernel.kstack_depth_to_print = 12
kernel.io_delay_type = 0
kernel.randomize_va_space = 2
kernel.acpi_video_flags = 0
kernel.hung_task_panic = 0
kernel.hung_task_check_count = 4194304
kernel.hung_task_timeout_secs = 0
kernel.hung_task_warnings = 10
kernel.compat-log = 1
kernel.max_lock_depth = 1024
kernel.poweroff_cmd = /sbin/poweroff
kernel.keys.maxkeys = 200
kernel.keys.maxbytes = 20000
kernel.keys.root_maxkeys = 1000000
kernel.keys.root_maxbytes = 25000000
kernel.keys.gc_delay = 300
kernel.slow-work.min-threads = 2
kernel.slow-work.max-threads = 4
kernel.slow-work.vslow-percentage = 50
kernel.perf_event_paranoid = 1
kernel.perf_event_mlock_kb = 516
kernel.perf_event_max_sample_rate = 100000
kernel.blk_iopoll = 1
kernel.sched_domain.cpu0.domain0.min_interval = 1
kernel.sched_domain.cpu0.domain0.max_interval = 4
kernel.sched_domain.cpu0.domain0.busy_idx = 2
kernel.sched_domain.cpu0.domain0.idle_idx = 1
kernel.sched_domain.cpu0.domain0.newidle_idx = 0
kernel.sched_domain.cpu0.domain0.wake_idx = 0
kernel.sched_domain.cpu0.domain0.forkexec_idx = 0
kernel.sched_domain.cpu0.domain0.busy_factor = 64
kernel.sched_domain.cpu0.domain0.imbalance_pct = 125
kernel.sched_domain.cpu0.domain0.cache_nice_tries = 1
kernel.sched_domain.cpu0.domain0.flags = 4143
kernel.sched_domain.cpu0.domain0.name = CPU
kernel.sched_domain.cpu1.domain0.min_interval = 1
kernel.sched_domain.cpu1.domain0.max_interval = 4
kernel.sched_domain.cpu1.domain0.busy_idx = 2
kernel.sched_domain.cpu1.domain0.idle_idx = 1
kernel.sched_domain.cpu1.domain0.newidle_idx = 0
kernel.sched_domain.cpu1.domain0.wake_idx = 0
kernel.sched_domain.cpu1.domain0.forkexec_idx = 0
kernel.sched_domain.cpu1.domain0.busy_factor = 64
kernel.sched_domain.cpu1.domain0.imbalance_pct = 125
kernel.sched_domain.cpu1.domain0.cache_nice_tries = 1
kernel.sched_domain.cpu1.domain0.flags = 4143
kernel.sched_domain.cpu1.domain0.name = CPU
kernel.sched_domain.cpu2.domain0.min_interval = 1
kernel.sched_domain.cpu2.domain0.max_interval = 4
kernel.sched_domain.cpu2.domain0.busy_idx = 2
kernel.sched_domain.cpu2.domain0.idle_idx = 1
kernel.sched_domain.cpu2.domain0.newidle_idx = 0
kernel.sched_domain.cpu2.domain0.wake_idx = 0
kernel.sched_domain.cpu2.domain0.forkexec_idx = 0
kernel.sched_domain.cpu2.domain0.busy_factor = 64
kernel.sched_domain.cpu2.domain0.imbalance_pct = 125
kernel.sched_domain.cpu2.domain0.cache_nice_tries = 1
kernel.sched_domain.cpu2.domain0.flags = 4143
kernel.sched_domain.cpu2.domain0.name = CPU
kernel.sched_domain.cpu3.domain0.min_interval = 1
kernel.sched_domain.cpu3.domain0.max_interval = 4
kernel.sched_domain.cpu3.domain0.busy_idx = 2
kernel.sched_domain.cpu3.domain0.idle_idx = 1
kernel.sched_domain.cpu3.domain0.newidle_idx = 0
kernel.sched_domain.cpu3.domain0.wake_idx = 0
kernel.sched_domain.cpu3.domain0.forkexec_idx = 0
kernel.sched_domain.cpu3.domain0.busy_factor = 64
kernel.sched_domain.cpu3.domain0.imbalance_pct = 125
kernel.sched_domain.cpu3.domain0.cache_nice_tries = 1
kernel.sched_domain.cpu3.domain0.flags = 4143
kernel.sched_domain.cpu3.domain0.name = CPU
kernel.vsyscall64 = 1
kernel.ostype = Linux
kernel.osrelease = 2.6.32-504.30.3.el6.x86_64
kernel.version = #1 SMP Wed Jul 15 10:13:09 UTC 2015
kernel.hostname = sandbox.hortonworks.com
kernel.domainname = (none)
kernel.pty.max = 4096
kernel.pty.nr = 1
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.shmmni = 4096
kernel.shm_rmid_forced = 0
kernel.msgmax = 65536
kernel.msgmni = 15733
kernel.msgmnb = 65536
kernel.sem = 250	32000	32	128
kernel.auto_msgmni = 1
vm.overcommit_memory = 0
vm.panic_on_oom = 0
vm.oom_kill_allocating_task = 0
vm.extfrag_threshold = 500
vm.oom_dump_tasks = 1
vm.would_have_oomkilled = 0
vm.overcommit_ratio = 50
vm.overcommit_kbytes = 0
vm.page-cluster = 3
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000
vm.nr_pdflush_threads = 0
vm.swappiness = 60
vm.nr_hugepages = 0
vm.nr_hugepages_mempolicy = 0
vm.hugetlb_shm_group = 0
vm.hugepages_treat_as_movable = 0
vm.nr_overcommit_hugepages = 0
vm.lowmem_reserve_ratio = 256	256	32
vm.drop_caches = 0
vm.min_free_kbytes = 67584
vm.extra_free_kbytes = 0
vm.unmap_area_factor = 0
vm.meminfo_legacy_layout = 1
vm.percpu_pagelist_fraction = 0
vm.max_map_count = 65530
vm.laptop_mode = 0
vm.block_dump = 0
vm.vfs_cache_pressure = 100
vm.legacy_va_layout = 0
vm.zone_reclaim_mode = 0
vm.min_unmapped_ratio = 1
vm.min_slab_ratio = 5
vm.stat_interval = 1
vm.mmap_min_addr = 4096
vm.numa_zonelist_order = default
vm.scan_unevictable_pages = 0
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
fs.inode-nr = 109450	11
fs.inode-state = 109450	11	0	0	0	0	0
fs.file-nr = 9024	0	798424
fs.file-max = 798424
fs.nr_open = 1048576
fs.dentry-state = 226892	218182	45	0	0	0
fs.overflowuid = 65534
fs.overflowgid = 65534
fs.leases-enable = 1
fs.dir-notify-enable = 1
fs.lease-break-time = 45
fs.aio-nr = 0
fs.aio-max-nr = 65536
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.inotify.max_queued_events = 16384
fs.epoll.max_user_watches = 1646530
fs.suid_dumpable = 0
fs.binfmt_misc.status = enabled
fs.quota.lookups = 0
fs.quota.drops = 0
fs.quota.reads = 0
fs.quota.writes = 0
fs.quota.cache_hits = 0
fs.quota.allocated_dquots = 0
fs.quota.free_dquots = 0
fs.quota.syncs = 4
fs.quota.warnings = 1
fs.mqueue.queues_max = 256
fs.mqueue.msg_max = 10
fs.mqueue.msgsize_max = 8192
fs.mqueue.msg_default = 10
fs.mqueue.msgsize_default = 8192
debug.exception-trace = 1
debug.kprobes-optimization = 1
dev.scsi.logging_level = 0
dev.raid.speed_limit_min = 1000
dev.raid.speed_limit_max = 200000
dev.hpet.max-user-freq = 64
dev.mac_hid.mouse_button_emulation = 0
dev.mac_hid.mouse_button2_keycode = 97
dev.mac_hid.mouse_button3_keycode = 100
dev.parport.default.timeslice = 200
dev.parport.default.spintime = 500
net.netfilter.nf_log.0 = NONE
net.netfilter.nf_log.1 = NONE
net.netfilter.nf_log.2 = NONE
net.netfilter.nf_log.3 = NONE
net.netfilter.nf_log.4 = NONE
net.netfilter.nf_log.5 = NONE
net.netfilter.nf_log.6 = NONE
net.netfilter.nf_log.7 = NONE
net.netfilter.nf_log.8 = NONE
net.netfilter.nf_log.9 = NONE
net.netfilter.nf_log.10 = NONE
net.netfilter.nf_log.11 = NONE
net.netfilter.nf_log.12 = NONE
net.core.somaxconn = 128
net.core.xfrm_aevent_etime = 10
net.core.xfrm_aevent_rseqth = 2
net.core.xfrm_larval_drop = 1
net.core.xfrm_acq_expires = 30
net.core.wmem_max = 124928
net.core.rmem_max = 124928
net.core.wmem_default = 124928
net.core.rmem_default = 124928
net.core.dev_weight = 64
net.core.netdev_max_backlog = 1000
net.core.message_cost = 5
net.core.message_burst = 10
net.core.optmem_max = 20480
net.core.rps_sock_flow_entries = 0
net.core.busy_poll = 0
net.core.busy_read = 0
net.core.netdev_budget = 300
net.core.warnings = 1
net.ipv4.route.gc_thresh = 262144
net.ipv4.route.max_size = 4194304
net.ipv4.route.gc_min_interval = 0
net.ipv4.route.gc_min_interval_ms = 500
net.ipv4.route.gc_timeout = 300
net.ipv4.route.gc_interval = 60
net.ipv4.route.redirect_load = 20
net.ipv4.route.redirect_number = 9
net.ipv4.route.redirect_silence = 20480
net.ipv4.route.error_cost = 1000
net.ipv4.route.error_burst = 5000
net.ipv4.route.gc_elasticity = 8
net.ipv4.route.mtu_expires = 600
net.ipv4.route.min_pmtu = 552
net.ipv4.route.min_adv_mss = 256
net.ipv4.route.secret_interval = 600
net.ipv4.neigh.default.mcast_solicit = 3
net.ipv4.neigh.default.ucast_solicit = 3
net.ipv4.neigh.default.app_solicit = 0
net.ipv4.neigh.default.retrans_time = 99
net.ipv4.neigh.default.base_reachable_time = 30
net.ipv4.neigh.default.delay_first_probe_time = 5
net.ipv4.neigh.default.gc_stale_time = 60
net.ipv4.neigh.default.unres_qlen = 3
net.ipv4.neigh.default.proxy_qlen = 64
net.ipv4.neigh.default.anycast_delay = 99
net.ipv4.neigh.default.proxy_delay = 79
net.ipv4.neigh.default.locktime = 99
net.ipv4.neigh.default.retrans_time_ms = 1000
net.ipv4.neigh.default.base_reachable_time_ms = 30000
net.ipv4.neigh.default.gc_interval = 30
net.ipv4.neigh.default.gc_thresh1 = 128
net.ipv4.neigh.default.gc_thresh2 = 512
net.ipv4.neigh.default.gc_thresh3 = 1024
net.ipv4.neigh.lo.mcast_solicit = 3
net.ipv4.neigh.lo.ucast_solicit = 3
net.ipv4.neigh.lo.app_solicit = 0
net.ipv4.neigh.lo.retrans_time = 99
net.ipv4.neigh.lo.base_reachable_time = 30
net.ipv4.neigh.lo.delay_first_probe_time = 5
net.ipv4.neigh.lo.gc_stale_time = 60
net.ipv4.neigh.lo.unres_qlen = 3
net.ipv4.neigh.lo.proxy_qlen = 64
net.ipv4.neigh.lo.anycast_delay = 99
net.ipv4.neigh.lo.proxy_delay = 79
net.ipv4.neigh.lo.locktime = 99
net.ipv4.neigh.lo.retrans_time_ms = 1000
net.ipv4.neigh.lo.base_reachable_time_ms = 30000
net.ipv4.neigh.eth0.mcast_solicit = 3
net.ipv4.neigh.eth0.ucast_solicit = 3
net.ipv4.neigh.eth0.app_solicit = 0
net.ipv4.neigh.eth0.retrans_time = 99
net.ipv4.neigh.eth0.base_reachable_time = 30
net.ipv4.neigh.eth0.delay_first_probe_time = 5
net.ipv4.neigh.eth0.gc_stale_time = 60
net.ipv4.neigh.eth0.unres_qlen = 3
net.ipv4.neigh.eth0.proxy_qlen = 64
net.ipv4.neigh.eth0.anycast_delay = 99
net.ipv4.neigh.eth0.proxy_delay = 79
net.ipv4.neigh.eth0.locktime = 99
net.ipv4.neigh.eth0.retrans_time_ms = 1000
net.ipv4.neigh.eth0.base_reachable_time_ms = 30000
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_retrans_collapse = 1
net.ipv4.ip_default_ttl = 64
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.ip_nonlocal_bind = 0
net.ipv4.tcp_syn_retries = 5
net.ipv4.tcp_synack_retries = 5
net.ipv4.tcp_max_orphans = 262144
net.ipv4.tcp_max_tw_buckets = 262144
net.ipv4.ip_dynaddr = 0
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_retries1 = 3
net.ipv4.tcp_retries2 = 15
net.ipv4.tcp_fin_timeout = 60
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_abort_on_overflow = 0
net.ipv4.tcp_stdurg = 0
net.ipv4.tcp_rfc1337 = 0
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.ip_local_port_range = 32768	61000
net.ipv4.ip_local_reserved_ports = 
net.ipv4.igmp_max_memberships = 20
net.ipv4.igmp_max_msf = 10
net.ipv4.inet_peer_threshold = 65664
net.ipv4.inet_peer_minttl = 120
net.ipv4.inet_peer_maxttl = 600
net.ipv4.inet_peer_gc_mintime = 10
net.ipv4.inet_peer_gc_maxtime = 120
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_fack = 1
net.ipv4.tcp_reordering = 3
net.ipv4.tcp_ecn = 2
net.ipv4.tcp_dsack = 1
net.ipv4.tcp_mem = 753696	1004928	1507392
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_rmem = 4096	87380	4194304
net.ipv4.tcp_app_win = 31
net.ipv4.tcp_adv_win_scale = 2
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_frto = 2
net.ipv4.tcp_frto_response = 0
net.ipv4.tcp_low_latency = 0
net.ipv4.tcp_no_metrics_save = 0
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_tso_win_divisor = 3
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_abc = 0
net.ipv4.tcp_mtu_probing = 0
net.ipv4.tcp_base_mss = 512
net.ipv4.tcp_workaround_signed_windows = 0
net.ipv4.tcp_challenge_ack_limit = 100
net.ipv4.tcp_limit_output_bytes = 131072
net.ipv4.tcp_dma_copybreak = 4096
net.ipv4.tcp_slow_start_after_idle = 1
net.ipv4.cipso_cache_enable = 1
net.ipv4.cipso_cache_bucket_size = 10
net.ipv4.cipso_rbm_optfmt = 0
net.ipv4.cipso_rbm_strictvalid = 1
net.ipv4.tcp_available_congestion_control = cubic reno
net.ipv4.tcp_allowed_congestion_control = cubic reno
net.ipv4.tcp_max_ssthresh = 0
net.ipv4.tcp_thin_linear_timeouts = 0
net.ipv4.tcp_thin_dupack = 0
net.ipv4.tcp_min_tso_segs = 2
net.ipv4.udp_mem = 753696	1004928	1507392
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
net.ipv4.conf.all.forwarding = 0
net.ipv4.conf.all.mc_forwarding = 0
net.ipv4.conf.all.accept_redirects = 1
net.ipv4.conf.all.secure_redirects = 1
net.ipv4.conf.all.shared_media = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.send_redirects = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.src_valid_mark = 0
net.ipv4.conf.all.proxy_arp = 0
net.ipv4.conf.all.medium_id = 0
net.ipv4.conf.all.bootp_relay = 0
net.ipv4.conf.all.log_martians = 0
net.ipv4.conf.all.tag = 0
net.ipv4.conf.all.arp_filter = 0
net.ipv4.conf.all.arp_announce = 0
net.ipv4.conf.all.arp_ignore = 0
net.ipv4.conf.all.arp_accept = 0
net.ipv4.conf.all.arp_notify = 0
net.ipv4.conf.all.proxy_arp_pvlan = 0
net.ipv4.conf.all.disable_xfrm = 0
net.ipv4.conf.all.disable_policy = 0
net.ipv4.conf.all.force_igmp_version = 0
net.ipv4.conf.all.promote_secondaries = 0
net.ipv4.conf.all.accept_local = 0
net.ipv4.conf.all.route_localnet = 0
net.ipv4.conf.default.forwarding = 0
net.ipv4.conf.default.mc_forwarding = 0
net.ipv4.conf.default.accept_redirects = 1
net.ipv4.conf.default.secure_redirects = 1
net.ipv4.conf.default.shared_media = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.send_redirects = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.src_valid_mark = 0
net.ipv4.conf.default.proxy_arp = 0
net.ipv4.conf.default.medium_id = 0
net.ipv4.conf.default.bootp_relay = 0
net.ipv4.conf.default.log_martians = 0
net.ipv4.conf.default.tag = 0
net.ipv4.conf.default.arp_filter = 0
net.ipv4.conf.default.arp_announce = 0
net.ipv4.conf.default.arp_ignore = 0
net.ipv4.conf.default.arp_accept = 0
net.ipv4.conf.default.arp_notify = 0
net.ipv4.conf.default.proxy_arp_pvlan = 0
net.ipv4.conf.default.disable_xfrm = 0
net.ipv4.conf.default.disable_policy = 0
net.ipv4.conf.default.force_igmp_version = 0
net.ipv4.conf.default.promote_secondaries = 0
net.ipv4.conf.default.accept_local = 0
net.ipv4.conf.default.route_localnet = 0
net.ipv4.conf.lo.forwarding = 0
net.ipv4.conf.lo.mc_forwarding = 0
net.ipv4.conf.lo.accept_redirects = 1
net.ipv4.conf.lo.secure_redirects = 1
net.ipv4.conf.lo.shared_media = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.lo.send_redirects = 1
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.lo.src_valid_mark = 0
net.ipv4.conf.lo.proxy_arp = 0
net.ipv4.conf.lo.medium_id = 0
net.ipv4.conf.lo.bootp_relay = 0
net.ipv4.conf.lo.log_martians = 0
net.ipv4.conf.lo.tag = 0
net.ipv4.conf.lo.arp_filter = 0
net.ipv4.conf.lo.arp_announce = 0
net.ipv4.conf.lo.arp_ignore = 0
net.ipv4.conf.lo.arp_accept = 0
net.ipv4.conf.lo.arp_notify = 0
net.ipv4.conf.lo.proxy_arp_pvlan = 0
net.ipv4.conf.lo.disable_xfrm = 1
net.ipv4.conf.lo.disable_policy = 1
net.ipv4.conf.lo.force_igmp_version = 0
net.ipv4.conf.lo.promote_secondaries = 0
net.ipv4.conf.lo.accept_local = 0
net.ipv4.conf.lo.route_localnet = 0
net.ipv4.conf.eth0.forwarding = 0
net.ipv4.conf.eth0.mc_forwarding = 0
net.ipv4.conf.eth0.accept_redirects = 1
net.ipv4.conf.eth0.secure_redirects = 1
net.ipv4.conf.eth0.shared_media = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.eth0.send_redirects = 1
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.eth0.src_valid_mark = 0
net.ipv4.conf.eth0.proxy_arp = 0
net.ipv4.conf.eth0.medium_id = 0
net.ipv4.conf.eth0.bootp_relay = 0
net.ipv4.conf.eth0.log_martians = 0
net.ipv4.conf.eth0.tag = 0
net.ipv4.conf.eth0.arp_filter = 0
net.ipv4.conf.eth0.arp_announce = 0
net.ipv4.conf.eth0.arp_ignore = 0
net.ipv4.conf.eth0.arp_accept = 0
net.ipv4.conf.eth0.arp_notify = 0
net.ipv4.conf.eth0.proxy_arp_pvlan = 0
net.ipv4.conf.eth0.disable_xfrm = 0
net.ipv4.conf.eth0.disable_policy = 0
net.ipv4.conf.eth0.force_igmp_version = 0
net.ipv4.conf.eth0.promote_secondaries = 0
net.ipv4.conf.eth0.accept_local = 0
net.ipv4.conf.eth0.route_localnet = 0
net.ipv4.ip_forward = 0
net.ipv4.xfrm4_gc_thresh = 2097152
net.ipv4.ipfrag_high_thresh = 4194304
net.ipv4.ipfrag_low_thresh = 3145728
net.ipv4.ipfrag_time = 30
net.ipv4.icmp_echo_ignore_all = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.icmp_errors_use_inbound_ifaddr = 0
net.ipv4.icmp_ratelimit = 1000
net.ipv4.icmp_ratemask = 6168
net.ipv4.rt_cache_rebuild_count = 4
net.ipv4.ping_group_range = 1	0
net.ipv4.ipfrag_secret_interval = 600
net.ipv4.ipfrag_max_dist = 64
net.ipv6.neigh.default.mcast_solicit = 3
net.ipv6.neigh.default.ucast_solicit = 3
net.ipv6.neigh.default.app_solicit = 0
net.ipv6.neigh.default.delay_first_probe_time = 5
net.ipv6.neigh.default.gc_stale_time = 60
net.ipv6.neigh.default.unres_qlen = 3
net.ipv6.neigh.default.proxy_qlen = 64
net.ipv6.neigh.default.anycast_delay = 99
net.ipv6.neigh.default.proxy_delay = 79
net.ipv6.neigh.default.locktime = 0
net.ipv6.neigh.default.retrans_time_ms = 1000
net.ipv6.neigh.default.base_reachable_time_ms = 30000
net.ipv6.neigh.default.gc_interval = 30
net.ipv6.neigh.default.gc_thresh1 = 128
net.ipv6.neigh.default.gc_thresh2 = 512
net.ipv6.neigh.default.gc_thresh3 = 1024
net.ipv6.neigh.lo.mcast_solicit = 3
net.ipv6.neigh.lo.ucast_solicit = 3
net.ipv6.neigh.lo.app_solicit = 0
net.ipv6.neigh.lo.delay_first_probe_time = 5
net.ipv6.neigh.lo.gc_stale_time = 60
net.ipv6.neigh.lo.unres_qlen = 3
net.ipv6.neigh.lo.proxy_qlen = 64
net.ipv6.neigh.lo.anycast_delay = 99
net.ipv6.neigh.lo.proxy_delay = 79
net.ipv6.neigh.lo.locktime = 0
net.ipv6.neigh.lo.retrans_time_ms = 1000
net.ipv6.neigh.lo.base_reachable_time_ms = 30000
net.ipv6.neigh.eth0.mcast_solicit = 3
net.ipv6.neigh.eth0.ucast_solicit = 3
net.ipv6.neigh.eth0.app_solicit = 0
net.ipv6.neigh.eth0.delay_first_probe_time = 5
net.ipv6.neigh.eth0.gc_stale_time = 60
net.ipv6.neigh.eth0.unres_qlen = 3
net.ipv6.neigh.eth0.proxy_qlen = 64
net.ipv6.neigh.eth0.anycast_delay = 99
net.ipv6.neigh.eth0.proxy_delay = 79
net.ipv6.neigh.eth0.locktime = 0
net.ipv6.neigh.eth0.retrans_time_ms = 1000
net.ipv6.neigh.eth0.base_reachable_time_ms = 30000
net.ipv6.xfrm6_gc_thresh = 2048
net.ipv6.conf.all.forwarding = 0
net.ipv6.conf.all.hop_limit = 64
net.ipv6.conf.all.mtu = 1280
net.ipv6.conf.all.accept_ra = 1
net.ipv6.conf.all.accept_redirects = 1
net.ipv6.conf.all.autoconf = 1
net.ipv6.conf.all.dad_transmits = 1
net.ipv6.conf.all.router_solicitations = 3
net.ipv6.conf.all.router_solicitation_interval = 4
net.ipv6.conf.all.router_solicitation_delay = 1
net.ipv6.conf.all.force_mld_version = 0
net.ipv6.conf.all.use_tempaddr = 0
net.ipv6.conf.all.temp_valid_lft = 604800
net.ipv6.conf.all.temp_prefered_lft = 86400
net.ipv6.conf.all.regen_max_retry = 5
net.ipv6.conf.all.max_desync_factor = 600
net.ipv6.conf.all.max_addresses = 16
net.ipv6.conf.all.accept_ra_defrtr = 1
net.ipv6.conf.all.accept_ra_pinfo = 1
net.ipv6.conf.all.accept_ra_rtr_pref = 1
net.ipv6.conf.all.router_probe_interval = 60
net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.all.proxy_ndp = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.all.optimistic_dad = 0
net.ipv6.conf.all.mc_forwarding = 0
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.all.accept_dad = 1
net.ipv6.conf.default.forwarding = 0
net.ipv6.conf.default.hop_limit = 64
net.ipv6.conf.default.mtu = 1280
net.ipv6.conf.default.accept_ra = 1
net.ipv6.conf.default.accept_redirects = 1
net.ipv6.conf.default.autoconf = 1
net.ipv6.conf.default.dad_transmits = 1
net.ipv6.conf.default.router_solicitations = 3
net.ipv6.conf.default.router_solicitation_interval = 4
net.ipv6.conf.default.router_solicitation_delay = 1
net.ipv6.conf.default.force_mld_version = 0
net.ipv6.conf.default.use_tempaddr = 0
net.ipv6.conf.default.temp_valid_lft = 604800
net.ipv6.conf.default.temp_prefered_lft = 86400
net.ipv6.conf.default.regen_max_retry = 5
net.ipv6.conf.default.max_desync_factor = 600
net.ipv6.conf.default.max_addresses = 16
net.ipv6.conf.default.accept_ra_defrtr = 1
net.ipv6.conf.default.accept_ra_pinfo = 1
net.ipv6.conf.default.accept_ra_rtr_pref = 1
net.ipv6.conf.default.router_probe_interval = 60
net.ipv6.conf.default.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.default.proxy_ndp = 0
net.ipv6.conf.default.accept_source_route = 0
net.ipv6.conf.default.optimistic_dad = 0
net.ipv6.conf.default.mc_forwarding = 0
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.default.accept_dad = 1
net.ipv6.conf.lo.forwarding = 0
net.ipv6.conf.lo.hop_limit = 64
net.ipv6.conf.lo.mtu = 65536
net.ipv6.conf.lo.accept_ra = 1
net.ipv6.conf.lo.accept_redirects = 1
net.ipv6.conf.lo.autoconf = 1
net.ipv6.conf.lo.dad_transmits = 1
net.ipv6.conf.lo.router_solicitations = 3
net.ipv6.conf.lo.router_solicitation_interval = 4
net.ipv6.conf.lo.router_solicitation_delay = 1
net.ipv6.conf.lo.force_mld_version = 0
net.ipv6.conf.lo.use_tempaddr = -1
net.ipv6.conf.lo.temp_valid_lft = 604800
net.ipv6.conf.lo.temp_prefered_lft = 86400
net.ipv6.conf.lo.regen_max_retry = 5
net.ipv6.conf.lo.max_desync_factor = 600
net.ipv6.conf.lo.max_addresses = 16
net.ipv6.conf.lo.accept_ra_defrtr = 1
net.ipv6.conf.lo.accept_ra_pinfo = 1
net.ipv6.conf.lo.accept_ra_rtr_pref = 1
net.ipv6.conf.lo.router_probe_interval = 60
net.ipv6.conf.lo.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.lo.proxy_ndp = 0
net.ipv6.conf.lo.accept_source_route = 0
net.ipv6.conf.lo.optimistic_dad = 0
net.ipv6.conf.lo.mc_forwarding = 0
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.lo.accept_dad = -1
net.ipv6.conf.eth0.forwarding = 0
net.ipv6.conf.eth0.hop_limit = 64
net.ipv6.conf.eth0.mtu = 1500
net.ipv6.conf.eth0.accept_ra = 1
net.ipv6.conf.eth0.accept_redirects = 1
net.ipv6.conf.eth0.autoconf = 1
net.ipv6.conf.eth0.dad_transmits = 1
net.ipv6.conf.eth0.router_solicitations = 3
net.ipv6.conf.eth0.router_solicitation_interval = 4
net.ipv6.conf.eth0.router_solicitation_delay = 1
net.ipv6.conf.eth0.force_mld_version = 0
net.ipv6.conf.eth0.use_tempaddr = 0
net.ipv6.conf.eth0.temp_valid_lft = 604800
net.ipv6.conf.eth0.temp_prefered_lft = 86400
net.ipv6.conf.eth0.regen_max_retry = 5
net.ipv6.conf.eth0.max_desync_factor = 600
net.ipv6.conf.eth0.max_addresses = 16
net.ipv6.conf.eth0.accept_ra_defrtr = 1
net.ipv6.conf.eth0.accept_ra_pinfo = 1
net.ipv6.conf.eth0.accept_ra_rtr_pref = 1
net.ipv6.conf.eth0.router_probe_interval = 60
net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 0
net.ipv6.conf.eth0.proxy_ndp = 0
net.ipv6.conf.eth0.accept_source_route = 0
net.ipv6.conf.eth0.optimistic_dad = 0
net.ipv6.conf.eth0.mc_forwarding = 0
net.ipv6.conf.eth0.disable_ipv6 = 1
net.ipv6.conf.eth0.accept_dad = 1
net.ipv6.ip6frag_high_thresh = 4194304
net.ipv6.ip6frag_low_thresh = 3145728
net.ipv6.ip6frag_time = 60
net.ipv6.route.gc_thresh = 1024
net.ipv6.route.max_size = 16384
net.ipv6.route.gc_min_interval = 0
net.ipv6.route.gc_timeout = 60
net.ipv6.route.gc_interval = 30
net.ipv6.route.gc_elasticity = 0
net.ipv6.route.mtu_expires = 600
net.ipv6.route.min_adv_mss = 1
net.ipv6.route.gc_min_interval_ms = 500
net.ipv6.icmp.ratelimit = 1000
net.ipv6.bindv6only = 0
net.ipv6.ip6frag_secret_interval = 600
net.ipv6.mld_max_msf = 64
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-filter-vlan-tagged = 0
net.bridge.bridge-nf-filter-pppoe-tagged = 0
net.unix.max_dgram_qlen = 10
abi.vsyscall32 = 1
crypto.fips_enabled = 0
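
The parameter dump above reads like the output of `sysctl -a`. As a minimal sketch (the parameter name and value are only illustrations, not a tuning recommendation), a single setting can be inspected, changed at runtime, and persisted on a RHEL/CentOS 6 system like this:

# Read one parameter from the dump above
sysctl net.core.somaxconn

# Change it for the running kernel only (lost on reboot)
sysctl -w net.core.somaxconn=1024

# Persist it across reboots, then reload /etc/sysctl.conf
echo 'net.core.somaxconn = 1024' >> /etc/sysctl.conf
sysctl -p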

Go to the top


Installed Patches

2.6.32-504.30.3.el6.x86_64 #1 SMP Wed Jul 15 10:13:09 UTC 2015

CentOS release 6.6 (Final)
LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
CentOS release 6.6 (Final)
Kernel \r on an \m
To login to the shell, use: username: root password: hadoop
kernel-2.6.32-504.30.3.el6.x86_64
puppetlabs-release-6-7.noarch
epel-release-6-8.noarch
centos-release-6-6.el6.centos.12.2.x86_64
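
This list can be regenerated on the host itself; a minimal sketch (the grep pattern is only an illustration):

uname -r                      # running kernel, matching the first line above
cat /etc/centos-release       # distribution release string
rpm -qa | grep -i release     # the *-release packages listed above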

Go to the top


Boot scripts

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-504.30.3.el6.x86_64)
	root (hd0,0)
	kernel /vmlinuz-2.6.32-504.30.3.el6.x86_64 ro root=/dev/mapper/vg_sandbox-lv_root rd_NO_LUKS rd_LVM_LV=vg_sandbox/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_sandbox/lv_swap consoleblank=0 quiet
	initrd /initramfs-2.6.32-504.30.3.el6.x86_64.img
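
The stanza above is GRUB Legacy configuration (/boot/grub/grub.conf on CentOS 6): default=0 boots the first "title" entry, timeout=5 waits five seconds, and hiddenmenu suppresses the menu unless a key is pressed. A quick way to review the active entries, as a sketch:

# Print the non-comment lines of the boot loader configuration
grep -v '^#' /boot/grub/grub.conf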

# inittab is only used by upstart for the default runlevel.
#
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
#
# System initialization is started by /etc/init/rcS.conf
#
# Individual runlevels are started by /etc/init/rc.conf
#
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
#
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
# with configuration in /etc/sysconfig/init.
#
# For information on how to write upstart event handlers, or how
# upstart works, see init(5), init(8), and initctl(8).
#
# Default runlevel. The runlevels used are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
#
id:3:initdefault:
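
With id:3:initdefault the machine boots to runlevel 3 (full multiuser, no X11). Runlevels can be checked and switched at runtime; a minimal sketch:

runlevel        # prints previous and current runlevel, e.g. "N 3"
telinit 5       # switch to runlevel 5 (X11) without rebooting; use with care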

RC2
total 276
-rwxr-xr-x 1 hue root 925 2015-03-05 14:09 K01ambari
-rwxr-xr-x 1 hue root 1229 2015-03-05 14:09 K01hbase-starter
-rwxr-xr-x 1 vagrant vagrant 2777 2015-07-21 16:17 K01startup_script
-rwxr-xr-x 1 root root 2649 2015-05-21 20:54 K02puppet
-rwxr-xr-x 1 root root 2062 2014-10-17 22:55 K05atd
-rwxr-xr-x 1 root root 3034 2015-06-17 17:23 K10cups
-rwxr-xr-x 1 root root 2056 2015-02-27 15:57 K10saslauthd
-rwxr-xr-x 1 root root 2001 2014-10-16 14:49 K15htcacheclean
-rwxr-xr-x 1 root root 1624 2015-07-20 04:16 K20ambari-agent
-rwxr-xr-x 1 root root 4436 2015-07-20 04:14 K20ambari-server
-rwxr-xr-x 1 root root 2295 2014-06-11 09:46 K20shellinaboxd
-rwxr-xr-x 1 root root 7026 2015-06-22 13:08 K36mysqld
-rwxr-xr-x 1 root root 5383 2015-06-29 15:59 K36postgresql
-rwxr-xr-x 1 root root 2989 2014-07-22 13:56 K50netconsole
-r-xr-xr-- 1 root root 2192 2015-07-21 20:15 K50ranger-admin
-r-xr-x--- 1 root root 2411 2015-07-21 20:16 K50ranger-usersync
-rwxr-xr-x 1 root root 6878 2014-10-16 11:48 K60nfs
-rwxr-xr-x 1 root root 2464 2014-10-16 11:48 K69rpcsvcgssd
-rwxr-xr-x 1 root root 21411 2015-07-21 15:41 K70vboxadd-x11
-rwxr-xr-x 1 root root 6064 2014-07-22 13:56 K75netfs
-rwxr-xr-x 1 root root 2518 2014-10-16 11:48 K85rpcgssd
-rwxr-xr-x 1 root root 3526 2014-10-16 11:48 K86nfslock
-rwxr-xr-x 1 root root 2523 2015-02-05 09:21 K87multipathd
-rwxr-xr-x 1 root root 1822 2014-10-17 23:50 K87restorecond
-rwxr-xr-x 1 root root 2073 2013-02-22 01:19 K87rpcbind
-rwxr-xr-x. 1 root root 3580 2014-10-15 12:54 K88auditd
-rwxr-xr-x. 1 root root 4535 2014-09-10 18:54 K88iscsi
-rwxr-xr-x. 1 root root 3990 2014-09-10 18:54 K89iscsid
-rwxr-xr-x. 1 root root 1513 2013-09-17 07:35 K89rdisc
-rwxr-xr-x. 1 root root 10688 2014-10-15 14:30 K92iptables
-r-xr-xr-x 1 root root 2757 2015-05-20 11:54 S02lvm2-monitor
-rwxr-xr-x. 1 root root 10804 2014-10-15 14:30 S08ip6tables
-rwxr-xr-x 1 root root 6334 2014-07-22 13:56 S10network
-rwxr-xr-x 1 root root 2023 2012-04-03 15:33 S11portreserve
-rwxr-xr-x 1 root root 2011 2014-12-10 10:05 S12rsyslog
-rwxr-xr-x 1 root root 2571 2014-11-12 14:58 S15mdmonitor
-rwxr-xr-x 1 hue root 2000 2015-03-05 14:09 S20tutorials
-rwxr-xr-x 1 root root 2200 2015-04-22 10:52 S22messagebus
-r-xr-xr-x 1 root root 1340 2015-05-20 11:54 S25blk-availability
-rwxr-xr-x. 1 root root 2294 2014-10-15 17:42 S26udev-post
-rwxr-xr-x 1 root root 15764 2015-07-21 15:40 S30vboxadd
-rwxr-xr-x 1 root root 5378 2015-07-21 15:41 S35vboxadd-service
-rwxr-xr-x 1 root root 4621 2014-11-13 08:54 S55sshd
-rwxr-xr-x. 1 root root 3912 2014-02-20 10:07 S80postfix
-rwxr-xr-x 1 root root 3371 2014-10-16 14:49 S85httpd
-rwxr-xr-x. 1 root root 2826 2013-11-23 12:43 S90crond
-rwxr-xr-x 1 root root 3748 2015-07-14 16:56 S90hue
-rwxr-xr-x 1 root root 220 2014-11-04 12:17 S99local

RC3
total 276
-rwxr-xr-x 1 hue root 925 2015-03-05 14:09 K01ambari
-rwxr-xr-x 1 hue root 1229 2015-03-05 14:09 K01hbase-starter
-rwxr-xr-x 1 root root 2649 2015-05-21 20:54 K02puppet
-rwxr-xr-x 1 root root 3034 2015-06-17 17:23 K10cups
-rwxr-xr-x 1 root root 2056 2015-02-27 15:57 K10saslauthd
-rwxr-xr-x 1 root root 2001 2014-10-16 14:49 K15htcacheclean
-rwxr-xr-x 1 root root 1624 2015-07-20 04:16 K20ambari-agent
-rwxr-xr-x 1 root root 4436 2015-07-20 04:14 K20ambari-server
-rwxr-xr-x 1 root root 7026 2015-06-22 13:08 K36mysqld
-rwxr-xr-x 1 root root 5383 2015-06-29 15:59 K36postgresql
-rwxr-xr-x 1 root root 2989 2014-07-22 13:56 K50netconsole
-r-xr-xr-- 1 root root 2192 2015-07-21 20:15 K50ranger-admin
-r-xr-x--- 1 root root 2411 2015-07-21 20:16 K50ranger-usersync
-rwxr-xr-x 1 root root 6878 2014-10-16 11:48 K60nfs
-rwxr-xr-x 1 root root 2464 2014-10-16 11:48 K69rpcsvcgssd
-rwxr-xr-x 1 root root 2518 2014-10-16 11:48 K85rpcgssd
-rwxr-xr-x 1 root root 3526 2014-10-16 11:48 K86nfslock
-rwxr-xr-x 1 root root 2523 2015-02-05 09:21 K87multipathd
-rwxr-xr-x 1 root root 1822 2014-10-17 23:50 K87restorecond
-rwxr-xr-x 1 root root 2073 2013-02-22 01:19 K87rpcbind
-rwxr-xr-x. 1 root root 3580 2014-10-15 12:54 K88auditd
-rwxr-xr-x. 1 root root 1513 2013-09-17 07:35 K89rdisc
-rwxr-xr-x. 1 root root 10688 2014-10-15 14:30 K92iptables
-r-xr-xr-x 1 root root 2757 2015-05-20 11:54 S02lvm2-monitor
-rwxr-xr-x. 1 root root 3990 2014-09-10 18:54 S07iscsid
-rwxr-xr-x. 1 root root 10804 2014-10-15 14:30 S08ip6tables
-rwxr-xr-x 1 root root 6334 2014-07-22 13:56 S10network
-rwxr-xr-x 1 root root 2023 2012-04-03 15:33 S11portreserve
-rwxr-xr-x 1 root root 2011 2014-12-10 10:05 S12rsyslog
-rwxr-xr-x. 1 root root 4535 2014-09-10 18:54 S13iscsi
-rwxr-xr-x 1 root root 2571 2014-11-12 14:58 S15mdmonitor
-rwxr-xr-x 1 hue root 2000 2015-03-05 14:09 S20tutorials
-rwxr-xr-x 1 root root 2200 2015-04-22 10:52 S22messagebus
-r-xr-xr-x 1 root root 1340 2015-05-20 11:54 S25blk-availability
-rwxr-xr-x 1 root root 6064 2014-07-22 13:56 S25netfs
-rwxr-xr-x. 1 root root 2294 2014-10-15 17:42 S26udev-post
-rwxr-xr-x 1 root root 15764 2015-07-21 15:40 S30vboxadd
-rwxr-xr-x 1 root root 21411 2015-07-21 15:41 S30vboxadd-x11
-rwxr-xr-x 1 root root 5378 2015-07-21 15:41 S35vboxadd-service
-rwxr-xr-x 1 root root 4621 2014-11-13 08:54 S55sshd
-rwxr-xr-x. 1 root root 3912 2014-02-20 10:07 S80postfix
-rwxr-xr-x 1 root root 2295 2014-06-11 09:46 S80shellinaboxd
-rwxr-xr-x 1 root root 3371 2014-10-16 14:49 S85httpd
-rwxr-xr-x 1 vagrant vagrant 2777 2015-07-21 16:17 S89startup_script
-rwxr-xr-x. 1 root root 2826 2013-11-23 12:43 S90crond
-rwxr-xr-x 1 root root 3748 2015-07-14 16:56 S90hue
-rwxr-xr-x 1 root root 2062 2014-10-17 22:55 S95atd
-rwxr-xr-x 1 root root 220 2014-11-04 12:17 S99local

ambari 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ambari-agent 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ambari-server 0:off 1:off 2:off 3:off 4:off 5:off 6:off
atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
auditd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
blk-availability 0:off 1:on 2:on 3:on 4:on 5:on 6:off
crond 0:off 1:off 2:on 3:on 4:on 5:on 6:off
cups 0:off 1:off 2:off 3:off 4:off 5:off 6:off
hbase-starter 0:off 1:off 2:off 3:off 4:off 5:off 6:off
htcacheclean 0:off 1:off 2:off 3:off 4:off 5:off 6:off
httpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
hue 0:off 1:off 2:on 3:on 4:on 5:on 6:off
ip6tables 0:off 1:off 2:on 3:on 4:on 5:on 6:off
iptables 0:off 1:off 2:off 3:off 4:off 5:off 6:off
iscsi 0:off 1:off 2:off 3:on 4:on 5:on 6:off
iscsid 0:off 1:off 2:off 3:on 4:on 5:on 6:off
lvm2-monitor 0:off 1:on 2:on 3:on 4:on 5:on 6:off
mdmonitor 0:off 1:off 2:on 3:on 4:on 5:on 6:off
messagebus 0:off 1:off 2:on 3:on 4:on 5:on 6:off
multipathd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
mysqld 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole 0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs 0:off 1:off 2:off 3:on 4:on 5:on 6:off
network 0:off 1:off 2:on 3:on 4:on 5:on 6:off
nfs 0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock 0:off 1:off 2:off 3:off 4:off 5:off 6:off
portreserve 0:off 1:off 2:on 3:on 4:on 5:on 6:off
postfix 0:off 1:off 2:on 3:on 4:on 5:on 6:off
postgresql 0:off 1:off 2:off 3:off 4:off 5:off 6:off
puppet 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ranger-admin 0:off 1:off 2:off 3:off 4:off 5:off 6:off
ranger-usersync 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rdisc 0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcbind 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcgssd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcsvcgssd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
rsyslog 0:off 1:off 2:on 3:on 4:on 5:on 6:off
saslauthd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
shellinaboxd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
sshd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
startup_script 0:off 1:off 2:off 3:on 4:on 5:on 6:off
tutorials 0:off 1:off 2:on 3:on 4:on 5:on 6:off
udev-post 0:off 1:on 2:on 3:on 4:on 5:on 6:off
vboxadd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
vboxadd-service 0:off 1:off 2:on 3:on 4:on 5:on 6:off
vboxadd-x11 0:off 1:off 2:off 3:on 4:off 5:on 6:off
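
In the RC2/RC3 listings above, S<NN><name> links start a service when the runlevel is entered and K<NN><name> links stop it, in ascending <NN> order; the on/off table is chkconfig's view of the same symlinks. A minimal sketch of working with them:

chkconfig --list sshd    # per-runlevel on/off, one row of the table above
chkconfig httpd on       # (re)create the S/K links for httpd's default runlevels
ls /etc/rc3.d            # inspect the runlevel-3 symlink farm directly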

Go to the top


Last Boot


system boot 2015-07-26 13:31

reboot system boot 2.6.32-504.30.3. Sun Jul 26 13:31 - 15:36 (1+02:04)
reboot system boot 2.6.32-504.30.3. Wed Jul 22 00:06 - 00:08 (00:02)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 23:58 - 00:05 (00:07)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 23:52 - 23:58 (00:05)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 19:31 - 20:19 (00:47)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 17:48 - 17:54 (00:06)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 16:55 - 16:56 (00:00)
reboot system boot 2.6.32-504.30.3. Tue Jul 21 15:37 - 16:54 (01:16)
reboot system boot 2.6.32-504.el6.x Tue Jul 21 15:21 - 15:37 (00:15)

wtmp begins Tue Jul 21 15:21:34 2015
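
The reboot history above is the wtmp log as printed by last; a minimal sketch of the underlying commands:

last -x reboot    # reboot records from /var/log/wtmp
who -b            # boot time of the current session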

Go to the top


Cluster Configuration



Go to the top


Cluster Status



Cluster Logs NOT FOUND

Go to the top


Activity Log

Go to the top


User Log

vagrant  pts/0        2015-07-21 15:21 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:21 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:38 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
vagrant  pts/0        2015-07-21 15:41 (10.0.2.2)
root     tty1         2015-07-21 16:17
vagrant  pts/0        2015-07-21 16:45 (10.0.2.2)
vagrant  pts/0        2015-07-21 16:45 (10.0.2.2)
vagrant  pts/0        2015-07-21 16:45 (10.0.2.2)
vagrant  pts/0        2015-07-21 16:45 (10.0.2.2)
vagrant  pts/0        2015-07-21 16:54 (10.0.2.2)
vagrant  pts/0        2015-07-21 16:54 (10.0.2.2)
vagrant  pts/1        2015-07-21 16:55 (10.0.2.2)
vagrant  pts/1        2015-07-21 16:55 (10.0.2.2)
vagrant  pts/1        2015-07-21 16:56 (10.0.2.2)
root     tty1         2015-07-21 17:49
root     tty2         2015-07-21 17:49
root     tty1         2015-07-21 19:33
root     pts/0        2015-07-21 19:41 (10.0.2.2)
root     pts/0        2015-07-21 19:41 (10.0.2.2)
root     pts/1        2015-07-21 23:52 (10.0.2.2)
root     tty1         2015-07-21 23:53
root     tty1         2015-07-21 23:59
root     pts/0        2015-07-22 00:00 (10.0.2.2)
root     tty1         2015-07-22 00:07
root     pts/0        2015-07-22 00:07 (10.0.2.2)
root     tty1         2015-07-26 13:33
root     pts/0        2015-07-26 15:19 (10.0.2.2)
root     pts/0        2015-07-26 15:22 (10.0.2.2)
root     pts/0        2015-07-26 18:56 (10.0.2.2)
root     pts/0        2015-07-27 08:38 (10.0.2.2)
root     pts/0        2015-07-27 14:54 (10.0.2.2)


root ssh:notty 10.0.2.2 Sun Jul 26 15:17 - 15:17 (00:00)
root ssh:notty 10.0.2.2 Sun Jul 26 15:17 - 15:17 (00:00)
root ssh:notty 10.0.2.2 Sun Jul 26 15:16 - 15:16 (00:00)
root ssh:notty 10.0.2.2 Wed Jul 22 00:00 - 00:00 (00:00)

btmp begins Wed Jul 22 00:00:05 2015

Summary Report
======================
Range of time in logs: 07/21/2015 15:21:38.529 - 07/21/2015 16:53:35.016
Selected time for report: 07/21/2015 15:21:38 - 07/21/2015 16:53:35.016
Number of changes in configuration: 2
Number of changes to accounts, groups, or roles: 232
Number of logins: 44
Number of failed logins: 0
Number of authentications: 602
Number of failed authentications: 0
Number of users: 3
Number of terminals: 6
Number of host names: 2
Number of executables: 10
Number of files: 0
Number of AVC's: 0
Number of MAC events: 0
Number of failed syscalls: 0
Number of anomaly events: 1
Number of responses to anomaly events: 0
Number of crypto events: 26
Number of keys: 0
Number of process IDs: 723
Number of events: 4227

Authentication Report
============================================
# date time acct host term exe success event
============================================
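
The failed-login records come from /var/log/btmp and the summary from the Linux audit subsystem; a minimal sketch of the commands that produce such output:

lastb           # print /var/log/btmp (failed logins)
aureport        # the "Summary Report" block above
aureport -au    # the "Authentication Report"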

Go to the top


SW Diagnostics

Jul 26 14:16:01 sandbox rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="932" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Jul 26 15:36:23 sandbox kernel: Bridge firewalling registered
Jul 26 15:36:23 sandbox kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
Jul 26 22:37:08 sandbox dhclient[887]: DHCPREQUEST on eth0 to 10.0.2.2 port 67 (xid=0x6e389962)
Jul 26 22:37:08 sandbox dhclient[887]: DHCPACK from 10.0.2.2 (xid=0x6e389962)
Jul 26 22:37:10 sandbox dhclient[887]: bound to 10.0.2.15 -- renewal in 35734 seconds.
Jul 27 00:36:02 sandbox dhclient[1628]: DHCPREQUEST on eth0 to 10.0.2.2 port 67 (xid=0x72fcd39e)
Jul 27 00:36:02 sandbox dhclient[1628]: DHCPACK from 10.0.2.2 (xid=0x72fcd39e)
Jul 27 00:36:04 sandbox dhclient[1628]: bound to 10.0.2.15 -- renewal in 32438 seconds.
Jul 27 06:24:36 sandbox kernel: eth0: link down
Jul 27 06:52:22 sandbox kernel: eth0: link up, 100Mbps, full-duplex
Jul 27 06:52:48 sandbox kernel: eth0: link down
Jul 27 06:52:52 sandbox kernel: eth0: link up, 100Mbps, full-duplex
Jul 27 09:00:23 sandbox dhclient[887]: DHCPREQUEST on eth0 to 10.0.2.2 port 67 (xid=0x6e389962)
Jul 27 09:00:23 sandbox dhclient[887]: DHCPACK from 10.0.2.2 (xid=0x6e389962)
Jul 27 09:00:25 sandbox dhclient[887]: bound to 10.0.2.15 -- renewal in 41891 seconds.
Jul 27 09:36:42 sandbox dhclient[1628]: DHCPREQUEST on eth0 to 10.0.2.2 port 67 (xid=0x72fcd39e)
Jul 27 09:36:42 sandbox dhclient[1628]: DHCPACK from 10.0.2.2 (xid=0x72fcd39e)
Jul 27 09:36:44 sandbox dhclient[1628]: bound to 10.0.2.15 -- renewal in 37511 seconds.
Jul 27 10:58:51 sandbox kernel: usb 1-1: USB disconnect, device number 2
Jul 27 10:58:52 sandbox kernel: usb 1-1: new full speed USB device number 3 using ohci_hcd
Jul 27 10:58:52 sandbox kernel: eth0: link down
Jul 27 10:58:52 sandbox kernel: usb 1-1: New USB device found, idVendor=80ee, idProduct=0021
Jul 27 10:58:52 sandbox kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=0
Jul 27 10:58:52 sandbox kernel: usb 1-1: Product: USB Tablet
Jul 27 10:58:52 sandbox kernel: usb 1-1: Manufacturer: VirtualBox
Jul 27 10:58:52 sandbox kernel: usb 1-1: configuration #1 chosen from 1 choice
Jul 27 10:58:52 sandbox kernel: input: VirtualBox USB Tablet as /devices/pci0000:00/0000:00:06.0/usb1/1-1/1-1:1.0/input/input8
Jul 27 10:58:52 sandbox kernel: generic-usb 0003:80EE:0021.0002: input,hidraw0: USB HID v1.10 Mouse [VirtualBox USB Tablet] on usb-0000:00:06.0-1/input0
Jul 27 10:58:58 sandbox kernel: eth0: link up, 100Mbps, full-duplex

Go to the top


HW Diagnostics

Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-504.30.3.el6.x86_64 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Jul 15 10:13:09 UTC 2015
Command line: ro root=/dev/mapper/vg_sandbox-lv_root rd_NO_LUKS rd_LVM_LV=vg_sandbox/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_NO_DM  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_sandbox/lv_swap consoleblank=0 quiet
KERNEL supported cpus:
  Intel GenuineIntel
  AMD AuthenticAMD
  Centaur CentaurHauls
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
 BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 00000000dfff0000 (usable)
 BIOS-e820: 00000000dfff0000 - 00000000e0000000 (ACPI data)
 BIOS-e820: 00000000fffc0000 - 0000000100000000 (reserved)
 BIOS-e820: 0000000100000000 - 0000000220000000 (usable)
DMI 2.5 present.
SMBIOS version 2.5 @ 0xFFF60
DMI: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
e820 update range: 0000000000000000 - 0000000000001000 (usable) ==> (reserved)
e820 remove range: 00000000000a0000 - 0000000000100000 (usable)
last_pfn = 0x220000 max_arch_pfn = 0x400000000
MTRR default type: uncachable
MTRR variable ranges disabled:
x86 PAT enabled: cpu 0, old 0x7040600070406, new 0x7010600070106
CPU MTRRs all blank - virtualized system.
last_pfn = 0xdfff0 max_arch_pfn = 0x400000000
initial memory mapped : 0 - 20000000
init_memory_mapping: 0000000000000000-00000000dfff0000
 0000000000 - 00dfe00000 page 2M
 00dfe00000 - 00dfff0000 page 4k
kernel direct mapping tables up to dfff0000 @ 8000-e000
Use unified mapping for non-reserved e820 regions.
init_memory_mapping: 0000000100000000-0000000220000000
 0100000000 - 0220000000 page 2M
kernel direct mapping tables up to 220000000 @ c000-12000
RAMDISK: 37057000 - 37fef94e
ACPI: RSDP 00000000000e0000 00024 (v02 VBOX  )
ACPI: XSDT 00000000dfff0030 0003C (v01 VBOX   VBOXXSDT 00000001 ASL  00000061)
ACPI: FACP 00000000dfff00f0 000F4 (v04 VBOX   VBOXFACP 00000001 ASL  00000061)
ACPI: DSDT 00000000dfff0480 01BF1 (v01 VBOX   VBOXBIOS 00000002 INTL 20100528)
ACPI: FACS 00000000dfff0200 00040
ACPI: APIC 00000000dfff0240 0006C (v02 VBOX   VBOXAPIC 00000001 ASL  00000061)
ACPI: SSDT 00000000dfff02b0 001CC (v01 VBOX   VBOXCPUT 00000002 INTL 20100528)
ACPI: Local APIC address 0xfee00000
Setting APIC routing to flat.
No NUMA configuration found
Faking a node at 0000000000000000-0000000220000000
Bootmem setup node 0 0000000000000000-0000000220000000
  NODE_DATA [0000000000011000 - 0000000000044fff]
  bootmap [0000000000045000 -  0000000000088fff] pages 44
(8 early reservations) ==> bootmem [0000000000 - 0220000000]
  #0 [0000000000 - 0000001000]   BIOS data page ==> [0000000000 - 0000001000]
  #1 [0000006000 - 0000008000]       TRAMPOLINE ==> [0000006000 - 0000008000]
  #2 [0001000000 - 0002029c24]    TEXT DATA BSS ==> [0001000000 - 0002029c24]
  #3 [0037057000 - 0037fef94e]          RAMDISK ==> [0037057000 - 0037fef94e]
  #4 [000009fc00 - 0000100000]    BIOS reserved ==> [000009fc00 - 0000100000]
  #5 [000202a000 - 000202a13c]              BRK ==> [000202a000 - 000202a13c]
  #6 [0000008000 - 000000c000]          PGTABLE ==> [0000008000 - 000000c000]
  #7 [000000c000 - 0000011000]          PGTABLE ==> [000000c000 - 0000011000]
found SMP MP-table at [ffff88000009fff0] 9fff0
Reserving 129MB of memory at 48MB for crashkernel (System RAM: 8704MB)
 [ffffea0000000000-ffffea00077fffff] PMD -> [ffff880028600000-ffff88002f7fffff] on node 0
Zone PFN ranges:
  DMA      0x00000001 -> 0x00001000
  DMA32    0x00001000 -> 0x00100000
  Normal   0x00100000 -> 0x00220000
Movable zone start PFN for each node
early_node_map[3] active PFN ranges
    0: 0x00000001 -> 0x0000009f
    0: 0x00000100 -> 0x000dfff0
    0: 0x00100000 -> 0x00220000
On node 0 totalpages: 2097038
  DMA zone: 56 pages used for memmap
  DMA zone: 108 pages reserved
  DMA zone: 3834 pages, LIFO batch:0
  DMA32 zone: 14280 pages used for memmap
  DMA32 zone: 899112 pages, LIFO batch:31
  Normal zone: 16128 pages used for memmap
  Normal zone: 1163520 pages, LIFO batch:31
ACPI: PM-Timer IO Port: 0x4008
ACPI: Local APIC address 0xfee00000
Setting APIC routing to flat.
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 4, version 17, address 0xfec00000, GSI 0-23
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Using ACPI (MADT) for SMP configuration information
SMP: Allowing 4 CPUs, 0 hotplug CPUs
nr_irqs_gsi: 24
PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
PM: Registered nosave memory: 00000000000a0000 - 00000000000f0000
PM: Registered nosave memory: 00000000000f0000 - 0000000000100000
PM: Registered nosave memory: 00000000dfff0000 - 00000000e0000000
PM: Registered nosave memory: 00000000e0000000 - 00000000fffc0000
PM: Registered nosave memory: 00000000fffc0000 - 0000000100000000
Allocating PCI resources starting at e0000000 (gap: e0000000:1ffc0000)
Booting paravirtualized kernel on bare hardware
NR_CPUS:4096 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
PERCPU: Embedded 30 pages/cpu @ffff880028200000 s90968 r8192 d23720 u524288
pcpu-alloc: s90968 r8192 d23720 u524288 alloc=1*2097152
pcpu-alloc: [0] 0 1 2 3 
Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 2066466
Policy zone: Normal
Kernel command line: ro root=/dev/mapper/vg_sandbox-lv_root rd_NO_LUKS rd_LVM_LV=vg_sandbox/lv_root LANG=en_US.UTF-8 rd_NO_MD SYSFONT=latarcyrheb-sun16 crashkernel=129M@0M rd_NO_DM  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=vg_sandbox/lv_swap consoleblank=0 quiet
PID hash table entries: 4096 (order: 3, 32768 bytes)
xsave/xrstor: enabled xstate_bv 0x7, cntxt size 0x340
Checking aperture...
No AGP bridge found
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Placing 64MB software IO TLB between ffff880020000000 - ffff880024000000
software IO TLB at phys 0x20000000 - 0x24000000
Memory: 8039732k/8912896k available (5336k kernel code, 524744k absent, 348420k reserved, 7017k data, 1288k init)
Hierarchical RCU implementation.
NR_IRQS:33024 nr_irqs:440
Console: colour VGA+ 80x25
console [tty0] enabled
allocated 33554432 bytes of page_cgroup
please try 'cgroup_disable=memory' option if you don't want memory cgroups
Fast TSC calibration using PIT
Detected 2493.949 MHz processor.
Calibrating delay loop (skipped), value calculated using timer frequency.. 4987.89 BogoMIPS (lpj=2493949)
pid_max: default: 32768 minimum: 301
Security Framework initialized
SELinux:  Initializing.
SELinux:  Starting in permissive mode
Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
Mount-cache hash table entries: 256
Initializing cgroup subsys ns
Initializing cgroup subsys cpuacct
Initializing cgroup subsys memory
Initializing cgroup subsys devices
Initializing cgroup subsys freezer
Initializing cgroup subsys net_cls
Initializing cgroup subsys blkio
Initializing cgroup subsys perf_event
Initializing cgroup subsys net_prio
CPU: Physical Processor ID: 0
CPU: Processor Core ID: 0
mce: CPU supports 0 MCE banks
ACPI: Core revision 20090903
ftrace: converting mcount calls to 0f 1f 44 00 00
ftrace: allocating 21923 entries in 86 pages
APIC routing finalized to flat.
..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
CPU0: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz stepping 01
Performance Events: unsupported p6 CPU model 70 no PMU driver, software events only.
NMI watchdog disabled (cpu0): hardware events not enabled
Booting Node   0, Processors  #1
mce: CPU supports 0 MCE banks
 #2
mce: CPU supports 0 MCE banks
 #3 Ok.
mce: CPU supports 0 MCE banks
Brought up 4 CPUs
Total of 4 processors activated (19951.59 BogoMIPS).
sizeof(vma)=200 bytes
sizeof(page)=56 bytes
sizeof(inode)=592 bytes
sizeof(dentry)=192 bytes
sizeof(ext3inode)=800 bytes
sizeof(buffer_head)=104 bytes
sizeof(skbuff)=232 bytes
sizeof(task_struct)=2672 bytes
devtmpfs: initialized
regulator: core version 0.5
NET: Registered protocol family 16
ACPI: bus type pci registered
PCI: Using configuration type 1 for base access
bio: create slab  at 0
ACPI: EC: Look up EC in DSDT
ACPI: Executed 1 blocks of module-level executable AML code
ACPI: Interpreter enabled
ACPI: (supports S0 S5)
ACPI: Using IOAPIC for interrupt routing
ACPI: No dock devices found.
PCI: Ignoring host bridge windows from ACPI; if necessary, use "pci=use_crs" and report a bug
ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7] (ignored)
pci_root PNP0A03:00: host bridge window [io  0x0d00-0xffff] (ignored)
pci_root PNP0A03:00: host bridge window [mem 0x000a0000-0x000bffff] (ignored)
pci_root PNP0A03:00: host bridge window [mem 0xe0000000-0xffdfffff] (ignored)
PCI: root bus 00: using default resources
PCI host bridge to bus 0000:00
pci_bus 0000:00: root bus resource [io  0x0000-0xffff]
pci_bus 0000:00: root bus resource [mem 0x00000000-0x7fffffffff]
pci 0000:00:01.1: reg 20: [io  0xd000-0xd00f]
pci 0000:00:02.0: reg 10: [mem 0xe0000000-0xe07fffff pref]
pci 0000:00:03.0: reg 10: [io  0xd020-0xd03f]
pci 0000:00:03.0: reg 14: [mem 0xf0000000-0xf0000fff]
pci 0000:00:04.0: reg 10: [io  0xd040-0xd05f]
pci 0000:00:04.0: reg 14: [mem 0xf0400000-0xf07fffff]
pci 0000:00:04.0: reg 18: [mem 0xf0800000-0xf0803fff pref]
pci 0000:00:06.0: reg 10: [mem 0xf0804000-0xf0804fff]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 5 9 10 11) *0, disabled.
ACPI: PCI Interrupt Link [LNKB] (IRQs 5 9 10 *11)
ACPI: PCI Interrupt Link [LNKC] (IRQs 5 9 *10 11)
ACPI: PCI Interrupt Link [LNKD] (IRQs 5 *9 10 11)
vgaarb: device added: PCI:0000:00:02.0,decodes=io+mem,owns=io+mem,locks=none
vgaarb: loaded
vgaarb: bridge control possible 0000:00:02.0
SCSI subsystem initialized
libata version 3.00 loaded.
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
PCI: Using ACPI for IRQ routing
PCI: old code would have set cacheline size to 32 bytes, but clflush_size = 64
PCI: pci_cache_line_size set to 64 bytes
NetLabel: Initializing
NetLabel:  domain hash size = 128
NetLabel:  protocols = UNLABELED CIPSOv4
NetLabel:  unlabeled traffic allowed by default
Switching to clocksource jiffies
pnp: PnP ACPI init
ACPI: bus type pnp registered
pnp 00:00: [io  0x0cf8-0x0cff]
pnp 00:00: Plug and Play ACPI device, IDs PNP0a03 (active)
pnp 00:01: [io  0x0060]
pnp 00:01: [io  0x0064]
pnp 00:01: [irq 1]
pnp 00:01: Plug and Play ACPI device, IDs PNP0303 (active)
pnp 00:02: [io  0x0000-0x000f]
pnp 00:02: [io  0x0080-0x008f]
pnp 00:02: [io  0x00c0-0x00df]
pnp 00:02: [dma 4]
pnp 00:02: Plug and Play ACPI device, IDs PNP0200 (active)
pnp 00:03: [irq 12]
pnp 00:03: Plug and Play ACPI device, IDs PNP0f03 (active)
pnp 00:04: [io  0x0378-0x037f]
pnp 00:04: [io  0x0778-0x077f]
pnp 00:04: [irq 7]
pnp 00:04: Plug and Play ACPI device, IDs PNP0400 (active)
pnp: PnP ACPI: found 5 devices
ACPI: ACPI bus type pnp unregistered
Switching to clocksource acpi_pm
pci_bus 0000:00: resource 4 [io  0x0000-0xffff]
pci_bus 0000:00: resource 5 [mem 0x00000000-0x7fffffffff]
NET: Registered protocol family 2
IP route cache hash table entries: 262144 (order: 9, 2097152 bytes)
TCP established hash table entries: 524288 (order: 11, 8388608 bytes)
TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
TCP: Hash tables configured (established 524288 bind 65536)
TCP reno registered
NET: Registered protocol family 1
pci 0000:00:00.0: Limiting direct PCI/PCI transfers
pci 0000:00:01.0: Activating ISA DMA hang workarounds
pci 0000:00:02.0: Boot video device
  alloc irq_desc for 22 on node -1
  alloc kstat_irqs on node -1
pci 0000:00:06.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
pci 0000:00:06.0: PCI INT A disabled
Trying to unpack rootfs image as initramfs...
hrtimer: interrupt took 4470931 ns
Freeing initrd memory: 15970k freed
platform rtc_cmos: registered platform RTC device (no PNP device found)
futex hash table entries: 1024 (order: 4, 65536 bytes)
audit: initializing netlink socket (disabled)
type=2000 audit(1437917507.373:1): initialized
HugeTLB registered 2 MB page size, pre-allocated 0 pages
VFS: Disk quotas dquot_6.5.2
Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
msgmni has been set to 15733
SELinux:  Registering netfilter hooks
ksign: Installing public key data
Loading keyring
- Added public key 76584FBB80B29DF4
- User ID: CentOS (Kernel Module GPG key)
Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
pci_hotplug: PCI Hot Plug PCI Core version: 0.5
pciehp: PCI Express Hot Plug Controller Driver version: 0.4
acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
ACPI: AC Adapter [AC] (on-line)
input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
ACPI: Power Button [PWRF]
input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input1
ACPI: Sleep Button [SLPF]
ACPI: acpi_idle registered with cpuidle
[Firmware Bug]: No valid trip found
GHES: HEST is not enabled!
Non-volatile memory driver v1.3
Linux agpgart interface v0.103
crash memory driver: version 1.1
Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
ACPI: Battery Slot [BAT0] (battery present)
brd: module loaded
loop: module loaded
input: Macintosh mouse button emulation as /devices/virtual/input/input2
Fixed MDIO Bus: probed
ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
ohci_hcd 0000:00:06.0: PCI INT A -> GSI 22 (level, low) -> IRQ 22
ohci_hcd 0000:00:06.0: setting latency timer to 64
ohci_hcd 0000:00:06.0: OHCI Host Controller
ohci_hcd 0000:00:06.0: new USB bus registered, assigned bus number 1
ohci_hcd 0000:00:06.0: irq 22, io mem 0xf0804000
usb usb1: New USB device found, idVendor=1d6b, idProduct=0001
usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
usb usb1: Product: OHCI Host Controller
usb usb1: Manufacturer: Linux 2.6.32-504.30.3.el6.x86_64 ohci_hcd
usb usb1: SerialNumber: 0000:00:06.0
usb usb1: configuration #1 chosen from 1 choice
hub 1-0:1.0: USB hub found
hub 1-0:1.0: 12 ports detected
uhci_hcd: USB Universal Host Controller Interface driver
PNP: PS/2 Controller [PNP0303:PS2K,PNP0f03:PS2M] at 0x60,0x64 irq 1,12
serio: i8042 KBD port at 0x60,0x64 irq 1
serio: i8042 AUX port at 0x60,0x64 irq 12
mice: PS/2 mouse device common for all mice
rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
rtc0: alarms up to one day, 114 bytes nvram
input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input3
cpuidle: using governor ladder
cpuidle: using governor menu
EFI Variables Facility v0.08 2004-May-17
usbcore: registered new interface driver hiddev
usbcore: registered new interface driver usbhid
usbhid: v2.6:USB HID core driver
GRE over IPv4 demultiplexor driver
TCP cubic registered
Initializing XFRM netlink socket
NET: Registered protocol family 17
registered taskstats version 1
rtc_cmos rtc_cmos: setting system clock to 2015-07-26 13:31:48 UTC (1437917508)
Initalizing network drop monitor service
Freeing unused kernel memory: 1288k freed
Write protecting the kernel read-only data: 10240k
Freeing unused kernel memory: 788k freed
input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Freeing unused kernel memory: 1564k freed
dracut: dracut-004-356.el6_6.3
dracut: rd_NO_LUKS: removing cryptoluks activation
device-mapper: uevent: version 1.0.3
device-mapper: ioctl: 4.27.0-ioctl (2013-10-30) initialised: dm-devel@redhat.com
udev: starting version 147
dracut: Starting plymouth daemon
dracut: rd_NO_MD: removing MD RAID activation
ata_piix 0000:00:01.1: version 2.13
ata_piix 0000:00:01.1: setting latency timer to 64
scsi0 : ata_piix
scsi1 : ata_piix
ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0xd000 irq 14
ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0xd008 irq 15
ata1.01: NODEV after polling detection
ata1.00: ATA-6: VBOX HARDDISK, 1.0, max UDMA/133
ata1.00: 102400000 sectors, multi 128: LBA 
ata1.00: configured for UDMA/33
scsi 0:0:0:0: Direct-Access     ATA      VBOX HARDDISK    1.0  PQ: 0 ANSI: 5
input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:00/input/input5
ACPI: Video Device [GFX0] (multi-head: yes  rom: no  post: no)
STARTING CRC_T10DIF
sd 0:0:0:0: [sda] 102400000 512-byte logical blocks: (52.4 GB/48.8 GiB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2
sd 0:0:0:0: [sda] Attached SCSI disk
usb 1-1: new full speed USB device number 2 using ohci_hcd
dracut: Scanning devices sda2  for LVM logical volumes vg_sandbox/lv_root vg_sandbox/lv_swap 
dracut: inactive '/dev/vg_sandbox/lv_root' [43.45 GiB] inherit
dracut: inactive '/dev/vg_sandbox/lv_swap' [4.88 GiB] inherit
usb 1-1: New USB device found, idVendor=80ee, idProduct=0021
usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=0
usb 1-1: Product: USB Tablet
usb 1-1: Manufacturer: VirtualBox
usb 1-1: configuration #1 chosen from 1 choice
input: VirtualBox USB Tablet as /devices/pci0000:00/0000:00:06.0/usb1/1-1/1-1:1.0/input/input6
generic-usb 0003:80EE:0021.0001: input,hidraw0: USB HID v1.10 Mouse [VirtualBox USB Tablet] on usb-0000:00:06.0-1/input0
EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: 
dracut: Mounted root filesystem /dev/mapper/vg_sandbox-lv_root
SELinux:  Disabled at runtime.
SELinux:  Unregistering netfilter hooks
type=1404 audit(1437917509.329:2): selinux=0 auid=4294967295 ses=4294967295
dracut: 
dracut: Switching root
Refined TSC clocksource calibration: 2494.226 MHz.
Switching to clocksource tsc
udev: starting version 147
sd 0:0:0:0: Attached scsi generic sg0 type 0
[drm] Initialized drm 1.1.0 20060810
  alloc irq_desc for 18 on node -1
  alloc kstat_irqs on node -1
pci 0000:00:02.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
[drm] Initialized vboxvideo 1.0.0 20090303 for 0000:00:02.0 on minor 0
pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de
  alloc irq_desc for 19 on node -1
  alloc kstat_irqs on node -1
pcnet32 0000:00:03.0: PCI INT A -> GSI 19 (level, low) -> IRQ 19
pcnet32 0000:00:03.0: setting latency timer to 64
pcnet32: PCnet/FAST III 79C973 at 0xd020, 08:00:27:ca:f6:76 assigned IRQ 19.
pcnet32: Found PHY 0022:561b at address 0.
eth0: registered as PCnet/FAST III 79C973
pcnet32: 1 cards_found.
  alloc irq_desc for 20 on node -1
  alloc kstat_irqs on node -1
vboxguest 0000:00:04.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
input: Unspecified device as /devices/pci0000:00/0000:00:04.0/input/input7
vboxguest: misc device minor 57, IRQ 20, I/O port d040, MMIO at 00000000f0400000 (size 0x400000)
vboxguest: Successfully loaded version 4.3.22 (interface 0x00010004)
piix4_smbus 0000:00:07.0: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
parport_pc 00:04: reported by Plug and Play ACPI
ppdev: user-space parallel port driver
EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: 
Adding 5119996k swap on /dev/mapper/vg_sandbox-lv_swap.  Priority:-1 extents:1 across:5119996k 
NET: Registered protocol family 10
lo: Disabled Privacy Extensions
eth0: link up, 100Mbps, full-duplex
vboxsf: Successfully loaded version 4.3.22 (interface 0x00010004)
VBoxService 4.3.22 r98236 (verbosity: 0) linux.amd64 (Feb 12 2015 16:53:43) release log
00:00:00.000075 main     Log opened 2015-07-26T13:31:56.880679000Z
00:00:00.000143 main     OS Product: Linux
00:00:00.000162 main     OS Release: 2.6.32-504.30.3.el6.x86_64
00:00:00.000190 main     OS Version: #1 SMP Wed Jul 15 10:13:09 UTC 2015
00:00:00.000205 main     OS Service Pack: #1 SMP Wed Jul 15 10:13:09 UTC 2015
00:00:00.000219 main     Executable: /opt/VBoxGuestAdditions-4.3.22/sbin/VBoxService
00:00:00.000220 main     Process ID: 1048
00:00:00.000221 main     Package type: LINUX_64BITS_GENERIC
00:00:00.001634 main     4.3.22 r98236 started. Verbose level = 0
warning: `jsvc' uses 32-bit capabilities (legacy support in use)
Bridge firewalling registered
ip_tables: (C) 2000-2006 Netfilter Core Team
eth0: link down
eth0: link up, 100Mbps, full-duplex
eth0: link down
eth0: link up, 100Mbps, full-duplex
usb 1-1: USB disconnect, device number 2
usb 1-1: new full speed USB device number 3 using ohci_hcd
eth0: link down
usb 1-1: New USB device found, idVendor=80ee, idProduct=0021
usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=0
usb 1-1: Product: USB Tablet
usb 1-1: Manufacturer: VirtualBox
usb 1-1: configuration #1 chosen from 1 choice
input: VirtualBox USB Tablet as /devices/pci0000:00/0000:00:06.0/usb1/1-1/1-1:1.0/input/input8
generic-usb 0003:80EE:0021.0002: input,hidraw0: USB HID v1.10 Mouse [VirtualBox USB Tablet] on usb-0000:00:06.0-1/input0
eth0: link up, 100Mbps, full-duplex

Go to the top


System Status


Status Summary

Process Count

User Count VirtualSize ResidentSize PhysicalSize WriteSize   (sizes in KB; Total row in MB)
497 2 78896 808 19724 664
513 1 2675924 116640 668981 2595992
apache 8 1402976 10436 350744 13632
dbus 1 21432 800 5358 292
hcat 1 1236372 243372 309093 1133092
hdfs 4 4778224 1167968 1194556 4394196
hive 2 2679448 638148 669862 2449360
hue 3 798232 127788 199558 261876
mapred 1 877140 186572 219285 776988
mysql 1 1495452 45644 373863 1437316
oozie 1 5003840 575864 1250960 4865292
postfix 2 162048 5824 40512 1276
postgres 18 2133152 113912 533288 50036
root 133 11481976 1070428 2870494 9945276
yarn 3 2850344 886656 712586 2551736
Total(MB) 181 36792 5069 9198 29763
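
A per-user rollup similar to the table above can be computed from ps; a minimal sketch (the column set is an assumption — ps reports vsz/rss in KB, matching the KB figures above):

# Count processes and sum virtual/resident size per user, in KB
ps -eo user,vsz,rss --no-headers \
  | awk '{n[$1]++; v[$1]+=$2; r[$1]+=$3}
         END {for (u in n) printf "%-10s %5d %12d %12d\n", u, n[u], v[u], r[u]}'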

Shared Memory Usage (Entries, Attach, Size)

Count Attach Size Size(MB)
1 16 37879808 36

Semaphores Usage (Entries, Total)

14 123
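
Both IPC tables summarize SysV shared memory and semaphores; the raw data behind them can be listed directly, as a sketch:

ipcs -m    # shared memory segments (count, attach count, byte size)
ipcs -s    # semaphore arrays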

TCP Activities

      4 CLOSE_WAIT
     79 ESTABLISHED
     46 LISTEN
    112 TIME_WAIT
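
A per-state socket count like this one can be derived from netstat; a minimal sketch (the awk field index assumes netstat's default two header lines with the state in column 6):

netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c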

Go to the top


Processes

UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 Jul26 ?        00:00:00 /sbin/init
root         2     0  0 Jul26 ?        00:00:00 [kthreadd]
root         3     2  0 Jul26 ?        00:00:01 [migration/0]
root         4     2  0 Jul26 ?        00:00:37 [ksoftirqd/0]
root         5     2  0 Jul26 ?        00:00:00 [stopper/0]
root         6     2  0 Jul26 ?        00:00:00 [watchdog/0]
root         7     2  0 Jul26 ?        00:00:02 [migration/1]
root         8     2  0 Jul26 ?        00:00:00 [stopper/1]
root         9     2  0 Jul26 ?        00:00:45 [ksoftirqd/1]
root        10     2  0 Jul26 ?        00:00:00 [watchdog/1]
root        11     2  0 Jul26 ?        00:00:02 [migration/2]
root        12     2  0 Jul26 ?        00:00:00 [stopper/2]
root        13     2  0 Jul26 ?        00:00:42 [ksoftirqd/2]
root        14     2  0 Jul26 ?        00:00:00 [watchdog/2]
root        15     2  0 Jul26 ?        00:00:02 [migration/3]
root        16     2  0 Jul26 ?        00:00:00 [stopper/3]
root        17     2  0 Jul26 ?        00:00:45 [ksoftirqd/3]
root        18     2  0 Jul26 ?        00:00:00 [watchdog/3]
root        19     2  0 Jul26 ?        00:00:07 [events/0]
root        20     2  0 Jul26 ?        00:00:06 [events/1]
root        21     2  0 Jul26 ?        00:00:06 [events/2]
root        22     2  0 Jul26 ?        00:00:08 [events/3]
root        23     2  0 Jul26 ?        00:00:00 [cgroup]
root        24     2  0 Jul26 ?        00:00:00 [khelper]
root        25     2  0 Jul26 ?        00:00:00 [netns]
root        26     2  0 Jul26 ?        00:00:00 [async/mgr]
root        27     2  0 Jul26 ?        00:00:00 [pm]
root        28     2  0 Jul26 ?        00:00:00 [sync_supers]
root        29     2  0 Jul26 ?        00:00:00 [bdi-default]
root        30     2  0 Jul26 ?        00:00:00 [kintegrityd/0]
root        31     2  0 Jul26 ?        00:00:00 [kintegrityd/1]
root        32     2  0 Jul26 ?        00:00:00 [kintegrityd/2]
root        33     2  0 Jul26 ?        00:00:00 [kintegrityd/3]
root        34     2  0 Jul26 ?        00:00:07 [kblockd/0]
root        35     2  0 Jul26 ?        00:00:04 [kblockd/1]
root        36     2  0 Jul26 ?        00:00:03 [kblockd/2]
root        37     2  0 Jul26 ?        00:00:03 [kblockd/3]
root        38     2  0 Jul26 ?        00:00:00 [kacpid]
root        39     2  0 Jul26 ?        00:00:00 [kacpi_notify]
root        40     2  0 Jul26 ?        00:00:00 [kacpi_hotplug]
root        41     2  0 Jul26 ?        00:00:00 [ata_aux]
root        42     2  0 Jul26 ?        00:00:00 [ata_sff/0]
root        43     2  0 Jul26 ?        00:00:00 [ata_sff/1]
root        44     2  0 Jul26 ?        00:00:00 [ata_sff/2]
root        45     2  0 Jul26 ?        00:00:00 [ata_sff/3]
root        46     2  0 Jul26 ?        00:00:00 [ksuspend_usbd]
root        47     2  0 Jul26 ?        00:00:00 [khubd]
root        48     2  0 Jul26 ?        00:00:00 [kseriod]
root        49     2  0 Jul26 ?        00:00:00 [md/0]
root        50     2  0 Jul26 ?        00:00:00 [md/1]
root        51     2  0 Jul26 ?        00:00:00 [md/2]
root        52     2  0 Jul26 ?        00:00:00 [md/3]
root        53     2  0 Jul26 ?        00:00:00 [md_misc/0]
root        54     2  0 Jul26 ?        00:00:00 [md_misc/1]
root        55     2  0 Jul26 ?        00:00:00 [md_misc/2]
root        56     2  0 Jul26 ?        00:00:00 [md_misc/3]
root        57     2  0 Jul26 ?        00:00:00 [linkwatch]
root        59     2  0 Jul26 ?        00:00:00 [khungtaskd]
root        60     2  0 Jul26 ?        00:00:00 [kswapd0]
root        61     2  0 Jul26 ?        00:00:00 [ksmd]
root        62     2  0 Jul26 ?        00:00:01 [khugepaged]
root        63     2  0 Jul26 ?        00:00:00 [aio/0]
root        64     2  0 Jul26 ?        00:00:00 [aio/1]
root        65     2  0 Jul26 ?        00:00:00 [aio/2]
root        66     2  0 Jul26 ?        00:00:00 [aio/3]
root        67     2  0 Jul26 ?        00:00:00 [crypto/0]
root        68     2  0 Jul26 ?        00:00:00 [crypto/1]
root        69     2  0 Jul26 ?        00:00:00 [crypto/2]
root        70     2  0 Jul26 ?        00:00:00 [crypto/3]
root        77     2  0 Jul26 ?        00:00:00 [kthrotld/0]
root        78     2  0 Jul26 ?        00:00:00 [kthrotld/1]
root        79     2  0 Jul26 ?        00:00:00 [kthrotld/2]
root        80     2  0 Jul26 ?        00:00:00 [kthrotld/3]
root        82     2  0 Jul26 ?        00:00:00 [kpsmoused]
root        83     2  0 Jul26 ?        00:00:00 [usbhid_resumer]
root        84     2  0 Jul26 ?        00:00:00 [deferwq]
root       116     2  0 Jul26 ?        00:00:00 [kdmremove]
root       117     2  0 Jul26 ?        00:00:00 [kstriped]
root       217     2  0 Jul26 ?        00:00:00 [scsi_eh_0]
root       218     2  0 Jul26 ?        00:00:00 [scsi_eh_1]
root       289     2  0 Jul26 ?        00:00:02 [kdmflush]
root       291     2  0 Jul26 ?        00:00:00 [kdmflush]
root       310     2  0 Jul26 ?        00:00:13 [jbd2/dm-0-8]
root       311     2  0 Jul26 ?        00:00:00 [ext4-dio-unwrit]
root       398     1  0 Jul26 ?        00:00:00 /sbin/udevd -d
root       513     2  0 Jul26 ?        00:00:00 [iprt/0]
root       514     2  0 Jul26 ?        00:00:00 [iprt/1]
root       515     2  0 Jul26 ?        00:00:00 [iprt/2]
root       516     2  0 Jul26 ?        00:00:00 [iprt/3]
root       683     2  0 Jul26 ?        00:00:00 [jbd2/sda1-8]
root       684     2  0 Jul26 ?        00:00:00 [ext4-dio-unwrit]
root       737     2  0 Jul26 ?        00:00:00 [kauditd]
root       865     2  0 Jul26 ?        00:00:09 [flush-253:0]
root       887     1  0 Jul26 ?        00:00:00 /sbin/dhclient -H sandbox.hortonworks.com -1 -q -cf /etc/dhcp/dhclient-eth0.conf -lf /var/lib/dhclient/dhclient-eth0.leases -pf /var/run/dhclient-eth0.pid eth0
root       924     1  0 Jul26 ?        00:00:00 /sbin/portreserve
root       932     1  0 Jul26 ?        00:00:00 /sbin/rsyslogd -i /var/run/syslogd.pid -c 5
root       955     1  0 Jul26 ?        00:00:01 /usr/lib/tutorials/.env/bin/python /usr/lib/tutorials/manage.py run_gunicorn 0:8888
dbus       964     1  0 Jul26 ?        00:00:03 dbus-daemon --system
root      1052     1  0 Jul26 ?        00:00:31 /usr/sbin/VBoxService
root      1062     1  0 Jul26 ?        00:00:02 /usr/sbin/console-kit-daemon --no-daemon
root      1076     1  0 Jul26 ?        00:00:00 /usr/sbin/sshd
root      1217     1  0 Jul26 ?        00:00:00 /usr/libexec/postfix/master
postfix   1224  1217  0 Jul26 ?        00:00:00 qmgr -l -t fifo -u
497       1229     1  0 Jul26 ?        00:00:00 shellinaboxd -u shellinabox -g shellinabox --cert=/var/lib/shellinabox --port=4200 --background=/var/run/shellinaboxd.pid -t -s /:SSH:sandbox.hortonworks.com --css white-on-black.css
497       1230  1229  0 Jul26 ?        00:00:00 shellinaboxd -u shellinabox -g shellinabox --cert=/var/lib/shellinabox --port=4200 --background=/var/run/shellinaboxd.pid -t -s /:SSH:sandbox.hortonworks.com --css white-on-black.css
root      1239     1  0 Jul26 ?        00:00:07 /usr/sbin/httpd
root      1295     1  0 Jul26 ?        00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql     1397  1295  0 Jul26 ?        00:01:05 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
postgres  1514     1  0 Jul26 ?        00:00:02 /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres  1565  1514  0 Jul26 ?        00:00:07 postgres: logger process                          
postgres  1567  1514  0 Jul26 ?        00:00:48 postgres: writer process                          
postgres  1568  1514  0 Jul26 ?        00:00:43 postgres: wal writer process                      
postgres  1569  1514  0 Jul26 ?        00:00:10 postgres: autovacuum launcher process             
postgres  1570  1514  0 Jul26 ?        00:00:06 postgres: stats collector process                 
root      1602     1  3 Jul26 ?        00:39:14 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/mysql-connector-java.jar:/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar org.apache.ambari.server.controller.AmbariServer
root      1628     1  0 Jul26 ?        00:00:00 dhclient
postgres  1631  1514  0 Jul26 ?        00:00:00 postgres: ambari ambari 127.0.0.1(34891) idle     
postgres  1633  1514  0 Jul26 ?        00:00:50 postgres: ambari ambari 127.0.0.1(34893) idle     
hdfs      1802     1  0 Jul26 ?        00:03:12 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_secondarynamenode -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-secondarynamenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node" -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
hdfs      1815     1  2 Jul26 ?        00:26:33 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_namenode -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-namenode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
hdfs      1824     1  0 Jul26 ?        00:07:51 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_datanode -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hdfs -Dhadoop.log.file=hadoop-hdfs-datanode-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/hdfs/gc.log-201507261332 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
513       2114     1  0 Jul26 ?        00:02:53 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/httpcore-4.2.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/httpclient-4.2.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.3.0.0-2557.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /etc/zookeeper/conf/zoo.cfg
root      2381     1  0 Jul26 ?        00:00:00 /usr/bin/python2.6 /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start
root      2465     1  0 Jul26 ?        00:08:03 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_portmap -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop--portmap-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.portmap.Portmap
root      2506  2381  3 Jul26 ?        00:41:17 /usr/bin/python2.6 /usr/lib/python2.6/site-packages/ambari_agent/main.py start
root      2516     1  0 Jul26 ?        00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter
root      2533     2  0 15:36 ?        00:00:00 [flush-8:0]
hdfs      2560  2516  0 Jul26 ?        00:06:12 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter
oozie     2606     1  8 Jul26 ?        01:45:35 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.3.0.0-2557 -Xmx2048m -XX:MaxPermSize=512m -Xmx2048m -XX:MaxPermSize=512m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.3.0.0-2557/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start
root      2662  2650  0 15:36 pts/0    00:00:00 ps -efaww
postgres  2714  1514  0 Jul26 ?        00:01:09 postgres: ambari ambari 127.0.0.1(34906) idle     
hive      2852     1  0 Jul26 ?        00:05:05 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -XX:MaxPermSize=512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.0.0-2557/hive/lib/hive-service-1.2.1.2.3.0.0-2557.jar org.apache.hadoop.hive.metastore.HiveMetaStore -hiveconf hive.log.file=hivemetastore.log -hiveconf hive.log.dir=/var/log/hive
hive      3001     1  0 Jul26 ?        00:05:43 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -XX:MaxPermSize=512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.0.0-2557/hive/lib/hive-service-1.2.1.2.3.0.0-2557.jar org.apache.hive.service.server.HiveServer2 --hiveconf hive.aux.jars.path=file:///usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar -hiveconf hive.metastore.uris=  -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
yarn      3037     1  2 Jul26 ?        00:29:21 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
yarn      3038     1  0 Jul26 ?        00:11:51 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_nodemanager -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
yarn      3042     1  0 Jul26 ?        00:05:08 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/ahs-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
mapred    3055     1  0 Jul26 ?        00:05:45 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.root.logger=INFO,console -Dhadoop.id.str=mapred -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/mapred -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=mapred -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-mapreduce/mapred -Dhadoop.log.file=mapred-mapred-historyserver-sandbox.hortonworks.com.log -Dhadoop.root.logger=INFO,RFA -Dmapred.jobsummary.logger=INFO,JSA -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
hcat      3382     1  0 Jul26 ?        00:03:21 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dwebhcat.log.dir=/var/log/webhcat/ -Dlog4j.configuration=file:///usr/hdp/2.3.0.0-2557/hive-hcatalog/sbin/../etc/webhcat/webhcat-log4j.properties -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/hcat -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str=hcat -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx250m -XX:MaxPermSize=512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/hdp/2.3.0.0-2557/hive-hcatalog/sbin/../share/webhcat/svr/lib/hive-webhcat-1.2.1.2.3.0.0-2557.jar org.apache.hive.hcatalog.templeton.Main
root      4366     1  0 Jul26 ?        00:00:00 crond
hue       4392     1  0 Jul26 ?        00:01:28 python2.6 /usr/lib/hue/build/env/bin//supervisor -p /var/run/hue/supervisor.pid -l /var/log/hue -d
hue       4403  4392  0 Jul26 ?        00:03:36 python2.6 /usr/lib/hue/build/env/bin/hue runspawningserver
hue       4423  4403  0 Jul26 ?        00:00:10 /usr/bin/python2.6 -c import sys; from spawning import spawning_child; spawning_child.main() 4403 3 15 spawning.django_factory.config_factory {"app_factory": "spawning.django_factory.app_factory", "access_log_file": "/dev/null", "status_port": 0, "port": 8000, "verbose": null, "deadman_timeout": 1, "source_directories": ["/usr/lib/hue/desktop/core/src/desktop"], "pidfile": null, "args": ["desktop.settings"], "max_age": null, "num_processes": 1, "watch": null, "host": "0.0.0.0", "coverage": null, "ssl_private_key": null, "sysinfo": null, "status_host": "0.0.0.0", "ssl_certificate": null, "argv_str": "--factory=spawning.django_factory.config_factory desktop.settings --port 8000 -s 1 -t 0", "no_keepalive": null, "reload": null, "django_settings_module": "desktop.settings", "threadpool_workers": 0}
root      4444     1  0 Jul26 ?        00:00:00 /usr/sbin/atd
root      4457     1  0 Jul26 tty2     00:00:00 /sbin/mingetty /dev/tty2
root      4459     1  0 Jul26 tty3     00:00:00 /sbin/mingetty /dev/tty3
root      4461   398  0 Jul26 ?        00:00:00 /sbin/udevd -d
root      4462     1  0 Jul26 tty4     00:00:00 /sbin/mingetty /dev/tty4
root      4464     1  0 Jul26 tty5     00:00:00 /sbin/mingetty /dev/tty5
root      4466     1  0 Jul26 tty6     00:00:00 /sbin/mingetty /dev/tty6
root      4468     1  0 Jul26 ?        00:00:00 login -- root     
root      4470  4468  0 Jul26 tty1     00:00:00 -bash
root      4489  4470  0 Jul26 tty1     00:00:00 bash /usr/lib/hue/tools/start_scripts/post_start.sh
root      4491  4489  0 Jul26 tty1     00:00:00 python /usr/lib/hue/tools/start_scripts/splash.py
postgres  5118  1514  0 Jul26 ?        00:00:00 postgres: ambari ambari 127.0.0.1(35047) idle     
postgres  5119  1514  0 Jul26 ?        00:00:00 postgres: ambari ambari 127.0.0.1(35048) idle     
postgres  5120  1514  0 Jul26 ?        00:00:03 postgres: ambari ambari 127.0.0.1(35049) idle     
postgres  5678  1514  0 Jul26 ?        00:01:00 postgres: ambari ambari 127.0.0.1(50366) idle     
root      6690   398  0 14:54 ?        00:00:00 /sbin/udevd -d
root      6722  1076  0 14:54 ?        00:00:00 sshd: root@pts/0 
root      6755  6722  0 14:54 pts/0    00:00:00 -bash
root      7292   955  0 14:54 ?        00:00:00 /usr/lib/tutorials/.env/bin/python /usr/lib/tutorials/manage.py run_gunicorn 0:8888
postfix   7327  1217  0 14:55 ?        00:00:00 pickup -l -t fifo -u
postgres 20114  1514  0 Jul26 ?        00:01:08 postgres: ambari ambari 127.0.0.1(46603) idle     
postgres 25523  1514  0 Jul26 ?        00:00:02 postgres: ambari ambari 127.0.0.1(37597) idle     
postgres 25525  1514  0 Jul26 ?        00:00:03 postgres: ambari ambari 127.0.0.1(37609) idle     
postgres 25530  1514  0 Jul26 ?        00:00:48 postgres: ambari ambari 127.0.0.1(37619) idle     
postgres 26138  1514  0 Jul26 ?        00:00:27 postgres: ambari ambari 127.0.0.1(37805) idle     
apache   29020  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29021  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29022  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29023  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29024  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29025  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29026  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29027  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
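
The listing above is plain ps output; the collector even captures itself as PID 2662 (ps -efaww). To cross-check it against the per-user memory summary earlier in the report, a process count per user can be rebuilt like this:

# Processes per user, largest first; the counts should match the
# first numeric column of the per-user memory table above.
ps -efaww | awk 'NR > 1 { print $1 }' | sort | uniq -c | sort -rn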

Go to the top


System usage

procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0  76096 2231028 164136 695956    0    0     4    17   92   50 10  1 89  0  0	
 0  0  76096 2230960 164140 696072    0    0     0    78 1391 3367  2  1 97  0  0	
 0  0  76096 2231388 164144 696072    0    0     0    61 1613 3470  7  2 92  0  0	
 0  0  76096 2231420 164144 696080    0    0     0    38 1356 3289  2  1 97  0  0	
 3  0  76096 2230948 164148 696084    0    0     0    23 1281 3269  2  1 97  0  0	
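
These rows are vmstat samples; the first data line reports averages since boot and is normally discarded. Assuming the standard column layout shown above (id is the 15th field), an average CPU-idle figure can be extracted like this:

# Average the idle column over five 1-second samples, skipping the
# two header lines and the since-boot summary row.
vmstat 1 5 | awk 'NR > 3 { sum += $15; n++ } END { if (n) print "avg idle:", sum / n "%" }'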


Go to the top


Inter-Process Communication


------ Shared Memory Segments --------
key        shmid      owner      perms      bytes      nattch     status      
0x0052e2c1 0          postgres   600        37879808   16                      

------ Semaphore Arrays --------
key        semid      owner      perms      nsems     
0x00000000 0          root       600        1         
0x00000000 65537      root       600        1         
0x00000000 393218     apache     600        1         
0x00000000 425987     apache     600        1         
0x0052e2c1 163844     postgres   600        17        
0x0052e2c2 196613     postgres   600        17        
0x0052e2c3 229382     postgres   600        17        
0x0052e2c4 262151     postgres   600        17        
0x0052e2c5 294920     postgres   600        17        
0x0052e2c6 327689     postgres   600        17        
0x0052e2c7 360458     postgres   600        17        

------ Message Queues --------
key        msqid      owner      perms      used-bytes   messages    


total 0
drwxrwxrwt  2 root root   40 2015-07-26 18:46 .
drwxr-xr-x 19 root root 3560 2015-07-27 10:58 ..
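
The tables above are standard ipcs output; the trailing directory listing looks like an empty POSIX message-queue mount (plausibly /dev/mqueue - the path is an assumption, as the report does not label it). The same picture can be collected with:

# System V IPC objects, then POSIX message queues (mount path assumed).
ipcs -m               # shared memory segments
ipcs -s               # semaphore arrays
ipcs -q               # message queues
ls -la /dev/mqueue    # POSIX queues appear as files here, if mounted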

Go to the top


Network Activity

Kernel Interface table
Iface       MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0       1500   0   228257      1      0      0   203151      4      0      0 BMRU
lo        65536   0  5756421      0      0      0  5756421      0      0      0 LRU

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:39266 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10020 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:2181 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8040 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4200 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8042 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8010 0.0.0.0:* LISTEN
tcp 0 0 10.0.2.15:50090 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:52235 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8141 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:45454 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:19888 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10033 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8050 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:4242 0.0.0.0:* LISTEN
tcp 0 0 10.0.2.15:8020 0.0.0.0:* LISTEN
tcp 0 0 10.0.2.15:50070 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:10200 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8440 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:11000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8025 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:11001 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8441 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:13562 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:9083 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8188 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8030 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8670 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:50111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:50079 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN
tcp 0 0 10.0.2.15:3306 10.0.2.15:36018 ESTABLISHED
tcp 0 0 10.0.2.15:59462 158.197.16.70:80 TIME_WAIT
tcp 0 0 10.0.2.15:884 10.0.2.15:4242 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:36036 ESTABLISHED
tcp 0 0 10.0.2.15:54437 193.140.100.100:80 TIME_WAIT
tcp 0 0 10.0.2.15:46956 79.143.180.138:80 TIME_WAIT
tcp 0 0 10.0.2.15:40264 10.0.2.15:2181 TIME_WAIT
tcp 0 0 10.0.2.15:57394 93.63.162.59:80 TIME_WAIT
tcp 0 0 10.0.2.15:36037 10.0.2.15:3306 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:37609 ESTABLISHED
tcp 0 0 10.0.2.15:47224 193.174.29.5:80 TIME_WAIT
tcp 0 0 10.0.2.15:8088 10.0.2.15:60382 TIME_WAIT
tcp 0 0 10.0.2.15:60329 212.224.83.174:443 TIME_WAIT
tcp 0 0 10.0.2.15:50401 10.0.2.15:8020 TIME_WAIT
tcp 0 0 127.0.0.1:57970 127.0.0.1:3306 ESTABLISHED
tcp 0 0 10.0.2.15:49364 129.143.116.10:80 TIME_WAIT
tcp 0 0 10.0.2.15:33696 213.83.42.56:80 TIME_WAIT
tcp 0 0 10.0.2.15:50647 88.150.173.218:80 TIME_WAIT
tcp 0 0 10.0.2.15:42772 86.57.251.8:80 TIME_WAIT
tcp 0 0 10.0.2.15:50010 10.0.2.15:50365 ESTABLISHED
tcp 0 0 10.0.2.15:49824 134.109.228.1:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:35048 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:51415 ESTABLISHED
tcp 0 0 10.0.2.15:37500 94.75.223.121:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:35047 ESTABLISHED
tcp 0 0 10.0.2.15:60113 37.58.58.140:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:50366 ESTABLISHED
tcp 0 0 10.0.2.15:37856 213.180.139.200:80 TIME_WAIT
tcp 0 0 10.0.2.15:60001 194.14.179.253:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:37597 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:50908 ESTABLISHED
tcp 0 0 10.0.2.15:52472 62.149.2.9:80 TIME_WAIT
tcp 0 0 127.0.0.1:37805 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:42063 195.248.234.19:80 TIME_WAIT
tcp 0 0 10.0.2.15:37261 10.0.2.15:2181 ESTABLISHED
tcp 0 0 10.0.2.15:54130 54.239.168.122:80 TIME_WAIT
tcp 0 0 10.0.2.15:43826 194.8.57.42:80 TIME_WAIT
tcp 0 0 10.0.2.15:50070 10.0.2.15:55798 TIME_WAIT
tcp 0 0 10.0.2.15:8020 10.0.2.15:50330 ESTABLISHED
tcp 0 0 10.0.2.15:8020 10.0.2.15:50488 ESTABLISHED
tcp 0 0 10.0.2.15:50330 10.0.2.15:8020 ESTABLISHED
tcp 0 0 127.0.0.1:37597 127.0.0.1:5432 ESTABLISHED
tcp 0 0 127.0.0.1:34906 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:50386 10.0.2.15:8020 ESTABLISHED
tcp 0 0 10.0.2.15:50332 10.0.2.15:8020 TIME_WAIT
tcp 0 0 10.0.2.15:37434 85.31.185.102:80 TIME_WAIT
tcp 0 0 10.0.2.15:39158 5.83.232.126:80 TIME_WAIT
tcp 0 0 10.0.2.15:57753 212.110.161.69:80 TIME_WAIT
tcp 0 0 10.0.2.15:50365 10.0.2.15:8020 TIME_WAIT
tcp 0 0 127.0.0.1:35047 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:51254 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:60349 10.0.2.15:8020 ESTABLISHED
tcp 0 0 10.0.2.15:50111 10.0.2.15:39139 TIME_WAIT
tcp 0 0 127.0.0.1:34893 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:37259 10.0.2.15:2181 ESTABLISHED
tcp 0 0 10.0.2.15:58738 194.105.226.20:80 TIME_WAIT
tcp 0 0 10.0.2.15:32845 178.32.100.7:80 TIME_WAIT
tcp 0 0 10.0.2.15:51416 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:8025 10.0.2.15:52144 ESTABLISHED
tcp 0 0 10.0.2.15:36885 130.236.100.79:80 TIME_WAIT
tcp 0 0 10.0.2.15:41113 129.102.1.37:80 TIME_WAIT
tcp 0 0 10.0.2.15:41026 147.251.48.205:80 TIME_WAIT
tcp 0 0 10.0.2.15:38536 5.199.174.4:80 TIME_WAIT
tcp 0 0 10.0.2.15:52489 195.20.242.90:80 TIME_WAIT
tcp 0 0 10.0.2.15:41059 132.180.15.2:80 TIME_WAIT
tcp 0 0 10.0.2.15:50648 88.150.173.218:80 TIME_WAIT
tcp 0 0 127.0.0.1:37619 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:54085 194.8.197.22:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:50909 ESTABLISHED
tcp 0 0 10.0.2.15:53029 85.14.85.4:80 TIME_WAIT
tcp 0 0 10.0.2.15:46535 10.0.2.15:2049 TIME_WAIT
tcp 0 0 10.0.2.15:50910 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:36019 ESTABLISHED
tcp 0 0 10.0.2.15:8088 10.0.2.15:60371 TIME_WAIT
tcp 0 0 10.0.2.15:8020 10.0.2.15:50386 ESTABLISHED
tcp 0 0 10.0.2.15:8441 10.0.2.15:58144 ESTABLISHED
tcp 0 0 10.0.2.15:48811 93.94.109.138:80 TIME_WAIT
tcp 0 0 10.0.2.15:50914 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:50908 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:58767 213.180.204.183:80 TIME_WAIT
tcp 0 0 10.0.2.15:50070 10.0.2.15:55796 TIME_WAIT
tcp 0 0 10.0.2.15:8188 10.0.2.15:54135 TIME_WAIT
tcp 0 0 10.0.2.15:2181 10.0.2.15:37261 ESTABLISHED
tcp 0 0 10.0.2.15:35974 131.188.12.211:443 TIME_WAIT
tcp 0 0 10.0.2.15:52481 195.20.242.90:80 TIME_WAIT
tcp 0 0 10.0.2.15:41339 129.177.13.120:80 TIME_WAIT
tcp 0 0 10.0.2.15:51833 91.221.151.185:80 TIME_WAIT
tcp 0 0 10.0.2.15:49087 88.80.191.36:80 TIME_WAIT
tcp 0 0 10.0.2.15:36036 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:44252 192.155.89.90:80 TIME_WAIT
tcp 0 0 10.0.2.15:37245 212.83.32.25:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:51254 ESTABLISHED
tcp 0 0 10.0.2.15:39339 80.237.136.138:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:35049 ESTABLISHED
tcp 0 0 10.0.2.15:36019 10.0.2.15:3306 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:34891 ESTABLISHED
tcp 0 0 10.0.2.15:59922 194.14.179.253:80 TIME_WAIT
tcp 0 0 10.0.2.15:8042 10.0.2.15:46682 TIME_WAIT
tcp 0 0 10.0.2.15:51253 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:51849 91.218.89.74:80 TIME_WAIT
tcp 0 0 10.0.2.15:35548 10.0.2.15:9083 TIME_WAIT
tcp 1 0 10.0.2.15:44401 10.0.2.15:8188 CLOSE_WAIT
tcp 0 0 10.0.2.15:58144 10.0.2.15:8441 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:50910 ESTABLISHED
tcp 0 0 10.0.2.15:22 10.0.2.2:55892 ESTABLISHED
tcp 0 0 127.0.0.1:37609 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:50487 10.0.2.15:8020 ESTABLISHED
tcp 0 0 10.0.2.15:48419 91.210.88.42:80 TIME_WAIT
tcp 0 0 10.0.2.15:51714 193.136.37.8:80 TIME_WAIT
tcp 0 0 10.0.2.15:52607 193.140.192.33:80 TIME_WAIT
tcp 0 0 10.0.2.15:43734 54.231.12.240:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:50911 ESTABLISHED
tcp 0 0 10.0.2.15:8020 10.0.2.15:60349 ESTABLISHED
tcp 0 0 10.0.2.15:34133 78.109.175.117:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:36037 ESTABLISHED
tcp 0 0 10.0.2.15:8042 10.0.2.15:46679 TIME_WAIT
tcp 0 0 127.0.0.1:46603 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:50263 10.0.2.15:8020 TIME_WAIT
tcp 0 0 10.0.2.15:48239 193.206.139.34:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:37619 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:37805 ESTABLISHED
tcp 0 0 10.0.2.15:49148 62.90.168.59:80 TIME_WAIT
tcp 0 0 10.0.2.15:48039 212.219.56.184:443 TIME_WAIT
tcp 0 0 10.0.2.15:40769 152.19.134.142:443 TIME_WAIT
tcp 0 0 10.0.2.15:50911 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:50630 185.34.86.124:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:51416 ESTABLISHED
tcp 0 0 127.0.0.1:47504 127.0.0.1:50010 TIME_WAIT
tcp 0 0 127.0.0.1:52235 127.0.0.1:36914 TIME_WAIT
tcp 0 0 10.0.2.15:41256 92.87.156.5:80 TIME_WAIT
tcp 0 0 10.0.2.15:36336 10.0.2.15:19888 TIME_WAIT
tcp 0 0 10.0.2.15:50909 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:50322 10.0.2.15:8020 TIME_WAIT
tcp 0 0 127.0.0.1:35048 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:38941 193.84.206.135:80 TIME_WAIT
tcp 0 0 10.0.2.15:50488 10.0.2.15:8020 ESTABLISHED
tcp 0 0 10.0.2.15:50070 10.0.2.15:55803 TIME_WAIT
tcp 0 0 10.0.2.15:43598 5.135.162.176:80 TIME_WAIT
tcp 0 0 10.0.2.15:37345 150.214.5.134:80 TIME_WAIT
tcp 0 0 10.0.2.15:3306 10.0.2.15:50914 ESTABLISHED
tcp 0 0 127.0.0.1:10000 127.0.0.1:55564 ESTABLISHED
tcp 0 0 10.0.2.15:47732 147.156.223.157:80 TIME_WAIT
tcp 0 0 10.0.2.15:50070 10.0.2.15:55797 TIME_WAIT
tcp 1 0 10.0.2.15:44516 10.0.2.15:8088 CLOSE_WAIT
tcp 0 0 10.0.2.15:45853 217.243.224.144:80 TIME_WAIT
tcp 0 0 10.0.2.15:36018 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:50070 10.0.2.15:55815 TIME_WAIT
tcp 0 0 127.0.0.1:3306 127.0.0.1:57970 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:50913 ESTABLISHED
tcp 1 0 10.0.2.15:51625 10.0.2.15:11000 CLOSE_WAIT
tcp 0 0 10.0.2.15:38048 10.0.2.15:10000 TIME_WAIT
tcp 0 0 10.0.2.15:38389 130.236.254.50:80 TIME_WAIT
tcp 0 0 10.0.2.15:19888 10.0.2.15:36340 TIME_WAIT
tcp 0 0 10.0.2.15:55323 130.226.184.9:80 TIME_WAIT
tcp 0 0 127.0.0.1:55564 127.0.0.1:10000 ESTABLISHED
tcp 0 0 10.0.2.15:52144 10.0.2.15:8025 ESTABLISHED
tcp 1 0 10.0.2.15:39953 10.0.2.15:50070 CLOSE_WAIT
tcp 0 0 10.0.2.15:50010 10.0.2.15:57490 ESTABLISHED
tcp 0 0 10.0.2.15:37280 89.102.0.150:80 TIME_WAIT
tcp 0 0 127.0.0.1:52235 127.0.0.1:36909 TIME_WAIT
tcp 0 0 10.0.2.15:50323 10.0.2.15:8020 TIME_WAIT
tcp 0 0 10.0.2.15:8020 10.0.2.15:50487 ESTABLISHED
tcp 0 0 10.0.2.15:39780 192.87.102.42:80 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:46603 ESTABLISHED
tcp 0 0 10.0.2.15:56460 134.95.114.165:80 TIME_WAIT
tcp 0 0 10.0.2.15:51415 10.0.2.15:3306 ESTABLISHED
tcp 0 0 10.0.2.15:42702 31.13.223.131:80 TIME_WAIT
tcp 0 0 10.0.2.15:39276 10.0.2.15:11000 TIME_WAIT
tcp 0 0 10.0.2.15:56337 130.239.18.173:80 TIME_WAIT
tcp 0 0 127.0.0.1:35049 127.0.0.1:5432 ESTABLISHED
tcp 0 0 127.0.0.1:34891 127.0.0.1:5432 ESTABLISHED
tcp 0 0 10.0.2.15:34751 194.116.84.14:80 TIME_WAIT
tcp 0 0 10.0.2.15:54936 91.216.163.60:80 TIME_WAIT
tcp 0 0 10.0.2.15:50365 10.0.2.15:50010 ESTABLISHED
tcp 0 0 10.0.2.15:45852 217.243.224.144:80 TIME_WAIT
tcp 0 0 10.0.2.15:50075 10.0.2.15:43275 TIME_WAIT
tcp 0 0 10.0.2.15:50070 10.0.2.15:55805 TIME_WAIT
tcp 0 0 10.0.2.15:57490 10.0.2.15:50010 ESTABLISHED
tcp 0 0 10.0.2.15:50070 10.0.2.15:55802 TIME_WAIT
tcp 0 0 127.0.0.1:5432 127.0.0.1:34893 ESTABLISHED
tcp 0 0 10.0.2.15:3306 10.0.2.15:51253 ESTABLISHED
tcp 0 0 10.0.2.15:49443 195.220.108.108:80 TIME_WAIT
tcp 0 0 10.0.2.15:41555 193.227.234.135:80 TIME_WAIT
tcp 0 0 10.0.2.15:2181 10.0.2.15:37259 ESTABLISHED
tcp 0 0 10.0.2.15:44619 149.202.98.175:80 TIME_WAIT
tcp 0 0 10.0.2.15:36435 193.219.28.2:80 TIME_WAIT
tcp 0 0 10.0.2.15:41030 147.251.48.205:80 TIME_WAIT
tcp 0 0 10.0.2.15:59175 147.229.3.144:80 TIME_WAIT
tcp 0 0 10.0.2.15:58468 54.231.13.153:80 TIME_WAIT
tcp 0 0 10.0.2.15:50913 10.0.2.15:3306 ESTABLISHED
tcp 0 0 127.0.0.1:5432 127.0.0.1:34906 ESTABLISHED
tcp 0 0 10.0.2.15:34685 194.54.81.27:80 TIME_WAIT
tcp 0 0 10.0.2.15:57917 37.59.70.252:80 TIME_WAIT
tcp 0 0 127.0.0.1:50366 127.0.0.1:5432 ESTABLISHED
tcp 0 0 :::22 :::* LISTEN
tcp 0 0 :::5432 :::* LISTEN
udp 0 0 127.0.0.1:38564 127.0.0.1:38564 ESTABLISHED
udp 0 0 127.0.0.1:40 0.0.0.0:*
udp 0 0 0.0.0.0:45871 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp 0 0 0.0.0.0:111 0.0.0.0:*
udp 0 0 0.0.0.0:631 0.0.0.0:*
udp 0 0 0.0.0.0:4242 0.0.0.0:*

Active UNIX domain sockets (servers and established)

Proto RefCnt Flags Type State I-Node Path
unix 2 [ ACC ] STREAM LISTENING 10462 /var/lib/mysql/mysql.sock
unix 2 [ ACC ] STREAM LISTENING 10690 /tmp/.s.PGSQL.5432
unix 2 [ ACC ] STREAM LISTENING 9288 public/cleanup
unix 2 [ ACC ] STREAM LISTENING 9295 private/tlsmgr
unix 2 [ ACC ] STREAM LISTENING 9299 private/rewrite
unix 2 [ ACC ] STREAM LISTENING 9303 private/bounce
unix 2 [ ACC ] STREAM LISTENING 9307 private/defer
unix 2 [ ACC ] STREAM LISTENING 9311 private/trace
unix 2 [ ACC ] STREAM LISTENING 9315 private/verify
unix 2 [ ACC ] STREAM LISTENING 9319 public/flush
unix 2 [ ACC ] STREAM LISTENING 9323 private/proxymap
unix 2 [ ACC ] STREAM LISTENING 9327 private/proxywrite
unix 2 [ ACC ] STREAM LISTENING 9331 private/smtp
unix 2 [ ACC ] STREAM LISTENING 9335 private/relay
unix 2 [ ACC ] STREAM LISTENING 9339 public/showq
unix 2 [ ACC ] STREAM LISTENING 6920 @/com/ubuntu/upstart
unix 2 [ ACC ] STREAM LISTENING 9343 private/error
unix 2 [ ACC ] STREAM LISTENING 9347 private/retry
unix 2 [ ACC ] STREAM LISTENING 9351 private/discard
unix 2 [ ACC ] STREAM LISTENING 9355 private/local
unix 2 [ ACC ] STREAM LISTENING 9359 private/virtual
unix 2 [ ACC ] STREAM LISTENING 9363 private/lmtp
unix 2 [ ACC ] STREAM LISTENING 9367 private/anvil
unix 2 [ ACC ] STREAM LISTENING 9371 private/scache
unix 2 [ ] DGRAM 8572 /var/run/portreserve/socket
unix 2 [ ] DGRAM 7429 @/org/kernel/udev/udevd
unix 9 [ ] DGRAM 8601 /dev/log
unix 2 [ ACC ] STREAM LISTENING 12665 /var/lib/hadoop-hdfs/dn_socket
unix 2 [ ACC ] STREAM LISTENING 8691 /var/run/dbus/system_bus_socket
unix 2 [ ] DGRAM 7325290
unix 2 [ ] DGRAM 7319673
unix 2 [ ] DGRAM 3211603
unix 3 [ ] STREAM CONNECTED 1719017
unix 3 [ ] STREAM CONNECTED 1719016
unix 3 [ ] STREAM CONNECTED 13081 /var/lib/hadoop-hdfs/dn_socket
unix 3 [ ] STREAM CONNECTED 816813
unix 2 [ ] STREAM CONNECTED 35376
unix 2 [ ] STREAM CONNECTED 35186
unix 3 [ ] STREAM CONNECTED 34650 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 34649
unix 2 [ ] DGRAM 33718
unix 2 [ ] STREAM CONNECTED 32569
unix 3 [ ] STREAM CONNECTED 32564
unix 3 [ ] STREAM CONNECTED 32563
unix 2 [ ] STREAM CONNECTED 30643
unix 2 [ ] STREAM CONNECTED 30228
unix 2 [ ] STREAM CONNECTED 30097
unix 2 [ ] STREAM CONNECTED 26860
unix 2 [ ] STREAM CONNECTED 26401
unix 2 [ ] STREAM CONNECTED 25123
unix 2 [ ] STREAM CONNECTED 25116
unix 2 [ ] STREAM CONNECTED 25099
unix 2 [ ] STREAM CONNECTED 25085
unix 2 [ ] STREAM CONNECTED 25081
unix 3 [ ] STREAM CONNECTED 24286
unix 3 [ ] STREAM CONNECTED 24285
unix 2 [ ] STREAM CONNECTED 24243
unix 2 [ ] STREAM CONNECTED 20570
unix 2 [ ] STREAM CONNECTED 18746
unix 3 [ ] STREAM CONNECTED 18739
unix 3 [ ] STREAM CONNECTED 18738
unix 2 [ ] STREAM CONNECTED 18734
unix 2 [ ] STREAM CONNECTED 18263
unix 2 [ ] STREAM CONNECTED 14410
unix 3 [ ] STREAM CONNECTED 14401
unix 3 [ ] STREAM CONNECTED 14400
unix 2 [ ] STREAM CONNECTED 14036
unix 2 [ ] STREAM CONNECTED 13087
unix 2 [ ] STREAM CONNECTED 13048
unix 2 [ ] STREAM CONNECTED 12677
unix 3 [ ] STREAM CONNECTED 12669
unix 3 [ ] STREAM CONNECTED 12668
unix 2 [ ] STREAM CONNECTED 12663
unix 2 [ ] STREAM CONNECTED 12660
unix 2 [ ] STREAM CONNECTED 12655
unix 2 [ ] STREAM CONNECTED 10825
unix 2 [ ] STREAM CONNECTED 10817
unix 2 [ ] DGRAM 9500
unix 3 [ ] STREAM CONNECTED 9419
unix 3 [ ] STREAM CONNECTED 9418
unix 2 [ ] DGRAM 9401
unix 3 [ ] STREAM CONNECTED 9374
unix 3 [ ] STREAM CONNECTED 9373
unix 3 [ ] STREAM CONNECTED 9370
unix 3 [ ] STREAM CONNECTED 9369
unix 3 [ ] STREAM CONNECTED 9366
unix 3 [ ] STREAM CONNECTED 9365
unix 3 [ ] STREAM CONNECTED 9362
unix 3 [ ] STREAM CONNECTED 9361
unix 3 [ ] STREAM CONNECTED 9358
unix 3 [ ] STREAM CONNECTED 9357
unix 3 [ ] STREAM CONNECTED 9354
unix 3 [ ] STREAM CONNECTED 9353
unix 3 [ ] STREAM CONNECTED 9350
unix 3 [ ] STREAM CONNECTED 9349
unix 3 [ ] STREAM CONNECTED 9346
unix 3 [ ] STREAM CONNECTED 9345
unix 3 [ ] STREAM CONNECTED 9342
unix 3 [ ] STREAM CONNECTED 9341
unix 3 [ ] STREAM CONNECTED 9338
unix 3 [ ] STREAM CONNECTED 9337
unix 3 [ ] STREAM CONNECTED 9334
unix 3 [ ] STREAM CONNECTED 9333
unix 3 [ ] STREAM CONNECTED 9330
unix 3 [ ] STREAM CONNECTED 9329
unix 3 [ ] STREAM CONNECTED 9326
unix 3 [ ] STREAM CONNECTED 9325
unix 3 [ ] STREAM CONNECTED 9322
unix 3 [ ] STREAM CONNECTED 9321
unix 3 [ ] STREAM CONNECTED 9318
unix 3 [ ] STREAM CONNECTED 9317
unix 3 [ ] STREAM CONNECTED 9314
unix 3 [ ] STREAM CONNECTED 9313
unix 3 [ ] STREAM CONNECTED 9310
unix 3 [ ] STREAM CONNECTED 9309
unix 3 [ ] STREAM CONNECTED 9306
unix 3 [ ] STREAM CONNECTED 9305
unix 3 [ ] STREAM CONNECTED 9302
unix 3 [ ] STREAM CONNECTED 9301
unix 3 [ ] STREAM CONNECTED 9298
unix 3 [ ] STREAM CONNECTED 9297
unix 3 [ ] STREAM CONNECTED 9294
unix 3 [ ] STREAM CONNECTED 9293
unix 3 [ ] STREAM CONNECTED 9291
unix 3 [ ] STREAM CONNECTED 9290
unix 3 [ ] STREAM CONNECTED 9287
unix 3 [ ] STREAM CONNECTED 9286
unix 3 [ ] STREAM CONNECTED 9284
unix 3 [ ] STREAM CONNECTED 9283
unix 2 [ ] DGRAM 9253
unix 3 [ ] STREAM CONNECTED 9106 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 9105
unix 3 [ ] STREAM CONNECTED 9082 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 9081
unix 3 [ ] STREAM CONNECTED 8991 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 8990
unix 3 [ ] STREAM CONNECTED 8703 /var/run/dbus/system_bus_socket
unix 3 [ ] STREAM CONNECTED 8702
unix 3 [ ] STREAM CONNECTED 8696
unix 3 [ ] STREAM CONNECTED 8695
unix 3 [ ] DGRAM 7446
unix 3 [ ] DGRAM 7445

Go to the top


Services


tcp        0      0 0.0.0.0:39266               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:10020               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:2181                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8040                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:4200                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8042                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8010                0.0.0.0:*                   LISTEN      
tcp        0      0 10.0.2.15:50090             0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN      
tcp        0      0 127.0.0.1:52235             0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8141                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:45454               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:111                 0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:10000               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:19888               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8080                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:10033               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8050                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:4242                0.0.0.0:*                   LISTEN      
tcp        0      0 10.0.2.15:8020              0.0.0.0:*                   LISTEN      
tcp        0      0 10.0.2.15:50070             0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8088                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:10200               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8440                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:11000               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:5432                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8888                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8025                0.0.0.0:*                   LISTEN      
tcp        0      0 127.0.0.1:11001             0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8441                0.0.0.0:*                   LISTEN      
tcp        0      0 127.0.0.1:25                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:13562               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:50010               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:9083                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:50075               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8188                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8030                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8670                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:50111               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:50079               0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:8000                0.0.0.0:*                   LISTEN      
tcp        0      0 0.0.0.0:2049                0.0.0.0:*                   LISTEN      
tcp        0      0 :::22                       :::*                        LISTEN      
tcp        0      0 :::5432                     :::*                        LISTEN      
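
This listing can be regenerated and tied back to the owning processes with stock tools; a quick sketch (the program column needs root):

# Listening TCP sockets with owning PID/program name
netstat -ltnp | awk 'NR>2 {print $4 "\t" $7}'

# Spot-check a few of the HDP ports shown above
for p in 8020 50070 8080 10000 2181; do
    netstat -ltn | grep -q ":$p " && echo "port $p: listening"
done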

Cron

-rw-------. 1 root root    0 2013-11-23 12:43 /etc/cron.deny
-rw-r--r--. 1 root root  457 2011-09-27 01:33 /etc/crontab

/etc/cron.d:
total 8
-rw-r--r--. 1 root root 113 2013-11-23 12:43 0hourly
-rw-------  1 root root 108 2014-11-12 14:58 raid-check

/etc/cron.daily:
total 16
-rwx------  1 root root 118 2015-06-17 17:23 cups
-rwxr-xr-x. 1 root root 196 2013-07-18 10:08 logrotate
-rwxr-xr-x  1 root root 905 2013-02-22 02:13 makewhatis.cron
-rwxr-xr-x  1 root root 365 2009-10-16 05:52 tmpwatch

/etc/cron.hourly:
total 4
-rwxr-xr-x. 1 root root 409 2013-11-23 12:43 0anacron

/etc/cron.monthly:
total 0

/etc/cron.weekly:
total 0


Crontabs
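
No per-user crontabs were captured on this host. A sketch of how they can be enumerated (assumes root; user names taken from /etc/passwd):

# Print each user's crontab, tagged with the user name; silent when absent
while IFS=: read -r user _; do
    crontab -l -u "$user" 2>/dev/null | sed "s/^/$user: /"
done < /etc/passwd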

Go to the top


Optional modules


Active Plug-in

PostgreSQL MySQL Apache SQLite Emp7 Virtual Hortonworks

Go to the top


PostgreSQL

Plug-in version: 1.0.5a

Owner

Database Cluster Owner: postgres

Database Statistics

sandbox.hortonworks.com.postgres.5432.postgres.htm
sandbox.hortonworks.com.postgres.5432.ambari.htm
sandbox.hortonworks.com.postgres.5432.ambarirca.htm

Configuration files

PGHOME= /var/lib/pgsql/data


./postgresql.conf

listen_addresses = '*'        # what IP address(es) to listen on;
max_connections = 100			# (change requires restart)
shared_buffers = 32MB			# min 128kB
logging_collector = on			# Enable capturing of stderr and csvlog
log_directory = 'pg_log'		# directory where log files are written,
log_filename = 'postgresql-%a.log'	# log file name pattern,
log_truncate_on_rotation = on		# If on, an existing log file of the
log_rotation_age = 1d			# Automatic rotation of logfiles will
log_rotation_size = 0			# Automatic rotation of logfiles will 
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'			# locale for system error message
lc_monetary = 'en_US.UTF-8'			# locale for monetary formatting
lc_numeric = 'en_US.UTF-8'			# locale for number formatting
lc_time = 'en_US.UTF-8'				# locale for time formatting
default_text_search_config = 'pg_catalog.english'

./pg_hba.conf
local   all   postgres                               ident
host    all   postgres         127.0.0.1/32          ident
host    all   postgres         ::1/128               ident
local   all   ambari,mapred                          md5
host    all   ambari,mapred    0.0.0.0/0             md5
host    all   ambari,mapred    ::/0                  md5
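
Under these rules the postgres account authenticates via ident on local connections, while ambari and mapred must present an md5 password from any address. A minimal check of both paths (the ambari password is stored by Ambari, not shown here):

# ident path: run psql as the postgres OS user
sudo -u postgres psql -c 'SELECT version();'

# md5 path: prompts for the ambari password
psql -h 127.0.0.1 -U ambari -d ambari -c '\conninfo'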

Summary

Users
postgres:x:26:26:PostgreSQL Server:/var/lib/pgsql:/bin/bash

Active processes

postgres /usr/bin/postmaster -p 5432 -D /var/lib/pgsql/data
postgres postgres: logger process
postgres postgres: writer process
postgres postgres: wal writer process
postgres postgres: autovacuum launcher process
postgres postgres: stats collector process
postgres postgres: ambari ambari 127.0.0.1(34891) idle
postgres postgres: ambari ambari 127.0.0.1(34893) idle
postgres postgres: ambari ambari 127.0.0.1(34906) idle
postgres postgres: ambari ambari 127.0.0.1(35047) idle
postgres postgres: ambari ambari 127.0.0.1(35048) idle
postgres postgres: ambari ambari 127.0.0.1(35049) idle
postgres postgres: ambari ambari 127.0.0.1(50366) idle
postgres postgres: ambari ambari 127.0.0.1(46603) idle
postgres postgres: ambari ambari 127.0.0.1(37597) idle
postgres postgres: ambari ambari 127.0.0.1(37609) idle
postgres postgres: ambari ambari 127.0.0.1(37619) idle
postgres postgres: ambari ambari 127.0.0.1(37805) idle

Shared memory

0x0052e2c1      0 postgres 600 37879808 16
0x0052e2c1 163844 postgres 600 17
0x0052e2c2 196613 postgres 600 17
0x0052e2c3 229382 postgres 600 17
0x0052e2c4 262151 postgres 600 17
0x0052e2c5 294920 postgres 600 17
0x0052e2c6 327689 postgres 600 17
0x0052e2c7 360458 postgres 600 17

Software packages

postgresql-server-8.4.20-3.el6_6 x86_64
postgresql-8.4.20-3.el6_6 x86_64
postgresql-libs-8.4.20-3.el6_6 x86_64

DB Server Logs

total 56
-rw------- 1 postgres postgres  7801 2015-07-21 23:58 postgresql-Tue.log
-rw------- 1 postgres postgres  1932 2015-07-22 00:08 postgresql-Wed.log
-rw------- 1 postgres postgres 40236 2015-07-26 18:32 postgresql-Sun.log
-rw------- 1 postgres postgres  1200 2015-07-27 15:37 postgresql-Mon.log

Last Log excerpt

FATAL: no pg_hba.conf entry for host "[local]", user "root", database "root", SSL off
ERROR: division by zero
STATEMENT: SELECT 'Table', ''||sum(heap_blks_read) as heap_read, ''||sum(heap_blks_hit) as heap_hit, ''||100*sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio FROM pg_statio_user_tables;
FATAL: no pg_hba.conf entry for host "[local]", user "root", database "root", SSL off
ERROR: division by zero
STATEMENT: SELECT 'Table', ''||sum(heap_blks_read) as heap_read, ''||sum(heap_blks_hit) as heap_hit, ''||100*sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio FROM pg_statio_user_tables;
FATAL: no pg_hba.conf entry for host "[local]", user "root", database "root", SSL off
ERROR: division by zero
STATEMENT: SELECT 'Table', ''||sum(heap_blks_read) as heap_read, ''||sum(heap_blks_hit) as heap_hit, ''||100*sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) as ratio FROM pg_statio_user_tables;
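
The FATAL lines record the collector connecting as root, for which pg_hba.conf above has no entry. The repeated "division by zero" comes from the statistics query itself: when no user table has been read yet, sum(heap_blks_hit) + sum(heap_blks_read) is 0. A guarded variant of the same query (a sketch, not the collector's own fix):

sudo -u postgres psql -d postgres -c "
SELECT 'Table',
       sum(heap_blks_read) AS heap_read,
       sum(heap_blks_hit)  AS heap_hit,
       100 * sum(heap_blks_hit)
           / NULLIF(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS ratio
FROM pg_statio_user_tables;"
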
Write-Ahead Logs

total 32772
drwx------ 2 postgres postgres     4096 2015-07-21 15:43 archive_status
-rw------- 1 postgres postgres 16777216 2015-07-27 07:11 000000010000000000000003
-rw------- 1 postgres postgres 16777216 2015-07-27 15:36 000000010000000000000002

Go to the top


MySQL DB

Plug-in version: 1.0.2

Summary

Version
mysqladmin  Ver 8.42 Distrib 5.1.73, for redhat-linux-gnu on x86_64
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Server version		5.1.73
Protocol version	10
Connection		Localhost via UNIX socket
UNIX socket		/var/lib/mysql/mysql.sock
Uptime:			1 day 2 hours 5 min 12 sec

Threads: 16  Questions: 185170  Slow queries: 0  Opens: 7967  Flush tables: 1  Open tables: 64  Queries per second avg: 1.971
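
The advertised average is simply Questions divided by uptime in seconds: 1 day 2 h 5 min 12 s = 93912 s, and 185170 / 93912 ≈ 1.971.

echo "scale=3; 185170 / 93912" | bc    # -> 1.971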

Databases

information_schema hive mysql ranger ranger_audit test

Users

mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
Active processes

root 1295 1 0 Jul26 ? 00:00:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql 1397 1295 0 Jul26 ? 00:01:05 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
root 1602 1 3 Jul26 ? 00:39:14 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -server -XX:NewRatio=3 -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit -XX:CMSInitiatingOccupancyFraction=60 -Dsun.zip.disableMemoryMapping=true -Xms512m -Xmx2048m -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -cp /etc/ambari-server/conf:/usr/lib/ambari-server/*:/usr/share/java/mysql-connector-java.jar:/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar org.apache.ambari.server.controller.AmbariServer
root 2516 1 0 Jul26 ? 00:00:00 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter
hdfs 2560 2516 0 Jul26 ? 00:06:12 jsvc.exec -Dproc_nfs3 -outfile /var/log/hadoop/root/nfs3_jsvc.out -errfile /var/log/hadoop/root/nfs3_jsvc.err -pidfile /var/run/hadoop/root/hadoop_privileged_nfs3.pid -nodetach -user hdfs -cp /etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf -Xmx250m -Dhdp.version=2.3.0.0-2557 -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhdp.version= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop/ -Dhadoop.log.file=hadoop-hdfs-nfs3-sandbox.hortonworks.com.log -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.id.str= -Dhadoop.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/root -Dhadoop.id.str=hdfs -Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.nfs.nfs3.PrivilegedNfsGatewayStarter
yarn 3037 1 2 Jul26 ? 00:29:21 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_resourcemanager -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-resourcemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
yarn 3038 1 0 Jul26 ? 00:11:52 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_nodemanager -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-nodemanager-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
yarn 3042 1 0 Jul26 ? 00:05:08 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dproc_historyserver -Xmx250m -Dhdp.version=2.3.0.0-2557 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.home.dir= -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn -Dyarn.log.dir=/var/log/hadoop-yarn/yarn -Dhadoop.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.log.file=yarn-yarn-historyserver-sandbox.hortonworks.com.log -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-nodemanager -Dhadoop.home.dir=/usr/hdp/2.3.0.0-2557/hadoop -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA -Djava.library.path=:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native:/usr/hdp/2.3.0.0-2557/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.3.0.0-2557/hadoop/lib/native -classpath /etc/hadoop/conf:/etc/hadoop/conf:/etc/hadoop/conf:/usr/hdp/2.3.0.0-2557/hadoop/lib/*:/usr/hdp/2.3.0.0-2557/hadoop/.//*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/./:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-hdfs/.//*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-yarn/.//*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/lib/*:/usr/hdp/2.3.0.0-2557/hadoop-mapreduce/.//*:::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf::/usr/share/java/mysql-connector-java-5.1.17.jar:/usr/share/java/mysql-connector-java-5.1.31-bin.jar:/usr/share/java/mysql-connector-java.jar:/usr/hdp/2.3.0.0-2557/tez/*:/usr/hdp/2.3.0.0-2557/tez/lib/*:/usr/hdp/2.3.0.0-2557/tez/conf:/usr/hdp/current/hadoop-yarn-nodemanager/.//*:/usr/hdp/current/hadoop-yarn-nodemanager/lib/*:/etc/hadoop/conf/ahs-config/log4j.properties org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
Software packages

mysql-server-5.1.73-5.el6_6 x86_64
mysql-libs-5.1.73-5.el6_6 x86_64
mysql-connector-java-5.1.17-6.el6 noarch
mysql-5.1.73-5.el6_6 x86_64
perl-DBD-MySQL-4.013-3.el6 x86_64

Database Statistics

Configuration Files
-rw-r--r-- 1 root root 251 2015-07-21 15:58 /etc/my.cnf

Configuration Parameters
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
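
The socket path above is what local clients use; port 3306 is also open per the Services list. A minimal connectivity check through the socket (credentials are an assumption; the sandbox's actual password may differ):

mysql --socket=/var/lib/mysql/mysql.sock -u root -p -e 'SHOW DATABASES;'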

Go to the top


Apache HTTP Server

Plug-in version: 1.0.1

Configuration files

/etc/httpd/conf/httpd.conf
-rw-r--r-- 1 root root 34418 2014-08-15 06:57 /etc/httpd/conf/httpd.conf

Most Important Configuration directives

/etc/httpd/conf/httpd.conf
ServerRoot "/etc/httpd"
KeepAlive Off
MaxKeepAliveRequests 100
KeepAliveTimeout 15
Listen 80
DocumentRoot "/var/www/html"
ErrorLog logs/error_log
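
What looks like a stock setup bound only to port 80, with KeepAlive disabled. The configuration can be verified without touching the running server:

# Syntax check, then a summary of the parsed vhosts/settings
apachectl configtest
httpd -S

# Confirm the DocumentRoot answers locally
curl -sI http://localhost/ | head -1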

Active Processes

root      1239     1  0 Jul26 ?        00:00:07 /usr/sbin/httpd
513       2114     1  0 Jul26 ?        00:02:53 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,CONSOLE -cp /usr/hdp/current/zookeeper-server/bin/../build/classes:/usr/hdp/current/zookeeper-server/bin/../build/lib/*.jar:/usr/hdp/current/zookeeper-server/bin/../lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-provider-api-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared4-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-http-2.4.jar:/usr/hdp/current/zookeeper-server/bin/../lib/wagon-file-1.0-beta-6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/slf4j-api-1.6.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-utils-3.0.8.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-interpolation-1.11.jar:/usr/hdp/current/zookeeper-server/bin/../lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/netty-3.7.0.Final.jar:/usr/hdp/current/zookeeper-server/bin/../lib/nekohtml-1.9.6.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-settings-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-project-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-profile-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-model-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-artifact-2.2.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/log4j-1.2.16.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jsoup-1.7.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/jline-0.9.94.jar:/usr/hdp/current/zookeeper-server/bin/../lib/httpcore-4.2.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/httpclient-4.2.3.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-logging-1.1.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-io-2.2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/commons-codec-1.6.jar:/usr/hdp/current/zookeeper-server/bin/../lib/classworlds-1.1-alpha-2.jar:/usr/hdp/current/zookeeper-server/bin/../lib/backport-util-concurrent-3.1.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-launcher-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../lib/ant-1.8.0.jar:/usr/hdp/current/zookeeper-server/bin/../zookeeper-3.4.6.2.3.0.0-2557.jar:/usr/hdp/current/zookeeper-server/bin/../src/java/lib/*.jar:/etc/zookeeper/conf::/usr/share/zookeeper/*:/usr/share/zookeeper/* -Xmx1024m -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false org.apache.zookeeper.server.quorum.QuorumPeerMain /etc/zookeeper/conf/zoo.cfg
oozie     2606     1  8 Jul26 ?        01:45:38 /usr/lib/jvm/java-1.7.0-openjdk.x86_64/bin/java -Djava.util.logging.config.file=/usr/hdp/current/oozie-server/oozie-server/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dhdp.version=2.3.0.0-2557 -Xmx2048m -XX:MaxPermSize=512m -Xmx2048m -XX:MaxPermSize=512m -Dderby.stream.error.file=/var/log/oozie/derby.log -Doozie.home.dir=/usr/hdp/2.3.0.0-2557/oozie -Doozie.config.dir=/usr/hdp/current/oozie-server/conf -Doozie.log.dir=/var/log/oozie -Doozie.data.dir=/hadoop/oozie/data -Doozie.instance.id=sandbox.hortonworks.com -Doozie.config.file=oozie-site.xml -Doozie.log4j.file=oozie-log4j.properties -Doozie.log4j.reload=10 -Doozie.http.hostname=sandbox.hortonworks.com -Doozie.admin.port=11001 -Doozie.http.port=11000 -Doozie.https.port=11443 -Doozie.base.url=http://sandbox.hortonworks.com:11000/oozie -Doozie.https.keystore.file=/home/oozie/.keystore -Doozie.https.keystore.pass=password -Djava.library.path=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64 -Djava.endorsed.dirs=/usr/lib/bigtop-tomcat/endorsed -classpath /usr/lib/bigtop-tomcat/bin/bootstrap.jar -Dcatalina.base=/usr/hdp/current/oozie-server/oozie-server -Dcatalina.home=/usr/lib/bigtop-tomcat -Djava.io.tmpdir=/var/tmp/oozie org.apache.catalina.startup.Bootstrap start
apache   29020  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29021  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29022  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29023  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29024  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29025  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29026  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd
apache   29027  1239  0 Jul26 ?        00:00:00 /usr/sbin/httpd

HTDOCS (using common location)

drwxr-xr-x 6 root root 4096 2015-07-21 16:13 /var/www

Go to the top


emp7 SQLite Benchmark

Plug-in version: 1.0.0

/usr/bin/sqlite3

SQLite version: 3.6.20

EMP7 Benchmark:

105413504

real	0m13.941s
user	0m13.168s
sys	0m0.066s
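
The benchmark's schema and query are internal to the plug-in and not shown here; only the result (105413504) and the wall-clock split are reported. The general shape of such a run, with an illustrative table and query:

# Hypothetical workload: time an aggregate on a scratch database
time sqlite3 /tmp/bench.db \
  'CREATE TABLE IF NOT EXISTS emp(id INTEGER PRIMARY KEY, val INTEGER);
   SELECT count(*) FROM emp;'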

Go to the top


Virtual

Plug-in version: 1.0.5
Summary
System seems to be a VirtualBox Guest


OS Release

puppetlabs-release-6-7.noarch
epel-release-6-8.noarch
centos-release-6-6.el6.centos.12.2.x86_64

Processor

4 model name : Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
4 flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good xtopology nonstop_tsc pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx rdrand lahf_lm abm avx2

Memory

             total       used       free     shared    buffers     cached
Mem:       8059344    5827336    2232008      10852     164236     696152
-/+ buffers/cache:    4966948    3092396
Swap:      5119996      76096    5043900
Active processes
Packages

Kernel ring buffer messages related to virtualization

ACPI: RSDP 00000000000e0000 00024 (v02 VBOX  )
ACPI: XSDT 00000000dfff0030 0003C (v01 VBOX   VBOXXSDT 00000001 ASL  00000061)
ACPI: FACP 00000000dfff00f0 000F4 (v04 VBOX   VBOXFACP 00000001 ASL  00000061)
ACPI: DSDT 00000000dfff0480 01BF1 (v01 VBOX   VBOXBIOS 00000002 INTL 20100528)
ACPI: APIC 00000000dfff0240 0006C (v02 VBOX   VBOXAPIC 00000001 ASL  00000061)
ACPI: SSDT 00000000dfff02b0 001CC (v01 VBOX   VBOXCPUT 00000002 INTL 20100528)
ata1.00: ATA-6: VBOX HARDDISK, 1.0, max UDMA/133
scsi 0:0:0:0: Direct-Access     ATA      VBOX HARDDISK    1.0  PQ: 0 ANSI: 5
[drm] Initialized vboxvideo 1.0.0 20090303 for 0000:00:02.0 on minor 0
vboxguest 0000:00:04.0: PCI INT A -> GSI 20 (level, low) -> IRQ 20
vboxguest: misc device minor 57, IRQ 20, I/O port d040, MMIO at 00000000f0400000 (size 0x400000)
vboxguest: Successfully loaded version 4.3.22 (interface 0x00010004)
vboxsf: Successfully loaded version 4.3.22 (interface 0x00010004)
VBoxService 4.3.22 r98236 (verbosity: 0) linux.amd64 (Feb 12 2015 16:53:43) release log
00:00:00.000219 main     Executable: /opt/VBoxGuestAdditions-4.3.22/sbin/VBoxService
Found: VirtualBox
Booting paravirtualized kernel on bare hardware
Found: Paravirtualized Kernel
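
The verdict above rests on exactly these markers; they can be re-checked by hand:

# ACPI tables and boot messages betray VirtualBox
dmesg | grep -i -e vbox -e paravirtualized

# DMI product string (needs root)
dmidecode -s system-product-name    # -> VirtualBox

# Guest-additions kernel modules
lsmod | grep -e vboxguest -e vboxsf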

Go to the top


Hortonworks

Plug-in version: 1.0.0

Release

Name        : hdp-select                   Relocations: (not relocatable)
Version     : 2.3.0.0                           Vendor: (none)
Release     : 2557.el6                      Build Date: Tue 14 Jul 2015 05:07:53 PM UTC
Install Date: Tue 21 Jul 2015 03:46:00 PM UTC      Build Host: ip-10-0-0-89.ec2.internal
Group       : Distro/utilities              Source RPM: hdp-select-2.3.0.0-2557.el6.src.rpm
Size        : 19909                            License: APL2
Signature   : RSA/SHA1, Tue 14 Jul 2015 05:19:36 PM UTC, Key ID b9733a7a07513cad
Summary     : hdp-select Distro select package
Description :
hdp-select-2.3.0.0 select package

Configuration

Tools: ambari-server   ambari-agent   ams-hbase   atlas   falcon   flume   hadoop   hbase   hive   hue   kafka   knox   oozie   pig   slider   spark   sqoop   storm   tez   zookeeper  
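
hdp-select maintains the /usr/hdp/current symlinks that the tools above resolve through; its state can be queried directly (subcommands as shipped with HDP 2.3):

# Version each component currently points at
hdp-select status

# All HDP versions installed side by side
hdp-select versions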

ambari-server

/etc/ambari-server/conf:
-rw-r--r-- 1 root root 3629 2015-07-21 15:44 ambari.properties
-rw-r--r-- 1 root root  286 2015-07-20 04:14 krb5JAASLogin.conf
-rw-r--r-- 1 root root 2379 2015-07-20 04:14 log4j.properties
-rw-r----- 1 root root    7 2015-07-21 15:43 password.dat

ambari.properties

jdk1.7.dest-file=jdk-7u67-linux-x64.tar.gz
agent.package.install.task.timeout=1800
server.connection.max.idle.millis=900000
bootstrap.script=/usr/lib/python2.6/site-packages/ambari_server/bootstrap.py
server.version.file=/var/lib/ambari-server/resources/version
api.authenticate=true
server.persistence.type=local
jdk1.8.jcpol-url=http://public-repo-1.hortonworks.com/ARTIFACTS/jce_policy-8.zip
jdk1.8.dest-file=jdk-8u40-linux-x64.tar.gz
common.services.path=/var/lib/ambari-server/resources/common-services
ambari-server.user=root
webapp.dir=/usr/lib/ambari-server/web
agent.threadpool.size.max=25
ambari.python.wrap=ambari-python-wrap
jdk1.8.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-8u40-linux-x64.tar.gz
jdk1.7.url=http://public-repo-1.hortonworks.com/ARTIFACTS/jdk-7u67-linux-x64.tar.gz
server.jdbc.user.name=ambari
server.os_family=redhat6
java.home=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
server.jdbc.postgres.schema=ambari
java.releases=jdk1.8,jdk1.7
skip.service.checks=false
shared.resources.dir=/usr/lib/ambari-server/lib/ambari_commons/resources
recommendations.dir=/var/run/ambari-server/stack-recommendations
ulimit.open.files=10000
jdk1.8.desc=Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
server.os_type=centos6
resources.dir=/var/lib/ambari-server/resources
custom.action.definitions=/var/lib/ambari-server/resources/custom_action_definitions
views.request.connect.timeout.millis=5000
jdk1.7.re=(jdk.*)/jre
server.execution.scheduler.maxDbConnections=5
jdk1.7.jcpol-url=http://public-repo-1.hortonworks.com/ARTIFACTS/UnlimitedJCEPolicyJDK7.zip
bootstrap.setup_agent.script=/usr/lib/python2.6/site-packages/ambari_server/setupAgent.py
jdk1.8.jcpol-file=jce_policy-8.zip
server.http.session.inactive_timeout=1800
jdk1.7.jcpol-file=UnlimitedJCEPolicyJDK7.zip
server.execution.scheduler.misfire.toleration.minutes=480
security.server.keys_dir=/var/lib/ambari-server/keys
stackadvisor.script=/var/lib/ambari-server/resources/scripts/stack_advisor.py
server.tmp.dir=/var/lib/ambari-server/data/tmp
server.execution.scheduler.maxThreads=5
metadata.path=/var/lib/ambari-server/resources/stacks
server.fqdn.service.url=http://169.254.169.254/latest/meta-data/public-hostname
bootstrap.dir=/var/run/ambari-server/bootstrap
jdk1.7.home=/usr/jdk64/
kerberos.keytab.cache.dir=/var/lib/ambari-server/data/cache
jdk1.8.home=/usr/jdk64/
jdk1.8.re=(jdk.*)/jre
agent.task.timeout=900
client.threadpool.size.max=25
jdk1.7.desc=Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
server.jdbc.user.passwd=/etc/ambari-server/conf/password.dat
server.execution.scheduler.isClustered=false
server.stages.parallel=true
views.request.read.timeout.millis=10000
server.jdbc.database=postgres
server.jdbc.database_name=ambari
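
With api.authenticate=true and the server listening on 8080 (see Services), the REST API is the quickest liveness probe. admin/admin is the sandbox default and an assumption here:

curl -s -u admin:admin 'http://sandbox.hortonworks.com:8080/api/v1/clusters' \
  | grep cluster_name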

krb5JAASLogin.conf

com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    renewTGT=false
    doNotPrompt=true
    useKeyTab=true
    keyTab="/etc/security/keytabs/ambari.keytab"
    principal="ambari@EXAMPLE.COM"
    storeKey=true
    useTicketCache=false;
};

log4j.properties

ambari.log.dir=/var/log/ambari-server
ambari.log.file=ambari-server.log
ambari.config-changes.file=ambari-config-changes.log
ambari.alerts.file=ambari-alerts.log
log4j.rootLogger=INFO,file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ambari.log.dir}/${ambari.log.file}
log4j.appender.file.MaxFileSize=80MB
log4j.appender.file.MaxBackupIndex=60
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{DATE} %5p [%t] %c{1}:%L - %m%n
log4j.logger.configchange=INFO,configchange
log4j.additivity.configchange=false
log4j.appender.configchange=org.apache.log4j.FileAppender
log4j.appender.configchange.File=${ambari.log.dir}/${ambari.config-changes.file}
log4j.appender.configchange.layout=org.apache.log4j.PatternLayout
log4j.appender.configchange.layout.ConversionPattern=%d{ISO8601} %5p - %m%n
log4j.logger.alerts=INFO,alerts
log4j.additivity.alerts=false
log4j.appender.alerts=org.apache.log4j.FileAppender
log4j.appender.alerts.File=${ambari.log.dir}/${ambari.alerts.file}
log4j.appender.alerts.layout=org.apache.log4j.PatternLayout
log4j.appender.alerts.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.logger.org.apache.hadoop.yarn.client=WARN
log4j.logger.org.apache.slider.common.tools.SliderUtils=WARN
log4j.logger.org.apache.ambari.server.security.authorization=WARN

password.dat

bigdata

ambari-agent

/etc/ambari-agent/conf:
-rwxr-xr-x 1 root root 1638 2015-07-21 15:44 ambari-agent.ini
-rwxr-xr-x 1 root root 1624 2015-07-20 04:16 ambari-agent.ini.bak
-rwxr-xr-x 1 root root 2694 2015-07-20 04:16 logging.conf.sample

ambari-agent.ini

[server]
hostname=sandbox.hortonworks.com
url_port=8440
secured_url_port=8441
[agent]
prefix=/var/lib/ambari-agent/data
tmp_dir=/var/lib/ambari-agent/data/tmp
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
data_cleanup_max_size_MB = 100
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true
run_as_user=root
parallel_execution=0
[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
[services]
pidLookupPath=/var/run/
[heartbeat]
state_interval=6
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
  /etc/sqoop,/etc/ganglia,
  /var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
  /var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
; 0 - unlimited
log_lines_count=300
[logging]
syslog_enabled=0

ambari-agent.ini.bak

[server]
hostname=localhost
url_port=8440
secured_url_port=8441
[agent]
prefix=/var/lib/ambari-agent/data
tmp_dir=/var/lib/ambari-agent/data/tmp
;loglevel=(DEBUG/INFO)
loglevel=INFO
data_cleanup_interval=86400
data_cleanup_max_age=2592000
data_cleanup_max_size_MB = 100
ping_port=8670
cache_dir=/var/lib/ambari-agent/cache
tolerate_download_failures=true
run_as_user=root
parallel_execution=0
[security]
keysdir=/var/lib/ambari-agent/keys
server_crt=ca.crt
passphrase_env_var_name=AMBARI_PASSPHRASE
[services]
pidLookupPath=/var/run/
[heartbeat]
state_interval=6
dirs=/etc/hadoop,/etc/hadoop/conf,/etc/hbase,/etc/hcatalog,/etc/hive,/etc/oozie,
  /etc/sqoop,/etc/ganglia,
  /var/run/hadoop,/var/run/zookeeper,/var/run/hbase,/var/run/templeton,/var/run/oozie,
  /var/log/hadoop,/var/log/zookeeper,/var/log/hbase,/var/run/templeton,/var/log/hive
; 0 - unlimited
log_lines_count=300
[logging]
syslog_enabled=0

logging.conf.sample

[loggers]
keys=root,Controller
[handlers]
keys=logfile
[formatters]
keys=logfileformatter
[logger_root]
level=WARNING
handlers=logfile
[logger_Controller]
level=DEBUG
handlers=logfile
qualname=Controller
[formatter_logfileformatter]
format=%(levelname)s %(asctime)s %(filename)s:%(lineno)d - %(message)s
[handler_logfile]
class=handlers.RotatingFileHandler
level=DEBUG
args=('/var/log/ambari-agent/ambari-agent.log',"a", 10000000, 25)
formatter=logfileformatter

ams-hbase

/etc/ams-hbase/conf:
-rw-r--r-- 1 ams  hadoop 2022 2015-07-21 16:02 hadoop-metrics2-hbase.properties
-rw-r--r-- 1 root root   4023 2015-07-20 03:58 hbase-env.cmd
-rw-r--r-- 1 ams  root   3352 2015-07-21 16:02 hbase-env.sh
-rw-r--r-- 1 ams  hadoop  401 2015-07-26 18:50 hbase-policy.xml
-rw-r--r-- 1 ams  hadoop 4337 2015-07-26 18:50 hbase-site.xml
-rw-r--r-- 1 ams  hadoop 4241 2015-07-21 16:02 log4j.properties
-rw-r--r-- 1 ams  root     11 2015-07-21 16:02 regionservers

hadoop-metrics2-hbase.properties

hbase.extendedperiod = 3600
hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
hbase.period=10
hbase.collector=sandbox.hortonworks.com:6188
jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
jvm.period=10
jvm.collector=sandbox.hortonworks.com:6188
rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
rpc.period=10
rpc.collector=sandbox.hortonworks.com:6188
*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
*.sink.timeline.slave.host.name=sandbox.hortonworks.com
hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
hbase.sink.timeline.period=10
hbase.sink.timeline.collector=sandbox.hortonworks.com:6188
hbase.sink.timeline.serviceName-prefix=ams
*.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
hbase.*.source.filter.exclude=*Regions*
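
All three sinks post to the timeline metrics collector on port 6188. Whether the collector is answering can be checked directly (endpoint path as used by Ambari Metrics in this release; treat it as an assumption):

curl -s -o /dev/null -w '%{http_code}\n' \
  'http://sandbox.hortonworks.com:6188/ws/v1/timeline/metrics'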

hbase-env.cmd

set HBASE_OPTS="-XX:+UseConcMarkSweepGC" "-Djava.net.preferIPv4Stack=true"

hbase-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/etc/ams-hbase/conf}
export HBASE_CLASSPATH=${HBASE_CLASSPATH}
export HBASE_HEAPSIZE=1024m
export HBASE_OPTS="-XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/ambari-metrics-collector/hs_err_pid%p.log -Djava.io.tmpdir=/var/lib/ambari-metrics-collector/hbase-tmp"
export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/ambari-metrics-collector/gc.log-`date +'%Y%m%d%H%M'`"
export HBASE_MASTER_OPTS=" -XX:PermSize=64m -XX:MaxPermSize=128m -Xms1024m -Xmx1024m -Xmn256m -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly"
export HBASE_REGIONSERVER_OPTS="-XX:MaxPermSize=128m -Xmn96m -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms512m -Xmx512m"
export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
export HBASE_LOG_DIR=/var/log/ambari-metrics-collector
export HBASE_PID_DIR=/var/run/ambari-metrics-collector/
export HBASE_MANAGES_ZK=false
_HADOOP_NATIVE_LIB="/usr/lib/ams-hbase/lib/hadoop-native/"
export HBASE_OPTS="$HBASE_OPTS -Djava.library.path=${_HADOOP_NATIVE_LIB}"
export HADOOP_HOME=/usr/lib/ams-hbase/
    

hbase-policy.xml

<!--Sun Jul 26 18:50:08 2015-->
    <configuration>

    <property>
      <name>security.admin.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.client.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.masterregion.protocol.acl</name>
      <value>*</value>
    </property>

  </configuration>

hbase-site.xml

<!--Sun Jul 26 18:50:08 2015-->
    <configuration>

    <property>
      <name>hbase.client.scanner.caching</name>
      <value>10000</value>
    </property>

    <property>
      <name>hbase.client.scanner.timeout.period</name>
      <value>900000</value>
    </property>

    <property>
      <name>hbase.cluster.distributed</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.hregion.majorcompaction</name>
      <value>0</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.block.multiplier</name>
      <value>4</value>
    </property>

    <property>
      <name>hbase.hregion.memstore.flush.size</name>
      <value>134217728</value>
    </property>

    <property>
      <name>hbase.hstore.blockingStoreFiles</name>
      <value>200</value>
    </property>

    <property>
      <name>hbase.hstore.flusher.count</name>
      <value>2</value>
    </property>

    <property>
      <name>hbase.local.dir</name>
      <value>${hbase.tmp.dir}/local</value>
    </property>

    <property>
      <name>hbase.master.info.bindAddress</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>hbase.master.info.port</name>
      <value>61310</value>
    </property>

    <property>
      <name>hbase.master.port</name>
      <value>61300</value>
    </property>

    <property>
      <name>hbase.master.wait.on.regionservers.mintostart</name>
      <value>1</value>
    </property>

    <property>
      <name>hbase.regionserver.global.memstore.lowerLimit</name>
      <value>0.4</value>
    </property>

    <property>
      <name>hbase.regionserver.global.memstore.upperLimit</name>
      <value>0.5</value>
    </property>

    <property>
      <name>hbase.regionserver.info.port</name>
      <value>61330</value>
    </property>

    <property>
      <name>hbase.regionserver.port</name>
      <value>61320</value>
    </property>

    <property>
      <name>hbase.regionserver.thread.compaction.large</name>
      <value>2</value>
    </property>

    <property>
      <name>hbase.regionserver.thread.compaction.small</name>
      <value>3</value>
    </property>

    <property>
      <name>hbase.replication</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.rootdir</name>
      <value>file:///var/lib/ambari-metrics-collector/hbase</value>
    </property>

    <property>
      <name>hbase.snapshot.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>hbase.tmp.dir</name>
      <value>/var/lib/ambari-metrics-collector/hbase-tmp</value>
    </property>

    <property>
      <name>hbase.zookeeper.leaderport</name>
      <value>61388</value>
    </property>

    <property>
      <name>hbase.zookeeper.peerport</name>
      <value>61288</value>
    </property>

    <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>61181</value>
    </property>

    <property>
      <name>hbase.zookeeper.property.dataDir</name>
      <value>${hbase.tmp.dir}/zookeeper</value>
    </property>

    <property>
      <name>hbase.zookeeper.quorum</name>
      <value>localhost</value>
    </property>

    <property>
      <name>hfile.block.cache.size</name>
      <value>0.3</value>
    </property>

    <property>
      <name>phoenix.groupby.maxCacheSize</name>
      <value>307200000</value>
    </property>

    <property>
      <name>phoenix.query.maxGlobalMemoryPercentage</name>
      <value>15</value>
    </property>

    <property>
      <name>phoenix.query.spoolThresholdBytes</name>
      <value>12582912</value>
    </property>

    <property>
      <name>phoenix.query.timeoutMs</name>
      <value>1200000</value>
    </property>

    <property>
      <name>phoenix.sequence.saltBuckets</name>
      <value>2</value>
    </property>

    <property>
      <name>phoenix.spool.directory</name>
      <value>${hbase.tmp.dir}/phoenix-spool</value>
    </property>

    <property>
      <name>zookeeper.session.timeout</name>
      <value>120000</value>
    </property>

    <property>
      <name>zookeeper.session.timeout.localHBaseCluster</name>
      <value>20000</value>
    </property>

  </configuration>

log4j.properties

hbase.root.logger=INFO,console
hbase.security.logger=INFO,console
hbase.log.dir=.
hbase.log.file=hbase.log
log4j.rootLogger=${hbase.root.logger}
log4j.threshold=ALL
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
hbase.log.maxfilesize=256MB
hbase.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
hbase.security.log.file=SecurityAuth.audit
hbase.security.log.maxfilesize=256MB
hbase.security.log.maxbackupindex=20
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.category.SecurityLogger=${hbase.security.logger}
log4j.additivity.SecurityLogger=false
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
log4j.logger.org.apache.zookeeper=INFO
log4j.logger.org.apache.hadoop.hbase=INFO
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO
    

regionservers

localhost

atlas

/etc/atlas/conf:
-rw-r--r-- 1 atlas hadoop 1271 2015-07-21 16:00 application.properties
-rwxr-xr-x 1 atlas hadoop  830 2015-07-21 16:00 atlas-env.sh
-rwxr-xr-x 1 root  root   1265 2015-07-14 14:03 client.properties
-rw-r--r-- 1 atlas hadoop 3005 2015-07-21 16:00 log4j.xml

application.properties

    
atlas.authentication.keytab=/etc/security/keytabs/atlas.service.keytab
atlas.authentication.method=simple
atlas.authentication.principal=atlas
atlas.enableTLS=false
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.directory=/var/lib/atlas/data/es
atlas.graph.index.search.elasticsearch.client-only=false
atlas.graph.index.search.elasticsearch.local-mode=true
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=/var/lib/atlas/data/berkeley
atlas.http.authentication.enabled=false
atlas.http.authentication.kerberos.keytab=/etc/security/keytabs/spnego.service.keytab
atlas.http.authentication.kerberos.name.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*// \ 
      DEFAULT
atlas.http.authentication.kerberos.principal=HTTP/_HOST@EXAMPLE.COM
atlas.http.authentication.type=simple
atlas.lineage.hive.process.inputs.name=inputs
atlas.lineage.hive.process.outputs.name=outputs
atlas.lineage.hive.process.type.name=Process
atlas.lineage.hive.table.schema.query.hive_table=hive_table where name='%s'\, columns
atlas.lineage.hive.table.schema.query.Table=Table where name='%s'\, columns
atlas.lineage.hive.table.type.name=DataSet
atlas.server.bind.address=sandbox.hortonworks.com
    

atlas-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export METADATA_OPTS=-Xmx1024m
export METADATA_CONF=/etc/atlas/conf
export METADATA_LOG_DIR=/var/log/atlas
export METADATACPPATH= 
export METADATA_DATA_DIR=/var/lib/atlas/data
export METADATA_PID_DIR=/var/run/atlas
export METADATA_EXPANDED_WEBAPP_DIR=/var/lib/atlas/server/webapp
    

client.properties

atlas.enableTLS=false
truststore.file=/path/to/truststore.jks
cert.stores.credential.provider.path=jceks://file/path/to/credentialstore.jceks
keystore.file=/path/to/keystore.jks
atlas.http.authentication.enabled=false
atlas.http.authentication.type=simple

log4j.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!--
  ~ Licensed to the Apache Software Foundation (ASF) under one
  ~ or more contributor license agreements.  See the NOTICE file
  ~ distributed with this work for additional information
  ~ regarding copyright ownership.  The ASF licenses this file
  ~ to you under the Apache License, Version 2.0 (the
  ~ "License"); you may not use this file except in compliance
  ~ with the License.  You may obtain a copy of the License at
  ~
  ~     http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
  -->
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="console" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%c{1}:%L)%n"/>
        </layout>
    </appender>
    <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${atlas.log.dir}/application.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%c{1}:%L)%n"/>
        </layout>
    </appender>
    <appender name="AUDIT" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${atlas.log.dir}/audit.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %x %m%n"/>
        </layout>
    </appender>
    <logger name="org.apache.atlas" additivity="false">
        <level value="debug"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="com.thinkaurelius.titan" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="org.elasticsearch" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="org.apache.lucene" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="com.google" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="AUDIT">
        <level value="info"/>
        <appender-ref ref="AUDIT"/>
    </logger>
    <root>
        <priority value="info"/>
        <appender-ref ref="FILE"/>
    </root>
</log4j:configuration>

falcon

/etc/falcon/conf:
-rw-r--r-- 1 falcon root 1135 2015-07-21 15:49 client.properties
-rw-r--r-- 1 root   root 2152 2015-07-14 16:35 falcon_env.ini
-rw-r--r-- 1 falcon root 1590 2015-07-21 15:49 falcon-env.sh
-rw-r--r-- 1 root   root 4133 2015-07-14 16:35 log4j.xml
-rw-r--r-- 1 root   root 2084 2015-07-14 16:40 prism.keystore
-rw-r--r-- 1 falcon root  292 2015-07-21 16:06 runtime.properties
-rw-r--r-- 1 falcon root 3417 2015-07-21 16:06 startup.properties

client.properties

falcon.url=http://sandbox.hortonworks.com:15000/
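
The Falcon CLI picks up falcon.url from this file, so a simple liveness check (assuming the falcon client script is on the PATH) is:

falcon admin -version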

falcon_env.ini

[environment]

falcon-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export FALCON_SERVER_OPTS="-Dfalcon.embeddedmq=True -Dfalcon.emeddedmq.port=61616"
export FALCON_LOG_DIR=/var/log/falcon
export FALCON_PID_DIR=/var/run/falcon
export FALCON_DATA_DIR=/hadoop/falcon/embeddedmq/data
    

log4j.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
  -->
<!--
    This is used for falcon packaging only.
  -->
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
    <appender name="FILE" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${falcon.log.dir}/${falcon.app.type}.application.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %-5p - [%t:%x] ~ %m (%c{1}:%L)%n"/>
        </layout>
    </appender>
    <appender name="AUDIT" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${falcon.log.dir}/${falcon.app.type}.audit.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %x %m%n"/>
        </layout>
    </appender>
    <appender name="METRIC" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${falcon.log.dir}/${falcon.app.type}.metric.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %m%n"/>
        </layout>
    </appender>
    <appender name="ALERT" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${falcon.log.dir}/${falcon.app.type}.alerts.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %m%n"/>
        </layout>
    </appender>
    <appender name="SECURITY" class="org.apache.log4j.DailyRollingFileAppender">
        <param name="File" value="${falcon.log.dir}/${falcon.app.type}.security.audit.log"/>
        <param name="Append" value="true"/>
        <param name="Threshold" value="debug"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%d %x %m%n"/>
        </layout>
    </appender>
    <logger name="org.apache.falcon" additivity="false">
        <level value="debug"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="AUDIT">
        <level value="info"/>
        <appender-ref ref="AUDIT"/>
    </logger>
    <logger name="METRIC">
        <level value="info"/>
        <appender-ref ref="METRIC"/>
    </logger>
    <logger name="org.apache.hadoop.security" additivity="false">
        <level value="info"/>
        <appender-ref ref="SECURITY"/>
    </logger>
    <logger name="org.apache.hadoop" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="org.apache.oozie" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <logger name="org.apache.hadoop.hive" additivity="false">
        <level value="info"/>
        <appender-ref ref="FILE"/>
    </logger>
    <root>
        <priority value="info"/>
        <appender-ref ref="FILE"/>
    </root>
</log4j:configuration>

prism.keystore

(binary file; contents not shown)

runtime.properties

    
*.domain=${falcon.app.type}
*.log.cleanup.frequency.days.retention=days(7)
*.log.cleanup.frequency.hours.retention=minutes(1)
*.log.cleanup.frequency.minutes.retention=hours(6)
*.log.cleanup.frequency.months.retention=months(3)
    

startup.properties

    
*.application.services=org.apache.falcon.security.AuthenticationInitializationService,\
      org.apache.falcon.workflow.WorkflowJobEndNotificationService, \
      org.apache.falcon.service.ProcessSubscriberService,\
      org.apache.falcon.entity.store.ConfigurationStore,\
      org.apache.falcon.rerun.service.RetryService,\
      org.apache.falcon.rerun.service.LateRunService,\
      org.apache.falcon.service.LogCleanupService,\
      org.apache.falcon.metadata.MetadataMappingService
    
*.broker.impl.class=org.apache.activemq.ActiveMQConnectionFactory
*.broker.ttlInMins=4320
*.broker.url=tcp://sandbox.hortonworks.com:61616
*.catalog.service.impl=org.apache.falcon.catalog.HiveCatalogService
*.config.store.uri=file:///hadoop/falcon/store
*.configstore.listeners=org.apache.falcon.entity.v0.EntityGraph,\
      org.apache.falcon.entity.ColoClusterRelation,\
      org.apache.falcon.group.FeedGroupMap,\
      org.apache.falcon.service.SharedLibraryHostingService
    
*.ConfigSyncService.impl=org.apache.falcon.resource.ConfigSyncService
*.domain=${falcon.app.type}
*.entity.topic=FALCON.ENTITY.TOPIC
*.falcon.authentication.type=simple
*.falcon.cleanup.service.frequency=days(1)
*.falcon.enableTLS=false
*.falcon.graph.blueprints.graph=com.thinkaurelius.titan.core.TitanFactory
*.falcon.graph.preserve.history=false
*.falcon.graph.serialize.path=/hadoop/falcon/data/lineage
*.falcon.graph.storage.backend=berkeleyje
*.falcon.graph.storage.directory=/hadoop/falcon/data/lineage/graphdb
*.falcon.http.authentication.blacklisted.users=
*.falcon.http.authentication.cookie.domain=EXAMPLE.COM
*.falcon.http.authentication.kerberos.name.rules=DEFAULT
*.falcon.http.authentication.signature.secret=falcon
*.falcon.http.authentication.simple.anonymous.allowed=true
*.falcon.http.authentication.token.validity=36000
*.falcon.http.authentication.type=simple
*.falcon.security.authorization.admin.groups=falcon
*.falcon.security.authorization.admin.users=falcon,ambari-qa
*.falcon.security.authorization.enabled=false
*.falcon.security.authorization.provider=org.apache.falcon.security.DefaultAuthorizationProvider
*.falcon.security.authorization.superusergroup=falcon
*.hive.shared.libs=hive-exec,hive-metastore,hive-common,hive-service,hive-hcatalog-server-extensions,\
hive-hcatalog-core,hive-jdbc,hive-webhcat-java-client
*.internal.queue.size=1000
*.journal.impl=org.apache.falcon.transaction.SharedFileSystemJournal
*.max.retry.failure.count=1
*.oozie.feed.workflow.builder=org.apache.falcon.workflow.OozieFeedWorkflowBuilder
*.oozie.process.workflow.builder=org.apache.falcon.workflow.OozieProcessWorkflowBuilder
*.ProcessInstanceManager.impl=org.apache.falcon.resource.InstanceManager
*.retry.recorder.path=${falcon.log.dir}/retry
*.SchedulableEntityManager.impl=org.apache.falcon.resource.SchedulableEntityManager
*.shared.libs=activemq-core,ant,geronimo-j2ee-management,jms,json-simple,oozie-client,spring-jms,commons-lang3,commons-el
*.system.lib.location=${falcon.home}/server/webapp/${falcon.app.type}/WEB-INF/lib
*.workflow.engine.impl=org.apache.falcon.workflow.engine.OozieWorkflowEngine
prism.application.services=org.apache.falcon.entity.store.ConfigurationStore
prism.configstore.listeners=org.apache.falcon.entity.v0.EntityGraph,\
      org.apache.falcon.entity.ColoClusterRelation,\
      org.apache.falcon.group.FeedGroupMap
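
With the embedded ActiveMQ broker and Oozie workflow engine wired up in startup.properties above, entities are registered through the same CLI that reads client.properties; a sketch, using a hypothetical entity file:

falcon entity -type cluster -submit -file /tmp/primary-cluster.xml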
    
    

flume

/etc/flume/conf:
-rw-r--r-- 1 root root 1043 2015-07-14 15:29 flume.conf
-rw-r--r-- 1 root root 1661 2015-07-14 15:20 flume-conf.properties.template
-rw-r--r-- 1 root root 1139 2015-07-14 15:20 flume-env.ps1
-rw-r--r-- 1 root root 1288 2015-07-14 15:20 flume-env.sh.template
-rw-r--r-- 1 root root 3115 2015-07-14 15:29 log4j.properties

flume.conf


flume-conf.properties.template

agent.sources = seqGenSrc
agent.channels = memoryChannel
agent.sinks = loggerSink
agent.sources.seqGenSrc.type = seq
agent.sources.seqGenSrc.channels = memoryChannel
agent.sinks.loggerSink.type = logger
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
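
The template wires a sequence-generator source to a logger sink through a 100-event memory channel. A sketch of launching an agent against it; the --name argument must match the "agent." prefix used in the properties:

flume-ng agent --conf /etc/flume/conf \
    --conf-file /etc/flume/conf/flume-conf.properties.template \
    --name agent -Dflume.root.logger=INFO,console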

flume-env.ps1

$JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
$FLUME_CLASSPATH=""   # Example:  "path1;path2;path3"

flume-env.sh.template


log4j.properties

flume.root.logger=INFO,LOGFILE
flume.log.dir=/var/log/flume
flume.log.file=flume.log
log4j.logger.org.apache.flume.lifecycle = INFO
log4j.logger.org.jboss = WARN
log4j.logger.org.mortbay = INFO
log4j.logger.org.apache.avro.ipc.NettyTransceiver = WARN
log4j.logger.org.apache.hadoop = INFO
log4j.logger.org.apache.hadoop.hive = ERROR
log4j.rootLogger=${flume.root.logger}
log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=100MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${flume.log.dir}/${flume.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
log4j.appender.DAILY=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DAILY.rollingPolicy=org.apache.log4j.rolling.TimeBasedRollingPolicy
log4j.appender.DAILY.rollingPolicy.ActiveFileName=${flume.log.dir}/${flume.log.file}
log4j.appender.DAILY.rollingPolicy.FileNamePattern=${flume.log.dir}/${flume.log.file}.%d{yyyy-MM-dd}
log4j.appender.DAILY.layout=org.apache.log4j.PatternLayout
log4j.appender.DAILY.layout.ConversionPattern=%d{dd MMM yyyy HH:mm:ss,SSS} %-5p [%t] (%C.%M:%L) %x - %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d (%t) [%p - %l] %m%n

hadoop

/etc/hadoop/conf:
-rw-r--r-- 1 hdfs   hadoop  2214 2015-07-21 17:52 capacity-scheduler.xml
-rw-r--r-- 1 hdfs   root    1020 2015-07-21 16:00 commons-logging.properties
-rw-r--r-- 1 hdfs   hadoop  1335 2015-07-14 13:23 configuration.xsl
-rw-r--r-- 1 root   hadoop  1019 2015-07-21 15:55 container-executor.cfg
-rw-r--r-- 1 hdfs   hadoop  3732 2015-07-26 18:46 core-site.xml
-rw-r--r-- 1 root   root     415 2015-07-21 16:41 dfs_data_dir_mount.hist
-rw-r--r-- 1 hdfs   hadoop     1 2015-07-21 16:01 dfs.exclude
-rw-r--r-- 1 root   root    3887 2015-07-14 13:23 hadoop-env.cmd
-rw-r--r-- 1 hdfs   hadoop  5483 2015-07-21 15:47 hadoop-env.sh
-rw-r--r-- 1 hdfs   root    1884 2015-07-21 16:00 hadoop-metrics2.properties
-rw-r--r-- 1 root   root    2490 2015-07-14 13:23 hadoop-metrics.properties
-rw-r--r-- 1 hdfs   hadoop  1342 2015-07-21 16:43 hadoop-policy.xml
-rw-r--r-- 1 hdfs   hadoop  7391 2015-07-26 18:46 hdfs-site.xml
-rw-r--r-- 1 hdfs   root    1602 2015-07-21 16:00 health_check
-rw-r--r-- 1 root   root    3518 2015-07-14 13:23 kms-acls.xml
-rw-r--r-- 1 root   root    1527 2015-07-14 13:23 kms-env.sh
-rw-r--r-- 1 root   root    1631 2015-07-14 13:23 kms-log4j.properties
-rw-r--r-- 1 root   root    5511 2015-07-14 13:23 kms-site.xml
-rw-r--r-- 1 hdfs   hadoop  8709 2015-07-21 16:00 log4j.properties
-rw-r--r-- 1 root   root     931 2015-07-14 13:23 mapred-env.cmd
-rw-r--r-- 1 hdfs   root     666 2015-07-21 15:55 mapred-env.sh
-rw-r--r-- 1 root   root    4113 2015-07-14 13:23 mapred-queues.xml.template
-rw-r--r-- 1 mapred hadoop  6943 2015-07-21 17:52 mapred-site.xml
-rw-r--r-- 1 root   root     758 2015-07-14 13:23 mapred-site.xml.template
-rwxr--r-- 1 hdfs   hdfs    7216 2015-07-21 20:16 ranger-hdfs-audit.xml
-rwxr--r-- 1 hdfs   hdfs    3294 2015-07-21 20:16 ranger-hdfs-security.xml
-rwxr--r-- 1 hdfs   hdfs    2277 2015-07-21 20:16 ranger-policymgr-ssl.xml
-rw-r--r-- 1 hdfs   hdfs      69 2015-07-21 20:16 ranger-security.xml
drwxr-xr-x 2 root   hadoop  4096 2015-07-21 15:50 secure
-rw-r--r-- 1 hdfs   root      25 2015-07-21 15:50 slaves
-rw-r--r-- 1 hdfs   hadoop   918 2015-07-21 17:52 ssl-client.xml
-rw-r--r-- 1 mapred hadoop  2316 2015-07-14 13:23 ssl-client.xml.example
-rw-r--r-- 1 hdfs   hadoop  1034 2015-07-21 17:52 ssl-server.xml
-rw-r--r-- 1 mapred hadoop  2268 2015-07-14 13:23 ssl-server.xml.example
-rw-r--r-- 1 hdfs   root     945 2015-07-21 15:55 taskcontroller.cfg
-rwxr-xr-x 1 root   root    4221 2015-07-21 16:00 task-log4j.properties
-rw-r--r-- 1 hdfs   hadoop    81 2015-07-21 16:00 topology_mappings.data
-rwxr-xr-x 1 root   root    2358 2015-07-21 16:00 topology_script.py
-rw-r--r-- 1 root   root    2191 2015-07-14 13:23 yarn-env.cmd
-rwxr-xr-x 1 yarn   hadoop  4930 2015-07-21 16:43 yarn-env.sh
-rw-r--r-- 1 yarn   hadoop     0 2015-07-21 15:55 yarn.exclude
-rw-r--r-- 1 yarn   hadoop 14918 2015-07-21 17:52 yarn-site.xml

/etc/hadoop/conf/secure:
-rw-r--r-- 1 hdfs hadoop 918 2015-07-21 17:52 ssl-client.xml

capacity-scheduler.xml

<!--Tue Jul 21 17:52:16 2015-->
    <configuration>

    <property>
      <name>yarn.scheduler.capacity.default.minimum-user-limit-percent</name>
      <value>100</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
      <value>0.5</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.maximum-applications</name>
      <value>10000</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.node-locality-delay</name>
      <value>40</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.resource-calculator</name>
      <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
      <value>*</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.acl_administer_queue</name>
      <value>*</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.capacity</name>
      <value>100</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.acl_administer_jobs</name>
      <value>*</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.acl_submit_applications</name>
      <value>*</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.capacity</name>
      <value>100</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.maximum-am-resource-percent</name>
      <value>0.5</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
      <value>100</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.state</name>
      <value>RUNNING</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.default.user-limit-factor</name>
      <value>1</value>
    </property>

    <property>
      <name>yarn.scheduler.capacity.root.queues</name>
      <value>default</value>
    </property>

  </configuration>
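
Edits to this file only take effect once the ResourceManager reloads its queue definitions, which can normally be done without a restart:

yarn rmadmin -refreshQueues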

commons-logging.properties

org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger

configuration.xsl

<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
 <td>name</td>
 <td>value</td>
 <td>description</td>
</tr>
<xsl:for-each select="property">
<tr>
  <td><a name="{name}"><xsl:value-of select="name"/></a></td>
  <td><xsl:value-of select="value"/></td>
  <td><xsl:value-of select="description"/></td>
</tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>

container-executor.cfg

yarn.nodemanager.local-dirs=/hadoop/yarn/local
yarn.nodemanager.log-dirs=/hadoop/yarn/log
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000

core-site.xml

<!--Sun Jul 26 18:46:01 2015-->
    <configuration>

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://sandbox.hortonworks.com:8020</value>
      <final>true</final>
    </property>

    <property>
      <name>fs.trash.interval</name>
      <value>360</value>
    </property>

    <property>
      <name>ha.failover-controller.active-standby-elector.zk.op.retries</name>
      <value>120</value>
    </property>

    <property>
      <name>hadoop.http.authentication.simple.anonymous.allowed</name>
      <value>true</value>
    </property>

    <property>
      <name>hadoop.proxyuser.falcon.groups</name>
      <value>users</value>
    </property>

    <property>
      <name>hadoop.proxyuser.falcon.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hbase.groups</name>
      <value>users</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hbase.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hcat.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hcat.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hive.groups</name>
      <value>users</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hive.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hue.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.hue.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.oozie.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.oozie.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.root.groups</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.proxyuser.root.hosts</name>
      <value>*</value>
    </property>

    <property>
      <name>hadoop.security.auth_to_local</name>
      <value>DEFAULT</value>
    </property>

    <property>
      <name>hadoop.security.authentication</name>
      <value>simple</value>
    </property>

    <property>
      <name>hadoop.security.authorization</name>
      <value>false</value>
    </property>

    <property>
      <name>hadoop.security.key.provider.path</name>
      <value></value>
    </property>

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>

    <property>
      <name>io.file.buffer.size</name>
      <value>131072</value>
    </property>

    <property>
      <name>io.serializations</name>
      <value>org.apache.hadoop.io.serializer.WritableSerialization</value>
    </property>

    <property>
      <name>ipc.client.connect.max.retries</name>
      <value>50</value>
    </property>

    <property>
      <name>ipc.client.connection.maxidletime</name>
      <value>30000</value>
    </property>

    <property>
      <name>ipc.client.idlethreshold</name>
      <value>8000</value>
    </property>

    <property>
      <name>ipc.server.tcpnodelay</name>
      <value>true</value>
    </property>

    <property>
      <name>mapreduce.jobtracker.webinterface.trusted</name>
      <value>false</value>
    </property>

    <property>
      <name>net.topology.script.file.name</name>
      <value>/etc/hadoop/conf/topology_script.py</value>
    </property>

  </configuration>
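
With fs.defaultFS set as above, unqualified HDFS paths resolve against the sandbox NameNode, so these two listings should be equivalent:

hdfs dfs -ls /
hdfs dfs -ls hdfs://sandbox.hortonworks.com:8020/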

dfs_data_dir_mount.hist

/hadoop/hdfs/data,/

dfs.exclude


hadoop-env.cmd

@echo off
set JAVA_HOME=%JAVA_HOME%
if exist %HADOOP_HOME%\contrib\capacity-scheduler (
  if not defined HADOOP_CLASSPATH (
    set HADOOP_CLASSPATH=%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
  ) else (
    set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%HADOOP_HOME%\contrib\capacity-scheduler\*.jar
  )
)
if defined TEZ_CLASSPATH (
  if not defined HADOOP_CLASSPATH (
    set HADOOP_CLASSPATH=%TEZ_CLASSPATH%
  ) else (
    set HADOOP_CLASSPATH=%HADOOP_CLASSPATH%;%TEZ_CLASSPATH%
  )
)
if not defined HADOOP_SECURITY_LOGGER (
  set HADOOP_SECURITY_LOGGER=INFO,RFAS
)
if not defined HDFS_AUDIT_LOGGER (
  set HDFS_AUDIT_LOGGER=INFO,NullAppender
)
set HADOOP_NAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_NAMENODE_OPTS%
set HADOOP_DATANODE_OPTS=-Dhadoop.security.logger=ERROR,RFAS %HADOOP_DATANODE_OPTS%
set HADOOP_SECONDARYNAMENODE_OPTS=-Dhadoop.security.logger=%HADOOP_SECURITY_LOGGER% -Dhdfs.audit.logger=%HDFS_AUDIT_LOGGER% %HADOOP_SECONDARYNAMENODE_OPTS%
set HADOOP_CLIENT_OPTS=-Xmx512m %HADOOP_CLIENT_OPTS%
set HADOOP_SECURE_DN_USER=%HADOOP_SECURE_DN_USER%
set HADOOP_SECURE_DN_LOG_DIR=%HADOOP_LOG_DIR%\%HADOOP_HDFS_USER%
set HADOOP_PID_DIR=%HADOOP_PID_DIR%
set HADOOP_SECURE_DN_PID_DIR=%HADOOP_PID_DIR%
set HADOOP_IDENT_STRING=%USERNAME%

hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export HADOOP_HOME_WARN_SUPPRESS=1
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export JSVC_HOME=/usr/lib/bigtop-utils
export HADOOP_HEAPSIZE="250"
export HADOOP_NAMENODE_INIT_HEAPSIZE="-Xms250m"
export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"
HADOOP_JOBTRACKER_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xmx1024m -Dhadoop.security.logger=INFO,DRFAS -Dmapred.audit.logger=INFO,MRAUDIT -Dhadoop.mapreduce.jobsummary.logger=INFO,JSA ${HADOOP_JOBTRACKER_OPTS}"
HADOOP_TASKTRACKER_OPTS="-server -Xmx1024m -Dhadoop.security.logger=ERROR,console -Dmapred.audit.logger=ERROR,console ${HADOOP_TASKTRACKER_OPTS}"
SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=50m -XX:MaxNewSize=100m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -XX:PermSize=128m -XX:MaxPermSize=256m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms250m -Xmx250m -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}"
export HADOOP_SECONDARYNAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-secondarynamenode/bin/kill-secondary-name-node\" ${HADOOP_SECONDARYNAMENODE_OPTS}"
export HADOOP_CLIENT_OPTS="-Xmx${HADOOP_HEAPSIZE}m -XX:MaxPermSize=512m $HADOOP_CLIENT_OPTS"
HADOOP_NFS3_OPTS="-Xmx1024m -Dhadoop.security.logger=ERROR,DRFAS ${HADOOP_NFS3_OPTS}"
HADOOP_BALANCER_OPTS="-server -Xmx250m ${HADOOP_BALANCER_OPTS}"
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER:-""}
export HADOOP_SSH_OPTS="-o ConnectTimeout=5 -o SendEnv=HADOOP_CONF_DIR"
export HADOOP_LOG_DIR=/var/log/hadoop/$USER
export HADOOP_MAPRED_LOG_DIR=/var/log/hadoop-mapreduce/$USER
export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop/$HADOOP_SECURE_DN_USER
export HADOOP_PID_DIR=/var/run/hadoop/$USER
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop/$HADOOP_SECURE_DN_USER
export HADOOP_MAPRED_PID_DIR=/var/run/hadoop-mapreduce/$USER
YARN_RESOURCEMANAGER_OPTS="-Dyarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY"
export HADOOP_IDENT_STRING=$USER
JAVA_JDBC_LIBS=""
for jarFile in `ls /usr/share/java/*mysql* 2>/dev/null`
do
  JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile
done
for jarFile in `ls /usr/share/java/*ojdbc* 2>/dev/null`
do
  JAVA_JDBC_LIBS=${JAVA_JDBC_LIBS}:$jarFile
done
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:${JAVA_JDBC_LIBS}
export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec
export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}
export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"
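
The two loops above splice any MySQL and Oracle JDBC drivers found under /usr/share/java into HADOOP_CLASSPATH; one way to verify they were picked up:

hadoop classpath | tr ':' '\n' | grep -iE 'mysql|ojdbc'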
    

hadoop-metrics2.properties

*.period=60
*.sink.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
*.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
*.sink.timeline.period=10
*.sink.timeline.slave.host.name = sandbox.hortonworks.com
datanode.sink.timeline.collector=sandbox.hortonworks.com:6188
namenode.sink.timeline.collector=sandbox.hortonworks.com:6188
resourcemanager.sink.timeline.collector=sandbox.hortonworks.com:6188
nodemanager.sink.timeline.collector=sandbox.hortonworks.com:6188
historyserver.sink.timeline.collector=sandbox.hortonworks.com:6188
journalnode.sink.timeline.collector=sandbox.hortonworks.com:6188
nimbus.sink.timeline.collector=sandbox.hortonworks.com:6188
supervisor.sink.timeline.collector=sandbox.hortonworks.com:6188
maptask.sink.timeline.collector=sandbox.hortonworks.com:6188
reducetask.sink.timeline.collector=sandbox.hortonworks.com:6188
resourcemanager.sink.timeline.tagsForPrefix.yarn=Queue
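
Every sink above reports to the Ambari Metrics collector on port 6188. A rough liveness probe against its REST API under /ws/v1/timeline (the exact metadata path is an assumption here):

curl 'http://sandbox.hortonworks.com:6188/ws/v1/timeline/metrics/metadata'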

hadoop-metrics.properties

dfs.class=org.apache.hadoop.metrics.spi.NullContext
mapred.class=org.apache.hadoop.metrics.spi.NullContext
rpc.class=org.apache.hadoop.metrics.spi.NullContext
ugi.class=org.apache.hadoop.metrics.spi.NullContext

hadoop-policy.xml

<!--Tue Jul 21 16:43:38 2015-->
    <configuration>

    <property>
      <name>security.admin.operations.protocol.acl</name>
      <value>hadoop</value>
    </property>

    <property>
      <name>security.client.datanode.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.client.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.datanode.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.inter.datanode.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.inter.tracker.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.job.client.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.job.task.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.namenode.protocol.acl</name>
      <value>*</value>
    </property>

    <property>
      <name>security.refresh.policy.protocol.acl</name>
      <value>hadoop</value>
    </property>

    <property>
      <name>security.refresh.usertogroups.mappings.protocol.acl</name>
      <value>hadoop</value>
    </property>

  </configuration>

hdfs-site.xml

<!--Sun Jul 26 18:46:24 2015-->
    <configuration>

    <property>
      <name>dfs.block.access.token.enable</name>
      <value>false</value>
    </property>

    <property>
      <name>dfs.block.size</name>
      <value>34217472</value>
    </property>

    <property>
      <name>dfs.blockreport.initialDelay</name>
      <value>120</value>
    </property>

    <property>
      <name>dfs.blocksize</name>
      <value>134217728</value>
    </property>

    <property>
      <name>dfs.client.read.shortcircuit</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.client.read.shortcircuit.streams.cache.size</name>
      <value>4096</value>
    </property>

    <property>
      <name>dfs.client.retry.policy.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>dfs.cluster.administrators</name>
      <value> hdfs</value>
    </property>

    <property>
      <name>dfs.datanode.address</name>
      <value>0.0.0.0:50010</value>
    </property>

    <property>
      <name>dfs.datanode.balance.bandwidthPerSec</name>
      <value>6250000</value>
    </property>

    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/hadoop/hdfs/data</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.datanode.data.dir.perm</name>
      <value>750</value>
    </property>

    <property>
      <name>dfs.datanode.du.reserved</name>
      <value>1073741824</value>
    </property>

    <property>
      <name>dfs.datanode.failed.volumes.tolerated</name>
      <value>0</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.datanode.http.address</name>
      <value>0.0.0.0:50075</value>
    </property>

    <property>
      <name>dfs.datanode.https.address</name>
      <value>0.0.0.0:50475</value>
    </property>

    <property>
      <name>dfs.datanode.ipc.address</name>
      <value>0.0.0.0:8010</value>
    </property>

    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>1024</value>
    </property>

    <property>
      <name>dfs.datanode.max.xcievers</name>
      <value>1024</value>
    </property>

    <property>
      <name>dfs.domain.socket.path</name>
      <value>/var/lib/hadoop-hdfs/dn_socket</value>
    </property>

    <property>
      <name>dfs.encrypt.data.transfer.cipher.suites</name>
      <value>AES/CTR/NoPadding</value>
    </property>

    <property>
      <name>dfs.encryption.key.provider.uri</name>
      <value></value>
    </property>

    <property>
      <name>dfs.heartbeat.interval</name>
      <value>3</value>
    </property>

    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

    <property>
      <name>dfs.http.policy</name>
      <value>HTTP_ONLY</value>
    </property>

    <property>
      <name>dfs.https.port</name>
      <value>50470</value>
    </property>

    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/hadoop/hdfs/journalnode</value>
    </property>

    <property>
      <name>dfs.journalnode.http-address</name>
      <value>0.0.0.0:8480</value>
    </property>

    <property>
      <name>dfs.journalnode.https-address</name>
      <value>0.0.0.0:8481</value>
    </property>

    <property>
      <name>dfs.namenode.accesstime.precision</name>
      <value>3600000</value>
    </property>

    <property>
      <name>dfs.namenode.audit.log.async</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.namenode.avoid.read.stale.datanode</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.namenode.avoid.write.stale.datanode</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.namenode.checkpoint.dir</name>
      <value>/hadoop/hdfs/namesecondary</value>
    </property>

    <property>
      <name>dfs.namenode.checkpoint.edits.dir</name>
      <value>${dfs.namenode.checkpoint.dir}</value>
    </property>

    <property>
      <name>dfs.namenode.checkpoint.period</name>
      <value>21600</value>
    </property>

    <property>
      <name>dfs.namenode.checkpoint.txns</name>
      <value>1000000</value>
    </property>

    <property>
      <name>dfs.namenode.fslock.fair</name>
      <value>false</value>
    </property>

    <property>
      <name>dfs.namenode.handler.count</name>
      <value>100</value>
    </property>

    <property>
      <name>dfs.namenode.http-address</name>
      <value>sandbox.hortonworks.com:50070</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.namenode.https-address</name>
      <value>sandbox.hortonworks.com:50470</value>
    </property>

    <property>
      <name>dfs.namenode.inode.attributes.provider.class</name>
      <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
    </property>

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>/hadoop/hdfs/namenode</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.namenode.name.dir.restore</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.namenode.rpc-address</name>
      <value>sandbox.hortonworks.com:8020</value>
    </property>

    <property>
      <name>dfs.namenode.safemode.threshold-pct</name>
      <value>0.999</value>
    </property>

    <property>
      <name>dfs.namenode.secondary.http-address</name>
      <value>sandbox.hortonworks.com:50090</value>
    </property>

    <property>
      <name>dfs.namenode.stale.datanode.interval</name>
      <value>30000</value>
    </property>

    <property>
      <name>dfs.namenode.startup.delay.block.deletion.sec</name>
      <value>3600</value>
    </property>

    <property>
      <name>dfs.namenode.write.stale.datanode.ratio</name>
      <value>1.0f</value>
    </property>

    <property>
      <name>dfs.nfs.exports.allowed.hosts</name>
      <value>* rw</value>
    </property>

    <property>
      <name>dfs.nfs3.dump.dir</name>
      <value>/tmp/.hdfs-nfs</value>
    </property>

    <property>
      <name>dfs.permissions</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.permissions.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>dfs.permissions.superusergroup</name>
      <value>hdfs</value>
    </property>

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>

    <property>
      <name>dfs.replication.max</name>
      <value>50</value>
    </property>

    <property>
      <name>dfs.support.append</name>
      <value>true</value>
      <final>true</final>
    </property>

    <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
      <final>true</final>
    </property>

    <property>
      <name>fs.permissions.umask-mode</name>
      <value>022</value>
    </property>

    <property>
      <name>nfs.exports.allowed.hosts</name>
      <value>* rw</value>
    </property>

    <property>
      <name>nfs.file.dump.dir</name>
      <value>/tmp/.hdfs-nfs</value>
    </property>

  </configuration>
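
Because dfs.webhdfs.enabled is true, the NameNode HTTP address above (port 50070) also serves the WebHDFS REST API, e.g.:

curl 'http://sandbox.hortonworks.com:50070/webhdfs/v1/?op=LISTSTATUS'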

health_check

err=0;
function check_disks {
  for m in `awk '$3~/ext3/ {printf" %s ",$2}' /etc/fstab` ; do
    fsdev=""
    fsdev=`awk -v m=$m '$2==m {print $1}' /proc/mounts`;
    if [ -z "$fsdev" -a "$m" != "/mnt" ] ; then
      msg_="$msg_ $m(u)"
    else
      msg_="$msg_`awk -v m=$m '$2==m { if ( $4 ~ /^ro,/ ) {printf"%s(ro)",$2 } ; }' /proc/mounts`"
    fi
  done
  if [ -z "$msg_" ] ; then
    echo "disks ok" ; exit 0
  else
    echo "$msg_" ; exit 2
  fi
}
for check in disks ; do
  msg=`check_${check}` ;
  if [ $? -eq 0 ] ; then
    ok_msg="$ok_msg$msg,"
  else
    err_msg="$err_msg$msg,"
  fi
done
if [ ! -z "$err_msg" ] ; then
  echo -n "ERROR $err_msg "
fi
if [ ! -z "$ok_msg" ] ; then
  echo -n "OK: $ok_msg"
fi
echo
exit 0
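
The script appears to be wired in as a node health probe for the cluster manager, but it can also be run by hand; on a healthy node it prints "disks ok":

bash /etc/hadoop/conf/health_check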

kms-acls.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <!-- This file is hot-reloaded when it changes -->
  <!-- KMS ACLs -->
  <property>
    <name>hadoop.kms.acl.CREATE</name>
    <value>*</value>
    <description>
      ACL for create-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.DELETE</name>
    <value>*</value>
    <description>
      ACL for delete-key operations.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.ROLLOVER</name>
    <value>*</value>
    <description>
      ACL for rollover-key operations.
      If the user is not in the GET ACL, the key material is not returned
      as part of the response.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.GET</name>
    <value>*</value>
    <description>
      ACL for get-key-version and get-current-key operations.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.GET_KEYS</name>
    <value>*</value>
    <description>
      ACL for get-keys operations.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.GET_METADATA</name>
    <value>*</value>
    <description>
      ACL for get-key-metadata and get-keys-metadata operations.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
    <value>*</value>
    <description>
      Complementary ACL for CREATE and ROLLOVER operations to allow the client
      to provide the key material when creating or rolling a key.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.GENERATE_EEK</name>
    <value>*</value>
    <description>
      ACL for generateEncryptedKey CryptoExtension operations.
    </description>
  </property>
  <property>
    <name>hadoop.kms.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      ACL for decryptEncryptedKey CryptoExtension operations.
    </description>
  </property>
  <property>
    <name>default.key.acl.MANAGEMENT</name>
    <value>*</value>
    <description>
      default ACL for MANAGEMENT operations for all key acls that are not
      explicitly defined.
    </description>
  </property>
  <property>
    <name>default.key.acl.GENERATE_EEK</name>
    <value>*</value>
    <description>
      default ACL for GENERATE_EEK operations for all key acls that are not
      explicitly defined.
    </description>
  </property>
  <property>
    <name>default.key.acl.DECRYPT_EEK</name>
    <value>*</value>
    <description>
      default ACL for DECRYPT_EEK operations for all key acls that are not
      explicitly defined.
    </description>
  </property>
  <property>
    <name>default.key.acl.READ</name>
    <value>*</value>
    <description>
      default ACL for READ operations for all key acls that are not
      explicitly defined.
    </description>
  </property>
</configuration>

kms-env.sh


kms-log4j.properties

log4j.appender.kms=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kms.DatePattern='.'yyyy-MM-dd
log4j.appender.kms.File=${kms.log.dir}/kms.log
log4j.appender.kms.Append=true
log4j.appender.kms.layout=org.apache.log4j.PatternLayout
log4j.appender.kms.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n
log4j.appender.kms-audit=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kms-audit.DatePattern='.'yyyy-MM-dd
log4j.appender.kms-audit.File=${kms.log.dir}/kms-audit.log
log4j.appender.kms-audit.Append=true
log4j.appender.kms-audit.layout=org.apache.log4j.PatternLayout
log4j.appender.kms-audit.layout.ConversionPattern=%d{ISO8601} %m%n
log4j.logger.kms-audit=INFO, kms-audit
log4j.additivity.kms-audit=false
log4j.rootLogger=ALL, kms
log4j.logger.org.apache.hadoop.conf=ERROR
log4j.logger.org.apache.hadoop=INFO
log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF

kms-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
  <!-- KMS Backend KeyProvider -->
  <property>
    <name>hadoop.kms.key.provider.uri</name>
    <value>jceks://file@/${user.home}/kms.keystore</value>
    <description>
      URI of the backing KeyProvider for the KMS.
    </description>
  </property>
  <property>
    <name>hadoop.security.keystore.JavaKeyStoreProvider.password</name>
    <value>none</value>
    <description>
      If using the JavaKeyStoreProvider, the password for the keystore file.
    </description>
  </property>
  <!-- KMS Cache -->
  <property>
    <name>hadoop.kms.cache.enable</name>
    <value>true</value>
    <description>
      Whether the KMS will act as a cache for the backing KeyProvider.
      When the cache is enabled, operations like getKeyVersion, getMetadata,
      and getCurrentKey will sometimes return cached data without consulting
      the backing KeyProvider. Cached values are flushed when keys are deleted
      or modified.
    </description>
  </property>
  <property>
    <name>hadoop.kms.cache.timeout.ms</name>
    <value>600000</value>
    <description>
      Expiry time for the KMS key version and key metadata cache, in
      milliseconds. This affects getKeyVersion and getMetadata.
    </description>
  </property>
  <property>
    <name>hadoop.kms.current.key.cache.timeout.ms</name>
    <value>30000</value>
    <description>
      Expiry time for the KMS current key cache, in milliseconds. This
      affects getCurrentKey operations.
    </description>
  </property>
  <!-- KMS Audit -->
  <property>
    <name>hadoop.kms.audit.aggregation.window.ms</name>
    <value>10000</value>
    <description>
      Duplicate audit log events within the aggregation window (specified in
      ms) are quashed to reduce log traffic. A single message for aggregated
      events is printed at the end of the window, along with a count of the
      number of aggregated events.
    </description>
  </property>
  <!-- KMS Security -->
  <property>
    <name>hadoop.kms.authentication.type</name>
    <value>simple</value>
    <description>
      Authentication type for the KMS. Can be either "simple"
      or "kerberos".
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.kerberos.keytab</name>
    <value>${user.home}/kms.keytab</value>
    <description>
      Path to the keytab with credentials for the configured Kerberos principal.
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.kerberos.principal</name>
    <value>HTTP/localhost</value>
    <description>
      The Kerberos principal to use for the HTTP endpoint.
      The principal must start with 'HTTP/' as per the Kerberos HTTP SPNEGO specification.
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.kerberos.name.rules</name>
    <value>DEFAULT</value>
    <description>
      Rules used to resolve Kerberos principal names.
    </description>
  </property>
  <!-- Authentication cookie signature source -->
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider</name>
    <value>random</value>
    <description>
      Indicates how the secret to sign the authentication cookies will be
      stored. Options are 'random' (default), 'string' and 'zookeeper'.
      If using a setup with multiple KMS instances, 'zookeeper' should be used.
    </description>
  </property>
  <!-- Configuration for 'zookeeper' authentication cookie signature source -->
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
    <value>/hadoop-kms/hadoop-auth-signature-secret</value>
    <description>
      The Zookeeper ZNode path where the KMS instances will store and retrieve
      the secret from.
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
    <value>#HOSTNAME#:#PORT#,...</value>
    <description>
      The Zookeeper connection string, a list of hostnames and port comma
      separated.
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
    <value>kerberos</value>
    <description>
      The Zookeeper authentication type, 'none' or 'sasl' (Kerberos).
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
    <value>/etc/hadoop/conf/kms.keytab</value>
    <description>
      The absolute path for the Kerberos keytab with the credentials to
      connect to Zookeeper.
    </description>
  </property>
  <property>
    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
    <value>kms/#HOSTNAME#</value>
    <description>
      The Kerberos service principal used to connect to Zookeeper.
    </description>
  </property>
</configuration>
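
The jceks URI above is the KMS's backing keystore; clients address the KMS itself through a kms:// provider URI instead. A sketch, assuming the stock KMS port 16000:

hadoop key create testkey -provider kms://http@sandbox.hortonworks.com:16000/kms
hadoop key list -provider kms://http@sandbox.hortonworks.com:16000/kms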

log4j.properties

hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
log4j.rootLogger=${hadoop.root.logger}, EventCounter
log4j.threshhold=ALL
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
hadoop.security.logger=INFO,console
hadoop.security.log.maxfilesize=256MB
hadoop.security.log.maxbackupindex=20
log4j.category.SecurityLogger=${hadoop.security.logger}
hadoop.security.log.file=SecurityAuth.audit
log4j.appender.DRFAS=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.DRFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAS.DatePattern=.yyyy-MM-dd
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hadoop.log.dir}/${hadoop.security.log.file}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.RFAS.MaxFileSize=${hadoop.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hadoop.security.log.maxbackupindex}
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
mapred.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.mapred.AuditLogger=${mapred.audit.logger}
log4j.additivity.org.apache.hadoop.mapred.AuditLogger=false
log4j.appender.MRAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.MRAUDIT.File=${hadoop.log.dir}/mapred-audit.log
log4j.appender.MRAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.MRAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.MRAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=10
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} - %m%n
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
hadoop.metrics.log.level=INFO
log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN
    
yarn.log.dir=.
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
yarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log
yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
log4j.appender.RMSUMMARY.File=${yarn.log.dir}/${yarn.server.resourcemanager.appsummary.log.file}
log4j.appender.RMSUMMARY.MaxFileSize=256MB
log4j.appender.RMSUMMARY.MaxBackupIndex=20
log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.JSA.DatePattern=.yyyy-MM-dd
log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false
yarn.ewma.cleanupInterval=300
yarn.ewma.messageAgeLimitSeconds=86400
yarn.ewma.maxUniqueMessages=250
log4j.appender.EWMA=org.apache.hadoop.yarn.util.Log4jWarningErrorMetricsAppender
log4j.appender.EWMA.cleanupInterval=${yarn.ewma.cleanupInterval}
log4j.appender.EWMA.messageAgeLimitSeconds=${yarn.ewma.messageAgeLimitSeconds}
log4j.appender.EWMA.maxUniqueMessages=${yarn.ewma.maxUniqueMessages}
    

mapred-env.cmd

@echo off
set HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
set HADOOP_MAPRED_ROOT_LOGGER=%HADOOP_LOGLEVEL%,RFA

mapred-env.sh

export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=250
export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
export HADOOP_OPTS="-Dhdp.version=$HDP_VERSION $HADOOP_OPTS"
    

mapred-queues.xml.template

<?xml version="1.0"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<!-- This is the template for queue configuration. The format supports nesting of
     queues within queues - a feature called hierarchical queues. All queues are
     defined within the 'queues' tag which is the top level element for this
     XML document. The queue acls configured here for different queues are
     checked for authorization only if the configuration property
     mapreduce.cluster.acls.enabled is set to true. -->
<queues>
  <!-- Configuration for a queue is specified by defining a 'queue' element. -->
  <queue>
    <!-- Name of a queue. Queue name cannot contain a ':'  -->
    <name>default</name>
    <!-- properties for a queue, typically used by schedulers,
    can be defined here -->
    <properties>
    </properties>
    <!-- State of the queue. If running, the queue will accept new jobs.
         If stopped, the queue will not accept new jobs. -->
    <state>running</state>
    <!-- Specifies the ACLs to check for submitting jobs to this queue.
         If set to '*', it allows all users to submit jobs to the queue.
         If set to ' '(i.e. space), no user will be allowed to do this
         operation. The default value for any queue acl is ' '.
         For specifying a list of users and groups the format to use is
         user1,user2 group1,group2
         It is only used if authorization is enabled in Map/Reduce by setting
         the configuration property mapreduce.cluster.acls.enabled to true.
         Irrespective of this ACL configuration, the user who started the
         cluster and cluster administrators configured via
         mapreduce.cluster.administrators can do this operation. -->
    <acl-submit-job> </acl-submit-job>
    <!-- Specifies the ACLs to check for viewing and modifying jobs in this
         queue. Modifications include killing jobs, tasks of jobs or changing
         priorities.
         If set to '*', it allows all users to view, modify jobs of the queue.
         If set to ' '(i.e. space), no user will be allowed to do this
         operation.
         For specifying a list of users and groups the format to use is
         user1,user2 group1,group2
         It is only used if authorization is enabled in Map/Reduce by setting
         the configuration property mapreduce.cluster.acls.enabled to true.
         Irrespective of this ACL configuration, the user who started the
         cluster and cluster administrators configured via
         mapreduce.cluster.administrators can do the above operations on all
         the jobs in all the queues. The job owner can do all the above
         operations on his/her job irrespective of this ACL configuration. -->
    <acl-administer-jobs> </acl-administer-jobs>
  </queue>
  <!-- Here is a sample of a hierarchical queue configuration
       where q2 is a child of q1. In this example, q2 is a leaf level
       queue as it has no queues configured within it. Currently, ACLs
       and state are only supported for the leaf level queues.
       Note also the usage of properties for the queue q2.
  <queue>
    <name>q1</name>
    <queue>
      <name>q2</name>
      <properties>
        <property key="capacity" value="20"/>
        <property key="user-limit" value="30"/>
      </properties>
    </queue>
  </queue>
 -->
</queues>
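
The ACL comments in the template above describe a compact two-field format: a comma-separated user list and a comma-separated group list, separated by a single space, with '*' meaning everyone and a lone space meaning nobody. A minimal sketch of how such a value splits (parse_queue_acl is a hypothetical helper for illustration, not part of Hadoop):

    # Illustrates the documented "user1,user2 group1,group2" queue ACL format;
    # '*' = everyone, a single space = nobody.
    def parse_queue_acl(acl):
        if acl.strip() == "*":
            return "everyone", "everyone"
        parts = acl.split(" ", 1)
        users = [u for u in parts[0].split(",") if u]
        groups = [g for g in parts[1].split(",") if g] if len(parts) > 1 else []
        return users, groups

    print(parse_queue_acl("alice,bob analysts,ops"))  # (['alice', 'bob'], ['analysts', 'ops'])
    print(parse_queue_acl(" "))                       # ([], []) i.e. nobody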

mapred-site.xml

<!--Tue Jul 21 17:52:16 2015-->
<configuration>
  <property><name>io.sort.mb</name><value>64</value></property>
  <property><name>mapred.child.java.opts</name><value>-Xmx200m</value></property>
  <property><name>mapred.job.map.memory.mb</name><value>250</value></property>
  <property><name>mapred.job.reduce.memory.mb</name><value>250</value></property>
  <property><name>mapreduce.admin.map.child.java.opts</name><value>-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value></property>
  <property><name>mapreduce.admin.reduce.child.java.opts</name><value>-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value></property>
  <property><name>mapreduce.admin.user.env</name><value>LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64</value></property>
  <property><name>mapreduce.am.max-attempts</name><value>2</value></property>
  <property><name>mapreduce.application.classpath</name><value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure</value></property>
  <property><name>mapreduce.application.framework.path</name><value>/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework</value></property>
  <property><name>mapreduce.cluster.administrators</name><value> hadoop</value></property>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>mapreduce.job.counters.max</name><value>130</value></property>
  <property><name>mapreduce.job.emit-timeline-data</name><value>false</value></property>
  <property><name>mapreduce.job.reduce.slowstart.completedmaps</name><value>0.05</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>sandbox.hortonworks.com:10020</value></property>
  <property><name>mapreduce.jobhistory.bind-host</name><value>0.0.0.0</value></property>
  <property><name>mapreduce.jobhistory.done-dir</name><value>/mr-history/done</value></property>
  <property><name>mapreduce.jobhistory.intermediate-done-dir</name><value>/mr-history/tmp</value></property>
  <property><name>mapreduce.jobhistory.recovery.enable</name><value>true</value></property>
  <property><name>mapreduce.jobhistory.recovery.store.class</name><value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerLeveldbStateStoreService</value></property>
  <property><name>mapreduce.jobhistory.recovery.store.leveldb.path</name><value>/hadoop/mapreduce/jhs</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>sandbox.hortonworks.com:19888</value></property>
  <property><name>mapreduce.map.java.opts</name><value>-Xmx200m</value></property>
  <property><name>mapreduce.map.log.level</name><value>INFO</value></property>
  <property><name>mapreduce.map.memory.mb</name><value>250</value></property>
  <property><name>mapreduce.map.output.compress</name><value>false</value></property>
  <property><name>mapreduce.map.sort.spill.percent</name><value>0.7</value></property>
  <property><name>mapreduce.map.speculative</name><value>false</value></property>
  <property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value></property>
  <property><name>mapreduce.output.fileoutputformat.compress.type</name><value>BLOCK</value></property>
  <property><name>mapreduce.reduce.input.buffer.percent</name><value>0.0</value></property>
  <property><name>mapreduce.reduce.java.opts</name><value>-Xmx200m</value></property>
  <property><name>mapreduce.reduce.log.level</name><value>INFO</value></property>
  <property><name>mapreduce.reduce.memory.mb</name><value>250</value></property>
  <property><name>mapreduce.reduce.shuffle.fetch.retry.enabled</name><value>1</value></property>
  <property><name>mapreduce.reduce.shuffle.fetch.retry.interval-ms</name><value>1000</value></property>
  <property><name>mapreduce.reduce.shuffle.fetch.retry.timeout-ms</name><value>30000</value></property>
  <property><name>mapreduce.reduce.shuffle.input.buffer.percent</name><value>0.7</value></property>
  <property><name>mapreduce.reduce.shuffle.merge.percent</name><value>0.66</value></property>
  <property><name>mapreduce.reduce.shuffle.parallelcopies</name><value>30</value></property>
  <property><name>mapreduce.reduce.speculative</name><value>false</value></property>
  <property><name>mapreduce.shuffle.port</name><value>13562</value></property>
  <property><name>mapreduce.task.io.sort.factor</name><value>100</value></property>
  <property><name>mapreduce.task.io.sort.mb</name><value>64</value></property>
  <property><name>mapreduce.task.timeout</name><value>300000</value></property>
  <property><name>yarn.app.mapreduce.am.admin-command-opts</name><value>-Dhdp.version=${hdp.version}</value></property>
  <property><name>yarn.app.mapreduce.am.command-opts</name><value>-Xmx200m</value></property>
  <property><name>yarn.app.mapreduce.am.log.level</name><value>INFO</value></property>
  <property><name>yarn.app.mapreduce.am.resource.mb</name><value>250</value></property>
  <property><name>yarn.app.mapreduce.am.staging-dir</name><value>/user</value></property>
</configuration>
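
Since these *-site.xml dumps are ordinary Hadoop configuration files, the effective name/value pairs can be listed with a few lines of standard-library Python. A minimal sketch, assuming the stock sandbox path:

    import xml.etree.ElementTree as ET

    tree = ET.parse("/etc/hadoop/conf/mapred-site.xml")
    for prop in tree.getroot().findall("property"):
        # every <property> element carries <name> and <value> children
        print("%s = %s" % (prop.findtext("name"), prop.findtext("value")))
    # e.g. mapreduce.map.memory.mb = 250, mapreduce.task.io.sort.mb = 64, ...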

mapred-site.xml.template

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
</configuration>

ranger-hdfs-audit.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property><name>xasecure.audit.is.enabled</name><value>true</value></property>
  <!-- DB audit provider configuration -->
  <property><name>xasecure.audit.db.is.enabled</name><value>true</value></property>
  <property><name>xasecure.audit.db.is.async</name><value>true</value></property>
  <property><name>xasecure.audit.db.async.max.queue.size</name><value>10240</value></property>
  <property><name>xasecure.audit.db.async.max.flush.interval.ms</name><value>30000</value></property>
  <property><name>xasecure.audit.db.batch.size</name><value>100</value></property>
  <!-- Properties whose names begin with "xasecure.audit.jpa." are used to configure JPA -->
  <property><name>xasecure.audit.jpa.javax.persistence.jdbc.url</name><value>jdbc:mysql://localhost/ranger_audit</value></property>
  <property><name>xasecure.audit.jpa.javax.persistence.jdbc.user</name><value>rangerlogger</value></property>
  <property><name>xasecure.audit.jpa.javax.persistence.jdbc.password</name><value>crypted</value></property>
  <property><name>xasecure.audit.jpa.javax.persistence.jdbc.driver</name><value>com.mysql.jdbc.Driver</value></property>
  <property><name>xasecure.audit.credential.provider.file</name><value>jceks://file/etc/ranger/sandbox_hdfs/cred.jceks</value></property>
  <!-- HDFS audit provider configuration -->
  <property><name>xasecure.audit.hdfs.is.enabled</name><value>true</value></property>
  <property><name>xasecure.audit.hdfs.is.async</name><value>true</value></property>
  <property><name>xasecure.audit.hdfs.async.max.queue.size</name><value>1048576</value></property>
  <property><name>xasecure.audit.hdfs.async.max.flush.interval.ms</name><value>30000</value></property>
  <property><name>xasecure.audit.hdfs.config.encoding</name><value/></property>
  <property><name>xasecure.audit.hdfs.config.destination.directory</name><value>hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%</value></property>
  <property><name>xasecure.audit.hdfs.config.destination.file</name><value>%hostname%-audit.log</value></property>
  <property><name>xasecure.audit.hdfs.config.destination.flush.interval.seconds</name><value>900</value></property>
  <property><name>xasecure.audit.hdfs.config.destination.rollover.interval.seconds</name><value>86400</value></property>
  <property><name>xasecure.audit.hdfs.config.destination.open.retry.interval.seconds</name><value>60</value></property>
  <property><name>xasecure.audit.hdfs.config.local.buffer.directory</name><value>/var/log/hadoop/%app-type%/audit</value></property>
  <property><name>xasecure.audit.hdfs.config.local.buffer.file</name><value>%time:yyyyMMdd-HHmm.ss%.log</value></property>
  <property><name>xasecure.audit.hdfs.config.local.buffer.file.buffer.size.bytes</name><value>8192</value></property>
  <property><name>xasecure.audit.hdfs.config.local.buffer.flush.interval.seconds</name><value>60</value></property>
  <property><name>xasecure.audit.hdfs.config.local.buffer.rollover.interval.seconds</name><value>600</value></property>
  <property><name>xasecure.audit.hdfs.config.local.archive.directory</name><value>_/var/log/hadoop/%app-type%/audit/archive</value></property>
  <property><name>xasecure.audit.hdfs.config.local.archive.max.file.count</name><value>10</value></property>
  <!-- Log4j audit provider configuration -->
  <property><name>xasecure.audit.log4j.is.enabled</name><value>false</value></property>
  <property><name>xasecure.audit.log4j.is.async</name><value>false</value></property>
  <property><name>xasecure.audit.log4j.async.max.queue.size</name><value>10240</value></property>
  <property><name>xasecure.audit.log4j.async.max.flush.interval.ms</name><value>30000</value></property>
  <!-- Kafka audit provider configuration -->
  <property><name>xasecure.audit.kafka.is.enabled</name><value>false</value></property>
  <property><name>xasecure.audit.kafka.async.max.queue.size</name><value>1</value></property>
  <property><name>xasecure.audit.kafka.async.max.flush.interval.ms</name><value>1000</value></property>
  <property><name>xasecure.audit.kafka.broker_list</name><value>localhost:9092</value></property>
  <property><name>xasecure.audit.kafka.topic_name</name><value>ranger_audits</value></property>
  <!-- Ranger audit provider configuration -->
  <property><name>xasecure.audit.solr.is.enabled</name><value>false</value></property>
  <property><name>xasecure.audit.solr.async.max.queue.size</name><value>1</value></property>
  <property><name>xasecure.audit.solr.async.max.flush.interval.ms</name><value>1000</value></property>
  <property><name>xasecure.audit.solr.solr_url</name><value>http://localhost:6083/solr/ranger_audits</value></property>
  <property><name>xasecure.audit.destination.solr</name><value>false</value></property>
  <property><name>xasecure.audit.destination.solr.urls</name><value>NONE</value></property>
  <property><name>xasecure.audit.destination.solr.user</name><value>NONE</value></property>
  <property><name>xasecure.audit.destination.solr.password</name><value>NONE</value></property>
  <property><name>xasecure.audit.destination.solr.zookeepers</name><value>NONE</value></property>
  <property><name>xasecure.audit.destination.solr.batch.filespool.dir</name><value>/var/log/hadoop/hdfs/audit/solr/spool</value></property>
  <property><name>xasecure.audit.destination.hdfs</name><value>false</value></property>
  <property><name>xasecure.audit.destination.hdfs.batch.filespool.dir</name><value>/var/log/hadoop/hdfs/audit/hdfs/spool</value></property>
  <property><name>xasecure.audit.destination.hdfs.dir</name><value>hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit</value></property>
</configuration>
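
The HDFS audit destination above is built from Ranger's %app-type%, %hostname% and %time:...% escapes. A rough sketch of how such a path expands (my reading of the tokens, not Ranger's actual implementation):

    import datetime
    import socket

    def expand(path, app_type="hdfs"):
        # %time:yyyyMMdd% carries a Java SimpleDateFormat-style pattern;
        # yyyyMMdd corresponds to strftime's %Y%m%d
        path = path.replace("%app-type%", app_type)
        path = path.replace("%hostname%", socket.gethostname())
        return path.replace("%time:yyyyMMdd%", datetime.datetime.now().strftime("%Y%m%d"))

    print(expand("hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%"))
    # e.g. hdfs://sandbox.hortonworks.com:8020/ranger/audit/hdfs/20150721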

ranger-hdfs-security.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <property>
    <name>ranger.plugin.hdfs.service.name</name>
    <value>sandbox_hdfs</value>
    <description>Name of the Ranger service containing policies for this HDFS instance</description>
  </property>
  <property>
    <name>ranger.plugin.hdfs.policy.source.impl</name>
    <value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
    <description>Class to retrieve policies from the source</description>
  </property>
  <property>
    <name>ranger.plugin.hdfs.policy.rest.url</name>
    <value>http://sandbox.hortonworks.com:6080</value>
    <description>URL to Ranger Admin</description>
  </property>
  <property>
    <name>ranger.plugin.hdfs.policy.rest.ssl.config.file</name>
    <value>/etc/hadoop/conf/ranger-policymgr-ssl.xml</value>
    <description>Path to the file containing SSL details to contact Ranger Admin</description>
  </property>
  <property>
    <name>ranger.plugin.hdfs.policy.pollIntervalMs</name>
    <value>5000</value>
    <description>How often to poll for changes in policies, in milliseconds</description>
  </property>
  <property>
    <name>ranger.plugin.hdfs.policy.cache.dir</name>
    <value>/etc/ranger/sandbox_hdfs/policycache</value>
    <description>Directory where Ranger policies are cached after successful retrieval from the source</description>
  </property>
  <!-- The following fields are used to customize the audit logging feature -->
  <!--
  <property>
    <name>xasecure.auditlog.xasecureAcl.name</name>
    <value>ranger-acl</value>
    <description>The module name listed in the auditlog when the permission check is done by RangerACL</description>
  </property>
  <property>
    <name>xasecure.auditlog.hadoopAcl.name</name>
    <value>hadoop-acl</value>
    <description>The module name listed in the auditlog when the permission check is done by HadoopACL</description>
  </property>
  <property>
    <name>xasecure.auditlog.hdfs.excludeusers</name>
    <value>hbase,hive</value>
    <description>List of comma separated users for whom the audit log is not written</description>
  </property>
  -->
  <property>
    <name>xasecure.add-hadoop-authorization</name>
    <value>true</value>
    <description>Enable/Disable the default hadoop authorization (based on
      rwxrwxrwx permission on the resource) if Ranger Authorization fails.</description>
  </property>
</configuration>

ranger-policymgr-ssl.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- The following properties are used for 2-way SSL client server validation -->
  <property>
    <name>xasecure.policymgr.clientssl.keystore</name>
    <value>/etc/hadoop/conf/ranger-plugin-keystore.jks</value>
    <description>Java keystore file</description>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.keystore.password</name>
    <value>myKeyFilePassword</value>
    <description>Password for keystore</description>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.truststore</name>
    <value>/etc/hadoop/conf/ranger-plugin-truststore.jks</value>
    <description>Java truststore file</description>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.truststore.password</name>
    <value>changeit</value>
    <description>Java truststore password</description>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.keystore.credential.file</name>
    <value>jceks://file/etc/ranger/sandbox_hdfs/cred.jceks</value>
    <description>Java keystore credential file</description>
  </property>
  <property>
    <name>xasecure.policymgr.clientssl.truststore.credential.file</name>
    <value>jceks://file/etc/ranger/sandbox_hdfs/cred.jceks</value>
    <description>Java truststore credential file</description>
  </property>
</configuration>

ranger-security.xml

<ranger>
<enabled>Tue Jul 21 20:16:21 UTC 2015</enabled>
</ranger>

secure


slaves

sandbox.hortonworks.com

ssl-client.xml

<!--Tue Jul 21 17:52:16 2015-->
<configuration>
  <property><name>ssl.client.keystore.location</name><value>/etc/security/clientKeys/keystore.jks</value></property>
  <property><name>ssl.client.keystore.password</name><value>bigdata</value></property>
  <property><name>ssl.client.keystore.type</name><value>jks</value></property>
  <property><name>ssl.client.truststore.location</name><value>/etc/security/clientKeys/all.jks</value></property>
  <property><name>ssl.client.truststore.password</name><value>bigdata</value></property>
  <property><name>ssl.client.truststore.reload.interval</name><value>10000</value></property>
  <property><name>ssl.client.truststore.type</name><value>jks</value></property>
</configuration>

ssl-client.xml.example

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
<property>
  <name>ssl.client.truststore.location</name>
  <value></value>
  <description>Truststore to be used by clients like distcp. Must be specified.</description>
</property>
<property>
  <name>ssl.client.truststore.password</name>
  <value></value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
  <name>ssl.client.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
</property>
<property>
  <name>ssl.client.keystore.location</name>
  <value></value>
  <description>Keystore to be used by clients like distcp. Must be specified.</description>
</property>
<property>
  <name>ssl.client.keystore.password</name>
  <value></value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.keystore.keypassword</name>
  <value></value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.client.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
</configuration>

ssl-server.xml

<!--Tue Jul 21 17:52:16 2015-->
<configuration>
  <property><name>ssl.server.keystore.keypassword</name><value>bigdata</value></property>
  <property><name>ssl.server.keystore.location</name><value>/etc/security/serverKeys/keystore.jks</value></property>
  <property><name>ssl.server.keystore.password</name><value>bigdata</value></property>
  <property><name>ssl.server.keystore.type</name><value>jks</value></property>
  <property><name>ssl.server.truststore.location</name><value>/etc/security/serverKeys/all.jks</value></property>
  <property><name>ssl.server.truststore.password</name><value>bigdata</value></property>
  <property><name>ssl.server.truststore.reload.interval</name><value>10000</value></property>
  <property><name>ssl.server.truststore.type</name><value>jks</value></property>
</configuration>

ssl-server.xml.example

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
<property>
  <name>ssl.server.truststore.location</name>
  <value></value>
  <description>Truststore to be used by NN and DN. Must be specified.</description>
</property>
<property>
  <name>ssl.server.truststore.password</name>
  <value></value>
  <description>Optional. Default value is "".</description>
</property>
<property>
  <name>ssl.server.truststore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
<property>
  <name>ssl.server.truststore.reload.interval</name>
  <value>10000</value>
  <description>Truststore reload check interval, in milliseconds. Default value is 10000 (10 seconds).</description>
</property>
<property>
  <name>ssl.server.keystore.location</name>
  <value></value>
  <description>Keystore to be used by NN and DN. Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.password</name>
  <value></value>
  <description>Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.keypassword</name>
  <value></value>
  <description>Must be specified.</description>
</property>
<property>
  <name>ssl.server.keystore.type</name>
  <value>jks</value>
  <description>Optional. The keystore file format, default value is "jks".</description>
</property>
</configuration>

taskcontroller.cfg

mapred.local.dir=/tmp/hadoop-mapred/mapred/local
mapreduce.tasktracker.group=hadoop
hadoop.log.dir=/var/log/hadoop/mapred

task-log4j.properties

hadoop.root.logger=INFO,console
hadoop.log.dir=.
hadoop.log.file=hadoop.log
hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
log4j.rootLogger=${hadoop.root.logger}, EventCounter
log4j.threshhold=ALL
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
hadoop.tasklog.taskid=null
hadoop.tasklog.iscleanup=false
hadoop.tasklog.noKeepSplits=4
hadoop.tasklog.totalLogFileSize=100
hadoop.tasklog.purgeLogSplits=true
hadoop.tasklog.logsRetainHours=12
log4j.appender.TLA=org.apache.hadoop.mapred.TaskLogAppender
log4j.appender.TLA.taskId=${hadoop.tasklog.taskid}
log4j.appender.TLA.isCleanup=${hadoop.tasklog.iscleanup}
log4j.appender.TLA.totalLogFileSize=${hadoop.tasklog.totalLogFileSize}
log4j.appender.TLA.layout=org.apache.log4j.PatternLayout
log4j.appender.TLA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
hadoop.metrics.log.level=INFO
log4j.logger.org.apache.hadoop.metrics2=${hadoop.metrics.log.level}
log4j.logger.org.jets3t.service.impl.rest.httpclient.RestS3Service=ERROR
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.appender.EventCounter=org.apache.hadoop.log.metrics.EventCounter
 
log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN

topology_mappings.data

[network_topology]
sandbox.hortonworks.com=/default-rack
10.0.2.15=/default-rack

topology_script.py

'''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
import sys, os
from string import join
import ConfigParser
DEFAULT_RACK = "/default-rack"
DATA_FILE_NAME =  os.path.dirname(os.path.abspath(__file__)) + "/topology_mappings.data"
SECTION_NAME = "network_topology"
class TopologyScript():
  def load_rack_map(self):
    try:
      #RACK_MAP contains both host name vs rack and ip vs rack mappings
      mappings = ConfigParser.ConfigParser()
      mappings.read(DATA_FILE_NAME)
      return dict(mappings.items(SECTION_NAME))
    except ConfigParser.NoSectionError:
      return {}
  def get_racks(self, rack_map, args):
    if len(args) == 1:
      return DEFAULT_RACK
    else:
      return join([self.lookup_by_hostname_or_ip(input_argument, rack_map) for input_argument in args[1:]],)
  def lookup_by_hostname_or_ip(self, hostname_or_ip, rack_map):
    #try looking up by hostname
    rack = rack_map.get(hostname_or_ip)
    if rack is not None:
      return rack
    #try looking up by ip
    rack = rack_map.get(self.extract_ip(hostname_or_ip))
    #try by localhost since hadoop could be passing in 127.0.0.1 which might not be mapped
    return rack if rack is not None else rack_map.get("localhost.localdomain", DEFAULT_RACK)
  #strips out port and slashes in case hadoop passes in something like 127.0.0.1/127.0.0.1:50010
  def extract_ip(self, container_string):
    return container_string.split("/")[0].split(":")[0]
  def execute(self, args):
    rack_map = self.load_rack_map()
    rack = self.get_racks(rack_map, args)
    print rack
if __name__ == "__main__":
  TopologyScript().execute(sys.argv)
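
Hadoop invokes this script with one or more host names or IPs as arguments and reads the rack assignments from stdout. A quick sanity check (Python 2, like the script itself; run from /etc/hadoop/conf so topology_mappings.data is found):

    from topology_script import TopologyScript

    script = TopologyScript()
    rack_map = script.load_rack_map()
    # args[0] is the program name, exactly as in sys.argv
    print(script.get_racks(rack_map, ["topology_script.py", "10.0.2.15"]))
    # with the mappings above: /default-rack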

yarn-env.cmd

@echo off
if not defined HADOOP_YARN_USER (
  set HADOOP_YARN_USER=%yarn%
)
if not defined YARN_CONF_DIR (
  set YARN_CONF_DIR=%HADOOP_YARN_HOME%\conf
)
if defined YARN_HEAPSIZE (
  @rem echo run with Java heapsize %YARN_HEAPSIZE%
  set JAVA_HEAP_MAX=-Xmx%YARN_HEAPSIZE%m
)
if not defined YARN_LOG_DIR (
  set YARN_LOG_DIR=%HADOOP_YARN_HOME%\logs
)
if not defined YARN_LOGFILE (
  set YARN_LOGFILE=yarn.log
)
if not defined YARN_POLICYFILE (
  set YARN_POLICYFILE=hadoop-policy.xml
)
if not defined YARN_ROOT_LOGGER (
  set YARN_ROOT_LOGGER=%HADOOP_LOGLEVEL%,console
)
set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.dir=%YARN_LOG_DIR%
set YARN_OPTS=%YARN_OPTS% -Dyarn.log.dir=%YARN_LOG_DIR%
set YARN_OPTS=%YARN_OPTS% -Dhadoop.log.file=%YARN_LOGFILE%
set YARN_OPTS=%YARN_OPTS% -Dyarn.log.file=%YARN_LOGFILE%
set YARN_OPTS=%YARN_OPTS% -Dyarn.home.dir=%HADOOP_YARN_HOME%
set YARN_OPTS=%YARN_OPTS% -Dyarn.id.str=%YARN_IDENT_STRING%
set YARN_OPTS=%YARN_OPTS% -Dhadoop.home.dir=%HADOOP_YARN_HOME%
set YARN_OPTS=%YARN_OPTS% -Dhadoop.root.logger=%YARN_ROOT_LOGGER%
set YARN_OPTS=%YARN_OPTS% -Dyarn.root.logger=%YARN_ROOT_LOGGER%
if defined JAVA_LIBRARY_PATH (
  set YARN_OPTS=%YARN_OPTS% -Djava.library.path=%JAVA_LIBRARY_PATH%
)
set YARN_OPTS=%YARN_OPTS% -Dyarn.policy.file=%YARN_POLICYFILE%

yarn-env.sh

      export HADOOP_YARN_HOME=/usr/hdp/current/hadoop-yarn-nodemanager
      export YARN_LOG_DIR=/var/log/hadoop-yarn/$USER
      export YARN_PID_DIR=/var/run/hadoop-yarn/$USER
      export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec
      export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
      # We need to add the EWMA appender for the yarn daemons only;
      # however, YARN_ROOT_LOGGER is shared by the yarn client and the
      # daemons. This restricts the EWMA appender to the daemons only.
      INVOKER="${0##*/}"
      if [ "$INVOKER" == "yarn-daemon.sh" ]; then
        export YARN_ROOT_LOGGER=${YARN_ROOT_LOGGER:-INFO,EWMA,RFA}
      fi
      # User for YARN daemons
      export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
      # resolve links - $0 may be a softlink
      export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"
      # some Java parameters
      # export JAVA_HOME=/home/y/libexec/jdk1.6.0/
      if [ "$JAVA_HOME" != "" ]; then
      #echo "run java in $JAVA_HOME"
      JAVA_HOME=$JAVA_HOME
      fi
      if [ "$JAVA_HOME" = "" ]; then
      echo "Error: JAVA_HOME is not set."
      exit 1
      fi
      JAVA=$JAVA_HOME/bin/java
      JAVA_HEAP_MAX=-Xmx1000m
      # For setting YARN specific HEAP sizes please use this
      # Parameter and set appropriately
      YARN_HEAPSIZE=250
      # check envvars which might override default args
      if [ "$YARN_HEAPSIZE" != "" ]; then
      JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
      fi
      # Resource Manager specific parameters
      # Specify the max Heapsize for the ResourceManager using a numerical value
      # in the scale of MB. For example, to specify a JVM option of -Xmx1000m, set
      # the value to 1000.
      # This value will be overridden by an Xmx setting specified in either YARN_OPTS
      # and/or YARN_RESOURCEMANAGER_OPTS.
      # If not specified, the default value will be picked from either YARN_HEAPMAX
      # or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
      export YARN_RESOURCEMANAGER_HEAPSIZE=250
      # Specify the JVM options to be used when starting the ResourceManager.
      # These options will be appended to the options specified as YARN_OPTS
      # and therefore may override any similar flags set in YARN_OPTS
      #export YARN_RESOURCEMANAGER_OPTS=
      # Node Manager specific parameters
      # Specify the max Heapsize for the NodeManager using a numerical value
      # in the scale of MB. For example, to specify a JVM option of -Xmx1000m, set
      # the value to 1000.
      # This value will be overridden by an Xmx setting specified in either YARN_OPTS
      # and/or YARN_NODEMANAGER_OPTS.
      # If not specified, the default value will be picked from either YARN_HEAPMAX
      # or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
      export YARN_NODEMANAGER_HEAPSIZE=250
      # Specify the max Heapsize for the HistoryManager using a numerical value
      # in the scale of MB. For example, to specify a JVM option of -Xmx1024m, set
      # the value to 1024.
      # This value will be overridden by an Xmx setting specified in either YARN_OPTS
      # and/or YARN_HISTORYSERVER_OPTS.
      # If not specified, the default value will be picked from either YARN_HEAPMAX
      # or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
      export YARN_HISTORYSERVER_HEAPSIZE=250
      # Specify the JVM options to be used when starting the NodeManager.
      # These options will be appended to the options specified as YARN_OPTS
      # and therefore may override any similar flags set in YARN_OPTS
      #export YARN_NODEMANAGER_OPTS=
      # so that filenames w/ spaces are handled correctly in loops below
      IFS=
      # default log directory and file
      if [ "$YARN_LOG_DIR" = "" ]; then
      YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
      fi
      if [ "$YARN_LOGFILE" = "" ]; then
      YARN_LOGFILE='yarn.log'
      fi
      # default policy file for service-level authorization
      if [ "$YARN_POLICYFILE" = "" ]; then
      YARN_POLICYFILE="hadoop-policy.xml"
      fi
      # restore ordinary behaviour
      unset IFS
      YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
      YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
      YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
      YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
      YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
      YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
      YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
      YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
      if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
      YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
      fi
      YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"

yarn.exclude


yarn-site.xml

<!--Tue Jul 21 17:52:16 2015-->
<configuration>
  <property><name>hadoop.registry.rm.enabled</name><value>false</value></property>
  <property><name>hadoop.registry.zk.quorum</name><value>sandbox.hortonworks.com:2181</value></property>
  <property><name>yarn.acl.enable</name><value>false</value></property>
  <property><name>yarn.admin.acl</name><value>yarn</value></property>
  <property><name>yarn.application.classpath</name><value>$HADOOP_CONF_DIR,/usr/hdp/current/hadoop-client/*,/usr/hdp/current/hadoop-client/lib/*,/usr/hdp/current/hadoop-hdfs-client/*,/usr/hdp/current/hadoop-hdfs-client/lib/*,/usr/hdp/current/hadoop-yarn-client/*,/usr/hdp/current/hadoop-yarn-client/lib/*</value></property>
  <property><name>yarn.client.nodemanager-connect.max-wait-ms</name><value>60000</value></property>
  <property><name>yarn.client.nodemanager-connect.retry-interval-ms</name><value>10000</value></property>
  <property><name>yarn.http.policy</name><value>HTTP_ONLY</value></property>
  <property><name>yarn.log-aggregation-enable</name><value>true</value></property>
  <property><name>yarn.log-aggregation.retain-seconds</name><value>2592000</value></property>
  <property><name>yarn.log.server.url</name><value>http://sandbox.hortonworks.com:19888/jobhistory/logs</value></property>
  <property><name>yarn.node-labels.enabled</name><value>false</value></property>
  <property><name>yarn.node-labels.fs-store.retry-policy-spec</name><value>2000, 500</value></property>
  <property><name>yarn.node-labels.fs-store.root-dir</name><value>/system/yarn/node-labels</value></property>
  <property><name>yarn.nodemanager.address</name><value>0.0.0.0:45454</value></property>
  <property><name>yarn.nodemanager.admin-env</name><value>MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX</value></property>
  <property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>
  <property><name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>
  <property><name>yarn.nodemanager.bind-host</name><value>0.0.0.0</value></property>
  <property><name>yarn.nodemanager.container-executor.class</name><value>org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor</value></property>
  <property><name>yarn.nodemanager.container-monitor.interval-ms</name><value>3000</value></property>
  <property><name>yarn.nodemanager.delete.debug-delay-sec</name><value>0</value></property>
  <property><name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name><value>90</value></property>
  <property><name>yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb</name><value>1000</value></property>
  <property><name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name><value>0.25</value></property>
  <property><name>yarn.nodemanager.health-checker.interval-ms</name><value>135000</value></property>
  <property><name>yarn.nodemanager.health-checker.script.timeout-ms</name><value>60000</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</name><value>hadoop-yarn</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.cgroups.mount</name><value>false</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage</name><value>false</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.group</name><value>hadoop</value></property>
  <property><name>yarn.nodemanager.linux-container-executor.resources-handler.class</name><value>org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler</value></property>
  <property><name>yarn.nodemanager.local-dirs</name><value>/hadoop/yarn/local</value></property>
  <property><name>yarn.nodemanager.log-aggregation.compression-type</name><value>gz</value></property>
  <property><name>yarn.nodemanager.log-aggregation.debug-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.log-aggregation.num-log-files-per-app</name><value>30</value></property>
  <property><name>yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds</name><value>-1</value></property>
  <property><name>yarn.nodemanager.log-dirs</name><value>/hadoop/yarn/log</value></property>
  <property><name>yarn.nodemanager.log.retain-second</name><value>604800</value></property>
  <property><name>yarn.nodemanager.pmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.recovery.dir</name><value>/var/log/hadoop-yarn/nodemanager/recovery-state</value></property>
  <property><name>yarn.nodemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir</name><value>/app-logs</value></property>
  <property><name>yarn.nodemanager.remote-app-log-dir-suffix</name><value>logs</value></property>
  <property><name>yarn.nodemanager.resource.cpu-vcores</name><value>8</value></property>
  <property><name>yarn.nodemanager.resource.memory-mb</name><value>2250</value></property>
  <property><name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name><value>80</value></property>
  <property><name>yarn.nodemanager.vmem-check-enabled</name><value>false</value></property>
  <property><name>yarn.nodemanager.vmem-pmem-ratio</name><value>10</value></property>
  <property><name>yarn.resourcemanager.address</name><value>sandbox.hortonworks.com:8050</value></property>
  <property><name>yarn.resourcemanager.admin.address</name><value>sandbox.hortonworks.com:8141</value></property>
  <property><name>yarn.resourcemanager.am.max-attempts</name><value>2</value></property>
  <property><name>yarn.resourcemanager.bind-host</name><value>0.0.0.0</value></property>
  <property><name>yarn.resourcemanager.connect.max-wait.ms</name><value>900000</value></property>
  <property><name>yarn.resourcemanager.connect.retry-interval.ms</name><value>30000</value></property>
  <property><name>yarn.resourcemanager.fs.state-store.retry-policy-spec</name><value>2000, 500</value></property>
  <property><name>yarn.resourcemanager.fs.state-store.uri</name><value> </value></property>
  <property><name>yarn.resourcemanager.ha.enabled</name><value>false</value></property>
  <property><name>yarn.resourcemanager.hostname</name><value>sandbox.hortonworks.com</value></property>
  <property><name>yarn.resourcemanager.nodes.exclude-path</name><value>/etc/hadoop/conf/yarn.exclude</value></property>
  <property><name>yarn.resourcemanager.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.resource-tracker.address</name><value>sandbox.hortonworks.com:8025</value></property>
  <property><name>yarn.resourcemanager.scheduler.address</name><value>sandbox.hortonworks.com:8030</value></property>
  <property><name>yarn.resourcemanager.scheduler.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value></property>
  <property><name>yarn.resourcemanager.scheduler.monitor.enable</name><value>false</value></property>
  <property><name>yarn.resourcemanager.state-store.max-completed-applications</name><value>${yarn.resourcemanager.max-completed-applications}</value></property>
  <property><name>yarn.resourcemanager.store.class</name><value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value></property>
  <property><name>yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size</name><value>10</value></property>
  <property><name>yarn.resourcemanager.system-metrics-publisher.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.webapp.address</name><value>sandbox.hortonworks.com:8088</value></property>
  <property><name>yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled</name><value>false</value></property>
  <property><name>yarn.resourcemanager.webapp.https.address</name><value>sandbox.hortonworks.com:8090</value></property>
  <property><name>yarn.resourcemanager.webapp.proxyuser.hcat.groups</name><value>*</value></property>
  <property><name>yarn.resourcemanager.webapp.proxyuser.hcat.hosts</name><value>*</value></property>
  <property><name>yarn.resourcemanager.webapp.proxyuser.oozie.groups</name><value>*</value></property>
  <property><name>yarn.resourcemanager.webapp.proxyuser.oozie.hosts</name><value>*</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.enabled</name><value>true</value></property>
  <property><name>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</name><value>10000</value></property>
  <property><name>yarn.resourcemanager.zk-acl</name><value>world:anyone:rwcda</value></property>
  <property><name>yarn.resourcemanager.zk-address</name><value>sandbox.hortonworks.com:2181</value></property>
  <property><name>yarn.resourcemanager.zk-num-retries</name><value>1000</value></property>
  <property><name>yarn.resourcemanager.zk-retry-interval-ms</name><value>1000</value></property>
  <property><name>yarn.resourcemanager.zk-state-store.parent-path</name><value>/rmstore</value></property>
  <property><name>yarn.resourcemanager.zk-timeout-ms</name><value>10000</value></property>
  <property><name>yarn.scheduler.maximum-allocation-mb</name><value>2250</value></property>
  <property><name>yarn.scheduler.maximum-allocation-vcores</name><value>8</value></property>
  <property><name>yarn.scheduler.minimum-allocation-mb</name><value>250</value></property>
  <property><name>yarn.scheduler.minimum-allocation-vcores</name><value>1</value></property>
  <property><name>yarn.timeline-service.address</name><value>sandbox.hortonworks.com:10200</value></property>
  <property><name>yarn.timeline-service.bind-host</name><value>0.0.0.0</value></property>
  <property><name>yarn.timeline-service.client.max-retries</name><value>30</value></property>
  <property><name>yarn.timeline-service.client.retry-interval-ms</name><value>1000</value></property>
  <property><name>yarn.timeline-service.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.generic-application-history.store-class</name><value>org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore</value></property>
  <property><name>yarn.timeline-service.http-authentication.simple.anonymous.allowed</name><value>true</value></property>
  <property><name>yarn.timeline-service.http-authentication.type</name><value>simple</value></property>
  <property><name>yarn.timeline-service.leveldb-state-store.path</name><value>/hadoop/yarn/timeline</value></property>
  <property><name>yarn.timeline-service.leveldb-timeline-store.path</name><value>/hadoop/yarn/timeline</value></property>
  <property><name>yarn.timeline-service.leveldb-timeline-store.read-cache-size</name><value>104857600</value></property>
  <property><name>yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size</name><value>10000</value></property>
  <property><name>yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size</name><value>10000</value></property>
  <property><name>yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms</name><value>300000</value></property>
  <property><name>yarn.timeline-service.recovery.enabled</name><value>true</value></property>
  <property><name>yarn.timeline-service.state-store-class</name><value>org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore</value></property>
  <property><name>yarn.timeline-service.store-class</name><value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value></property>
  <property><name>yarn.timeline-service.ttl-enable</name><value>true</value></property>
  <property><name>yarn.timeline-service.ttl-ms</name><value>2678400000</value></property>
  <property><name>yarn.timeline-service.webapp.address</name><value>sandbox.hortonworks.com:8188</value></property>
  <property><name>yarn.timeline-service.webapp.https.address</name><value>sandbox.hortonworks.com:8190</value></property>
</configuration>
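
Back-of-envelope container math from the values above: the NodeManager offers 2250 MB and the CapacityScheduler hands out memory in multiples of the 250 MB minimum allocation, so this sandbox can run at most nine minimum-size containers at once (a single container may grow up to the full 2250 MB maximum):

    nm_memory_mb = 2250   # yarn.nodemanager.resource.memory-mb
    min_alloc_mb = 250    # yarn.scheduler.minimum-allocation-mb
    max_alloc_mb = 2250   # yarn.scheduler.maximum-allocation-mb
    print(nm_memory_mb // min_alloc_mb)   # 9 concurrent minimum-size containers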

hbase

/etc/hbase/conf:
-rw-r--r-- 1 hbase hadoop 3732 2015-07-26 18:46 core-site.xml
-rw-r--r-- 1 hbase root   2362 2015-07-26 18:46 hadoop-metrics2-hbase.properties
-rw-r--r-- 1 root  root   4537 2015-07-14 13:35 hbase-env.cmd
-rw-r--r-- 1 hbase root   3004 2015-07-26 18:46 hbase-env.sh
-rw-r--r-- 1 hbase hadoop  401 2015-07-26 18:46 hbase-policy.xml
-rw-r--r-- 1 hbase hadoop 6019 2015-07-26 18:46 hbase-site.xml
-rw-r--r-- 1 hbase hadoop 7391 2015-07-26 18:46 hdfs-site.xml
-rw-r--r-- 1 hbase hadoop 4241 2015-07-21 15:50 log4j.properties
-rwxr--r-- 1 hbase hbase  7323 2015-07-21 20:16 ranger-hbase-audit.xml
-rwxr--r-- 1 hbase hbase  2514 2015-07-21 20:16 ranger-hbase-security.xml
-rwxr--r-- 1 hbase hbase  2280 2015-07-21 20:16 ranger-policymgr-ssl.xml
-rw-r--r-- 1 hbase hbase    69 2015-07-21 20:16 ranger-security.xml
-rw-r--r-- 1 hbase root     25 2015-07-21 15:50 regionservers

core-site.xml

-!--Sun Jul 26 18:46:24 2015---
    -configuration-
    
    -property-
      -name-fs.defaultFS-/name-
      -value-hdfs://sandbox.hortonworks.com:8020-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-fs.trash.interval-/name-
      -value-360-/value-
    -/property-
    
    -property-
      -name-ha.failover-controller.active-standby-elector.zk.op.retries-/name-
      -value-120-/value-
    -/property-
    
    -property-
      -name-hadoop.http.authentication.simple.anonymous.allowed-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.falcon.groups-/name-
      -value-users-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.falcon.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hbase.groups-/name-
      -value-users-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hbase.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hcat.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hcat.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hive.groups-/name-
      -value-users-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hive.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hue.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.hue.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.oozie.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.oozie.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.root.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.proxyuser.root.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-hadoop.security.auth_to_local-/name-
      -value-DEFAULT-/value-
    -/property-
    
    -property-
      -name-hadoop.security.authentication-/name-
      -value-simple-/value-
    -/property-
    
    -property-
      -name-hadoop.security.authorization-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hadoop.security.key.provider.path-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-io.compression.codecs-/name-
      -value-org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec-/value-
    -/property-
    
    -property-
      -name-io.file.buffer.size-/name-
      -value-131072-/value-
    -/property-
    
    -property-
      -name-io.serializations-/name-
      -value-org.apache.hadoop.io.serializer.WritableSerialization-/value-
    -/property-
    
    -property-
      -name-ipc.client.connect.max.retries-/name-
      -value-50-/value-
    -/property-
    
    -property-
      -name-ipc.client.connection.maxidletime-/name-
      -value-30000-/value-
    -/property-
    
    -property-
      -name-ipc.client.idlethreshold-/name-
      -value-8000-/value-
    -/property-
    
    -property-
      -name-ipc.server.tcpnodelay-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-mapreduce.jobtracker.webinterface.trusted-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-net.topology.script.file.name-/name-
      -value-/etc/hadoop/conf/topology_script.py-/value-
    -/property-
    
  -/configuration-
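
The hadoop.proxyuser.* pairs above control impersonation: each whitelisted service account (oozie, hive, hue, ...) may act on behalf of end users from the listed hosts and groups. A minimal sketch of that path, assuming the Hadoop client libraries are on the classpath; the proxied username is illustrative.

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class ProxyUserSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");

        // The logged-in service account must match one of the
        // hadoop.proxyuser.<name>.hosts/.groups pairs above.
        UserGroupInformation real = UserGroupInformation.getLoginUser();
        UserGroupInformation proxy = UserGroupInformation.createProxyUser("someuser", real);

        boolean exists = proxy.doAs((PrivilegedExceptionAction<Boolean>) () ->
                FileSystem.get(conf).exists(new Path("/user/someuser")));
        System.out.println("/user/someuser exists: " + exists);
    }
}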

hadoop-metrics2-hbase.properties

hbase.extendedperiod = 3600
*.timeline.plugin.urls=file:///usr/lib/ambari-metrics-hadoop-sink/ambari-metrics-hadoop-sink.jar
*.sink.timeline.slave.host.name=sandbox.hortonworks.com
hbase.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
hbase.period=10
hbase.collector=sandbox.hortonworks.com:6188
jvm.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
jvm.period=10
jvm.collector=sandbox.hortonworks.com:6188
rpc.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
rpc.period=10
rpc.collector=sandbox.hortonworks.com:6188
hbase.sink.timeline.class=org.apache.hadoop.metrics2.sink.timeline.HadoopTimelineMetricsSink
hbase.sink.timeline.period=10
hbase.sink.timeline.collector=sandbox.hortonworks.com:6188
*.source.filter.class=org.apache.hadoop.metrics2.filter.GlobFilter
hbase.*.source.filter.exclude=*Regions*
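
Every sink above follows the metrics2 key convention prefix.option (class, period in seconds, collector address). A self-contained sketch that lists where a file like this ships metrics; only java.util.Properties is assumed.

import java.io.FileReader;
import java.util.Properties;

public class Metrics2Sinks {
    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        try (FileReader r = new FileReader(
                "/etc/hbase/conf/hadoop-metrics2-hbase.properties")) {
            p.load(r);
        }
        // Keys follow <prefix>.collector / <prefix>.period, as in the dump above.
        for (String key : p.stringPropertyNames()) {
            if (key.endsWith(".collector")) {
                String prefix = key.substring(0, key.length() - ".collector".length());
                System.out.printf("%s -> %s every %ss%n",
                        prefix, p.getProperty(key), p.getProperty(prefix + ".period"));
            }
        }
    }
}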

hbase-env.cmd

set HBASE_OPTS="-XX:+UseConcMarkSweepGC" "-Djava.net.preferIPv4Stack=true"
set HBASE_MASTER_OPTS=%HBASE_MASTER_OPTS% "-XX:PermSize=128m" "-XX:MaxPermSize=128m"
set HBASE_REGIONSERVER_OPTS=%HBASE_REGIONSERVER_OPTS% "-XX:PermSize=128m" "-XX:MaxPermSize=128m"

hbase-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export HBASE_CONF_DIR=${HBASE_CONF_DIR:-/usr/hdp/current/hbase-regionserver/conf}
export HBASE_CLASSPATH=${HBASE_CLASSPATH}
export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-`date +'%Y%m%d%H%M'`"
export HBASE_REGIONSERVERS=${HBASE_CONF_DIR}/regionservers
export HBASE_LOG_DIR=/var/log/hbase
export HBASE_PID_DIR=/var/run/hbase
export HBASE_MANAGES_ZK=false
JDK_DEPENDED_OPTS="-XX:PermSize=128m -XX:MaxPermSize=128m"
      
      
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log"
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xmx4096m $JDK_DEPENDED_OPTS"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmn512m -XX:CMSInitiatingOccupancyFraction=70  -Xms4096m -Xmx4096m  $JDK_DEPENDED_OPTS"
    

hbase-policy.xml

-!--Sun Jul 26 18:46:25 2015---
    -configuration-
    
    -property-
      -name-security.admin.protocol.acl-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-security.client.protocol.acl-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-security.masterregion.protocol.acl-/name-
      -value-*-/value-
    -/property-
    
  -/configuration-

hbase-site.xml

-!--Sun Jul 26 18:46:24 2015---
    -configuration-
    
    -property-
      -name-dfs.domain.socket.path-/name-
      -value-/var/lib/hadoop-hdfs/dn_socket-/value-
    -/property-
    
    -property-
      -name-hbase.bucketcache.ioengine-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.bucketcache.percentage.in.combinedcache-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.bucketcache.size-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.bulkload.staging.dir-/name-
      -value-/apps/hbase/staging-/value-
    -/property-
    
    -property-
      -name-hbase.client.keyvalue.maxsize-/name-
      -value-1048576-/value-
    -/property-
    
    -property-
      -name-hbase.client.retries.number-/name-
      -value-35-/value-
    -/property-
    
    -property-
      -name-hbase.client.scanner.caching-/name-
      -value-100-/value-
    -/property-
    
    -property-
      -name-hbase.cluster.distributed-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hbase.coprocessor.master.classes-/name-
      -value-org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor-/value-
    -/property-
    
    -property-
      -name-hbase.coprocessor.region.classes-/name-
      -value-org.apache.ranger.authorization.hbase.RangerAuthorizationCoprocessor-/value-
    -/property-
    
    -property-
      -name-hbase.coprocessor.regionserver.classes-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.defaults.for.version.skip-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.majorcompaction-/name-
      -value-604800000-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.majorcompaction.jitter-/name-
      -value-0.50-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.max.filesize-/name-
      -value-10737418240-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.memstore.block.multiplier-/name-
      -value-4-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.memstore.flush.size-/name-
      -value-134217728-/value-
    -/property-
    
    -property-
      -name-hbase.hregion.memstore.mslab.enabled-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hbase.hstore.blockingStoreFiles-/name-
      -value-10-/value-
    -/property-
    
    -property-
      -name-hbase.hstore.compaction.max-/name-
      -value-10-/value-
    -/property-
    
    -property-
      -name-hbase.hstore.compactionThreshold-/name-
      -value-3-/value-
    -/property-
    
    -property-
      -name-hbase.local.dir-/name-
      -value-${hbase.tmp.dir}/local-/value-
    -/property-
    
    -property-
      -name-hbase.master.info.bindAddress-/name-
      -value-0.0.0.0-/value-
    -/property-
    
    -property-
      -name-hbase.master.info.port-/name-
      -value-16010-/value-
    -/property-
    
    -property-
      -name-hbase.master.port-/name-
      -value-16000-/value-
    -/property-
    
    -property-
      -name-hbase.region.server.rpc.scheduler.factory.class-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.regionserver.global.memstore.size-/name-
      -value-0.4-/value-
    -/property-
    
    -property-
      -name-hbase.regionserver.handler.count-/name-
      -value-30-/value-
    -/property-
    
    -property-
      -name-hbase.regionserver.info.port-/name-
      -value-16030-/value-
    -/property-
    
    -property-
      -name-hbase.regionserver.port-/name-
      -value-16020-/value-
    -/property-
    
    -property-
      -name-hbase.regionserver.wal.codec-/name-
      -value-org.apache.hadoop.hbase.regionserver.wal.WALCellCodec-/value-
    -/property-
    
    -property-
      -name-hbase.rootdir-/name-
      -value-hdfs://sandbox.hortonworks.com:8020/apps/hbase/data-/value-
    -/property-
    
    -property-
      -name-hbase.rpc.controllerfactory.class-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-hbase.rpc.engine-/name-
      -value-org.apache.hadoop.hbase.ipc.SecureRpcEngine-/value-
    -/property-
    
    -property-
      -name-hbase.rpc.protection-/name-
      -value-PRIVACY-/value-
    -/property-
    
    -property-
      -name-hbase.rpc.timeout-/name-
      -value-90000-/value-
    -/property-
    
    -property-
      -name-hbase.security.authentication-/name-
      -value-simple-/value-
    -/property-
    
    -property-
      -name-hbase.security.authorization-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hbase.superuser-/name-
      -value-hbase-/value-
    -/property-
    
    -property-
      -name-hbase.tmp.dir-/name-
      -value-/tmp/hbase-${user.name}-/value-
    -/property-
    
    -property-
      -name-hbase.zookeeper.property.clientPort-/name-
      -value-2181-/value-
    -/property-
    
    -property-
      -name-hbase.zookeeper.quorum-/name-
      -value-sandbox.hortonworks.com-/value-
    -/property-
    
    -property-
      -name-hbase.zookeeper.useMulti-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hbase_master_heapsize-/name-
      -value-250-/value-
    -/property-
    
    -property-
      -name-hbase_regionserver_heapsize-/name-
      -value-250-/value-
    -/property-
    
    -property-
      -name-hfile.block.cache.size-/name-
      -value-0.40-/value-
    -/property-
    
    -property-
      -name-phoenix.functions.allowUserDefinedFunctions-/name-
      -value- -/value-
    -/property-
    
    -property-
      -name-phoenix.query.timeoutMs-/name-
      -value-60000-/value-
    -/property-
    
    -property-
      -name-zookeeper.session.timeout-/name-
      -value-90000-/value-
    -/property-
    
    -property-
      -name-zookeeper.znode.parent-/name-
      -value-/hbase-unsecure-/value-
    -/property-
    
  -/configuration-
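
A client needs only the three ZooKeeper values above (quorum, client port, znode parent) to reach this instance. Minimal HBase 1.x connection sketch; the table and row names are hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Values as shown in hbase-site.xml above.
        conf.set("hbase.zookeeper.quorum", "sandbox.hortonworks.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conf.set("zookeeper.znode.parent", "/hbase-unsecure");

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("demo"))) { // hypothetical
            Result row = table.get(new Get(Bytes.toBytes("row1")));
            System.out.println("row1 empty? " + row.isEmpty());
        }
    }
}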

hdfs-site.xml

-!--Sun Jul 26 18:46:24 2015---
    -configuration-
    
    -property-
      -name-dfs.block.access.token.enable-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-dfs.block.size-/name-
      -value-34217472-/value-
    -/property-
    
    -property-
      -name-dfs.blockreport.initialDelay-/name-
      -value-120-/value-
    -/property-
    
    -property-
      -name-dfs.blocksize-/name-
      -value-134217728-/value-
    -/property-
    
    -property-
      -name-dfs.client.read.shortcircuit-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.client.read.shortcircuit.streams.cache.size-/name-
      -value-4096-/value-
    -/property-
    
    -property-
      -name-dfs.client.retry.policy.enabled-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-dfs.cluster.administrators-/name-
      -value- hdfs-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.address-/name-
      -value-0.0.0.0:50010-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.balance.bandwidthPerSec-/name-
      -value-6250000-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.data.dir-/name-
      -value-/hadoop/hdfs/data-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-dfs.datanode.data.dir.perm-/name-
      -value-750-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.du.reserved-/name-
      -value-1073741824-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.failed.volumes.tolerated-/name-
      -value-0-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-dfs.datanode.http.address-/name-
      -value-0.0.0.0:50075-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.https.address-/name-
      -value-0.0.0.0:50475-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.ipc.address-/name-
      -value-0.0.0.0:8010-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.max.transfer.threads-/name-
      -value-1024-/value-
    -/property-
    
    -property-
      -name-dfs.datanode.max.xcievers-/name-
      -value-1024-/value-
    -/property-
    
    -property-
      -name-dfs.domain.socket.path-/name-
      -value-/var/lib/hadoop-hdfs/dn_socket-/value-
    -/property-
    
    -property-
      -name-dfs.encrypt.data.transfer.cipher.suites-/name-
      -value-AES/CTR/NoPadding-/value-
    -/property-
    
    -property-
      -name-dfs.encryption.key.provider.uri-/name-
      -value--/value-
    -/property-
    
    -property-
      -name-dfs.heartbeat.interval-/name-
      -value-3-/value-
    -/property-
    
    -property-
      -name-dfs.hosts.exclude-/name-
      -value-/etc/hadoop/conf/dfs.exclude-/value-
    -/property-
    
    -property-
      -name-dfs.http.policy-/name-
      -value-HTTP_ONLY-/value-
    -/property-
    
    -property-
      -name-dfs.https.port-/name-
      -value-50470-/value-
    -/property-
    
    -property-
      -name-dfs.journalnode.edits.dir-/name-
      -value-/hadoop/hdfs/journalnode-/value-
    -/property-
    
    -property-
      -name-dfs.journalnode.http-address-/name-
      -value-0.0.0.0:8480-/value-
    -/property-
    
    -property-
      -name-dfs.journalnode.https-address-/name-
      -value-0.0.0.0:8481-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.accesstime.precision-/name-
      -value-3600000-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.audit.log.async-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.avoid.read.stale.datanode-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.avoid.write.stale.datanode-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.checkpoint.dir-/name-
      -value-/hadoop/hdfs/namesecondary-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.checkpoint.edits.dir-/name-
      -value-${dfs.namenode.checkpoint.dir}-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.checkpoint.period-/name-
      -value-21600-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.checkpoint.txns-/name-
      -value-1000000-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.fslock.fair-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.handler.count-/name-
      -value-100-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.http-address-/name-
      -value-sandbox.hortonworks.com:50070-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-dfs.namenode.https-address-/name-
      -value-sandbox.hortonworks.com:50470-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.inode.attributes.provider.class-/name-
      -value-org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.name.dir-/name-
      -value-/hadoop/hdfs/namenode-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-dfs.namenode.name.dir.restore-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.rpc-address-/name-
      -value-sandbox.hortonworks.com:8020-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.safemode.threshold-pct-/name-
      -value-0.999-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.secondary.http-address-/name-
      -value-sandbox.hortonworks.com:50090-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.stale.datanode.interval-/name-
      -value-30000-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.startup.delay.block.deletion.sec-/name-
      -value-3600-/value-
    -/property-
    
    -property-
      -name-dfs.namenode.write.stale.datanode.ratio-/name-
      -value-1.0f-/value-
    -/property-
    
    -property-
      -name-dfs.nfs.exports.allowed.hosts-/name-
      -value-* rw-/value-
    -/property-
    
    -property-
      -name-dfs.nfs3.dump.dir-/name-
      -value-/tmp/.hdfs-nfs-/value-
    -/property-
    
    -property-
      -name-dfs.permissions-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.permissions.enabled-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-dfs.permissions.superusergroup-/name-
      -value-hdfs-/value-
    -/property-
    
    -property-
      -name-dfs.replication-/name-
      -value-1-/value-
    -/property-
    
    -property-
      -name-dfs.replication.max-/name-
      -value-50-/value-
    -/property-
    
    -property-
      -name-dfs.support.append-/name-
      -value-true-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-dfs.webhdfs.enabled-/name-
      -value-true-/value-
      -final-true-/final-
    -/property-
    
    -property-
      -name-fs.permissions.umask-mode-/name-
      -value-022-/value-
    -/property-
    
    -property-
      -name-nfs.exports.allowed.hosts-/name-
      -value-* rw-/value-
    -/property-
    
    -property-
      -name-nfs.file.dump.dir-/name-
      -value-/tmp/.hdfs-nfs-/value-
    -/property-
    
  -/configuration-
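
With dfs.replication set to 1, files on this single-node sandbox carry a single replica. A short sketch reading the effective replication of a file through the Hadoop FileSystem API; the file path is hypothetical.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode address from dfs.namenode.rpc-address above.
        conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus st = fs.getFileStatus(new Path("/tmp/sample.txt")); // hypothetical
            System.out.println("replication = " + st.getReplication());
        }
    }
}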

log4j.properties

hbase.root.logger=INFO,console
hbase.security.logger=INFO,console
hbase.log.dir=.
hbase.log.file=hbase.log
log4j.rootLogger=${hbase.root.logger}
log4j.threshold=ALL
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
hbase.log.maxfilesize=256MB
hbase.log.maxbackupindex=20
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
log4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}
log4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
hbase.security.log.file=SecurityAuth.audit
hbase.security.log.maxfilesize=256MB
hbase.security.log.maxbackupindex=20
log4j.appender.RFAS=org.apache.log4j.RollingFileAppender
log4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}
log4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}
log4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}
log4j.appender.RFAS.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.category.SecurityLogger=${hbase.security.logger}
log4j.additivity.SecurityLogger=false
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
log4j.logger.org.apache.zookeeper=INFO
log4j.logger.org.apache.hadoop.hbase=INFO
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO
log4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO
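
The ${hbase.root.logger}-style variables above are resolved from JVM system properties before the file's own defaults, which is how the HBase launch scripts change levels with -D flags. A minimal log4j 1.x sketch of that override; the DEBUG level is illustrative.

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jOverride {
    public static void main(String[] args) {
        // System properties win over the file's defaults, mirroring a launch
        // with -Dhbase.root.logger=DEBUG,console
        System.setProperty("hbase.root.logger", "DEBUG,console");
        PropertyConfigurator.configure("/etc/hbase/conf/log4j.properties");
        Logger.getLogger(Log4jOverride.class).debug("now visible at DEBUG");
    }
}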
    

ranger-hbase-audit.xml

-?xml version="1.0" encoding="UTF-8" standalone="no"?-
-!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-configuration xmlns:xi="http://www.w3.org/2001/XInclude"-
	-property-
		-name-xasecure.audit.is.enabled-/name-
		-value-true-/value-
	-/property-	
	
	-!-- DB audit provider configuration ---
	-property-
		-name-xasecure.audit.db.is.enabled-/name-
		-value-true-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.db.is.async-/name-
		-value-true-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.db.async.max.queue.size-/name-
		-value-10240-/value-
	-/property-	
	-property-
		-name-xasecure.audit.db.async.max.flush.interval.ms-/name-
		-value-30000-/value-
	-/property-	
	-property-
		-name-xasecure.audit.db.batch.size-/name-
		-value-100-/value-
	-/property-	
	-!--  Properties whose names begin with "xasecure.audit.jpa." are used to configure JPA ---
	-property-
		-name-xasecure.audit.jpa.javax.persistence.jdbc.url-/name-
		-value-jdbc:mysql://localhost/ranger_audit-/value-
	-/property-
	-property-
		-name-xasecure.audit.jpa.javax.persistence.jdbc.user-/name-
		-value-rangerlogger-/value-
	-/property-
	-property-
		-name-xasecure.audit.jpa.javax.persistence.jdbc.password-/name-
		-value-crypted-/value-
	-/property-
	-property-
		-name-xasecure.audit.jpa.javax.persistence.jdbc.driver-/name-
		-value-com.mysql.jdbc.Driver-/value-
	-/property-
	-property-
		-name-xasecure.audit.credential.provider.file-/name-
		-value-jceks://file/etc/ranger/sandbox_hbase/cred.jceks-/value-
	-/property-
	-!-- HDFS audit provider configuration ---
	-property-
		-name-xasecure.audit.hdfs.is.enabled-/name-
		-value-true-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.is.async-/name-
		-value-true-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.hdfs.async.max.queue.size-/name-
		-value-1048576-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.async.max.flush.interval.ms-/name-
		-value-30000-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.encoding-/name-
		-value/-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.destination.directory-/name-
		-value-hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.destination.file-/name-
		-value-%hostname%-audit.log-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.destination.flush.interval.seconds-/name-
		-value-900-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.destination.rollover.interval.seconds-/name-
		-value-86400-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.destination.open.retry.interval.seconds-/name-
		-value-60-/value-
	-/property-
	-property-
		-name-xasecure.audit.hdfs.config.local.buffer.directory-/name-
		-value-/var/log/hbase/audit/%app-type%-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.buffer.file-/name-
		-value-%time:yyyyMMdd-HHmm.ss%.log-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.buffer.file.buffer.size.bytes-/name-
		-value-8192-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.buffer.flush.interval.seconds-/name-
		-value-60-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.buffer.rollover.interval.seconds-/name-
		-value-600-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.archive.directory-/name-
		-value-/var/log/hbase/audit/archive/%app-type%-/value-
	-/property-	
	-property-
		-name-xasecure.audit.hdfs.config.local.archive.max.file.count-/name-
		-value-10-/value-
	-/property-	
	
	-!-- Log4j audit provider configuration ---
	-property-
		-name-xasecure.audit.log4j.is.enabled-/name-
		-value-false-/value-
	-/property-	
	-property-
		-name-xasecure.audit.log4j.is.async-/name-
		-value-false-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.log4j.async.max.queue.size-/name-
		-value-10240-/value-
	-/property-	
	-property-
		-name-xasecure.audit.log4j.async.max.flush.interval.ms-/name-
		-value-30000-/value-
	-/property-	
	
	-!-- Kafka audit provider configuration ---
	-property-
		-name-xasecure.audit.kafka.is.enabled-/name-
		-value-false-/value-
	-/property-	
	-property-
		-name-xasecure.audit.kafka.async.max.queue.size-/name-
		-value-1-/value-
	-/property-	
	-property-
		-name-xasecure.audit.kafka.async.max.flush.interval.ms-/name-
		-value-1000-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.kafka.broker_list-/name-
		-value-localhost:9092-/value-
	-/property-	
	-property-
		-name-xasecure.audit.kafka.topic_name-/name-
		-value-ranger_audits-/value-
	-/property-	
	
	-!-- Ranger audit provider configuration ---
	-property-
		-name-xasecure.audit.solr.is.enabled-/name-
		-value-false-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.solr.async.max.queue.size-/name-
		-value-1-/value-
	-/property-	
	-property-
		-name-xasecure.audit.solr.async.max.flush.interval.ms-/name-
		-value-1000-/value-
	-/property-	
	
	-property-
		-name-xasecure.audit.solr.solr_url-/name-
		-value-http://localhost:6083/solr/ranger_audits-/value-
	-/property-	
-property-
        -name-xasecure.audit.provider.summary.enabled-/name-
        -value-true-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr-/name-
        -value-false-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr.urls-/name-
        -value-NONE-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr.user-/name-
        -value-NONE-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr.password-/name-
        -value-NONE-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr.zookeepers-/name-
        -value-NONE-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.solr.batch.filespool.dir-/name-
        -value-/var/log/hive/audit/solr/spool-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.hdfs-/name-
        -value-false-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.hdfs.batch.filespool.dir-/name-
        -value-/var/log/hive/audit/hdfs/spool-/value-
    -/property-
    -property-
        -name-xasecure.audit.destination.hdfs.dir-/name-
        -value-hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit-/value-
    -/property-
-/configuration-

ranger-hbase-security.xml

-?xml version="1.0" encoding="UTF-8" standalone="no"?-
-!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-configuration xmlns:xi="http://www.w3.org/2001/XInclude"-
	-property-
		-name-ranger.plugin.hbase.service.name-/name-
		-value-sandbox_hbase-/value-
		-description-
			Name of the Ranger service containing HBase policies
		-/description-
	-/property-
	-property-
		-name-ranger.plugin.hbase.policy.source.impl-/name-
		-value-org.apache.ranger.admin.client.RangerAdminRESTClient-/value-
		-description-
			Class to retrieve policies from the source
		-/description-
	-/property-
	-property-
		-name-ranger.plugin.hbase.policy.rest.url-/name-
		-value-http://sandbox.hortonworks.com:6080-/value-
		-description-
			URL to Ranger Admin
		-/description-
	-/property-
	-property-
		-name-ranger.plugin.hbase.policy.rest.ssl.config.file-/name-
		-value-/etc/hbase/conf/ranger-policymgr-ssl.xml-/value-
		-description-
			Path to the file containing SSL details to contact Ranger Admin
		-/description-
	-/property-
	-property-
		-name-ranger.plugin.hbase.policy.pollIntervalMs-/name-
		-value-5000-/value-
		-description-
			How often to poll for changes in policies?
		-/description-
	-/property-
	-property-
		-name-ranger.plugin.hbase.policy.cache.dir-/name-
		-value-/etc/ranger/sandbox_hbase/policycache-/value-
		-description-
			Directory where Ranger policies are cached after successful retrieval from the source
		-/description-
	-/property-
	-property-
		-name-xasecure.hbase.update.xapolicies.on.grant.revoke-/name-
		-value-true-/value-
		-description-
			Should HBase plugin update Ranger policies for updates to permissions done using GRANT/REVOKE?
		-/description-
	-/property-
-/configuration-
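
Per the properties above, the plugin polls Ranger Admin every 5000 ms and caches results under /etc/ranger/sandbox_hbase/policycache. A hedged sketch of such a poll; the REST path below is an assumption modeled on Ranger's plugin policy-download endpoint and does not appear in this report.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RangerPolicyPoll {
    public static void main(String[] args) throws Exception {
        // Base URL from ranger.plugin.hbase.policy.rest.url above.
        URL url = new URL("http://sandbox.hortonworks.com:6080"
                + "/service/plugins/policies/download/sandbox_hbase"); // assumed path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine()); // first line of the policy payload
        }
    }
}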

ranger-policymgr-ssl.xml

-?xml version="1.0" encoding="UTF-8" standalone="no"?-
-!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-configuration xmlns:xi="http://www.w3.org/2001/XInclude"-
	-!--  The following properties are used for 2-way SSL client server validation ---
	-property-
		-name-xasecure.policymgr.clientssl.keystore-/name-
		-value-/etc/hbase/conf/ranger-plugin-keystore.jks-/value-
		-description- 
			Java Keystore files 
		-/description-
	-/property-
	-property-
		-name-xasecure.policymgr.clientssl.keystore.password-/name-
		-value-myKeyFilePassword-/value-
		-description- 
			password for keystore 
		-/description-
	-/property-
	-property-
		-name-xasecure.policymgr.clientssl.truststore-/name-
		-value-/etc/hbase/conf/ranger-plugin-truststore.jks-/value-
		-description- 
			java truststore file
		-/description-
	-/property-
	-property-
		-name-xasecure.policymgr.clientssl.truststore.password-/name-
		-value-changeit-/value-
		-description- 
			java  truststore password
		-/description-
	-/property-
    -property-
		-name-xasecure.policymgr.clientssl.keystore.credential.file-/name-
		-value-jceks://file/etc/ranger/sandbox_hbase/cred.jceks-/value-
		-description- 
			java  keystore credential file
		-/description-
	-/property-
	-property-
		-name-xasecure.policymgr.clientssl.truststore.credential.file-/name-
		-value-jceks://file/etc/ranger/sandbox_hbase/cred.jceks-/value-
		-description- 
			java  truststore credential file
		-/description-
	-/property-
-/configuration-

ranger-security.xml

-ranger-
-enabled-Tue Jul 21 20:16:30 UTC 2015-/enabled-
-/ranger-

regionservers

sandbox.hortonworks.com

hive

/etc/hive/conf:
-rw-r--r-- 1 root root     1139 2015-07-14 13:47 beeline-log4j.properties.template
-rw-r--r-- 1 hive hadoop    142 2015-07-21 16:41 client.properties
drwxr-xr-x 2 hive hadoop   4096 2015-07-21 20:16 conf.server
-rw-r--r-- 1 hive hadoop 169282 2015-07-14 13:48 hive-default.xml.template
-rw-r--r-- 1 hive hadoop   1526 2015-07-21 15:55 hive-env.sh
-rw-r--r-- 1 hive hadoop   2378 2015-07-14 13:47 hive-env.sh.template
-rw-r--r-- 1 hive hadoop   2658 2015-07-21 15:55 hive-exec-log4j.properties
-rw-r--r-- 1 hive hadoop   3055 2015-07-21 15:55 hive-log4j.properties
-rwx------ 1 hive hive     1588 2015-07-21 16:22 hiveserver2-site.xml
-rw-r--r-- 1 hive hadoop  19099 2015-07-21 16:44 hive-site.xml
-rw-r--r-- 1 root root     1593 2015-07-14 13:47 ivysettings.xml
-rw-r--r-- 1 hive hadoop   6943 2015-07-21 16:44 mapred-site.xml
-rwxr--r-- 1 hive hive     7204 2015-07-21 20:16 ranger-hive-audit.xml
-rwxr--r-- 1 hive hive     2513 2015-07-21 20:16 ranger-hive-security.xml
-rwxr--r-- 1 hive hive     2276 2015-07-21 20:16 ranger-policymgr-ssl.xml
-rw-r--r-- 1 hive hive       69 2015-07-21 20:16 ranger-security.xml

/etc/hive/conf/conf.server:
-rw-r--r-- 1 hive hadoop   142 2015-07-21 16:44 client.properties
-rw-r--r-- 1 hive hadoop     0 2015-07-21 15:55 hive-default.xml.template
-rw-r--r-- 1 hive hadoop  1539 2015-07-21 16:44 hive-env.sh
-rw-r--r-- 1 hive hadoop     0 2015-07-21 15:55 hive-env.sh.template
-rw-r--r-- 1 hive hadoop  2658 2015-07-21 15:55 hive-exec-log4j.properties
-rw-r--r-- 1 hive hadoop  3055 2015-07-21 15:55 hive-log4j.properties
-rw-r--r-- 1 hive hive    1588 2015-07-21 20:16 hiveserver2-site.xml
-rw-r--r-- 1 hive hadoop 19371 2015-07-21 16:44 hive-site.xml
-rw-r--r-- 1 hive hadoop  6943 2015-07-21 16:44 mapred-site.xml
-rwxr--r-- 1 hive hive    7204 2015-07-21 20:16 ranger-hive-audit.xml
-rwxr--r-- 1 hive hive    2514 2015-07-21 20:16 ranger-hive-security.xml
-rwxr--r-- 1 hive hive    2276 2015-07-21 20:16 ranger-policymgr-ssl.xml

beeline-log4j.properties.template

log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
log4j.appender.console.encoding=UTF-8

client.properties

    
atlas.http.authentication.enabled=false
atlas.http.authentication.type=simple
    

conf.server


hive-default.xml.template

-?xml version="1.0" encoding="UTF-8" standalone="no"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
---
-configuration-
  -!-- WARNING!!! This file is auto generated for documentation purposes ONLY! ---
  -!-- WARNING!!! Any changes you make to this file will be ignored by Hive.   ---
  -!-- WARNING!!! You must make your changes in hive-site.xml instead.         ---
  -!-- Hive Execution Parameters ---
  -property-
    -name-hive.exec.script.wrapper-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.exec.plan-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.plan.serialization.format-/name-
    -value-kryo-/value-
    -description-
      Query plan format serialization between client and task nodes. 
      Two supported values are : kryo and javaXML. Kryo is default.
    -/description-
  -/property-
  -property-
    -name-hive.exec.stagingdir-/name-
    -value-.hive-staging-/value-
    -description-Directory name that will be created inside table locations in order to support HDFS encryption. This replaces ${hive.exec.scratchdir} for query results with the exception of read-only tables. In all cases ${hive.exec.scratchdir} is still used for other temporary files, such as job plans.-/description-
  -/property-
  -property-
    -name-hive.exec.scratchdir-/name-
    -value-/tmp/hive-/value-
    -description-HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/<username> is created, with ${hive.scratch.dir.permission}.-/description-
  -/property-
  -property-
    -name-hive.exec.local.scratchdir-/name-
    -value-${system:java.io.tmpdir}/${system:user.name}-/value-
    -description-Local scratch space for Hive jobs-/description-
  -/property-
  -property-
    -name-hive.downloaded.resources.dir-/name-
    -value-${system:java.io.tmpdir}/${hive.session.id}_resources-/value-
    -description-Temporary local directory for added resources in the remote file system.-/description-
  -/property-
  -property-
    -name-hive.scratch.dir.permission-/name-
    -value-700-/value-
    -description-The permission for the user specific scratch directories that get created.-/description-
  -/property-
  -property-
    -name-hive.exec.submitviachild-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.exec.submit.local.task.via.child-/name-
    -value-true-/value-
    -description-
      Determines whether local tasks (typically the mapjoin hashtable generation phase) run in
      a separate JVM (true recommended) or not.
      Avoids the overhead of spawning a new JVM, but can lead to out-of-memory issues.
    -/description-
  -/property-
  -property-
    -name-hive.exec.script.maxerrsize-/name-
    -value-100000-/value-
    -description-
      Maximum number of bytes a script is allowed to emit to standard error (per map-reduce task). 
      This prevents runaway scripts from filling log partitions to capacity
    -/description-
  -/property-
  -property-
    -name-hive.exec.script.allow.partial.consumption-/name-
    -value-false-/value-
    -description-
      When enabled, this option allows a user script to exit successfully without consuming 
      all the data from the standard input.
    -/description-
  -/property-
  -property-
    -name-stream.stderr.reporter.prefix-/name-
    -value-reporter:-/value-
    -description-Streaming jobs that log to standard error with this prefix can log counter or status information.-/description-
  -/property-
  -property-
    -name-stream.stderr.reporter.enabled-/name-
    -value-true-/value-
    -description-Enable consumption of status and counter messages for streaming jobs.-/description-
  -/property-
  -property-
    -name-hive.exec.compress.output-/name-
    -value-false-/value-
    -description-
      This controls whether the final outputs of a query (to a local/HDFS file or a Hive table) is compressed. 
      The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
    -/description-
  -/property-
  -property-
    -name-hive.exec.compress.intermediate-/name-
    -value-false-/value-
    -description-
      This controls whether intermediate files produced by Hive between multiple map-reduce jobs are compressed. 
      The compression codec and other options are determined from Hadoop config variables mapred.output.compress*
    -/description-
  -/property-
  -property-
    -name-hive.intermediate.compression.codec-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.intermediate.compression.type-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.exec.reducers.bytes.per.reducer-/name-
    -value-256000000-/value-
    -description-Size per reducer. The default is 256MB, i.e. if the input size is 1GB, it will use 4 reducers.-/description-
  -/property-
  -property-
    -name-hive.exec.reducers.max-/name-
    -value-1009-/value-
    -description-
      Maximum number of reducers that will be used. If the value specified in the configuration parameter mapred.reduce.tasks is
      negative, Hive will use this as the maximum number of reducers when automatically determining the number of reducers.
    -/description-
  -/property-
  -property-
    -name-hive.exec.pre.hooks-/name-
    -value/-
    -description-
      Comma-separated list of pre-execution hooks to be invoked for each statement. 
      A pre-execution hook is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    -/description-
  -/property-
  -property-
    -name-hive.exec.post.hooks-/name-
    -value/-
    -description-
      Comma-separated list of post-execution hooks to be invoked for each statement. 
      A post-execution hook is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    -/description-
  -/property-
  -property-
    -name-hive.exec.failure.hooks-/name-
    -value/-
    -description-
      Comma-separated list of on-failure hooks to be invoked for each statement. 
      An on-failure hook is specified as the name of Java class which implements the 
      org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext interface.
    -/description-
  -/property-
  -property-
    -name-hive.exec.query.redactor.hooks-/name-
    -value/-
    -description-
      Comma-separated list of hooks to be invoked for each query which can 
      transform the query before it's placed in the job.xml file. Must be a Java class which 
      extends from the org.apache.hadoop.hive.ql.hooks.Redactor abstract class.
    -/description-
  -/property-
  -property-
    -name-hive.client.stats.publishers-/name-
    -value/-
    -description-
      Comma-separated list of statistics publishers to be invoked on counters on each job. 
      A client stats publisher is specified as the name of a Java class which implements the 
      org.apache.hadoop.hive.ql.stats.ClientStatsPublisher interface.
    -/description-
  -/property-
  -property-
    -name-hive.exec.parallel-/name-
    -value-false-/value-
    -description-Whether to execute jobs in parallel-/description-
  -/property-
  -property-
    -name-hive.exec.parallel.thread.number-/name-
    -value-8-/value-
    -description-How many jobs at most can be executed in parallel-/description-
  -/property-
  -property-
    -name-hive.mapred.reduce.tasks.speculative.execution-/name-
    -value-true-/value-
    -description-Whether speculative execution for reducers should be turned on. -/description-
  -/property-
  -property-
    -name-hive.exec.counters.pull.interval-/name-
    -value-1000-/value-
    -description-
      The interval with which to poll the JobTracker for the counters of the running job.
      The smaller it is the more load there will be on the JobTracker, the higher it is the less granular the caught data will be.
    -/description-
  -/property-
  -property-
    -name-hive.exec.dynamic.partition-/name-
    -value-true-/value-
    -description-Whether or not to allow dynamic partitions in DML/DDL.-/description-
  -/property-
  -property-
    -name-hive.exec.dynamic.partition.mode-/name-
    -value-strict-/value-
    -description-
      In strict mode, the user must specify at least one static partition
      in case the user accidentally overwrites all partitions.
      In nonstrict mode all partitions are allowed to be dynamic.
    -/description-
  -/property-
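
Strict mode, as described above, rejects statements whose partitions are all dynamic. A hedged Hive JDBC sketch of the per-session override; host, credentials, and table names are assumptions.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DynamicPartitionSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection c = DriverManager.getConnection(
                 "jdbc:hive2://sandbox.hortonworks.com:10000/default", "hive", "");
             Statement s = c.createStatement()) {
            // Every partition in this INSERT is dynamic, which strict mode rejects;
            // relaxing the mode for the session makes it legal.
            s.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
            s.execute("INSERT INTO TABLE sales PARTITION (dt) "
                    + "SELECT id, amount, dt FROM staging_sales"); // hypothetical tables
        }
    }
}
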
  -property-
    -name-hive.exec.max.dynamic.partitions-/name-
    -value-1000-/value-
    -description-Maximum number of dynamic partitions allowed to be created in total.-/description-
  -/property-
  -property-
    -name-hive.exec.max.dynamic.partitions.pernode-/name-
    -value-100-/value-
    -description-Maximum number of dynamic partitions allowed to be created in each mapper/reducer node.-/description-
  -/property-
  -property-
    -name-hive.exec.max.created.files-/name-
    -value-100000-/value-
    -description-Maximum number of HDFS files created by all mappers/reducers in a MapReduce job.-/description-
  -/property-
  -property-
    -name-hive.exec.default.partition.name-/name-
    -value-__HIVE_DEFAULT_PARTITION__-/value-
    -description-
      The default partition name in case the dynamic partition column value is null/empty string or any other values that cannot be escaped. 
      This value must not contain any special character used in HDFS URI (e.g., ':', '%', '/' etc). 
      The user has to be aware that the dynamic partition value should not contain this value to avoid confusion.
    -/description-
  -/property-
  -property-
    -name-hive.lockmgr.zookeeper.default.partition.name-/name-
    -value-__HIVE_DEFAULT_ZOOKEEPER_PARTITION__-/value-
    -description/-
  -/property-
  -property-
    -name-hive.exec.show.job.failure.debug.info-/name-
    -value-true-/value-
    -description-
      If a job fails, whether to provide a link in the CLI to the task with the
      most failures, along with debugging hints if applicable.
    -/description-
  -/property-
  -property-
    -name-hive.exec.job.debug.capture.stacktraces-/name-
    -value-true-/value-
    -description-
      Whether or not stack traces parsed from the task logs of a sampled failed task 
      for each failed job should be stored in the SessionState
    -/description-
  -/property-
  -property-
    -name-hive.exec.job.debug.timeout-/name-
    -value-30000-/value-
    -description/-
  -/property-
  -property-
    -name-hive.exec.tasklog.debug.timeout-/name-
    -value-20000-/value-
    -description/-
  -/property-
  -property-
    -name-hive.output.file.extension-/name-
    -value/-
    -description-
      String used as a file extension for output files. 
      If not set, defaults to the codec extension for text files (e.g. ".gz"), or no extension otherwise.
    -/description-
  -/property-
  -property-
    -name-hive.exec.mode.local.auto-/name-
    -value-false-/value-
    -description-Let Hive determine whether to run in local mode automatically-/description-
  -/property-
  -property-
    -name-hive.exec.mode.local.auto.inputbytes.max-/name-
    -value-134217728-/value-
    -description-When hive.exec.mode.local.auto is true, input bytes should be less than this for local mode.-/description-
  -/property-
  -property-
    -name-hive.exec.mode.local.auto.input.files.max-/name-
    -value-4-/value-
    -description-When hive.exec.mode.local.auto is true, the number of tasks should be less than this for local mode.-/description-
  -/property-
  -property-
    -name-hive.exec.drop.ignorenonexistent-/name-
    -value-true-/value-
    -description-Do not report an error if DROP TABLE/VIEW/Index/Function specifies a non-existent table/view/index/function-/description-
  -/property-
  -property-
    -name-hive.ignore.mapjoin.hint-/name-
    -value-true-/value-
    -description-Ignore the mapjoin hint-/description-
  -/property-
  -property-
    -name-hive.file.max.footer-/name-
    -value-100-/value-
    -description-Maximum number of footer lines a user can define for a table file-/description-
  -/property-
  -property-
    -name-hive.resultset.use.unique.column.names-/name-
    -value-true-/value-
    -description-
      Make column names unique in the result set by qualifying column names with table alias if needed.
      Table alias will be added to column names for queries of type "select *" or 
      if query explicitly uses table alias "select r1.x..".
    -/description-
  -/property-
  -property-
    -name-fs.har.impl-/name-
    -value-org.apache.hadoop.hive.shims.HiveHarFileSystem-/value-
    -description-The implementation for accessing Hadoop Archives. Note that this won't be applicable to Hadoop versions less than 0.20-/description-
  -/property-
  -property-
    -name-hive.metastore.warehouse.dir-/name-
    -value-/user/hive/warehouse-/value-
    -description-location of default database for the warehouse-/description-
  -/property-
  -property-
    -name-hive.metastore.uris-/name-
    -value/-
    -description-Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.-/description-
  -/property-
  -property-
    -name-hive.metastore.connect.retries-/name-
    -value-3-/value-
    -description-Number of retries while opening a connection to metastore-/description-
  -/property-
  -property-
    -name-hive.metastore.failure.retries-/name-
    -value-1-/value-
    -description-Number of retries upon failure of Thrift metastore calls-/description-
  -/property-
  -property-
    -name-hive.metastore.client.connect.retry.delay-/name-
    -value-1s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds for the client to wait between consecutive connection attempts
    -/description-
  -/property-
  -property-
    -name-hive.metastore.client.socket.timeout-/name-
    -value-600s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      MetaStore Client socket timeout in seconds
    -/description-
  -/property-
  -property-
    -name-hive.metastore.client.socket.lifetime-/name-
    -value-0s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      MetaStore Client socket lifetime in seconds. After this time is exceeded, client
      reconnects on the next MetaStore operation. A value of 0s means the connection
      has an infinite lifetime.
    -/description-
  -/property-
  -property-
    -name-javax.jdo.option.ConnectionPassword-/name-
    -value-mine-/value-
    -description-password to use against metastore database-/description-
  -/property-
  -property-
    -name-hive.metastore.ds.connection.url.hook-/name-
    -value/-
    -description-Name of the hook to use for retrieving the JDO connection URL. If empty, the value in javax.jdo.option.ConnectionURL is used-/description-
  -/property-
  -property-
    -name-javax.jdo.option.Multithreaded-/name-
    -value-true-/value-
    -description-Set this to true if multiple threads access metastore through JDO concurrently.-/description-
  -/property-
  -property-
    -name-javax.jdo.option.ConnectionURL-/name-
    -value-jdbc:derby:;databaseName=metastore_db;create=true-/value-
    -description-JDBC connect string for a JDBC metastore-/description-
  -/property-
  -property-
    -name-hive.hmshandler.retry.attempts-/name-
    -value-10-/value-
    -description-The number of times to retry an HMSHandler call if there was a connection error.-/description-
  -/property-
  -property-
    -name-hive.hmshandler.retry.interval-/name-
    -value-2000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The time between HMSHandler retry attempts on failure.
    -/description-
  -/property-
  -property-
    -name-hive.hmshandler.force.reload.conf-/name-
    -value-false-/value-
    -description-
      Whether to force reloading of the HMSHandler configuration (including
      the connection URL, before the next metastore query that accesses the
      datastore. Once reloaded, this value is reset to false. Used for
      testing only.
    -/description-
  -/property-
  -property-
    -name-hive.metastore.server.max.message.size-/name-
    -value-104857600-/value-
    -description-Maximum message size in bytes a HMS will accept.-/description-
  -/property-
  -property-
    -name-hive.metastore.server.min.threads-/name-
    -value-200-/value-
    -description-Minimum number of worker threads in the Thrift server's pool.-/description-
  -/property-
  -property-
    -name-hive.metastore.server.max.threads-/name-
    -value-1000-/value-
    -description-Maximum number of worker threads in the Thrift server's pool.-/description-
  -/property-
  -property-
    -name-hive.metastore.server.tcp.keepalive-/name-
    -value-true-/value-
    -description-Whether to enable TCP keepalive for the metastore server. Keepalive will prevent accumulation of half-open connections.-/description-
  -/property-
  -property-
    -name-hive.metastore.archive.intermediate.original-/name-
    -value-_INTERMEDIATE_ORIGINAL-/value-
    -description-
      Intermediate dir suffixes used for archiving. Not important what they
      are, as long as collisions are avoided
    -/description-
  -/property-
  -property-
    -name-hive.metastore.archive.intermediate.archived-/name-
    -value-_INTERMEDIATE_ARCHIVED-/value-
    -description/-
  -/property-
  -property-
    -name-hive.metastore.archive.intermediate.extracted-/name-
    -value-_INTERMEDIATE_EXTRACTED-/value-
    -description/-
  -/property-
  -property-
    -name-hive.metastore.kerberos.keytab.file-/name-
    -value/-
    -description-The path to the Kerberos Keytab file containing the metastore Thrift server's service principal.-/description-
  -/property-
  -property-
    -name-hive.metastore.kerberos.principal-/name-
    -value-hive-metastore/_HOST@EXAMPLE.COM-/value-
    -description-
      The service principal for the metastore Thrift server. 
      The special string _HOST will be replaced automatically with the correct host name.
    -/description-
  -/property-
  -property-
    -name-hive.metastore.sasl.enabled-/name-
    -value-false-/value-
    -description-If true, the metastore Thrift interface will be secured with SASL. Clients must authenticate with Kerberos.-/description-
  -/property-
  -property-
    -name-hive.metastore.thrift.framed.transport.enabled-/name-
    -value-false-/value-
    -description-If true, the metastore Thrift interface will use TFramedTransport. When false (default) a standard TTransport is used.-/description-
  -/property-
  -property-
    -name-hive.metastore.thrift.compact.protocol.enabled-/name-
    -value-false-/value-
    -description-
      If true, the metastore Thrift interface will use TCompactProtocol. When false (default) TBinaryProtocol will be used.
      Setting it to true will break compatibility with older clients running TBinaryProtocol.
    -/description-
  -/property-
  -property-
    -name-hive.cluster.delegation.token.store.class-/name-
    -value-org.apache.hadoop.hive.thrift.MemoryTokenStore-/value-
    -description-The delegation token store implementation. Set to org.apache.hadoop.hive.thrift.ZooKeeperTokenStore for load-balanced cluster.-/description-
  -/property-
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.connectString</name>
    <value/>
    <description>
      The ZooKeeper token store connect string. You can re-use the configuration value
      set in hive.zookeeper.quorum by leaving this parameter unset.
    </description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.znode</name>
    <value>/hivedelegation</value>
    <description>
      The root path for token store data. Note that this is used by both HiveServer2 and
      the MetaStore to store delegation tokens. One directory gets created for each of them.
      The final directory names have the server name appended (HIVESERVER2,
      METASTORE).
    </description>
  </property>
  <property>
    <name>hive.cluster.delegation.token.store.zookeeper.acl</name>
    <value/>
    <description>
      ACL for token store entries. Comma separated list of ACL entries. For example:
      sasl:hive/host1@MY.DOMAIN:cdrwa,sasl:hive/host2@MY.DOMAIN:cdrwa
      Defaults to all permissions for the hiveserver2/metastore process user.
    </description>
  </property>
  <property>
    <name>hive.metastore.cache.pinobjtypes</name>
    <value>Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order</value>
    <description>List of comma separated metastore object types that should be pinned in the cache</description>
  </property>
  <property>
    <name>datanucleus.connectionPoolingType</name>
    <value>BONECP</value>
    <description>Specify the connection pool library for DataNucleus</description>
  </property>
  <property>
    <name>datanucleus.validateTables</name>
    <value>false</value>
    <description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
  </property>
  <property>
    <name>datanucleus.validateColumns</name>
    <value>false</value>
    <description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
  </property>
  <property>
    <name>datanucleus.validateConstraints</name>
    <value>false</value>
    <description>Validates the existing schema against the code. Turn this on if you want to verify the existing schema.</description>
  </property>
  <property>
    <name>datanucleus.storeManagerType</name>
    <value>rdbms</value>
    <description>Metadata store type</description>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>true</value>
    <description>Creates the necessary schema on startup if one doesn't exist. Set this to false after creating it once.</description>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>false</value>
    <description/>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that the version information stored in the metastore matches the one from the Hive jars. Also disable the automatic
            schema migration attempt. Users are required to manually migrate the schema after a Hive upgrade, which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in the metastore doesn't match the one from the Hive jars.
    </description>
  </property>
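  <!--
    Sketch of a production-style combination of the two schema knobs above,
    assuming the schema has already been created once:
      <property><name>datanucleus.autoCreateSchema</name><value>false</value></property>
      <property><name>hive.metastore.schema.verification</name><value>true</value></property>
    The schema itself is then managed out of band, for example with Hive's schematool.
  -->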
  <property>
    <name>hive.metastore.schema.verification.record.version</name>
    <value>true</value>
    <description>
      When true, the current MS version is recorded in the VERSION table. If this is disabled and verification is
      enabled, the MS will be unusable.
    </description>
  </property>
  <property>
    <name>datanucleus.autoStartMechanismMode</name>
    <value>checked</value>
    <description>Throw an exception if metadata tables are incorrect</description>
  </property>
  <property>
    <name>datanucleus.transactionIsolation</name>
    <value>read-committed</value>
    <description>Default transaction isolation level for identity generation.</description>
  </property>
  <property>
    <name>datanucleus.cache.level2</name>
    <value>false</value>
    <description>Use a level 2 cache. Turn this off if metadata is changed independently of the Hive metastore server</description>
  </property>
  <property>
    <name>datanucleus.cache.level2.type</name>
    <value>none</value>
    <description/>
  </property>
  <property>
    <name>datanucleus.identifierFactory</name>
    <value>datanucleus1</value>
    <description>
      Name of the identifier factory to use when generating table/column names etc.
      'datanucleus1' is used for backward compatibility with DataNucleus v1
    </description>
  </property>
  <property>
    <name>datanucleus.rdbms.useLegacyNativeValueStrategy</name>
    <value>true</value>
    <description/>
  </property>
  <property>
    <name>datanucleus.plugin.pluginRegistryBundleCheck</name>
    <value>LOG</value>
    <description>Defines what happens when plugin bundles are found and are duplicated [EXCEPTION|LOG|NONE]</description>
  </property>
  <property>
    <name>hive.metastore.batch.retrieve.max</name>
    <value>300</value>
    <description>
      Maximum number of objects (tables/partitions) that can be retrieved from the metastore in one batch.
      The higher the number, the fewer round trips are needed to the Hive metastore server,
      but it may also cause a higher memory requirement on the client side.
    </description>
  </property>
  <property>
    <name>hive.metastore.batch.retrieve.table.partition.max</name>
    <value>1000</value>
    <description>Maximum number of table partitions that the metastore internally retrieves in one batch.</description>
  </property>
  <property>
    <name>hive.metastore.init.hooks</name>
    <value/>
    <description>
      A comma separated list of hooks to be invoked at the beginning of HMSHandler initialization.
      An init hook is specified as the name of a Java class which extends org.apache.hadoop.hive.metastore.MetaStoreInitListener.
    </description>
  </property>
  <property>
    <name>hive.metastore.pre.event.listeners</name>
    <value/>
    <description>List of comma separated listeners for metastore events.</description>
  </property>
  <property>
    <name>hive.metastore.event.listeners</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.metastore.event.db.listener.timetolive</name>
    <value>86400s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Time after which events will be removed from the database listener queue.
    </description>
  </property>
  <property>
    <name>hive.metastore.authorization.storage.checks</name>
    <value>false</value>
    <description>
      Should the metastore do authorization checks against the underlying storage (usually HDFS)
      for operations like drop-partition (disallow the drop-partition if the user in
      question doesn't have permissions to delete the corresponding directory
      on the storage).
    </description>
  </property>
  <property>
    <name>hive.metastore.event.clean.freq</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Frequency at which the timer task runs to purge expired events in the metastore.
    </description>
  </property>
  <property>
    <name>hive.metastore.event.expiry.duration</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Duration after which events expire from the events table.
    </description>
  </property>
  <property>
    <name>hive.metastore.execute.setugi</name>
    <value>true</value>
    <description>
      In unsecure mode, setting this property to true will cause the metastore to execute DFS operations using
      the client's reported user and group permissions. Note that this property must be set on
      both the client and server sides. Further note that it is best effort:
      if the client sets it to true and the server sets it to false, the client setting will be ignored.
    </description>
  </property>
  <property>
    <name>hive.metastore.partition.name.whitelist.pattern</name>
    <value/>
    <description>Partition names will be checked against this regex pattern and rejected if not matched.</description>
  </property>
  <property>
    <name>hive.metastore.integral.jdo.pushdown</name>
    <value>false</value>
    <description>
      Allow JDO query pushdown for integral partition columns in the metastore. Off by default. This
      improves metastore perf for integral columns, especially if there's a large number of partitions.
      However, it doesn't work correctly with integral values that are not normalized (e.g. have
      leading zeroes, like 0012). If metastore direct SQL is enabled and works, this optimization
      is also irrelevant.
    </description>
  </property>
  <property>
    <name>hive.metastore.try.direct.sql</name>
    <value>true</value>
    <description>
      Whether the Hive metastore should try to use direct SQL queries instead of the
      DataNucleus for certain read paths. This can improve metastore performance when
      fetching many partitions or column statistics by orders of magnitude; however, it
      is not guaranteed to work on all RDBMS-es and all versions. In case of SQL failures,
      the metastore will fall back to the DataNucleus, so it's safe even if SQL doesn't
      work for all queries on your datastore. If all SQL queries fail (for example, your
      metastore is backed by MongoDB), you might want to disable this to save the
      try-and-fall-back cost.
    </description>
  </property>
  <property>
    <name>hive.metastore.direct.sql.batch.size</name>
    <value>0</value>
    <description>
      Batch size for partition and other object retrieval from the underlying DB in direct
      SQL. For some DBs like Oracle and MSSQL, there are hardcoded or perf-based limitations
      that necessitate this. For DBs that can handle the queries, this isn't necessary and
      may impede performance. -1 means no batching, 0 means automatic batching.
    </description>
  </property>
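  <!--
    Sketch: if the backing RDBMS rejects the generated SQL (or, per the note
    on the next property, fails inside transactions on Postgres), direct SQL
    can be turned off on the metastore side:
      <property><name>hive.metastore.try.direct.sql</name><value>false</value></property>
      <property><name>hive.metastore.try.direct.sql.ddl</name><value>false</value></property>
  -->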
  <property>
    <name>hive.metastore.try.direct.sql.ddl</name>
    <value>true</value>
    <description>
      Same as hive.metastore.try.direct.sql, for read statements within a transaction that
      modifies metastore data. Due to non-standard behavior in Postgres, if a direct SQL
      select query has incorrect syntax or something similar inside a transaction, the
      entire transaction will fail and fall-back to DataNucleus will not be possible. You
      should disable the usage of direct SQL inside transactions if that happens in your case.
    </description>
  </property>
  <property>
    <name>hive.metastore.orm.retrieveMapNullsAsEmptyStrings</name>
    <value>false</value>
    <description>Thrift does not support nulls in maps, so any nulls present in maps retrieved from ORM must either be pruned or converted to empty strings. Some backing dbs such as Oracle persist empty strings as nulls, so we should set this parameter if we wish to reverse that behaviour. For others, pruning is the correct behaviour.</description>
  </property>
  <property>
    <name>hive.metastore.disallow.incompatible.col.type.changes</name>
    <value>false</value>
    <description>
      If true (default is false), ALTER TABLE operations which change the type of a
      column (say STRING) to an incompatible type (say MAP) are disallowed.
      The RCFile default SerDe (ColumnarSerDe) serializes the values in such a way that the
      datatypes can be converted from string to any type. The map is also serialized as
      a string, which can be read as a string as well. However, with any binary
      serialization, this is not true. Blocking the ALTER TABLE prevents ClassCastExceptions
      when subsequently trying to access old partitions.

      Primitive types like INT, STRING, BIGINT, etc., are compatible with each other and are
      not blocked.

      See HIVE-4409 for more details.
    </description>
  </property>
  <property>
    <name>hive.table.parameters.default</name>
    <value/>
    <description>Default property values for newly created tables</description>
  </property>
  <property>
    <name>hive.ddl.createtablelike.properties.whitelist</name>
    <value/>
    <description>Table properties to copy over when executing a CREATE TABLE LIKE.</description>
  </property>
  <property>
    <name>hive.metastore.rawstore.impl</name>
    <value>org.apache.hadoop.hive.metastore.ObjectStore</value>
    <description>
      Name of the class that implements the org.apache.hadoop.hive.metastore.rawstore interface.
      This class is used to store and retrieve raw metadata objects such as tables and databases.
    </description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.apache.derby.jdbc.EmbeddedDriver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.PersistenceManagerFactoryClass</name>
    <value>org.datanucleus.api.jdo.JDOPersistenceManagerFactory</value>
    <description>Class implementing the JDO persistence</description>
  </property>
  <property>
    <name>hive.metastore.expression.proxy</name>
    <value>org.apache.hadoop.hive.ql.optimizer.ppr.PartitionExpressionForMetastore</value>
    <description/>
  </property>
  <property>
    <name>javax.jdo.option.DetachAllOnCommit</name>
    <value>true</value>
    <description>Detaches all objects from the session so that they can be used after the transaction is committed</description>
  </property>
  <property>
    <name>javax.jdo.option.NonTransactionalRead</name>
    <value>true</value>
    <description>Reads outside of transactions</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>APP</value>
    <description>Username to use against the metastore database</description>
  </property>
  <property>
    <name>hive.metastore.end.function.listeners</name>
    <value/>
    <description>List of comma separated listeners for the end of metastore functions.</description>
  </property>
  <property>
    <name>hive.metastore.partition.inherit.table.properties</name>
    <value/>
    <description>
      List of comma separated keys occurring in table properties which will get inherited to newly created partitions.
      * implies all the keys will get inherited.
    </description>
  </property>
  <property>
    <name>hive.metastore.filter.hook</name>
    <value>org.apache.hadoop.hive.metastore.DefaultMetaStoreFilterHookImpl</value>
    <description>Metastore hook class for filtering the metadata read results. If hive.security.authorization.manager is set to an instance of HiveAuthorizerFactory, then this value is ignored.</description>
  </property>
  <property>
    <name>hive.metastore.dml.events</name>
    <value>false</value>
    <description>If true, the metastore will be asked to fire events for DML operations</description>
  </property>
  <property>
    <name>hive.metastore.client.drop.partitions.using.expressions</name>
    <value>true</value>
    <description>Choose whether dropping partitions with HCatClient pushes the partition-predicate to the metastore, or drops partitions iteratively</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.enabled</name>
    <value>true</value>
    <description>Whether aggregate stats caching is enabled or not.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.size</name>
    <value>10000</value>
    <description>Maximum number of aggregate stats nodes that we will place in the metastore aggregate stats cache.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.partitions</name>
    <value>10000</value>
    <description>Maximum number of partitions that are aggregated per cache node.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.fpp</name>
    <value>0.01</value>
    <description>Maximum false positive probability for the Bloom Filter used in each aggregate stats cache node (default 1%).</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.variance</name>
    <value>0.01</value>
    <description>Maximum tolerable variance in the number of partitions between a cached node and our request (default 1%).</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.ttl</name>
    <value>600s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Number of seconds for a cached node to be active in the cache before it becomes stale.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.writer.wait</name>
    <value>5000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a writer will wait to acquire the writelock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.reader.wait</name>
    <value>1000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Number of milliseconds a reader will wait to acquire the readlock before giving up.
    </description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.max.full</name>
    <value>0.9</value>
    <description>Maximum cache full % after which the cache cleaner thread kicks in.</description>
  </property>
  <property>
    <name>hive.metastore.aggregate.stats.cache.clean.until</name>
    <value>0.8</value>
    <description>The cleaner thread cleans until the cache reaches this % of full size.</description>
  </property>
  <property>
    <name>hive.metadata.export.location</name>
    <value/>
    <description>
      When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener,
      it is the location to which the metadata will be exported. The default is an empty string, which results in the
      metadata being exported to the current user's home directory on HDFS.
    </description>
  </property>
  <property>
    <name>hive.metadata.move.exported.metadata.to.trash</name>
    <value>true</value>
    <description>
      When used in conjunction with the org.apache.hadoop.hive.ql.parse.MetaDataExportListener pre event listener,
      this setting determines if the metadata that is exported will subsequently be moved to the user's trash directory
      alongside the dropped table data. This ensures that the metadata will be cleaned up along with the dropped table data.
    </description>
  </property>
  <property>
    <name>hive.cli.errors.ignore</name>
    <value>false</value>
    <description/>
  </property>
  <property>
    <name>hive.cli.print.current.db</name>
    <value>false</value>
    <description>Whether to include the current database in the Hive prompt.</description>
  </property>
  <property>
    <name>hive.cli.prompt</name>
    <value>hive</value>
    <description>
      Command line prompt configuration value. Other hiveconf values can be used in this configuration value.
      Variable substitution will only be invoked at Hive CLI startup.
    </description>
  </property>
  <property>
    <name>hive.cli.pretty.output.num.cols</name>
    <value>-1</value>
    <description>
      The number of columns to use when formatting output generated by the DESCRIBE PRETTY table_name command.
      If the value of this property is -1, then Hive will use the auto-detected terminal width.
    </description>
  </property>
  <property>
    <name>hive.metastore.fs.handler.class</name>
    <value>org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl</value>
    <description/>
  </property>
  <property>
    <name>hive.session.id</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.session.silent</name>
    <value>false</value>
    <description/>
  </property>
  <property>
    <name>hive.session.history.enabled</name>
    <value>false</value>
    <description>Whether to log the Hive query, query plan, runtime statistics etc.</description>
  </property>
  <property>
    <name>hive.query.string</name>
    <value/>
    <description>Query being executed (might be multiple per session)</description>
  </property>
  <property>
    <name>hive.query.id</name>
    <value/>
    <description>ID for the query being executed (might be multiple per session)</description>
  </property>
  <property>
    <name>hive.jobname.length</name>
    <value>50</value>
    <description>Max jobname length</description>
  </property>
  <property>
    <name>hive.jar.path</name>
    <value/>
    <description>The location of hive_cli.jar that is used when submitting jobs in a separate jvm.</description>
  </property>
  <property>
    <name>hive.aux.jars.path</name>
    <value/>
    <description>The location of the plugin jars that contain implementations of user defined functions and serdes.</description>
  </property>
  <property>
    <name>hive.reloadable.aux.jars.path</name>
    <value/>
    <description>Jars can be renewed by executing the reload command, and these jars can then be used for auxiliary classes such as UDFs or SerDes.</description>
  </property>
  <property>
    <name>hive.added.files.path</name>
    <value/>
    <description>This is an internal parameter.</description>
  </property>
  <property>
    <name>hive.added.jars.path</name>
    <value/>
    <description>This is an internal parameter.</description>
  </property>
  <property>
    <name>hive.added.archives.path</name>
    <value/>
    <description>This is an internal parameter.</description>
  </property>
  <property>
    <name>hive.auto.progress.timeout</name>
    <value>0s</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      How long to run the autoprogressor for the script/UDTF operators.
      Set to 0 for forever.
    </description>
  </property>
  <property>
    <name>hive.script.auto.progress</name>
    <value>false</value>
    <description>
      Whether the Hive Transform/Map/Reduce clause should automatically send progress information to the TaskTracker
      to avoid the task getting killed because of inactivity. Hive sends progress information when the script is
      outputting to stderr. This option removes the need to periodically produce stderr messages,
      but users should be cautious because this may prevent the TaskTracker from killing scripts stuck in infinite loops.
    </description>
  </property>
  <property>
    <name>hive.script.operator.id.env.var</name>
    <value>HIVE_SCRIPT_OPERATOR_ID</value>
    <description>
      Name of the environment variable that holds the unique script operator ID in the user's
      transform function (the custom mapper/reducer that the user has specified in the query)
    </description>
  </property>
  <property>
    <name>hive.script.operator.truncate.env</name>
    <value>false</value>
    <description>Truncate each environment variable for external scripts in the script operator to 20KB (to fit system limits)</description>
  </property>
  <property>
    <name>hive.script.operator.env.blacklist</name>
    <value>hive.txn.valid.txns,hive.script.operator.env.blacklist</value>
    <description>Comma separated list of keys from the configuration file not to convert to environment variables when invoking the script operator</description>
  </property>
  <property>
    <name>hive.mapred.mode</name>
    <value>nonstrict</value>
    <description>
      The mode in which the Hive operations are being performed.
      In strict mode, some risky queries are not allowed to run. They include:
        Cartesian Product.
        No partition being picked up for a query.
        Comparing bigints and strings.
        Comparing bigints and doubles.
        Orderby without limit.
    </description>
  </property>
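  <!--
    Usage sketch: in strict mode the risky patterns listed above are rejected.
    Assuming a table sales(amount double) partitioned by day, something like:
      SET hive.mapred.mode=strict;
      SELECT * FROM sales ORDER BY amount;
    fails (ORDER BY without LIMIT), while adding a LIMIT and a partition
    filter passes:
      SELECT * FROM sales WHERE day='2015-07-27' ORDER BY amount LIMIT 10;
  -->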
  <property>
    <name>hive.alias</name>
    <value/>
    <description/>
  </property>
  <property>
    <name>hive.map.aggr</name>
    <value>true</value>
    <description>Whether to use map-side aggregation in Hive Group By queries</description>
  </property>
  <property>
    <name>hive.groupby.skewindata</name>
    <value>false</value>
    <description>Whether there is skew in the data, to optimize group by queries</description>
  </property>
  <property>
    <name>hive.join.emit.interval</name>
    <value>1000</value>
    <description>How many rows in the right-most join operand Hive should buffer before emitting the join result.</description>
  </property>
  <property>
    <name>hive.join.cache.size</name>
    <value>25000</value>
    <description>How many rows in the joining tables (except the streaming table) should be cached in memory.</description>
  </property>
  <property>
    <name>hive.cbo.enable</name>
    <value>true</value>
    <description>Flag to control enabling Cost Based Optimizations using the Calcite framework.</description>
  </property>
  <property>
    <name>hive.cbo.returnpath.hiveop</name>
    <value>false</value>
    <description>Flag to control Calcite plan to Hive operator conversion</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.extended</name>
    <value>false</value>
    <description>Flag to control enabling the extended cost model based on CPU, IO and cardinality. Otherwise, the cost model is based on cardinality.</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.cpu</name>
    <value>0.000001</value>
    <description>Default cost of a comparison</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.network</name>
    <value>150.0</value>
    <description>Default cost of transferring a byte over the network; expressed as a multiple of the CPU cost</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.local.fs.write</name>
    <value>4.0</value>
    <description>Default cost of writing a byte to the local FS; expressed as a multiple of the NETWORK cost</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.local.fs.read</name>
    <value>4.0</value>
    <description>Default cost of reading a byte from the local FS; expressed as a multiple of the NETWORK cost</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.hdfs.write</name>
    <value>10.0</value>
    <description>Default cost of writing a byte to HDFS; expressed as a multiple of the local FS write cost</description>
  </property>
  <property>
    <name>hive.cbo.costmodel.hdfs.read</name>
    <value>1.5</value>
    <description>Default cost of reading a byte from HDFS; expressed as a multiple of the local FS read cost</description>
  </property>
  <property>
    <name>hive.mapjoin.bucket.cache.size</name>
    <value>100</value>
    <description/>
  </property>
  <property>
    <name>hive.mapjoin.optimized.hashtable</name>
    <value>true</value>
    <description>
      Whether Hive should use a memory-optimized hash table for MapJoin. Only works on Tez,
      because the memory-optimized hashtable cannot be serialized.
    </description>
  </property>
  <property>
    <name>hive.mapjoin.hybridgrace.hashtable</name>
    <value>true</value>
    <description>Whether to use hybrid grace hash join as the join method for mapjoin. Tez only.</description>
  </property>
  <property>
    <name>hive.mapjoin.hybridgrace.memcheckfrequency</name>
    <value>1024</value>
    <description>For hybrid grace hash join, how often (how many rows apart) we check if memory is full. This number should be a power of 2.</description>
  </property>
  <property>
    <name>hive.mapjoin.hybridgrace.minwbsize</name>
    <value>524288</value>
    <description>For hybrid grace hash join, the minimum write buffer size used by the optimized hashtable. Default is 512 KB.</description>
  </property>
  <property>
    <name>hive.mapjoin.hybridgrace.minnumpartitions</name>
    <value>16</value>
    <description>For hybrid grace hash join, the minimum number of partitions to create.</description>
  </property>
  <property>
    <name>hive.mapjoin.optimized.hashtable.wbsize</name>
    <value>10485760</value>
    <description>
      The optimized hashtable (see hive.mapjoin.optimized.hashtable) uses a chain of buffers to
      store data. This is one buffer size. The HT may be slightly faster if this is larger, but for small
      joins unnecessary memory will be allocated and then trimmed.
    </description>
  </property>
  <property>
    <name>hive.smbjoin.cache.rows</name>
    <value>10000</value>
    <description>How many rows with the same key value should be cached in memory per SMB-joined table.</description>
  </property>
  <property>
    <name>hive.groupby.mapaggr.checkinterval</name>
    <value>100000</value>
    <description>Number of rows after which the size of the grouping keys/aggregation classes is checked</description>
  </property>
  <property>
    <name>hive.map.aggr.hash.percentmemory</name>
    <value>0.5</value>
    <description>Portion of total memory to be used by the map-side group aggregation hash table</description>
  </property>
  <property>
    <name>hive.mapjoin.followby.map.aggr.hash.percentmemory</name>
    <value>0.3</value>
    <description>Portion of total memory to be used by the map-side group aggregation hash table, when this group by is followed by a map join</description>
  </property>
  <property>
    <name>hive.map.aggr.hash.force.flush.memory.threshold</name>
    <value>0.9</value>
    <description>
      The max memory to be used by the map-side group aggregation hash table.
      If the memory usage is higher than this number, force flushing of the data.
    </description>
  </property>
  <property>
    <name>hive.map.aggr.hash.min.reduction</name>
    <value>0.5</value>
    <description>
      Hash aggregation will be turned off if the ratio between hash table size and input rows is bigger than this number.
      Set to 1 to make sure hash aggregation is never turned off.
    </description>
  </property>
  <property>
    <name>hive.multigroupby.singlereducer</name>
    <value>true</value>
    <description>
      Whether to optimize a multi group by query to generate a single M/R job plan. If the multi group by query has
      common group by keys, it will be optimized to generate a single M/R job.
    </description>
  </property>
  <property>
    <name>hive.map.groupby.sorted</name>
    <value>false</value>
    <description>
      If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform
      the group by in the mapper by using BucketizedHiveInputFormat. The only downside to this
      is that it limits the number of mappers to the number of files.
    </description>
  </property>
  <property>
    <name>hive.map.groupby.sorted.testmode</name>
    <value>false</value>
    <description>
      If the bucketing/sorting properties of the table exactly match the grouping key, whether to perform
      the group by in the mapper by using BucketizedHiveInputFormat. If test mode is set, the plan
      is not converted, but a query property is set to denote the same.
    </description>
  </property>
  <property>
    <name>hive.groupby.orderby.position.alias</name>
    <value>false</value>
    <description>Whether to enable using Column Position Alias in Group By or Order By</description>
  </property>
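  <!--
    Usage sketch (table and columns are assumed for the example): with
    position aliases enabled, ordinals refer to select-list positions:
      SET hive.groupby.orderby.position.alias=true;
      SELECT day, count(*) FROM sales GROUP BY 1 ORDER BY 2 DESC;
  -->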
  <property>
    <name>hive.new.job.grouping.set.cardinality</name>
    <value>30</value>
    <description>
      Whether a new map-reduce job should be launched for grouping sets/rollups/cubes.
      For a query like: select a, b, c, count(1) from T group by a, b, c with rollup;
      4 rows are created per row: (a, b, c), (a, b, null), (a, null, null), (null, null, null).
      This can lead to explosion across the map-reduce boundary if the cardinality of T is very high,
      and map-side aggregation does not do a very good job.

      This parameter decides if Hive should add an additional map-reduce job. If the grouping set
      cardinality (4 in the example above) is more than this value, a new MR job is added under the
      assumption that the original group by will reduce the data size.
    </description>
  </property>
  <property>
    <name>hive.exec.copyfile.maxsize</name>
    <value>33554432</value>
    <description>Maximum file size (in bytes) that Hive uses to do single HDFS copies between directories. Distributed copies (distcp) will be used instead for bigger files so that copies can be done faster.</description>
  </property>
  <property>
    <name>hive.udtf.auto.progress</name>
    <value>false</value>
    <description>
      Whether Hive should automatically send progress information to the TaskTracker
      when using UDTFs, to prevent the task getting killed because of inactivity. Users should be cautious
      because this may prevent the TaskTracker from killing tasks with infinite loops.
    </description>
  </property>
  <property>
    <name>hive.default.fileformat</name>
    <value>TextFile</value>
    <description>
      Expects one of [textfile, sequencefile, rcfile, orc].
      Default file format for CREATE TABLE statements. Users can explicitly override it with CREATE TABLE ... STORED AS [FORMAT]
    </description>
  </property>
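  <!--
    Usage sketch (table name and columns are placeholders): the per-table
    override mentioned above looks like:
      CREATE TABLE logs_orc (ts string, msg string) STORED AS ORC;
    while tables created without STORED AS fall back to the TextFile default.
  -->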
  <property>
    <name>hive.default.fileformat.managed</name>
    <value>none</value>
    <description>
      Expects one of [none, textfile, sequencefile, rcfile, orc].
      Default file format for CREATE TABLE statements applied to managed tables only. External tables will be
      created with the format specified by hive.default.fileformat. Leaving this null will result in using hive.default.fileformat
      for all tables.
    </description>
  </property>
  <property>
    <name>hive.query.result.fileformat</name>
    <value>TextFile</value>
    <description>
      Expects one of [textfile, sequencefile, rcfile].
      Default file format for storing the result of the query.
    </description>
  </property>
  <property>
    <name>hive.fileformat.check</name>
    <value>true</value>
    <description>Whether to check the file format or not when loading data files</description>
  </property>
  <property>
    <name>hive.default.rcfile.serde</name>
    <value>org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe</value>
    <description>The default SerDe Hive will use for the RCFile format</description>
  </property>
  <property>
    <name>hive.default.serde</name>
    <value>org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe</value>
    <description>The default SerDe Hive will use for storage formats that do not specify a SerDe.</description>
  </property>
  <property>
    <name>hive.serdes.using.metastore.for.schema</name>
    <value>org.apache.hadoop.hive.ql.io.orc.OrcSerde,org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe,org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe,org.apache.hadoop.hive.serde2.dynamic_type.DynamicSerDe,org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,org.apache.hadoop.hive.serde2.columnar.LazyBinaryColumnarSerDe,org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe,org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe</value>
    <description>SerDes retrieving their schema from the metastore. This is an internal parameter. Check with the Hive dev team.</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>${system:java.io.tmpdir}/${system:user.name}</value>
    <description>Location of the Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.querylog.enable.plan.progress</name>
    <value>true</value>
    <description>
      Whether to log the plan's progress every time a job's progress is checked.
      These logs are written to the location specified by hive.querylog.location
    </description>
  </property>
  <property>
    <name>hive.querylog.plan.progress.interval</name>
    <value>60000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The interval to wait between logging the plan's progress.
      If there is a whole number percentage change in the progress of the mappers or the reducers,
      the progress is logged regardless of this value.
      The actual interval will be the ceiling of (this value divided by the value of
      hive.exec.counters.pull.interval) multiplied by the value of hive.exec.counters.pull.interval,
      i.e. if it does not divide evenly by the value of hive.exec.counters.pull.interval it will be
      logged less frequently than specified.
      This only has an effect if hive.querylog.enable.plan.progress is set to true.
    </description>
  </property>
  <property>
    <name>hive.script.serde</name>
    <value>org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe</value>
    <description>The default SerDe for transmitting input data to and reading output data from the user scripts.</description>
  </property>
  <property>
    <name>hive.script.recordreader</name>
    <value>org.apache.hadoop.hive.ql.exec.TextRecordReader</value>
    <description>The default record reader for reading data from the user scripts.</description>
  </property>
  <property>
    <name>hive.script.recordwriter</name>
    <value>org.apache.hadoop.hive.ql.exec.TextRecordWriter</value>
    <description>The default record writer for writing data to the user scripts.</description>
  </property>
  <property>
    <name>hive.transform.escape.input</name>
    <value>false</value>
    <description>
      This adds an option to escape special chars (newlines, carriage returns and
      tabs) when they are passed to the user script. This is useful if the Hive tables
      can contain data that contains special characters.
    </description>
  </property>
  <property>
    <name>hive.binary.record.max.length</name>
    <value>1000</value>
    <description>
      Read from a binary stream and treat each hive.binary.record.max.length bytes as a record.
      The last record before the end of stream can have less than hive.binary.record.max.length bytes
    </description>
  </property>
  <property>
    <name>hive.hwi.listen.host</name>
    <value>0.0.0.0</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.listen.port</name>
    <value>9999</value>
    <description>This is the port the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.hwi.war.file</name>
    <value>${env:HWI_WAR_FILE}</value>
    <description>This sets the path to the HWI war file, relative to ${HIVE_HOME}.</description>
  </property>
  <property>
    <name>hive.mapred.local.mem</name>
    <value>0</value>
    <description>Mapper/reducer memory in local mode</description>
  </property>
  <property>
    <name>hive.mapjoin.smalltable.filesize</name>
    <value>25000000</value>
    <description>
      The threshold for the input file size of the small tables; if the file size is smaller
      than this threshold, it will try to convert the common join into a map join
    </description>
  </property>
  <property>
    <name>hive.sample.seednumber</name>
    <value>0</value>
    <description>A number used for percentage sampling. By changing this number, the user will change the subsets of data sampled.</description>
  </property>
  <property>
    <name>hive.test.mode</name>
    <value>false</value>
    <description>Whether Hive is running in test mode. If yes, it turns on sampling and prefixes the output tablename.</description>
  </property>
  <property>
    <name>hive.test.mode.prefix</name>
    <value>test_</value>
    <description>In test mode, specifies prefixes for the output table</description>
  </property>
  <property>
    <name>hive.test.mode.samplefreq</name>
    <value>32</value>
    <description>
      In test mode, specifies the sampling frequency for a table which is not bucketed.
      For example, the following query:
        INSERT OVERWRITE TABLE dest SELECT col1 from src
      would be converted to
        INSERT OVERWRITE TABLE test_dest
        SELECT col1 from src TABLESAMPLE (BUCKET 1 out of 32 on rand(1))
    </description>
  </property>
  <property>
    <name>hive.test.mode.nosamplelist</name>
    <value/>
    <description>In test mode, specifies comma separated table names which would not apply sampling</description>
  </property>
  <property>
    <name>hive.test.dummystats.aggregator</name>
    <value/>
    <description>internal variable for test</description>
  </property>
  <property>
    <name>hive.test.dummystats.publisher</name>
    <value/>
    <description>internal variable for test</description>
  </property>
  <property>
    <name>hive.test.currenttimestamp</name>
    <value/>
    <description>current timestamp for test</description>
  </property>
  <property>
    <name>hive.merge.mapfiles</name>
    <value>true</value>
    <description>Merge small files at the end of a map-only job</description>
  </property>
  <property>
    <name>hive.merge.mapredfiles</name>
    <value>false</value>
    <description>Merge small files at the end of a map-reduce job</description>
  </property>
  <property>
    <name>hive.merge.tezfiles</name>
    <value>false</value>
    <description>Merge small files at the end of a Tez DAG</description>
  </property>
  <property>
    <name>hive.merge.sparkfiles</name>
    <value>false</value>
    <description>Merge small files at the end of a Spark DAG Transformation</description>
  </property>
  <property>
    <name>hive.merge.size.per.task</name>
    <value>256000000</value>
    <description>Size of merged files at the end of the job</description>
  </property>
  <property>
    <name>hive.merge.smallfiles.avgsize</name>
    <value>16000000</value>
    <description>
      When the average output file size of a job is less than this number, Hive will start an additional
      map-reduce job to merge the output files into bigger files. This is only done for map-only jobs
      if hive.merge.mapfiles is true, and for map-reduce jobs if hive.merge.mapredfiles is true.
    </description>
  </property>
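  <!--
    Usage sketch: to also merge the small outputs of map-reduce jobs, the
    knobs above combine per session like this (sizes shown are the defaults,
    repeated only as an illustration):
      SET hive.merge.mapfiles=true;
      SET hive.merge.mapredfiles=true;
      SET hive.merge.smallfiles.avgsize=16000000;
      SET hive.merge.size.per.task=256000000;
  -->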
  <property>
    <name>hive.merge.rcfile.block.level</name>
    <value>true</value>
    <description/>
  </property>
  <property>
    <name>hive.merge.orcfile.stripe.level</name>
    <value>true</value>
    <description>
      When hive.merge.mapfiles, hive.merge.mapredfiles or hive.merge.tezfiles is enabled
      while writing a table with the ORC file format, enabling this config will do stripe-level
      fast merge for small ORC files. Note that enabling this config will not honor the
      padding tolerance config (hive.exec.orc.block.padding.tolerance).
    </description>
  </property>
  <property>
    <name>hive.exec.rcfile.use.explicit.header</name>
    <value>true</value>
    <description>
      If this is set, the header for RCFiles will simply be RCF. If this is not
      set, the header will be the one borrowed from sequence files, e.g. SEQ- followed
      by the input and output RCFile formats.
    </description>
  </property>
  <property>
    <name>hive.exec.rcfile.use.sync.cache</name>
    <value>true</value>
    <description/>
  </property>
  <property>
    <name>hive.io.rcfile.record.interval</name>
    <value>2147483647</value>
    <description/>
  </property>
  <property>
    <name>hive.io.rcfile.column.number.conf</name>
    <value>0</value>
    <description/>
  </property>
  <property>
    <name>hive.io.rcfile.tolerate.corruptions</name>
    <value>false</value>
    <description/>
  </property>
  <property>
    <name>hive.io.rcfile.record.buffer.size</name>
    <value>4194304</value>
    <description/>
  </property>
  <property>
    <name>parquet.memory.pool.ratio</name>
    <value>0.5</value>
    <description>
      Maximum fraction of heap that can be used by Parquet file writers in one task.
      It is for avoiding OutOfMemory errors in tasks. Works with Parquet 1.6.0 and above.
      This config parameter is defined in Parquet, so it does not start with 'hive.'.
    </description>
  </property>
  <property>
    <name>hive.parquet.timestamp.skip.conversion</name>
    <value>true</value>
    <description>The current Hive implementation of Parquet stores timestamps in UTC; this flag allows skipping the conversion on reading Parquet files written by other tools</description>
  </property>
  <property>
    <name>hive.int.timestamp.conversion.in.seconds</name>
    <value>false</value>
    <description>
      Boolean/tinyint/smallint/int/bigint values are interpreted as milliseconds during the timestamp conversion.
      Set this flag to true to interpret the value as seconds, to be consistent with float/double.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.memory.pool</name>
    <value>0.5</value>
    <description>Maximum fraction of heap that can be used by ORC file writers</description>
  </property>
  <property>
    <name>hive.exec.orc.write.format</name>
    <value/>
    <description>
      Define the version of the file to write. Possible values are 0.11 and 0.12.
      If this parameter is not defined, ORC will use the run length encoding (RLE)
      introduced in Hive 0.12. Any value other than 0.11 results in the 0.12 encoding.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.default.stripe.size</name>
    <value>67108864</value>
    <description>Define the default ORC stripe size, in bytes.</description>
  </property>
  <property>
    <name>hive.exec.orc.default.block.size</name>
    <value>268435456</value>
    <description>Define the default file system block size for ORC files.</description>
  </property>
  <property>
    <name>hive.exec.orc.dictionary.key.size.threshold</name>
    <value>0.8</value>
    <description>
      If the number of keys in a dictionary is greater than this fraction of the total number of
      non-null rows, turn off dictionary encoding. Use 1 to always use dictionary encoding.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.default.row.index.stride</name>
    <value>10000</value>
    <description>
      Define the default ORC index stride in number of rows. (Stride is the number of rows
      an index entry represents.)
    </description>
  </property>
  <property>
    <name>hive.orc.row.index.stride.dictionary.check</name>
    <value>true</value>
    <description>
      If enabled, the dictionary check will happen after the first row index stride (default 10000 rows);
      otherwise the dictionary check will happen before writing the first stripe. In both cases, the decision
      whether to use a dictionary or not will be retained thereafter.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.default.buffer.size</name>
    <value>262144</value>
    <description>Define the default ORC buffer size, in bytes.</description>
  </property>
  <property>
    <name>hive.exec.orc.default.block.padding</name>
    <value>true</value>
    <description>Define the default block padding, which pads stripes to the HDFS block boundaries.</description>
  </property>
  <property>
    <name>hive.exec.orc.block.padding.tolerance</name>
    <value>0.05</value>
    <description>
      Define the tolerance for block padding as a decimal fraction of stripe size (for
      example, the default value 0.05 is 5% of the stripe size). For the defaults of 64Mb
      ORC stripe and 256Mb HDFS blocks, the default block padding tolerance of 5% will
      reserve a maximum of 3.2Mb for padding within the 256Mb block. In that case, if the
      available size within the block is more than 3.2Mb, a new smaller stripe will be
      inserted to fit within that space. This will make sure that no stripe written will
      cross block boundaries and cause remote reads within a node local task.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.default.compress</name>
    <value>ZLIB</value>
    <description>Define the default compression codec for ORC files</description>
  </property>
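  <!--
    Usage sketch (table definition is a placeholder): the default codec above
    can be overridden per table through ORC table properties:
      CREATE TABLE events_orc (id bigint, payload string)
      STORED AS ORC TBLPROPERTIES ("orc.compress"="SNAPPY");
  -->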
  <property>
    <name>hive.exec.orc.encoding.strategy</name>
    <value>SPEED</value>
    <description>
      Expects one of [speed, compression].
      Define the encoding strategy to use while writing data. Changing this will
      only affect the light weight encoding for integers. This flag will not
      change the compression level of higher level compression codecs (like ZLIB).
    </description>
  </property>
  <property>
    <name>hive.exec.orc.compression.strategy</name>
    <value>SPEED</value>
    <description>
      Expects one of [speed, compression].
      Define the compression strategy to use while writing data.
      This changes the compression level of higher level compression codecs (like ZLIB).
    </description>
  </property>
  <property>
    <name>hive.exec.orc.split.strategy</name>
    <value>HYBRID</value>
    <description>
      Expects one of [hybrid, bi, etl].
      This is not a user level config. The BI strategy is used when the requirement is to spend less time
      in split generation as opposed to query execution (split generation does not read or cache file footers).
      The ETL strategy is used when spending a little more time in split generation is acceptable
      (split generation reads and caches file footers). HYBRID chooses between the above strategies
      based on heuristics.
    </description>
  </property>
  <property>
    <name>hive.orc.splits.include.file.footer</name>
    <value>false</value>
    <description>
      If turned on, splits generated by ORC will include metadata about the stripes in the file. This
      data is read remotely (from the client or HS2 machine) and sent to all the tasks.
    </description>
  </property>
  <property>
    <name>hive.orc.cache.stripe.details.size</name>
    <value>10000</value>
    <description>Cache size for keeping meta info about ORC splits cached in the client.</description>
  </property>
  <property>
    <name>hive.orc.compute.splits.num.threads</name>
    <value>10</value>
    <description>How many threads ORC should use to create splits in parallel.</description>
  </property>
  <property>
    <name>hive.exec.orc.skip.corrupt.data</name>
    <value>false</value>
    <description>
      If the ORC reader encounters corrupt data, this value will be used to determine
      whether to skip the corrupt data or throw an exception. The default behavior is to throw an exception.
    </description>
  </property>
  <property>
    <name>hive.exec.orc.zerocopy</name>
    <value>false</value>
    <description>Use zerocopy reads with ORC. (This requires Hadoop 2.3 or later.)</description>
  </property>
  <property>
    <name>hive.lazysimple.extended_boolean_literal</name>
    <value>false</value>
    <description>
      LazySimpleSerde uses this property to determine whether it treats 'T', 't', 'F', 'f',
      '1', and '0' as extended, legal boolean literals, in addition to 'TRUE' and 'FALSE'.
      The default is false, which means only 'TRUE' and 'FALSE' are treated as legal
      boolean literals.
    </description>
  </property>
  <property>
    <name>hive.optimize.skewjoin</name>
    <value>false</value>
    <description>
      Whether to enable skew join optimization.
      The algorithm is as follows: At runtime, detect the keys with a large skew. Instead of
      processing those keys, store them temporarily in an HDFS directory. In a follow-up map-reduce
      job, process those skewed keys. The same key need not be skewed for all the tables, and so,
      the follow-up map-reduce job (for the skewed keys) would be much faster, since it would be a
      map-join.
    </description>
  </property>
  <property>
    <name>hive.auto.convert.join</name>
    <value>true</value>
    <description>Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size</description>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask</name>
    <value>true</value>
    <description>
      Whether Hive enables the optimization of converting a common join into a mapjoin based on the input file size.
      If this parameter is on, and the sum of sizes for n-1 of the tables/partitions for an n-way join is smaller than the
      specified size, the join is directly converted to a mapjoin (there is no conditional task).
    </description>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask.size</name>
    <value>10000000</value>
    <description>
      If hive.auto.convert.join.noconditionaltask is off, this parameter does not take effect.
      However, if it is on, and the sum of sizes for n-1 of the tables/partitions for an n-way join is smaller than this size,
      the join is directly converted to a mapjoin (there is no conditional task). The default is 10MB.
    </description>
  </property>
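  <!--
    Tuning sketch: raising the noconditionaltask threshold lets larger
    dimension tables be broadcast as mapjoins; the 100MB value below is an
    illustrative assumption, not a recommendation for this sandbox:
      SET hive.auto.convert.join=true;
      SET hive.auto.convert.join.noconditionaltask=true;
      SET hive.auto.convert.join.noconditionaltask.size=100000000;
  -->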
  <property>
    <name>hive.auto.convert.join.use.nonstaged</name>
    <value>false</value>
    <description>
      For conditional joins, if the input stream from a small alias can be directly applied to the join operator without
      filtering or projection, the alias need not be pre-staged in the distributed cache via a mapred local task.
      Currently, this is not working with vectorization or the tez execution engine.
    </description>
  </property>
  <property>
    <name>hive.skewjoin.key</name>
    <value>100000</value>
    <description>
      Determine if we get a skew key in a join. If we see more than the specified number of rows with the same key in the join operator,
      we treat the key as a skew join key.
    </description>
  </property>
  <property>
    <name>hive.skewjoin.mapjoin.map.tasks</name>
    <value>10000</value>
    <description>
      Determine the number of map tasks used in the follow-up map join job for a skew join.
      It should be used together with hive.skewjoin.mapjoin.min.split to perform fine grained control.
    </description>
  </property>
  <property>
    <name>hive.skewjoin.mapjoin.min.split</name>
    <value>33554432</value>
    <description>
      Determine the maximum number of map tasks used in the follow-up map join job for a skew join by specifying
      the minimum split size. It should be used together with hive.skewjoin.mapjoin.map.tasks to perform fine grained control.
    </description>
  </property>
  <property>
    <name>hive.heartbeat.interval</name>
    <value>1000</value>
    <description>Send a heartbeat after this interval - used by mapjoin and filter operators</description>
  </property>
  <property>
    <name>hive.limit.row.max.size</name>
    <value>100000</value>
    <description>When trying a smaller subset of data for simple LIMIT, the minimum size we guarantee each row to have.</description>
  </property>
  <property>
    <name>hive.limit.optimize.limit.file</name>
    <value>10</value>
    <description>When trying a smaller subset of data for simple LIMIT, the maximum number of files we can sample.</description>
  </property>
  <property>
    <name>hive.limit.optimize.enable</name>
    <value>false</value>
    <description>Whether to enable the optimization of trying a smaller subset of data for simple LIMIT first.</description>
  </property>
  <property>
    <name>hive.limit.optimize.fetch.max</name>
    <value>50000</value>
    <description>
      Maximum number of rows allowed for a smaller subset of data for simple LIMIT, if it is a fetch query.
      Insert queries are not restricted by this limit.
    </description>
  </property>
  <property>
    <name>hive.limit.pushdown.memory.usage</name>
    <value>-1.0</value>
    <description>The max memory to be used for hash in the RS operator for top-K selection.</description>
  </property>
  <property>
    <name>hive.limit.query.max.table.partition</name>
    <value>-1</value>
    <description>
      This controls how many partitions can be scanned for each partitioned table.
      The default value "-1" means no limit.
    </description>
  </property>
  <property>
    <name>hive.hashtable.key.count.adjustment</name>
    <value>1.0</value>
    <description>Adjustment to the mapjoin hashtable size derived from table and column statistics; the estimate of the number of keys is divided by this value. If the value is 0, statistics are not used and hive.hashtable.initialCapacity is used instead.</description>
  </property>
  <property>
    <name>hive.hashtable.initialCapacity</name>
    <value>100000</value>
    <description>Initial capacity of the mapjoin hashtable if statistics are absent, or if hive.hashtable.stats.key.estimate.adjustment is set to 0</description>
  </property>
  <property>
    <name>hive.hashtable.loadfactor</name>
    <value>0.75</value>
    <description/>
  </property>
  -property-
    -name-hive.mapjoin.followby.gby.localtask.max.memory.usage-/name-
    -value-0.55-/value-
    -description-
      This number means how much memory the local task can take to hold the key/value into an in-memory hash table 
      when this map join is followed by a group by. If the local task's memory usage is more than this number, 
      the local task will abort by itself. It means the data of the small table is too large to be held in memory.
    -/description-
  -/property-
  -property-
    -name-hive.mapjoin.localtask.max.memory.usage-/name-
    -value-0.9-/value-
    -description-
      This number means how much memory the local task can take to hold the key/value into an in-memory hash table. 
      If the local task's memory usage is more than this number, the local task will abort by itself. 
      It means the data of the small table is too large to be held in memory.
    -/description-
  -/property-
  -property-
    -name-hive.mapjoin.check.memory.rows-/name-
    -value-100000-/value-
    -description-The number means after how many rows processed it needs to check the memory usage-/description-
  -/property-
  -property-
    -name-hive.debug.localtask-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.input.format-/name-
    -value-org.apache.hadoop.hive.ql.io.CombineHiveInputFormat-/value-
    -description-The default input format. Set this to HiveInputFormat if you encounter problems with CombineHiveInputFormat.-/description-
  -/property-
  -property-
    -name-hive.tez.input.format-/name-
    -value-org.apache.hadoop.hive.ql.io.HiveInputFormat-/value-
    -description-The default input format for tez. Tez groups splits in the AM.-/description-
  -/property-
  -property-
    -name-hive.tez.container.size-/name-
    -value--1-/value-
    -description-By default Tez will spawn containers of the size of a mapper. This can be used to override that.-/description-
  -/property-
  -property-
    -name-hive.tez.cpu.vcores-/name-
    -value--1-/value-
    -description-
      By default Tez will ask for however many cpus map-reduce is configured to use per container.
      This can be used to override that.
    -/description-
  -/property-
  -property-
    -name-hive.tez.java.opts-/name-
    -value/-
    -description-By default Tez will use the Java options from map tasks. This can be used to override them.-/description-
  -/property-
  -property-
    -name-hive.tez.log.level-/name-
    -value-INFO-/value-
    -description-
      The log level to use for tasks executing as part of the DAG.
      Used only if hive.tez.java.opts is used to configure Java options.
    -/description-
  -/property-
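The hive.tez.* sizing properties above can be overridden per session when the MapReduce-derived defaults do not fit. A sketch, assuming a 4 GB container suits the workload (the numbers are examples only):

    SET hive.tez.container.size=4096;  -- MB; replaces the mapper-sized default
    SET hive.tez.java.opts=-Xmx3276m;  -- roughly 80% of the container size
    SET hive.tez.log.level=DEBUG;      -- honored only because hive.tez.java.opts is set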
  -property-
    -name-hive.enforce.bucketing-/name-
    -value-false-/value-
    -description-Whether bucketing is enforced. If true, while inserting into the table, bucketing is enforced.-/description-
  -/property-
  -property-
    -name-hive.enforce.sorting-/name-
    -value-false-/value-
    -description-Whether sorting is enforced. If true, while inserting into the table, sorting is enforced.-/description-
  -/property-
  -property-
    -name-hive.optimize.bucketingsorting-/name-
    -value-true-/value-
    -description-
      If hive.enforce.bucketing or hive.enforce.sorting is true, don't create a reducer for enforcing 
      bucketing/sorting for queries of the form: 
      insert overwrite table T2 select * from T1;
      where T1 and T2 are bucketed/sorted by the same keys into the same number of buckets.
    -/description-
  -/property-
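To see hive.enforce.bucketing and hive.enforce.sorting in action, the sketch below (table and column names are hypothetical) declares a bucketed, sorted table and lets Hive enforce that layout on insert:

    CREATE TABLE clicks_bucketed (user_id BIGINT, url STRING)
    CLUSTERED BY (user_id) SORTED BY (user_id) INTO 32 BUCKETS;
    SET hive.enforce.bucketing=true;  -- route inserts through 32 hash buckets
    SET hive.enforce.sorting=true;    -- sort within each bucket on user_id
    INSERT OVERWRITE TABLE clicks_bucketed SELECT user_id, url FROM clicks;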
  -property-
    -name-hive.mapred.partitioner-/name-
    -value-org.apache.hadoop.hive.ql.io.DefaultHivePartitioner-/value-
    -description/-
  -/property-
  -property-
    -name-hive.enforce.sortmergebucketmapjoin-/name-
    -value-false-/value-
    -description-If the user asked for sort-merge bucketed map-side join, and it cannot be performed, should the query fail or not?-/description-
  -/property-
  -property-
    -name-hive.enforce.bucketmapjoin-/name-
    -value-false-/value-
    -description-
      If the user asked for bucketed map-side join, and it cannot be performed, 
      should the query fail or not? For example, if the buckets in the tables being joined are
      not a multiple of each other, bucketed map-side join cannot be performed, and the
      query will fail if hive.enforce.bucketmapjoin is set to true.
    -/description-
  -/property-
  -property-
    -name-hive.auto.convert.sortmerge.join-/name-
    -value-false-/value-
    -description-Whether the join will be automatically converted to a sort-merge join if the joined tables pass the criteria for sort-merge join.-/description-
  -/property-
  -property-
    -name-hive.auto.convert.sortmerge.join.bigtable.selection.policy-/name-
    -value-org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ-/value-
    -description-
      The policy to choose the big table for automatic conversion to sort-merge join. 
      By default, the table with the largest partitions is selected as the big table. The available policies are:
      . based on position of the table - the leftmost table is selected
      org.apache.hadoop.hive.ql.optimizer.LeftmostBigTableSMJ.
      . based on total size (of all the partitions selected in the query) of the table 
      org.apache.hadoop.hive.ql.optimizer.TableSizeBasedBigTableSelectorForAutoSMJ.
      . based on average size (of all the partitions selected in the query) of the table 
      org.apache.hadoop.hive.ql.optimizer.AvgPartitionSizeBasedBigTableSelectorForAutoSMJ.
      New policies can be added in the future.
    -/description-
  -/property-
  -property-
    -name-hive.auto.convert.sortmerge.join.to.mapjoin-/name-
    -value-false-/value-
    -description-
      If hive.auto.convert.sortmerge.join is set to true, and a join was converted to a sort-merge join, 
      this parameter decides whether each table should be tried as a big table, and effectively a map-join should be
      tried. That would create a conditional task with n+1 children for a n-way join (1 child for each table as the
      big table), and the backup task will be the sort-merge join. In some cases, a map-join would be faster than a
      sort-merge join, if there is no advantage of having the output bucketed and sorted. For example, if a very big sorted
      and bucketed table with few files (say 10 files) is being joined with a very small sorted and bucketed table
      with few files (10 files), the sort-merge join will only use 10 mappers, and a simple map-only join might be faster
      if the complete small table can fit in memory, and a map-join can be performed.
    -/description-
  -/property-
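Assuming two tables that are bucketed and sorted on the join key into compatible bucket counts, a session could opt in to automatic sort-merge join conversion, including the map-join fallback described above. Table names are illustrative:

    SET hive.optimize.bucketmapjoin=true;
    SET hive.optimize.bucketmapjoin.sortedmerge=true;
    SET hive.auto.convert.sortmerge.join=true;
    SET hive.auto.convert.sortmerge.join.to.mapjoin=true;
    SELECT b.k, s.v FROM big_tbl b JOIN small_tbl s ON (b.k = s.k);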
  -property-
    -name-hive.exec.script.trust-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.exec.rowoffset-/name-
    -value-false-/value-
    -description-Whether to provide the row offset virtual column-/description-
  -/property-
  -property-
    -name-hive.hadoop.supports.splittable.combineinputformat-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.optimize.index.filter-/name-
    -value-false-/value-
    -description-Whether to enable automatic use of indexes-/description-
  -/property-
  -property-
    -name-hive.optimize.index.autoupdate-/name-
    -value-false-/value-
    -description-Whether to update stale indexes automatically-/description-
  -/property-
  -property-
    -name-hive.optimize.ppd-/name-
    -value-true-/value-
    -description-Whether to enable predicate pushdown-/description-
  -/property-
  -property-
    -name-hive.ppd.recognizetransivity-/name-
    -value-true-/value-
    -description-Whether to transitively replicate predicate filters over equijoin conditions.-/description-
  -/property-
  -property-
    -name-hive.ppd.remove.duplicatefilters-/name-
    -value-true-/value-
    -description-During query optimization, filters may be pushed down in the operator tree. If this is true, only the pushed-down filters remain in the operator tree and the original filter is removed. Ignored when hive.optimize.ppd is false.-/description-
  -/property-
  -property-
    -name-hive.optimize.constant.propagation-/name-
    -value-true-/value-
    -description-Whether to enable constant propagation optimizer-/description-
  -/property-
  -property-
    -name-hive.optimize.remove.identity.project-/name-
    -value-true-/value-
    -description-Removes identity project from operator tree-/description-
  -/property-
  -property-
    -name-hive.optimize.metadataonly-/name-
    -value-true-/value-
    -description/-
  -/property-
  -property-
    -name-hive.optimize.null.scan-/name-
    -value-true-/value-
    -description-Don't scan relations which are guaranteed to not generate any rows.-/description-
  -/property-
  -property-
    -name-hive.optimize.ppd.storage-/name-
    -value-true-/value-
    -description-Whether to push predicates down to storage handlers-/description-
  -/property-
  -property-
    -name-hive.optimize.groupby-/name-
    -value-true-/value-
    -description-Whether to enable the bucketed group by from bucketed partitions/tables.-/description-
  -/property-
  -property-
    -name-hive.optimize.bucketmapjoin-/name-
    -value-false-/value-
    -description-Whether to try bucket mapjoin-/description-
  -/property-
  -property-
    -name-hive.optimize.bucketmapjoin.sortedmerge-/name-
    -value-false-/value-
    -description-Whether to try sorted bucket merge map join-/description-
  -/property-
  -property-
    -name-hive.optimize.reducededuplication-/name-
    -value-true-/value-
    -description-
      Remove extra map-reduce jobs if the data is already clustered by the same key which needs to be used again. 
      This should always be set to true. Since it is a new feature, it has been made configurable.
    -/description-
  -/property-
  -property-
    -name-hive.optimize.reducededuplication.min.reducer-/name-
    -value-4-/value-
    -description-
      Reduce deduplication merges two RSs by moving the key/parts/reducer-num of the child RS to the parent RS. 
      That means if the reducer-num of the child RS is fixed (order by or forced bucketing) and small, it can result in a very slow, single-reducer MR job.
      The optimization will be automatically disabled if the number of reducers would be less than the specified value.
    -/description-
  -/property-
  -property-
    -name-hive.optimize.sort.dynamic.partition-/name-
    -value-false-/value-
    -description-
      When enabled, the dynamic partitioning column will be globally sorted.
      This way we can keep only one record writer open for each partition value
      in the reducer thereby reducing the memory pressure on reducers.
    -/description-
  -/property-
  -property-
    -name-hive.optimize.sampling.orderby-/name-
    -value-false-/value-
    -description-Uses sampling on order-by clause for parallel execution.-/description-
  -/property-
  -property-
    -name-hive.optimize.sampling.orderby.number-/name-
    -value-1000-/value-
    -description-Total number of samples to be obtained.-/description-
  -/property-
  -property-
    -name-hive.optimize.sampling.orderby.percent-/name-
    -value-0.1-/value-
    -description-
      Expects value between 0.0f and 1.0f.
      Probability with which a row will be chosen.
    -/description-
  -/property-
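The three hive.optimize.sampling.orderby* properties above enable a sampled, range-partitioned ORDER BY that can use multiple reducers instead of one. A sketch with illustrative table/column names and values:

    SET hive.optimize.sampling.orderby=true;
    SET hive.optimize.sampling.orderby.number=10000;  -- total samples to draw
    SET hive.optimize.sampling.orderby.percent=0.1;   -- per-row sampling probability
    SELECT * FROM events ORDER BY event_ts;           -- no longer forced onto a single reducer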
  -property-
    -name-hive.optimize.distinct.rewrite-/name-
    -value-true-/value-
    -description-When applicable, this optimization rewrites distinct aggregates from a single stage to multi-stage aggregation. This may not be optimal in all cases. Ideally, whether to trigger it or not should be a cost-based decision. Until Hive formalizes a cost model for this, it is config driven.-/description-
  -/property-
  -property-
    -name-hive.optimize.union.remove-/name-
    -value-false-/value-
    -description-
      Whether to remove the union and push the operators between union and the filesink above union. 
      This avoids an extra scan of the output by union. This is independently useful for union
      queries, and especially useful when hive.optimize.skewjoin.compiletime is set to true, since an
      extra union is inserted.
      
      The merge is triggered if either of hive.merge.mapfiles or hive.merge.mapredfiles is set to true.
      If the user has set hive.merge.mapfiles to true and hive.merge.mapredfiles to false, the idea was that the
      number of reducers is small, so the number of files is small anyway. However, with this optimization,
      we are possibly increasing the number of files by a big margin. So, we merge aggressively.
    -/description-
  -/property-
  -property-
    -name-hive.optimize.correlation-/name-
    -value-false-/value-
    -description-Exploit intra-query correlations.-/description-
  -/property-
  -property-
    -name-hive.mapred.supports.subdirectories-/name-
    -value-false-/value-
    -description-
      Whether the version of Hadoop which is running supports sub-directories for tables/partitions. 
      Many Hive optimizations can be applied if the Hadoop version supports sub-directories for
      tables/partitions. It was added by MAPREDUCE-1501
    -/description-
  -/property-
  -property-
    -name-hive.optimize.skewjoin.compiletime-/name-
    -value-false-/value-
    -description-
      Whether to create a separate plan for skewed keys for the tables in the join.
      This is based on the skewed keys stored in the metadata. At compile time, the plan is broken
      into different joins: one for the skewed keys, and the other for the remaining keys. And then,
      a union is performed for the 2 joins generated above. So unless the same skewed key is present
      in both the joined tables, the join for the skewed key will be performed as a map-side join.
      
      The main difference between this parameter and hive.optimize.skewjoin is that this parameter
      uses the skew information stored in the metastore to optimize the plan at compile time itself.
      If there is no skew information in the metadata, this parameter will not have any effect.
      Both hive.optimize.skewjoin.compiletime and hive.optimize.skewjoin should be set to true.
      Ideally, hive.optimize.skewjoin should be renamed as hive.optimize.skewjoin.runtime, but not doing
      so for backward compatibility.
      
      If the skew information is correctly stored in the metadata, hive.optimize.skewjoin.compiletime
      would change the query plan to take care of it, and hive.optimize.skewjoin will be a no-op.
    -/description-
  -/property-
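Compile-time skew handling relies on skew metadata declared on the table, as the description above notes. A sketch, using a hypothetical orders table whose cust_id values 0 and 9999 are known hot keys:

    CREATE TABLE orders (cust_id BIGINT, amount DOUBLE)
    SKEWED BY (cust_id) ON (0, 9999);       -- record the hot keys in the metastore
    SET hive.optimize.skewjoin.compiletime=true;
    SET hive.optimize.skewjoin=true;        -- runtime counterpart, per the note above
    SET hive.optimize.union.remove=true;    -- avoid the extra scan of the inserted union
    SELECT o.cust_id, c.name FROM orders o JOIN customers c ON (o.cust_id = c.cust_id);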
  -property-
    -name-hive.optimize.index.filter.compact.minsize-/name-
    -value-5368709120-/value-
    -description-Minimum size (in bytes) of the inputs on which a compact index is automatically used.-/description-
  -/property-
  -property-
    -name-hive.optimize.index.filter.compact.maxsize-/name-
    -value--1-/value-
    -description-Maximum size (in bytes) of the inputs on which a compact index is automatically used.  A negative number is equivalent to infinity.-/description-
  -/property-
  -property-
    -name-hive.index.compact.query.max.entries-/name-
    -value-10000000-/value-
    -description-The maximum number of index entries to read during a query that uses the compact index. Negative value is equivalent to infinity.-/description-
  -/property-
  -property-
    -name-hive.index.compact.query.max.size-/name-
    -value-10737418240-/value-
    -description-The maximum number of bytes that a query using the compact index can read. Negative value is equivalent to infinity.-/description-
  -/property-
  -property-
    -name-hive.index.compact.binary.search-/name-
    -value-true-/value-
    -description-Whether or not to use a binary search to find the entries in an index table that match the filter, where possible-/description-
  -/property-
  -property-
    -name-hive.stats.autogather-/name-
    -value-true-/value-
    -description-A flag to gather statistics automatically during the INSERT OVERWRITE command.-/description-
  -/property-
  -property-
    -name-hive.stats.dbclass-/name-
    -value-fs-/value-
    -description-
      Expects one of the patterns in [jdbc(:.*), hbase, counter, custom, fs].
      The storage that stores temporary Hive statistics. In filesystem based statistics collection ('fs'), 
      each task writes statistics it has collected in a file on the filesystem, which will be aggregated 
      after the job has finished. Supported values are fs (filesystem), jdbc:database (where database 
      can be derby, mysql, etc.), hbase, counter, and custom as defined in StatsSetupConst.java.
    -/description-
  -/property-
  -property-
    -name-hive.stats.jdbcdriver-/name-
    -value-org.apache.derby.jdbc.EmbeddedDriver-/value-
    -description-The JDBC driver for the database that stores temporary Hive statistics.-/description-
  -/property-
  -property-
    -name-hive.stats.dbconnectionstring-/name-
    -value-jdbc:derby:;databaseName=TempStatsStore;create=true-/value-
    -description-The default connection string for the database that stores temporary Hive statistics.-/description-
  -/property-
  -property-
    -name-hive.stats.default.publisher-/name-
    -value/-
    -description-The Java class (implementing the StatsPublisher interface) that is used by default if hive.stats.dbclass is custom type.-/description-
  -/property-
  -property-
    -name-hive.stats.default.aggregator-/name-
    -value/-
    -description-The Java class (implementing the StatsAggregator interface) that is used by default if hive.stats.dbclass is custom type.-/description-
  -/property-
  -property-
    -name-hive.stats.jdbc.timeout-/name-
    -value-30s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Timeout value used by JDBC connection and statements.
    -/description-
  -/property-
  -property-
    -name-hive.stats.atomic-/name-
    -value-false-/value-
    -description-whether to update metastore stats only if all stats are available-/description-
  -/property-
  -property-
    -name-hive.stats.retries.max-/name-
    -value-0-/value-
    -description-
      Maximum number of retries when stats publisher/aggregator got an exception updating intermediate database. 
      Default is no tries on failures.
    -/description-
  -/property-
  -property-
    -name-hive.stats.retries.wait-/name-
    -value-3000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The base waiting window before the next retry. The actual wait time is calculated by baseWindow * failures + baseWindow * (failures + 1) * (random number between [0.0,1.0]).
    -/description-
  -/property-
  -property-
    -name-hive.stats.collect.rawdatasize-/name-
    -value-true-/value-
    -description-should the raw data size be collected when analyzing tables-/description-
  -/property-
  -property-
    -name-hive.client.stats.counters-/name-
    -value/-
    -description-
      Subset of counters that should be of interest for hive.client.stats.publishers (when one wants to limit their publishing). 
      Non-display names should be used
    -/description-
  -/property-
  -property-
    -name-hive.stats.reliable-/name-
    -value-false-/value-
    -description-
      Whether queries will fail because stats cannot be collected completely accurately. 
      If this is set to true, reading/writing from/into a partition may fail because the stats
      could not be computed accurately.
    -/description-
  -/property-
  -property-
    -name-hive.analyze.stmt.collect.partlevel.stats-/name-
    -value-true-/value-
    -description-Queries like 'analyze table T compute statistics for columns' should compute partition-level stats for a partitioned table even when no partition spec is specified.-/description-
  -/property-
  -property-
    -name-hive.stats.gather.num.threads-/name-
    -value-10-/value-
    -description-
      Number of threads used by partialscan/noscan analyze command for partitioned tables.
      This is applicable only for file formats that implement StatsProvidingRecordReader (like ORC).
    -/description-
  -/property-
  -property-
    -name-hive.stats.collect.tablekeys-/name-
    -value-false-/value-
    -description-
      Whether join and group by keys on tables are derived and maintained in the QueryPlan.
      This is useful to identify how tables are accessed and to determine if they should be bucketed.
    -/description-
  -/property-
  -property-
    -name-hive.stats.collect.scancols-/name-
    -value-false-/value-
    -description-
      Whether column accesses are tracked in the QueryPlan.
      This is useful to identify how tables are accessed and to determine if there are wasted columns that can be trimmed.
    -/description-
  -/property-
  -property-
    -name-hive.stats.ndv.error-/name-
    -value-20.0-/value-
    -description-
      Standard error expressed in percentage. Provides a tradeoff between accuracy and compute cost. 
      A lower value for error indicates higher accuracy and a higher compute cost.
    -/description-
  -/property-
  -property-
    -name-hive.metastore.stats.ndv.densityfunction-/name-
    -value-false-/value-
    -description-Whether to use density function to estimate the NDV for the whole table based on the NDV of partitions-/description-
  -/property-
  -property-
    -name-hive.stats.key.prefix.max.length-/name-
    -value-150-/value-
    -description-
      Determines if, when the prefix of the key used for intermediate stats collection
      exceeds a certain length, a hash of the key is used instead. If the value < 0 then hashing is never used;
      if the value >= 0 then hashing is used only when the key prefix's length exceeds that value.
    -/description-
  -/property-
  -property-
    -name-hive.stats.key.prefix.reserve.length-/name-
    -value-24-/value-
    -description-
      Reserved length for postfix of stats key. Currently only meaningful for counter type which should
      keep length of full stats key smaller than max length configured by hive.stats.key.prefix.max.length.
      For counter type, it should be bigger than the length of the LB spec if it exists.
    -/description-
  -/property-
  -property-
    -name-hive.stats.max.variable.length-/name-
    -value-100-/value-
    -description-
      To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.),
      average row size is multiplied with the total number of rows coming out of each operator.
      Average row size is computed from average column size of all columns in the row. In the absence
      of column statistics, for variable length columns (like string, bytes etc.), this value will be
      used. For fixed length columns their corresponding Java equivalent sizes are used
      (float - 4 bytes, double - 8 bytes etc.).
    -/description-
  -/property-
  -property-
    -name-hive.stats.list.num.entries-/name-
    -value-10-/value-
    -description-
      To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.),
      average row size is multiplied with the total number of rows coming out of each operator.
      Average row size is computed from average column size of all columns in the row. In the absence
      of column statistics and for variable length complex columns like list, the average number of
      entries/values can be specified using this config.
    -/description-
  -/property-
  -property-
    -name-hive.stats.map.num.entries-/name-
    -value-10-/value-
    -description-
      To estimate the size of data flowing through operators in Hive/Tez (for reducer estimation etc.),
      average row size is multiplied with the total number of rows coming out of each operator.
      Average row size is computed from average column size of all columns in the row. In the absence
      of column statistics and for variable length complex columns like map, the average number of
      entries/values can be specified using this config.
    -/description-
  -/property-
  -property-
    -name-hive.stats.fetch.partition.stats-/name-
    -value-true-/value-
    -description-
      Annotation of operator tree with statistics information requires partition level basic
      statistics like number of rows, data size and file size. Partition statistics are fetched from
      metastore. Fetching partition statistics for each needed partition can be expensive when the
      number of partitions is high. This flag can be used to disable fetching of partition statistics
      from the metastore. When this flag is disabled, Hive will make calls to the filesystem to get file sizes
      and will estimate the number of rows from row schema.
    -/description-
  -/property-
  -property-
    -name-hive.stats.fetch.column.stats-/name-
    -value-false-/value-
    -description-
      Annotation of operator tree with statistics information requires column statistics.
      Column statistics are fetched from metastore. Fetching column statistics for each needed column
      can be expensive when the number of columns is high. This flag can be used to disable fetching
      of column statistics from metastore.
    -/description-
  -/property-
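The two stats-fetching flags above only pay off if the statistics actually exist in the metastore. A sketch of collecting and then consuming them (the table name sales is illustrative):

    ANALYZE TABLE sales COMPUTE STATISTICS;              -- basic row/size stats
    ANALYZE TABLE sales COMPUTE STATISTICS FOR COLUMNS;  -- NDV, min/max, null counts
    SET hive.stats.fetch.partition.stats=true;           -- read basic stats per partition
    SET hive.stats.fetch.column.stats=true;              -- let the optimizer use column stats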
  -property-
    -name-hive.stats.join.factor-/name-
    -value-1.1-/value-
    -description-
      Hive/Tez optimizer estimates the data size flowing through each of the operators. JOIN operator
      uses column statistics to estimate the number of rows flowing out of it and hence the data size.
      In the absence of column statistics, this factor determines the amount of rows that flows out
      of JOIN operator.
    -/description-
  -/property-
  -property-
    -name-hive.stats.deserialization.factor-/name-
    -value-1.0-/value-
    -description-
      Hive/Tez optimizer estimates the data size flowing through each of the operators. In the absence
      of basic statistics like number of rows and data size, file size is used to estimate the number
      of rows and data size. Since files in tables/partitions are serialized (and optionally
      compressed) the estimates of number of rows and data size cannot be reliably determined.
      This factor is multiplied with the file size to account for serialization and compression.
    -/description-
  -/property-
  -property-
    -name-hive.support.concurrency-/name-
    -value-false-/value-
    -description-
      Whether Hive supports concurrency control or not. 
      A ZooKeeper instance must be up and running when using the ZooKeeper Hive lock manager.
    -/description-
  -/property-
  -property-
    -name-hive.lock.manager-/name-
    -value-org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager-/value-
    -description/-
  -/property-
  -property-
    -name-hive.lock.numretries-/name-
    -value-100-/value-
    -description-The number of times you want to try to get all the locks-/description-
  -/property-
  -property-
    -name-hive.unlock.numretries-/name-
    -value-10-/value-
    -description-The number of times you want to retry to do one unlock-/description-
  -/property-
  -property-
    -name-hive.lock.sleep.between.retries-/name-
    -value-60s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      The sleep time between various retries
    -/description-
  -/property-
  -property-
    -name-hive.lock.mapred.only.operation-/name-
    -value-false-/value-
    -description-
      This parameter controls whether to acquire locks only for queries
      that need to execute at least one mapred job.
    -/description-
  -/property-
  -property-
    -name-hive.zookeeper.quorum-/name-
    -value/-
    -description-
      List of ZooKeeper servers to talk to. This is needed for: 
      1. Read/write locks - when hive.lock.manager is set to 
      org.apache.hadoop.hive.ql.lockmgr.zookeeper.ZooKeeperHiveLockManager, 
      2. When HiveServer2 supports service discovery via Zookeeper.
      3. For delegation token storage if zookeeper store is used, if
      hive.cluster.delegation.token.store.zookeeper.connectString is not set
    -/description-
  -/property-
  -property-
    -name-hive.zookeeper.client.port-/name-
    -value-2181-/value-
    -description-
      The port of ZooKeeper servers to talk to.
      If the list of Zookeeper servers specified in hive.zookeeper.quorum
      does not contain port numbers, this value is used.
    -/description-
  -/property-
  -property-
    -name-hive.zookeeper.session.timeout-/name-
    -value-1200000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      ZooKeeper client's session timeout (in milliseconds). The client is disconnected, and as a result, all locks are released, 
      if a heartbeat is not sent within the timeout.
    -/description-
  -/property-
  -property-
    -name-hive.zookeeper.namespace-/name-
    -value-hive_zookeeper_namespace-/value-
    -description-The parent node under which all ZooKeeper nodes are created.-/description-
  -/property-
  -property-
    -name-hive.zookeeper.clean.extra.nodes-/name-
    -value-false-/value-
    -description-Clean extra nodes at the end of the session.-/description-
  -/property-
  -property-
    -name-hive.zookeeper.connection.max.retries-/name-
    -value-3-/value-
    -description-Max number of times to retry when connecting to the ZooKeeper server.-/description-
  -/property-
  -property-
    -name-hive.zookeeper.connection.basesleeptime-/name-
    -value-1000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Initial amount of time (in milliseconds) to wait between retries
      when connecting to the ZooKeeper server when using ExponentialBackoffRetry policy.
    -/description-
  -/property-
  -property-
    -name-hive.txn.manager-/name-
    -value-org.apache.hadoop.hive.ql.lockmgr.DummyTxnManager-/value-
    -description-
      Set to org.apache.hadoop.hive.ql.lockmgr.DbTxnManager as part of turning on Hive
      transactions, which also requires appropriate settings for hive.compactor.initiator.on,
      hive.compactor.worker.threads, hive.support.concurrency (true), hive.enforce.bucketing
      (true), and hive.exec.dynamic.partition.mode (nonstrict).
      The default DummyTxnManager replicates pre-Hive-0.13 behavior and provides
      no transactions.
    -/description-
  -/property-
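Collecting the settings the description above enumerates, a minimal sketch for turning on Hive transactions (the compactor settings belong in the metastore's hive-site.xml, not in a client session):

    SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
    SET hive.support.concurrency=true;
    SET hive.enforce.bucketing=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    -- on one metastore instance (hive-site.xml):
    --   hive.compactor.initiator.on=true
    --   hive.compactor.worker.threads=1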
  -property-
    -name-hive.txn.timeout-/name-
    -value-300s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Time after which transactions are declared aborted if the client has not sent a heartbeat.
    -/description-
  -/property-
  -property-
    -name-hive.txn.max.open.batch-/name-
    -value-1000-/value-
    -description-
      Maximum number of transactions that can be fetched in one call to open_txns().
      This controls how many transactions streaming agents such as Flume or Storm open
      simultaneously. The streaming agent then writes that number of entries into a single
      file (per Flume agent or Storm bolt). Thus increasing this value decreases the number
      of delta files created by streaming agents. But it also increases the number of open
      transactions that Hive has to track at any given time, which may negatively affect
      read performance.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.initiator.on-/name-
    -value-false-/value-
    -description-
      Whether to run the initiator and cleaner threads on this metastore instance or not.
      Set this to true on one instance of the Thrift metastore service as part of turning
      on Hive transactions. For a complete list of parameters required for turning on
      transactions, see hive.txn.manager.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.worker.threads-/name-
    -value-0-/value-
    -description-
      How many compactor worker threads to run on this metastore instance. Set this to a
      positive number on one or more instances of the Thrift metastore service as part of
      turning on Hive transactions. For a complete list of parameters required for turning
      on transactions, see hive.txn.manager.
      Worker threads spawn MapReduce jobs to do compactions. They do not do the compactions
      themselves. Increasing the number of worker threads will decrease the time it takes
      tables or partitions to be compacted once they are determined to need compaction.
      It will also increase the background load on the Hadoop cluster as more MapReduce jobs
      will be running in the background.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.worker.timeout-/name-
    -value-86400s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Time in seconds after which a compaction job will be declared failed and the
      compaction re-queued.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.check.interval-/name-
    -value-300s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Time in seconds between checks to see if any tables or partitions need to be
      compacted. This should be kept high because each check for compaction requires
      many calls against the NameNode.
      Decreasing this value will reduce the time it takes for compaction to be started
      for a table or partition that requires compaction. However, checking if compaction
      is needed requires several calls to the NameNode for each table or partition that
      has had a transaction done on it since the last major compaction. So decreasing this
      value will increase the load on the NameNode.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.delta.num.threshold-/name-
    -value-10-/value-
    -description-
      Number of delta directories in a table or partition that will trigger a minor
      compaction.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.delta.pct.threshold-/name-
    -value-0.1-/value-
    -description-
      Percentage (fractional) size of the delta files relative to the base that will trigger
      a major compaction. (1.0 = 100%, so the default 0.1 = 10%.)
    -/description-
  -/property-
  -property-
    -name-hive.compactor.abortedtxn.threshold-/name-
    -value-1000-/value-
    -description-
      Number of aborted transactions involving a given table or partition that will trigger
      a major compaction.
    -/description-
  -/property-
  -property-
    -name-hive.compactor.cleaner.run.interval-/name-
    -value-5000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Time between runs of the cleaner thread
    -/description-
  -/property-
  -property-
    -name-hive.hbase.wal.enabled-/name-
    -value-true-/value-
    -description-
      Whether writes to HBase should be forced to the write-ahead log. 
      Disabling this improves HBase write performance at the risk of lost writes in case of a crash.
    -/description-
  -/property-
  -property-
    -name-hive.hbase.generatehfiles-/name-
    -value-false-/value-
    -description-True when HBaseStorageHandler should generate hfiles instead of operate against the online table.-/description-
  -/property-
  -property-
    -name-hive.hbase.snapshot.name-/name-
    -value/-
    -description-The HBase table snapshot name to use.-/description-
  -/property-
  -property-
    -name-hive.hbase.snapshot.restoredir-/name-
    -value-/tmp-/value-
    -description-The directory in which to restore the HBase table snapshot.-/description-
  -/property-
  -property-
    -name-hive.archive.enabled-/name-
    -value-false-/value-
    -description-Whether archiving operations are permitted-/description-
  -/property-
  -property-
    -name-hive.optimize.index.groupby-/name-
    -value-false-/value-
    -description-Whether to enable optimization of group-by queries using Aggregate indexes.-/description-
  -/property-
  -property-
    -name-hive.outerjoin.supports.filters-/name-
    -value-true-/value-
    -description/-
  -/property-
  -property-
    -name-hive.fetch.task.conversion-/name-
    -value-more-/value-
    -description-
      Expects one of [none, minimal, more].
      Some select queries can be converted to single FETCH task minimizing latency.
      Currently the query should be single-sourced, without any subquery, and should not have
      any aggregations or distincts (which incur RS), lateral views, or joins.
      0. none : disable hive.fetch.task.conversion
      1. minimal : SELECT STAR, FILTER on partition columns, LIMIT only
      2. more    : SELECT, FILTER, LIMIT only (support TABLESAMPLE and virtual columns)
    -/description-
  -/property-
  -property-
    -name-hive.fetch.task.conversion.threshold-/name-
    -value-1073741824-/value-
    -description-
      Input threshold for applying hive.fetch.task.conversion. If target table is native, input length
      is calculated by summation of file lengths. If it's not native, storage handler for the table
      can optionally implement org.apache.hadoop.hive.ql.metadata.InputEstimator interface.
    -/description-
  -/property-
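As a sketch of the conversion levels and threshold above (table and column names are illustrative), a simple projection with LIMIT can be served by a local fetch task, avoiding a MapReduce/Tez job entirely:

    SET hive.fetch.task.conversion=more;
    SET hive.fetch.task.conversion.threshold=1073741824;  -- only convert inputs under 1 GB
    SELECT url FROM web_logs LIMIT 5;                     -- served by FetchTask, no job launched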
  -property-
    -name-hive.fetch.task.aggr-/name-
    -value-false-/value-
    -description-
      Aggregation queries with no group-by clause (for example, select count(*) from src) execute
      final aggregations in a single reduce task. If this is set to true, Hive delegates the final aggregation
      stage to the fetch task, possibly decreasing the query time.
    -/description-
  -/property-
  -property-
    -name-hive.compute.query.using.stats-/name-
    -value-false-/value-
    -description-
      When set to true, Hive will answer a few queries like count(1) purely using stats
      stored in the metastore. For basic stats collection, turn on hive.stats.autogather.
      For more advanced stats collection, run 'analyze table' queries.
    -/description-
  -/property-
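A sketch of the stats-only fast path described above; whether the count is actually served from the metastore depends on the stats being current (the table name is illustrative):

    SET hive.stats.autogather=true;   -- keep basic stats fresh on INSERT OVERWRITE
    SET hive.compute.query.using.stats=true;
    SELECT count(1) FROM sales;       -- may be answered from stats, with no job launched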
  -property-
    -name-hive.fetch.output.serde-/name-
    -value-org.apache.hadoop.hive.serde2.DelimitedJSONSerDe-/value-
    -description-The SerDe used by FetchTask to serialize the fetch output.-/description-
  -/property-
  -property-
    -name-hive.cache.expr.evaluation-/name-
    -value-true-/value-
    -description-
      If true, the evaluation result of a deterministic expression referenced twice or more
      will be cached.
      For example, in a filter condition like '.. where key + 10 = 100 or key + 10 = 0'
      the expression 'key + 10' will be evaluated/cached once and reused for the following
      expression ('key + 10 = 0'). Currently, this is applied only to expressions in select
      or filter operators.
    -/description-
  -/property-
  -property-
    -name-hive.variable.substitute-/name-
    -value-true-/value-
    -description-This enables substitution using syntax like ${var} ${system:var} and ${env:var}.-/description-
  -/property-
  -property-
    -name-hive.variable.substitute.depth-/name-
    -value-40-/value-
    -description-The maximum replacements the substitution engine will do.-/description-
  -/property-
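A sketch of the substitution syntax that hive.variable.substitute enables (the variable and table names are illustrative):

    SET hivevar:target_dt=2015-07-27;
    SELECT * FROM events WHERE dt = '${hivevar:target_dt}';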
  -property-
    -name-hive.conf.validation-/name-
    -value-true-/value-
    -description-Enables type checking for registered Hive configurations-/description-
  -/property-
  -property-
    -name-hive.semantic.analyzer.hook-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.security.authorization.enabled-/name-
    -value-false-/value-
    -description-enable or disable the Hive client authorization-/description-
  -/property-
  -property-
    -name-hive.security.authorization.manager-/name-
    -value-org.apache.hadoop.hive.ql.security.authorization.DefaultHiveAuthorizationProvider-/value-
    -description-
      The Hive client authorization manager class name. The user defined authorization class should implement 
      interface org.apache.hadoop.hive.ql.security.authorization.HiveAuthorizationProvider.
    -/description-
  -/property-
  -property-
    -name-hive.security.authenticator.manager-/name-
    -value-org.apache.hadoop.hive.ql.security.HadoopDefaultAuthenticator-/value-
    -description-
      The Hive client authenticator manager class name. The user defined authenticator should implement 
      interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
    -/description-
  -/property-
  -property-
    -name-hive.security.metastore.authorization.manager-/name-
    -value-org.apache.hadoop.hive.ql.security.authorization.DefaultHiveMetastoreAuthorizationProvider-/value-
    -description-
      Names of authorization manager classes (comma separated) to be used in the metastore
      for authorization. The user defined authorization class should implement interface
      org.apache.hadoop.hive.ql.security.authorization.HiveMetastoreAuthorizationProvider.
      All authorization manager classes have to successfully authorize the metastore API
      call for the command execution to be allowed.
    -/description-
  -/property-
  -property-
    -name-hive.security.metastore.authorization.auth.reads-/name-
    -value-true-/value-
    -description-If this is true, metastore authorizer authorizes read actions on database, table-/description-
  -/property-
  -property-
    -name-hive.security.metastore.authenticator.manager-/name-
    -value-org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator-/value-
    -description-
      The authenticator manager class name to be used in the metastore for authentication. 
      The user defined authenticator should implement interface org.apache.hadoop.hive.ql.security.HiveAuthenticationProvider.
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.createtable.user.grants-/name-
    -value/-
    -description-
      The privileges automatically granted to some users whenever a table gets created.
      An example like "userX,userY:select;userZ:create" will grant select privilege to userX and userY,
      and grant create privilege to userZ whenever a new table is created.
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.createtable.group.grants-/name-
    -value/-
    -description-
      The privileges automatically granted to some groups whenever a table gets created.
      An example like "groupX,groupY:select;groupZ:create" will grant select privilege to groupX and groupY,
      and grant create privilege to groupZ whenever a new table is created.
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.createtable.role.grants-/name-
    -value/-
    -description-
      The privileges automatically granted to some roles whenever a table gets created.
      An example like "roleX,roleY:select;roleZ:create" will grant select privilege to roleX and roleY,
      and grant create privilege to roleZ whenever a new table is created.
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.createtable.owner.grants-/name-
    -value/-
    -description-
      The privileges automatically granted to the owner whenever a table gets created.
      An example like "select,drop" will grant select and drop privilege to the owner
      of the table. Note that the default gives the creator of a table no access to the
      table (but see HIVE-8067).
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.task.factory-/name-
    -value-org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl-/value-
    -description-Authorization DDL task factory implementation-/description-
  -/property-
  -property-
    -name-hive.security.authorization.sqlstd.confwhitelist-/name-
    -value/-
    -description-
      List of comma separated Java regexes. Configurations parameters that match these
      regexes can be modified by user when SQL standard authorization is enabled.
      To get the default value, use the 'set <param>' command.
      Note that the hive.conf.restricted.list checks are still enforced after the white list
      check
    -/description-
  -/property-
  -property-
    -name-hive.security.authorization.sqlstd.confwhitelist.append-/name-
    -value/-
    -description-
      List of comma separated Java regexes, to be appended to list set in
      hive.security.authorization.sqlstd.confwhitelist. Using this list instead
      of updating the original list means that you can append to the defaults
      set by SQL standard authorization instead of replacing it entirely.
    -/description-
  -/property-
  -property-
    -name-hive.cli.print.header-/name-
    -value-false-/value-
    -description-Whether to print the names of the columns in query output.-/description-
  -/property-
  -property-
    -name-hive.error.on.empty.partition-/name-
    -value-false-/value-
    -description-Whether to throw an exception if dynamic partition insert generates empty results.-/description-
  -/property-
  -property-
    -name-hive.index.compact.file-/name-
    -value/-
    -description-internal variable-/description-
  -/property-
  -property-
    -name-hive.index.blockfilter.file-/name-
    -value/-
    -description-internal variable-/description-
  -/property-
  -property-
    -name-hive.index.compact.file.ignore.hdfs-/name-
    -value-false-/value-
    -description-
      When true, the HDFS location stored in the index file will be ignored at runtime.
      If the data got moved or the name of the cluster got changed, the index data should still be usable.
    -/description-
  -/property-
  -property-
    -name-hive.exim.uri.scheme.whitelist-/name-
    -value-hdfs,pfile-/value-
    -description-A comma separated list of acceptable URI schemes for import and export.-/description-
  -/property-
  -property-
    -name-hive.exim.strict.repl.tables-/name-
    -value-true-/value-
    -description-
      Parameter that determines if 'regular' (non-replication) export dumps can be
      imported on to tables that are the target of replication. If this parameter is
      set, regular imports will check if the destination table (if it exists) has a 'repl.last.id' set on it. If so, the import will fail.
    -/description-
  -/property-
  -property-
    -name-hive.repl.task.factory-/name-
    -value-org.apache.hive.hcatalog.api.repl.exim.EximReplicationTaskFactory-/value-
    -description-
      Parameter that can be used to override which ReplicationTaskFactory will be
      used to instantiate ReplicationTask events. Override for third party repl plugins
    -/description-
  -/property-
  -property-
    -name-hive.mapper.cannot.span.multiple.partitions-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.rework.mapredwork-/name-
    -value-false-/value-
    -description-
      Whether to rework the mapred work or not.
      This is first introduced by SymlinkTextInputFormat to replace symlink files with real paths at compile time.
    -/description-
  -/property-
  -property-
    -name-hive.exec.concatenate.check.index-/name-
    -value-true-/value-
    -description-
      If this is set to true, Hive will throw an error when doing
      'alter table tbl_name [partSpec] concatenate' on a table/partition
      that has indexes on it. The reason to set this to true is that it
      can help the user avoid handling all the index drop, recreation,
      and rebuild work. This is very helpful for tables with thousands of partitions.
    -/description-
  -/property-
  -property-
    -name-hive.io.exception.handlers-/name-
    -value/-
    -description-
      A list of io exception handler class names. This is used
      to construct a list of exception handlers to handle exceptions thrown
      by record readers.
    -/description-
  -/property-
  -property-
    -name-hive.server2.logging.operation.enabled-/name-
    -value-true-/value-
    -description-When true, HS2 will save operation logs and make them available for clients-/description-
  -/property-
  -property-
    -name-hive.server2.logging.operation.log.location-/name-
    -value-${system:java.io.tmpdir}/${system:user.name}/operation_logs-/value-
    -description-Top level directory where operation logs are stored if logging functionality is enabled-/description-
  -/property-
  -property-
    -name-hive.server2.logging.operation.level-/name-
    -value-EXECUTION-/value-
    -description-
      Expects one of [none, execution, performance, verbose].
      HS2 operation logging mode available to clients to be set at session level.
      For this to work, hive.server2.logging.operation.enabled should be set to true.
        NONE: Ignore any logging
        EXECUTION: Log completion of tasks
        PERFORMANCE: Execution + Performance logs 
        VERBOSE: All logs
    -/description-
  -/property-
  -property-
    -name-hive.log4j.file-/name-
    -value/-
    -description-
      Hive log4j configuration file.
      If the property is not set, then logging will be initialized using hive-log4j.properties found on the classpath.
      If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.properties"), 
      which you can then extract a URL from and pass to PropertyConfigurator.configure(URL).
    -/description-
  -/property-
  -property-
    -name-hive.exec.log4j.file-/name-
    -value/-
    -description-
      Hive log4j configuration file for execution mode (sub command).
      If the property is not set, then logging will be initialized using hive-exec-log4j.properties found on the classpath.
      If the property is set, the value must be a valid URI (java.net.URI, e.g. "file:///tmp/my-logging.properties"), 
      which you can then extract a URL from and pass to PropertyConfigurator.configure(URL).
    -/description-
  -/property-
  -property-
    -name-hive.log.explain.output-/name-
    -value-false-/value-
    -description-
      Whether to log explain output for every query.
      When enabled, will log EXPLAIN EXTENDED output for the query at INFO log4j log level.
    -/description-
  -/property-
  -property-
    -name-hive.explain.user-/name-
    -value-true-/value-
    -description-
      Whether to show explain result at user level.
      When enabled, will log EXPLAIN output for the query at user level.
    -/description-
  -/property-
  -property-
    -name-hive.autogen.columnalias.prefix.label-/name-
    -value-_c-/value-
    -description-
      String used as a prefix when auto generating column alias.
      By default the prefix label will be appended with a column position number to form the column alias. 
      Auto generation would happen if an aggregate function is used in a select clause without an explicit alias.
    -/description-
  -/property-
  -property-
    -name-hive.autogen.columnalias.prefix.includefuncname-/name-
    -value-false-/value-
    -description-Whether to include function name in the column alias auto generated by Hive.-/description-
  -/property-
  -property-
    -name-hive.exec.perf.logger-/name-
    -value-org.apache.hadoop.hive.ql.log.PerfLogger-/value-
    -description-
      The class responsible for logging client side performance metrics. 
      Must be a subclass of org.apache.hadoop.hive.ql.log.PerfLogger
    -/description-
  -/property-
  -property-
    -name-hive.start.cleanup.scratchdir-/name-
    -value-false-/value-
    -description-To cleanup the Hive scratchdir when starting the Hive Server-/description-
  -/property-
  -property-
    -name-hive.insert.into.multilevel.dirs-/name-
    -value-false-/value-
    -description-
      Whether to allow inserts into multilevel directories like
      "insert directory '/HIVEFT25686/chinna/' from table"
    -/description-
  -/property-
  -property-
    -name-hive.warehouse.subdir.inherit.perms-/name-
    -value-true-/value-
    -description-
      Set this to false if the table directories should be created
      with the permissions derived from dfs umask instead of
      inheriting the permission of the warehouse or database directory.
    -/description-
  -/property-
  -property-
    -name-hive.insert.into.external.tables-/name-
    -value-true-/value-
    -description-whether insert into external tables is allowed-/description-
  -/property-
  -property-
    -name-hive.exec.temporary.table.storage-/name-
    -value-default-/value-
    -description-
      Expects one of [memory, ssd, default].
      Define the storage policy for temporary tables. Choices are memory, ssd, and default.
    -/description-
  -/property-
  -property-
    -name-hive.exec.driver.run.hooks-/name-
    -value/-
    -description-A comma separated list of hooks which implement HiveDriverRunHook. These hooks will be run at the beginning and end of Driver.run, in the order specified.-/description-
  -/property-
  -property-
    -name-hive.ddl.output.format-/name-
    -value/-
    -description-
      The data format to use for DDL output.  One of "text" (for human
      readable text) or "json" (for a json object).
    -/description-
  -/property-
  -property-
    -name-hive.entity.separator-/name-
    -value-@-/value-
    -description-Separator used to construct names of tables and partitions. For example, dbname@tablename@partitionname-/description-
  -/property-
  -property-
    -name-hive.entity.capture.transform-/name-
    -value-false-/value-
    -description-Whether the compiler captures the transform URI referred to in the query-/description-
  -/property-
  -property-
    -name-hive.display.partition.cols.separately-/name-
    -value-true-/value-
    -description-
      In older Hive versions (0.10 and earlier), no distinction was made between
      partition columns and non-partition columns while displaying columns in describe
      table. From 0.12 onwards, they are displayed separately. This flag will let you
      get the old behavior, if desired. See the test case in the patch for HIVE-6689.
    -/description-
  -/property-
  -property-
    -name-hive.ssl.protocol.blacklist-/name-
    -value-SSLv2,SSLv3-/value-
    -description-SSL Versions to disable for all Hive Servers-/description-
  -/property-
  -property-
    -name-hive.server2.max.start.attempts-/name-
    -value-30-/value-
    -description-
      Expects value bigger than 0.
      Number of times HiveServer2 will attempt to start before exiting, sleeping 60 seconds between retries. 
      The default of 30 will keep trying for 30 minutes.
    -/description-
  -/property-
  -property-
    -name-hive.server2.support.dynamic.service.discovery-/name-
    -value-false-/value-
    -description-Whether HiveServer2 supports dynamic service discovery for its clients. To support this, each instance of HiveServer2 currently uses ZooKeeper to register itself when it is brought up. JDBC/ODBC clients should use the ZooKeeper ensemble (hive.zookeeper.quorum) in their connection string.-/description-
  -/property-
  -property-
    -name-hive.server2.zookeeper.namespace-/name-
    -value-hiveserver2-/value-
    -description-The parent node in ZooKeeper used by HiveServer2 when supporting dynamic service discovery.-/description-
  -/property-
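With dynamic service discovery enabled, clients point at the ZooKeeper ensemble rather than a specific HiveServer2 host. An illustrative JDBC URL using the namespace property above (the hostnames are hypothetical):

    jdbc:hive2://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2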
  -property-
    -name-hive.server2.global.init.file.location-/name-
    -value-${env:HIVE_CONF_DIR}-/value-
    -description-
      Either the location of a HS2 global init file or a directory containing a .hiverc file. If the 
      property is set, the value must be a valid path to an init file or directory where the init file is located.
    -/description-
  -/property-
  -property-
    -name-hive.server2.transport.mode-/name-
    -value-binary-/value-
    -description-
      Expects one of [binary, http].
      Transport mode of HiveServer2.
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.bind.host-/name-
    -value/-
    -description-Bind host on which to run the HiveServer2 Thrift service.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.port-/name-
    -value-10001-/value-
    -description-Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'http'.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.path-/name-
    -value-cliservice-/value-
    -description-Path component of URL endpoint when in HTTP mode.-/description-
  -/property-
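Putting the HTTP-mode port and path above together, a JDBC client would connect with a URL of roughly this shape (the host is illustrative):

    jdbc:hive2://sandbox.hortonworks.com:10001/default;transportMode=http;httpPath=cliservice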
  -property-
    -name-hive.server2.thrift.max.message.size-/name-
    -value-104857600-/value-
    -description-Maximum message size in bytes a HS2 server will accept.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.max.idle.time-/name-
    -value-1800s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Maximum idle time for a connection on the server when in HTTP mode.
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.worker.keepalive.time-/name-
    -value-60s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Keepalive time for an idle http worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.auth.enabled-/name-
    -value-true-/value-
    -description-When true, HiveServer2 in HTTP transport mode, will use cookie based authentication mechanism.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.max.age-/name-
    -value-86400s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Maximum age in seconds for server side cookie used by HS2 in HTTP mode.
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.domain-/name-
    -value/-
    -description-Domain for the HS2 generated cookies-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.path-/name-
    -value/-
    -description-Path for the HS2 generated cookies-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.is.secure-/name-
    -value-true-/value-
    -description-Secure attribute of the HS2 generated cookie.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.http.cookie.is.httponly-/name-
    -value-true-/value-
    -description-HttpOnly attribute of the HS2 generated cookie.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.port-/name-
    -value-10000-/value-
    -description-Port number of HiveServer2 Thrift interface when hive.server2.transport.mode is 'binary'.-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.sasl.qop-/name-
    -value-auth-/value-
    -description-
      Expects one of [auth, auth-int, auth-conf].
      Sasl QOP value; set it to one of the following values to enable higher levels of
      protection for HiveServer2 communication with clients.
      Setting hadoop.rpc.protection to a higher level than HiveServer2 does not
      make sense in most situations. HiveServer2 ignores hadoop.rpc.protection in favor
      of hive.server2.thrift.sasl.qop.
        "auth" - authentication only (default)
        "auth-int" - authentication plus integrity protection
        "auth-conf" - authentication plus integrity and confidentiality protection
      This is applicable only if HiveServer2 is configured to use Kerberos authentication.
    -/description-
  -/property-
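
A sketch of a client requesting the strongest QOP level, assuming a Kerberos-enabled server with the principal shown later in hive-site.xml and hive.server2.thrift.sasl.qop=auth-conf on the server; a valid Kerberos ticket is assumed and the class name is illustrative:

  import java.sql.Connection;
  import java.sql.DriverManager;

  public class SaslQopExample {
      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.hive.jdbc.HiveDriver");
          // saslQop on the client must match the server's setting; auth-conf
          // adds integrity and confidentiality protection to the transport.
          String url = "jdbc:hive2://sandbox.hortonworks.com:10000/default;"
                  + "principal=hive/_HOST@EXAMPLE.COM;saslQop=auth-conf";
          try (Connection conn = DriverManager.getConnection(url)) {
              System.out.println("Kerberos connection with auth-conf QOP");
          }
      }
  }
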
  -property-
    -name-hive.server2.thrift.min.worker.threads-/name-
    -value-5-/value-
    -description-Minimum number of Thrift worker threads-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.max.worker.threads-/name-
    -value-500-/value-
    -description-Maximum number of Thrift worker threads-/description-
  -/property-
  -property-
    -name-hive.server2.thrift.exponential.backoff.slot.length-/name-
    -value-100ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Binary exponential backoff slot time for Thrift clients during login to HiveServer2,
      for retries until hitting Thrift client timeout
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.login.timeout-/name-
    -value-20s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Timeout for Thrift clients during login to HiveServer2
    -/description-
  -/property-
  -property-
    -name-hive.server2.thrift.worker.keepalive.time-/name-
    -value-60s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Keepalive time (in seconds) for an idle worker thread. When the number of workers exceeds min workers, excessive threads are killed after this time interval.
    -/description-
  -/property-
  -property-
    -name-hive.server2.async.exec.threads-/name-
    -value-100-/value-
    -description-Number of threads in the async thread pool for HiveServer2-/description-
  -/property-
  -property-
    -name-hive.server2.async.exec.shutdown.timeout-/name-
    -value-10s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      How long HiveServer2 shutdown will wait for async threads to terminate.
    -/description-
  -/property-
  -property-
    -name-hive.server2.async.exec.wait.queue.size-/name-
    -value-100-/value-
    -description-
      Size of the wait queue for async thread pool in HiveServer2.
      After hitting this limit, the async thread pool will reject new requests.
    -/description-
  -/property-
  -property-
    -name-hive.server2.async.exec.keepalive.time-/name-
    -value-10s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Time that an idle HiveServer2 async thread (from the thread pool) will wait for a new task
      to arrive before terminating
    -/description-
  -/property-
  -property-
    -name-hive.server2.long.polling.timeout-/name-
    -value-5000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Time that HiveServer2 will wait before responding to asynchronous calls that use long polling
    -/description-
  -/property-
  -property-
    -name-hive.server2.authentication-/name-
    -value-NONE-/value-
    -description-
      Expects one of [nosasl, none, ldap, kerberos, pam, custom].
      Client authentication types.
        NONE: no authentication check
        LDAP: LDAP/AD based authentication
        KERBEROS: Kerberos/GSSAPI authentication
        CUSTOM: Custom authentication provider
                (Use with property hive.server2.custom.authentication.class)
        PAM: Pluggable authentication module
        NOSASL:  Raw transport
    -/description-
  -/property-
  -property-
    -name-hive.server2.allow.user.substitution-/name-
    -value-true-/value-
    -description-Allow alternate user to be specified as part of HiveServer2 open connection request.-/description-
  -/property-
  -property-
    -name-hive.server2.authentication.kerberos.keytab-/name-
    -value/-
    -description-Kerberos keytab file for server principal-/description-
  -/property-
  -property-
    -name-hive.server2.authentication.kerberos.principal-/name-
    -value/-
    -description-Kerberos server principal-/description-
  -/property-
  -property-
    -name-hive.server2.authentication.spnego.keytab-/name-
    -value/-
    -description-
      Keytab file for the SPNego principal (optional); a typical value would be
      /etc/security/keytabs/spnego.service.keytab. This keytab is used by HiveServer2
      when Kerberos security is enabled and the HTTP transport mode is used.
      It needs to be set only if SPNego is to be used for authentication.
      SPNego authentication is honored only if both
        hive.server2.authentication.spnego.principal
      and
        hive.server2.authentication.spnego.keytab
      are specified.
    -/description-
  -/property-
  -property-
    -name-hive.server2.authentication.spnego.principal-/name-
    -value/-
    -description-
      SPNego service principal (optional); a typical value would look like
      HTTP/_HOST@EXAMPLE.COM. The SPNego service principal is used by HiveServer2
      when Kerberos security is enabled and the HTTP transport mode is used.
      It needs to be set only if SPNego is to be used for authentication.
    -/description-
  -/property-
  -property-
    -name-hive.server2.authentication.ldap.url-/name-
    -value/-
    -description-
      LDAP connection URL(s). This value may contain URLs of multiple LDAP server
      instances for HA; each LDAP URL is separated by a SPACE character. URLs are
      used in the order specified until a connection is successful.
    -/description-
  -/property-
  -property-
    -name-hive.server2.authentication.ldap.baseDN-/name-
    -value/-
    -description-LDAP base DN-/description-
  -/property-
  -property-
    -name-hive.server2.authentication.ldap.Domain-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.server2.custom.authentication.class-/name-
    -value/-
    -description-
      Custom authentication class. Used when property
      'hive.server2.authentication' is set to 'CUSTOM'. Provided class
      must be a proper implementation of the interface
      org.apache.hive.service.auth.PasswdAuthenticationProvider. HiveServer2
      will call its Authenticate(user, password) method to authenticate requests.
      The implementation may optionally implement Hadoop's
      org.apache.hadoop.conf.Configurable interface to grab Hive's Configuration object.
    -/description-
  -/property-
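
A minimal sketch of such a provider, with a hard-coded demo credential (the class name, user, and password are hypothetical); a real implementation would consult an external store:

  import javax.security.sasl.AuthenticationException;
  import org.apache.hive.service.auth.PasswdAuthenticationProvider;

  public class DemoPasswdAuthProvider implements PasswdAuthenticationProvider {
      @Override
      public void Authenticate(String user, String password)
              throws AuthenticationException {
          // Accept only the single demo credential; anything else is rejected.
          if (!"sandbox".equals(user) || !"demo-secret".equals(password)) {
              throw new AuthenticationException("Invalid user or password");
          }
      }
  }

The class would then be named in hive.server2.custom.authentication.class, with hive.server2.authentication set to CUSTOM.
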
  -property-
    -name-hive.server2.authentication.pam.services-/name-
    -value/-
    -description-
      List of the underlying PAM services that should be used when the auth type is PAM.
      A file with the same name must exist in /etc/pam.d.
    -/description-
  -/property-
  -property-
    -name-hive.server2.enable.doAs-/name-
    -value-true-/value-
    -description-
      Setting this property to true will have HiveServer2 execute
      Hive operations as the user making the calls to it.
    -/description-
  -/property-
  -property-
    -name-hive.server2.table.type.mapping-/name-
    -value-CLASSIC-/value-
    -description-
      Expects one of [classic, hive].
      This setting reflects how HiveServer2 will report the table types for JDBC and other
      client implementations that retrieve the available tables and supported table types
        HIVE : Exposes Hive's native table types like MANAGED_TABLE, EXTERNAL_TABLE, VIRTUAL_VIEW
        CLASSIC : More generic types like TABLE and VIEW
    -/description-
  -/property-
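
A sketch of how a JDBC client observes this mapping, assuming the binary port above; the class name is illustrative and the printed types depend on the warehouse contents:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.ResultSet;

  public class TableTypeExample {
      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.hive.jdbc.HiveDriver");
          String url = "jdbc:hive2://sandbox.hortonworks.com:10000/default";
          try (Connection conn = DriverManager.getConnection(url, "hive", "");
               ResultSet rs = conn.getMetaData()
                       .getTables(null, "default", "%", null)) {
              while (rs.next()) {
                  // With CLASSIC, TABLE_TYPE holds generic values such as TABLE
                  // and VIEW; with HIVE it holds MANAGED_TABLE, EXTERNAL_TABLE,
                  // VIRTUAL_VIEW.
                  System.out.println(rs.getString("TABLE_NAME") + " -> "
                          + rs.getString("TABLE_TYPE"));
              }
          }
      }
  }
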
  -property-
    -name-hive.server2.session.hook-/name-
    -value/-
    -description/-
  -/property-
  -property-
    -name-hive.server2.use.SSL-/name-
    -value-false-/value-
    -description-Set this to true for using SSL encryption in HiveServer2.-/description-
  -/property-
  -property-
    -name-hive.server2.keystore.path-/name-
    -value/-
    -description-SSL certificate keystore location.-/description-
  -/property-
  -property-
    -name-hive.server2.keystore.password-/name-
    -value/-
    -description-SSL certificate keystore password.-/description-
  -/property-
  -property-
    -name-hive.server2.map.fair.scheduler.queue-/name-
    -value-true-/value-
    -description-
      If the YARN fair scheduler is configured and HiveServer2 is running in non-impersonation mode,
      this setting determines the user for fair scheduler queue mapping.
      If set to true (default), the logged-in user determines the fair scheduler queue
      for submitted jobs, so that map reduce resource usage can be tracked by user.
      If set to false, all Hive jobs go to the 'hive' user's queue.
    -/description-
  -/property-
  -property-
    -name-hive.server2.builtin.udf.whitelist-/name-
    -value/-
    -description-
      Comma separated list of built-in UDF names allowed in queries.
      An empty whitelist allows all built-in UDFs to be executed. The UDF black list takes precedence over the UDF white list.
    -/description-
  -/property-
  -property-
    -name-hive.server2.builtin.udf.blacklist-/name-
    -value/-
    -description-Comma separated list of UDF names. These UDFs will not be allowed in queries. The UDF black list takes precedence over the UDF white list.-/description-
  -/property-
  -property-
    -name-hive.security.command.whitelist-/name-
    -value-set,reset,dfs,add,list,delete,reload,compile-/value-
    -description-Comma separated list of non-SQL Hive commands users are authorized to execute-/description-
  -/property-
  -property-
    -name-hive.server2.session.check.interval-/name-
    -value-6h-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The time should be bigger than or equal to 3000 msec.
      The check interval for session/operation timeout, which can be disabled by setting to zero or negative value.
    -/description-
  -/property-
  -property-
    -name-hive.server2.idle.session.timeout-/name-
    -value-7d-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Session will be closed when it's not accessed for this duration, which can be disabled by setting to zero or negative value.
    -/description-
  -/property-
  -property-
    -name-hive.server2.idle.operation.timeout-/name-
    -value-5d-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Operation will be closed when it's not accessed for this duration, which can be disabled by setting it to zero.
        With positive value, it's checked for operations in terminal state only (FINISHED, CANCELED, CLOSED, ERROR).
        With negative value, it's checked for all of the operations regardless of state.
    -/description-
  -/property-
  -property-
    -name-hive.server2.idle.session.check.operation-/name-
    -value-true-/value-
    -description-
      Session will be considered to be idle only if there is no activity, and there is no pending operation.
      This setting takes effect only if session idle timeout (hive.server2.idle.session.timeout) and checking
      (hive.server2.session.check.interval) are enabled.
    -/description-
  -/property-
  -property-
    -name-hive.conf.restricted.list-/name-
    -value-hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role-/value-
    -description-Comma separated list of configuration options which are immutable at runtime-/description-
  -/property-
  -property-
    -name-hive.multi.insert.move.tasks.share.dependencies-/name-
    -value-false-/value-
    -description-
      If this is set, all move tasks for tables/partitions (not directories) at the end of a
      multi-insert query will only begin once the dependencies for all these move tasks have been
      met.
      Advantages: If concurrency is enabled, the locks will only be released once the query has
                  finished, so with this config enabled, the time when the table/partition is
                  generated will be much closer to when the lock on it is released.
      Disadvantages: If concurrency is not enabled, with this disabled, the tables/partitions which
                     are produced by this query and finish earlier will be available for querying
                     much earlier. Since the locks are only released once the query finishes, this
                     does not apply if concurrency is enabled.
    -/description-
  -/property-
  -property-
    -name-hive.exec.infer.bucket.sort-/name-
    -value-false-/value-
    -description-
      If this is set, when writing partitions, the metadata will include the bucketing/sorting
      properties with which the data was written if any (this will not overwrite the metadata
      inherited from the table if the table is bucketed/sorted)
    -/description-
  -/property-
  -property-
    -name-hive.exec.infer.bucket.sort.num.buckets.power.two-/name-
    -value-false-/value-
    -description-
      If this is set, when setting the number of reducers for the map reduce task which writes the
      final output files, it will choose a number which is a power of two, unless the user specifies
      the number of reducers to use using mapred.reduce.tasks. The number of reducers
      may be set to a power of two only to be followed by a merge task, preventing
      anything from being inferred.
      With hive.exec.infer.bucket.sort set to true:
      Advantages:  If this is not set, the number of buckets for partitions will seem arbitrary,
                   which means that the number of mappers used for optimized joins, for example, will
                   be very low.  With this set, since the number of buckets used for any partition is
                   a power of two, the number of mappers used for optimized joins will be the least
                   number of buckets used by any partition being joined.
      Disadvantages: This may mean a much larger or much smaller number of reducers being used in the
                     final map reduce job, e.g. if a job was originally going to take 257 reducers,
                     it will now take 512 reducers, similarly if the max number of reducers is 511,
                     and a job was going to use this many, it will now use 256 reducers.
    -/description-
  -/property-
  -property-
    -name-hive.optimize.listbucketing-/name-
    -value-false-/value-
    -description-Enable the list bucketing optimizer (disabled by default).-/description-
  -/property-
  -property-
    -name-hive.server.read.socket.timeout-/name-
    -value-10s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Timeout for the HiveServer to close the connection if there is no response from the client. The default is 10 seconds.
    -/description-
  -/property-
  -property-
    -name-hive.server.tcp.keepalive-/name-
    -value-true-/value-
    -description-Whether to enable TCP keepalive for the Hive Server. Keepalive will prevent accumulation of half-open connections.-/description-
  -/property-
  -property-
    -name-hive.decode.partition.name-/name-
    -value-false-/value-
    -description-Whether to show the unquoted partition names in query results.-/description-
  -/property-
  -property-
    -name-hive.execution.engine-/name-
    -value-mr-/value-
    -description-
      Expects one of [mr, tez, spark].
      Chooses execution engine. Options are: mr (Map reduce, default), tez (hadoop 2 only), spark
    -/description-
  -/property-
  -property-
    -name-hive.jar.directory-/name-
    -value/-
    -description-
      This is the location that Hive in tez mode will search for a site-wide
      installed Hive instance.
    -/description-
  -/property-
  -property-
    -name-hive.user.install.directory-/name-
    -value-hdfs:///user/-/value-
    -description-
      If hive (in tez mode only) cannot find a usable hive jar in "hive.jar.directory", 
      it will upload the hive jar to "hive.user.install.directory/user.name"
      and use it to run queries.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.enabled-/name-
    -value-false-/value-
    -description-
      This flag should be set to true to enable vectorized mode of query execution.
      The default value is false.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.reduce.enabled-/name-
    -value-true-/value-
    -description-
      This flag should be set to true to enable vectorized mode of the reduce-side of query execution.
      The default value is true.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.reduce.groupby.enabled-/name-
    -value-true-/value-
    -description-
      This flag should be set to true to enable vectorized mode of the reduce-side GROUP BY query execution.
      The default value is true.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.mapjoin.native.enabled-/name-
    -value-true-/value-
    -description-
      This flag should be set to true to enable native (i.e. non-pass through) vectorization
      of queries using MapJoin.
      The default value is true.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.mapjoin.native.multikey.only.enabled-/name-
    -value-false-/value-
    -description-
      This flag should be set to true to restrict use of native vector map join hash tables to
      the MultiKey in queries using MapJoin.
      The default value is false.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.mapjoin.minmax.enabled-/name-
    -value-false-/value-
    -description-
      This flag should be set to true to enable vector map join hash tables to
      use min / max filtering for integer join queries using MapJoin.
      The default value is false.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.mapjoin.overflow.repeated.threshold-/name-
    -value--1-/value-
    -description-
      The number of small table rows for a match in vector map join hash tables
      where we use the repeated field optimization in overflow vectorized row batch for join queries using MapJoin.
      A value of -1 means always use the join result optimization. Otherwise, the threshold value can be 0 to the maximum integer.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled-/name-
    -value-false-/value-
    -description-
      This flag should be set to true to enable use of native fast vector map join hash tables in
      queries using MapJoin.
      The default value is false.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.groupby.checkinterval-/name-
    -value-100000-/value-
    -description-Number of entries added to the group by aggregation hash before a recomputation of average entry size is performed.-/description-
  -/property-
  -property-
    -name-hive.vectorized.groupby.maxentries-/name-
    -value-1000000-/value-
    -description-
      Max number of entries in the vector group by aggregation hashtables. 
      Exceeding this will trigger a flush regardless of memory pressure.
    -/description-
  -/property-
  -property-
    -name-hive.vectorized.groupby.flush.percent-/name-
    -value-0.1-/value-
    -description-Percent of entries in the group by aggregation hash flushed when the memory threshold is exceeded.-/description-
  -/property-
  -property-
    -name-hive.typecheck.on.insert-/name-
    -value-true-/value-
    -description-This property has been extended to control whether to check, convert, and normalize partition values to conform to their column types in partition operations, including but not limited to insert (e.g. alter, describe).-/description-
  -/property-
  -property-
    -name-hive.hadoop.classpath-/name-
    -value/-
    -description-
      For Windows OS, we need to pass HIVE_HADOOP_CLASSPATH Java parameter while starting HiveServer2 
      using "-hiveconf hive.hadoop.classpath=%HIVE_LIB%".
    -/description-
  -/property-
  -property-
    -name-hive.rpc.query.plan-/name-
    -value-false-/value-
    -description-Whether to send the query plan via local resource or RPC-/description-
  -/property-
  -property-
    -name-hive.compute.splits.in.am-/name-
    -value-true-/value-
    -description-Whether to generate the splits locally or in the AM (tez only)-/description-
  -/property-
  -property-
    -name-hive.prewarm.enabled-/name-
    -value-false-/value-
    -description-Enables container prewarm for Tez (Hadoop 2 only)-/description-
  -/property-
  -property-
    -name-hive.prewarm.numcontainers-/name-
    -value-10-/value-
    -description-Controls the number of containers to prewarm for Tez (Hadoop 2 only)-/description-
  -/property-
  -property-
    -name-hive.stageid.rearrange-/name-
    -value-none-/value-
    -description-
      Expects one of [none, idonly, traverse, execution].
    -/description-
  -/property-
  -property-
    -name-hive.explain.dependency.append.tasktype-/name-
    -value-false-/value-
    -description/-
  -/property-
  -property-
    -name-hive.counters.group.name-/name-
    -value-HIVE-/value-
    -description-The name of counter group for internal Hive variables (CREATED_FILE, FATAL_ERROR, etc.)-/description-
  -/property-
  -property-
    -name-hive.server2.tez.default.queues-/name-
    -value/-
    -description-
      A list of comma separated values corresponding to YARN queues of the same name.
      When HiveServer2 is launched in Tez mode, this configuration needs to be set
      for multiple Tez sessions to run in parallel on the cluster.
    -/description-
  -/property-
  -property-
    -name-hive.server2.tez.sessions.per.default.queue-/name-
    -value-1-/value-
    -description-
      A positive integer that determines the number of Tez sessions that should be
      launched on each of the queues specified by "hive.server2.tez.default.queues".
      Determines the parallelism on each queue.
    -/description-
  -/property-
  -property-
    -name-hive.server2.tez.initialize.default.sessions-/name-
    -value-false-/value-
    -description-
      This flag is used in HiveServer2 to let a user run HiveServer2 without
      turning on Tez for HiveServer2. The user could potentially want to run queries
      over Tez without the pool of sessions.
    -/description-
  -/property-
  -property-
    -name-hive.support.quoted.identifiers-/name-
    -value-column-/value-
    -description-
      Expects one of [none, column].
      Whether to use quoted identifiers. 'none' or 'column' can be used.
        none: default (past) behavior. Implies only alphanumeric and underscore are valid characters in identifiers.
        column: implies column names can contain any character.
    -/description-
  -/property-
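
A sketch of what 'column' mode permits, using a hypothetical table with a hyphenated column name; with 'none' the backquoted name below would be rejected:

  import java.sql.Connection;
  import java.sql.DriverManager;
  import java.sql.Statement;

  public class QuotedIdentifierExample {
      public static void main(String[] args) throws Exception {
          Class.forName("org.apache.hive.jdbc.HiveDriver");
          String url = "jdbc:hive2://sandbox.hortonworks.com:10000/default";
          try (Connection conn = DriverManager.getConnection(url, "hive", "");
               Statement stmt = conn.createStatement()) {
              // Backquotes quote the identifier; the hyphen is only legal when
              // hive.support.quoted.identifiers=column.
              stmt.execute("CREATE TABLE quoted_demo (`user-id` INT)");
              stmt.execute("SELECT `user-id` FROM quoted_demo LIMIT 0");
          }
      }
  }
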
  -property-
    -name-hive.support.sql11.reserved.keywords-/name-
    -value-true-/value-
    -description-
      This flag should be set to true to enable support for SQL2011 reserved keywords.
      The default value is true.
    -/description-
  -/property-
  -property-
    -name-hive.users.in.admin.role-/name-
    -value/-
    -description-
      Comma separated list of users who are in admin role for bootstrapping.
      More users can be added in ADMIN role later.
    -/description-
  -/property-
  -property-
    -name-hive.compat-/name-
    -value-0.12-/value-
    -description-
      Enable (configurable) deprecated behaviors by setting desired level of backward compatibility.
      Setting to 0.12:
        Maintains division behavior: int / int = double
    -/description-
  -/property-
  -property-
    -name-hive.convert.join.bucket.mapjoin.tez-/name-
    -value-false-/value-
    -description-
      Whether joins can be automatically converted to bucket map joins in hive 
      when tez is used as the execution engine.
    -/description-
  -/property-
  -property-
    -name-hive.exec.check.crossproducts-/name-
    -value-true-/value-
    -description-Check if a plan contains a Cross Product. If there is one, output a warning to the Session's console.-/description-
  -/property-
  -property-
    -name-hive.localize.resource.wait.interval-/name-
    -value-5000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Time to wait for another thread to localize the same resource for hive-tez.
    -/description-
  -/property-
  -property-
    -name-hive.localize.resource.num.wait.attempts-/name-
    -value-5-/value-
    -description-The number of attempts waiting for localizing a resource in hive-tez.-/description-
  -/property-
  -property-
    -name-hive.tez.auto.reducer.parallelism-/name-
    -value-false-/value-
    -description-
      Turn on Tez's auto reducer parallelism feature. When enabled, Hive will still estimate data sizes
      and set parallelism estimates. Tez will sample source vertices' output sizes and adjust the estimates at runtime as
      necessary.
    -/description-
  -/property-
  -property-
    -name-hive.tez.max.partition.factor-/name-
    -value-2.0-/value-
    -description-When auto reducer parallelism is enabled, this factor will be used to over-partition data in shuffle edges.-/description-
  -/property-
  -property-
    -name-hive.tez.min.partition.factor-/name-
    -value-0.25-/value-
    -description-
      When auto reducer parallelism is enabled, this factor will be used to put a lower limit on the number
      of reducers that tez specifies.
    -/description-
  -/property-
  -property-
    -name-hive.tez.dynamic.partition.pruning-/name-
    -value-true-/value-
    -description-
      When dynamic pruning is enabled, joins on partition keys will be processed by sending
      events from the processing vertices to the Tez application master. These events will be
      used to prune unnecessary partitions.
    -/description-
  -/property-
  -property-
    -name-hive.tez.dynamic.partition.pruning.max.event.size-/name-
    -value-1048576-/value-
    -description-Maximum size of events sent by processors in dynamic pruning. If this size is exceeded, no pruning will take place.-/description-
  -/property-
  -property-
    -name-hive.tez.dynamic.partition.pruning.max.data.size-/name-
    -value-104857600-/value-
    -description-Maximum total data size of events in dynamic pruning.-/description-
  -/property-
  -property-
    -name-hive.tez.smb.number.waves-/name-
    -value-0.5-/value-
    -description-The number of waves in which to run the SMB join, accounting for the cluster being occupied. Ideally it should be 1 wave.-/description-
  -/property-
  -property-
    -name-hive.tez.exec.print.summary-/name-
    -value-false-/value-
    -description-Display a breakdown of execution steps for every query executed by the shell.-/description-
  -/property-
  -property-
    -name-hive.tez.exec.inplace.progress-/name-
    -value-true-/value-
    -description-Updates tez job execution progress in-place in the terminal.-/description-
  -/property-
  -property-
    -name-hive.spark.client.future.timeout-/name-
    -value-60s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Timeout for requests from Hive client to remote Spark driver.
    -/description-
  -/property-
  -property-
    -name-hive.spark.job.monitor.timeout-/name-
    -value-60s-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is sec if not specified.
      Timeout for job monitor to get Spark job state.
    -/description-
  -/property-
  -property-
    -name-hive.spark.client.connect.timeout-/name-
    -value-1000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Timeout for remote Spark driver in connecting back to Hive client.
    -/description-
  -/property-
  -property-
    -name-hive.spark.client.server.connect.timeout-/name-
    -value-90000ms-/value-
    -description-
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Timeout for handshake between Hive client and remote Spark driver.  Checked by both processes.
    -/description-
  -/property-
  -property-
    -name-hive.spark.client.secret.bits-/name-
    -value-256-/value-
    -description-Number of bits of randomness in the generated secret for communication between Hive client and remote Spark driver. Rounded down to the nearest multiple of 8.-/description-
  -/property-
  -property-
    -name-hive.spark.client.rpc.threads-/name-
    -value-8-/value-
    -description-Maximum number of threads for remote Spark driver's RPC event loop.-/description-
  -/property-
  -property-
    -name-hive.spark.client.rpc.max.size-/name-
    -value-52428800-/value-
    -description-Maximum message size in bytes for communication between Hive client and remote Spark driver. Default is 50MB.-/description-
  -/property-
  -property-
    -name-hive.spark.client.channel.log.level-/name-
    -value/-
    -description-Channel logging level for remote Spark driver.  One of {DEBUG, ERROR, INFO, TRACE, WARN}.-/description-
  -/property-
  -property-
    -name-hive.spark.client.rpc.sasl.mechanisms-/name-
    -value-DIGEST-MD5-/value-
    -description-Name of the SASL mechanism to use for authentication.-/description-
  -/property-
  -property-
    -name-hive.reorder.nway.joins-/name-
    -value-true-/value-
    -description-Runs reordering of tables within a single n-way join (i.e. picks the stream table)-/description-
  -/property-
  -property-
    -name-hive.log.every.n.records-/name-
    -value-0-/value-
    -description-
      Expects a value of 0 or greater.
      If the value is greater than 0, logs in fixed intervals of size n rather than exponentially.
    -/description-
  -/property-
  -property-
    -name-hive.msck.path.validation-/name-
    -value-throw-/value-
    -description-
      Expects one of [throw, skip, ignore].
      The approach msck should take with HDFS directories that are partition-like but contain unsupported characters. 'throw' (an exception) is the default; 'skip' will skip the invalid directories and still repair the others; 'ignore' will skip the validation (legacy behavior, causes bugs in many cases)
    -/description-
  -/property-
  -property-
    -name-hive.tez.enable.memory.manager-/name-
    -value-true-/value-
    -description-Enable memory manager for tez-/description-
  -/property-
  -property-
    -name-hive.hash.table.inflation.factor-/name-
    -value-2.0-/value-
    -description-Expected inflation factor between the on-disk and in-memory representations of hash tables-/description-
  -/property-
-/configuration-

hive-env.sh

 if [ "$SERVICE" = "cli" ]; then
   if [ -z "$DEBUG" ]; then
     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:+UseNUMA -XX:+UseParallelGC -XX:-UseGCOverheadLimit"
   else
     export HADOOP_OPTS="$HADOOP_OPTS -XX:NewRatio=12 -XX:MaxHeapFreeRatio=40 -XX:MinHeapFreeRatio=15 -XX:-UseGCOverheadLimit"
   fi
 fi
HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export HIVE_CONF_DIR=/usr/hdp/current/hive-client/conf
if [ "${HIVE_AUX_JARS_PATH}" != "" ]; then
  if [ -f "${HIVE_AUX_JARS_PATH}" ]; then    
    export HIVE_AUX_JARS_PATH=${HIVE_AUX_JARS_PATH}
  elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
    export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
  fi
elif [ -d "/usr/hdp/current/hive-webhcat/share/hcatalog" ]; then
  export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-webhcat/share/hcatalog/hive-hcatalog-core.jar
fi      
export METASTORE_PORT=9083
    

hive-env.sh.template


hive-exec-log4j.properties

hive.log.threshold=ALL
hive.root.logger=INFO,FA
hive.log.dir=${java.io.tmpdir}/${user.name}
hive.query.id=hadoop
hive.log.file=${hive.query.id}.log
log4j.rootLogger=${hive.root.logger}, EventCounter
log4j.threshhold=${hive.log.threshold}
log4j.appender.FA=org.apache.log4j.FileAppender
log4j.appender.FA.File=${hive.log.dir}/${hive.log.file}
log4j.appender.FA.layout=org.apache.log4j.PatternLayout
log4j.appender.FA.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
log4j.category.DataNucleus=ERROR,FA
log4j.category.Datastore=ERROR,FA
log4j.category.Datastore.Schema=ERROR,FA
log4j.category.JPOX.Datastore=ERROR,FA
log4j.category.JPOX.Plugin=ERROR,FA
log4j.category.JPOX.MetaData=ERROR,FA
log4j.category.JPOX.Query=ERROR,FA
log4j.category.JPOX.General=ERROR,FA
log4j.category.JPOX.Enhancer=ERROR,FA
log4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN,FA
log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,FA
    

hive-log4j.properties

hive.log.threshold=ALL
hive.root.logger=INFO,DRFA
hive.log.dir=${java.io.tmpdir}/${user.name}
hive.log.file=hive.log
log4j.rootLogger=${hive.root.logger}, EventCounter
log4j.threshold=${hive.log.threshold}
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} [%t]: %p %c{2}: %m%n
log4j.appender.console.encoding=UTF-8
log4j.appender.EventCounter=org.apache.hadoop.hive.shims.HiveEventCounter
log4j.category.DataNucleus=ERROR,DRFA
log4j.category.Datastore=ERROR,DRFA
log4j.category.Datastore.Schema=ERROR,DRFA
log4j.category.JPOX.Datastore=ERROR,DRFA
log4j.category.JPOX.Plugin=ERROR,DRFA
log4j.category.JPOX.MetaData=ERROR,DRFA
log4j.category.JPOX.Query=ERROR,DRFA
log4j.category.JPOX.General=ERROR,DRFA
log4j.category.JPOX.Enhancer=ERROR,DRFA
log4j.logger.org.apache.zookeeper.server.NIOServerCnxn=WARN,DRFA
log4j.logger.org.apache.zookeeper.ClientCnxnSocketNIO=WARN,DRFA
    

hiveserver2-site.xml

-?xml version="1.0" encoding="UTF-8" standalone="no"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
---
-configuration-
    -property-
        -name-hive.security.authorization.enabled-/name-
        -value-true-/value-
    -/property-
    -property-
        -name-hive.security.authorization.manager-/name-
        -value-org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory-/value-
    -/property-
    -property-
        -name-hive.security.authenticator.manager-/name-
        -value-org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator-/value-
    -/property-
    -property-
        -name-hive.conf.restricted.list-/name-
        -value-hive.security.authorization.enabled,hive.security.authorization.manager,hive.security.authenticator.manager-/value-
    -/property-
-/configuration-

hive-site.xml

-!--Tue Jul 21 16:44:50 2015---
    -configuration-
    
    -property-
      -name-ambari.hive.db.schema.name-/name-
      -value-hive-/value-
    -/property-
    
    -property-
      -name-datanucleus.autoCreateSchema-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-datanucleus.cache.level2.type-/name-
      -value-none-/value-
    -/property-
    
    -property-
      -name-hive.auto.convert.join-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.auto.convert.join.noconditionaltask-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.auto.convert.join.noconditionaltask.size-/name-
      -value-52428800-/value-
    -/property-
    
    -property-
      -name-hive.auto.convert.sortmerge.join-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.auto.convert.sortmerge.join.to.mapjoin-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.cbo.enable-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.cli.print.header-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.cluster.delegation.token.store.class-/name-
      -value-org.apache.hadoop.hive.thrift.ZooKeeperTokenStore-/value-
    -/property-
    
    -property-
      -name-hive.cluster.delegation.token.store.zookeeper.connectString-/name-
      -value-sandbox.hortonworks.com:2181-/value-
    -/property-
    
    -property-
      -name-hive.cluster.delegation.token.store.zookeeper.znode-/name-
      -value-/hive/cluster/delegation-/value-
    -/property-
    
    -property-
      -name-hive.compactor.abortedtxn.threshold-/name-
      -value-1000-/value-
    -/property-
    
    -property-
      -name-hive.compactor.check.interval-/name-
      -value-300s-/value-
    -/property-
    
    -property-
      -name-hive.compactor.delta.num.threshold-/name-
      -value-10-/value-
    -/property-
    
    -property-
      -name-hive.compactor.delta.pct.threshold-/name-
      -value-0.1f-/value-
    -/property-
    
    -property-
      -name-hive.compactor.initiator.on-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.compactor.worker.threads-/name-
      -value-0-/value-
    -/property-
    
    -property-
      -name-hive.compactor.worker.timeout-/name-
      -value-86400s-/value-
    -/property-
    
    -property-
      -name-hive.compute.query.using.stats-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.conf.restricted.list-/name-
      -value-hive.security.authenticator.manager,hive.security.authorization.manager,hive.users.in.admin.role-/value-
    -/property-
    
    -property-
      -name-hive.convert.join.bucket.mapjoin.tez-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.default.fileformat-/name-
      -value-TextFile-/value-
    -/property-
    
    -property-
      -name-hive.default.fileformat.managed-/name-
      -value-TextFile-/value-
    -/property-
    
    -property-
      -name-hive.enforce.bucketing-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.enforce.sorting-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.enforce.sortmergebucketmapjoin-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.exec.compress.intermediate-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.exec.compress.output-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.exec.dynamic.partition-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.exec.dynamic.partition.mode-/name-
      -value-nonstrict-/value-
    -/property-
    
    -property-
      -name-hive.exec.failure.hooks-/name-
      -value-org.apache.hadoop.hive.ql.hooks.ATSHook-/value-
    -/property-
    
    -property-
      -name-hive.exec.max.created.files-/name-
      -value-100000-/value-
    -/property-
    
    -property-
      -name-hive.exec.max.dynamic.partitions-/name-
      -value-5000-/value-
    -/property-
    
    -property-
      -name-hive.exec.max.dynamic.partitions.pernode-/name-
      -value-2000-/value-
    -/property-
    
    -property-
      -name-hive.exec.orc.compression.strategy-/name-
      -value-SPEED-/value-
    -/property-
    
    -property-
      -name-hive.exec.orc.default.compress-/name-
      -value-ZLIB-/value-
    -/property-
    
    -property-
      -name-hive.exec.orc.default.stripe.size-/name-
      -value-67108864-/value-
    -/property-
    
    -property-
      -name-hive.exec.orc.encoding.strategy-/name-
      -value-SPEED-/value-
    -/property-
    
    -property-
      -name-hive.exec.parallel-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.exec.parallel.thread.number-/name-
      -value-8-/value-
    -/property-
    
    -property-
      -name-hive.exec.post.hooks-/name-
      -value-org.apache.hadoop.hive.ql.hooks.ATSHook-/value-
    -/property-
    
    -property-
      -name-hive.exec.pre.hooks-/name-
      -value-org.apache.hadoop.hive.ql.hooks.ATSHook-/value-
    -/property-
    
    -property-
      -name-hive.exec.reducers.bytes.per.reducer-/name-
      -value-67108864-/value-
    -/property-
    
    -property-
      -name-hive.exec.reducers.max-/name-
      -value-1009-/value-
    -/property-
    
    -property-
      -name-hive.exec.scratchdir-/name-
      -value-/tmp/hive-/value-
    -/property-
    
    -property-
      -name-hive.exec.submit.local.task.via.child-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.exec.submitviachild-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.execution.engine-/name-
      -value-tez-/value-
    -/property-
    
    -property-
      -name-hive.fetch.task.aggr-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.fetch.task.conversion-/name-
      -value-more-/value-
    -/property-
    
    -property-
      -name-hive.fetch.task.conversion.threshold-/name-
      -value-1073741824-/value-
    -/property-
    
    -property-
      -name-hive.limit.optimize.enable-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.limit.pushdown.memory.usage-/name-
      -value-0.04-/value-
    -/property-
    
    -property-
      -name-hive.map.aggr-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.map.aggr.hash.force.flush.memory.threshold-/name-
      -value-0.9-/value-
    -/property-
    
    -property-
      -name-hive.map.aggr.hash.min.reduction-/name-
      -value-0.5-/value-
    -/property-
    
    -property-
      -name-hive.map.aggr.hash.percentmemory-/name-
      -value-0.5-/value-
    -/property-
    
    -property-
      -name-hive.mapjoin.bucket.cache.size-/name-
      -value-10000-/value-
    -/property-
    
    -property-
      -name-hive.mapjoin.optimized.hashtable-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.mapred.reduce.tasks.speculative.execution-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.merge.mapfiles-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.merge.mapredfiles-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.merge.orcfile.stripe.level-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.merge.rcfile.block.level-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.merge.size.per.task-/name-
      -value-256000000-/value-
    -/property-
    
    -property-
      -name-hive.merge.smallfiles.avgsize-/name-
      -value-16000000-/value-
    -/property-
    
    -property-
      -name-hive.merge.tezfiles-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.metastore.authorization.storage.checks-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.metastore.cache.pinobjtypes-/name-
      -value-Table,Database,Type,FieldSchema,Order-/value-
    -/property-
    
    -property-
      -name-hive.metastore.client.connect.retry.delay-/name-
      -value-5s-/value-
    -/property-
    
    -property-
      -name-hive.metastore.client.socket.timeout-/name-
      -value-1800s-/value-
    -/property-
    
    -property-
      -name-hive.metastore.connect.retries-/name-
      -value-24-/value-
    -/property-
    
    -property-
      -name-hive.metastore.execute.setugi-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.metastore.failure.retries-/name-
      -value-24-/value-
    -/property-
    
    -property-
      -name-hive.metastore.kerberos.keytab.file-/name-
      -value-/etc/security/keytabs/hive.service.keytab-/value-
    -/property-
    
    -property-
      -name-hive.metastore.kerberos.principal-/name-
      -value-hive/_HOST@EXAMPLE.COM-/value-
    -/property-
    
    -property-
      -name-hive.metastore.pre.event.listeners-/name-
      -value-org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener-/value-
    -/property-
    
    -property-
      -name-hive.metastore.sasl.enabled-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.metastore.server.max.threads-/name-
      -value-100000-/value-
    -/property-
    
    -property-
      -name-hive.metastore.uris-/name-
      -value-thrift://sandbox.hortonworks.com:9083-/value-
    -/property-
    
    -property-
      -name-hive.metastore.warehouse.dir-/name-
      -value-/apps/hive/warehouse-/value-
    -/property-
    
    -property-
      -name-hive.optimize.bucketmapjoin-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.bucketmapjoin.sortedmerge-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.optimize.constant.propagation-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.index.filter-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.metadataonly-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.null.scan-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.reducededuplication-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.optimize.reducededuplication.min.reducer-/name-
      -value-4-/value-
    -/property-
    
    -property-
      -name-hive.optimize.sort.dynamic.partition-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.orc.compute.splits.num.threads-/name-
      -value-10-/value-
    -/property-
    
    -property-
      -name-hive.orc.splits.include.file.footer-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.prewarm.enabled-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.prewarm.numcontainers-/name-
      -value-3-/value-
    -/property-
    
    -property-
      -name-hive.security.authenticator.manager-/name-
      -value-org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator-/value-
    -/property-
    
    -property-
      -name-hive.security.authorization.enabled-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.security.authorization.manager-/name-
      -value-org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory-/value-
    -/property-
    
    -property-
      -name-hive.security.metastore.authenticator.manager-/name-
      -value-org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator-/value-
    -/property-
    
    -property-
      -name-hive.security.metastore.authorization.auth.reads-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.security.metastore.authorization.manager-/name-
      -value-org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider-/value-
    -/property-
    
    -property-
      -name-hive.server2.allow.user.substitution-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.server2.authentication-/name-
      -value-NONE-/value-
    -/property-
    
    -property-
      -name-hive.server2.authentication.spnego.keytab-/name-
      -value-/etc/security/keytabs/spnego.service.keytab-/value-
    -/property-
    
    -property-
      -name-hive.server2.authentication.spnego.principal-/name-
      -value-HTTP/_HOST@EXAMPLE.COM-/value-
    -/property-
    
    -property-
      -name-hive.server2.enable.doAs-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.server2.logging.operation.enabled-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.server2.logging.operation.log.location-/name-
      -value-${system:java.io.tmpdir}/${system:user.name}/operation_logs-/value-
    -/property-
    
    -property-
      -name-hive.server2.support.dynamic.service.discovery-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-hive.server2.table.type.mapping-/name-
      -value-CLASSIC-/value-
    -/property-
    
    -property-
      -name-hive.server2.tez.default.queues-/name-
      -value-default-/value-
    -/property-
    
    -property-
      -name-hive.server2.tez.initialize.default.sessions-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.server2.tez.sessions.per.default.queue-/name-
      -value-1-/value-
    -/property-
    
    -property-
      -name-hive.server2.thrift.http.path-/name-
      -value-cliservice-/value-
    -/property-
    
    -property-
      -name-hive.server2.thrift.http.port-/name-
      -value-10001-/value-
    -/property-
    
    -property-
      -name-hive.server2.thrift.max.worker.threads-/name-
      -value-500-/value-
    -/property-
    
    -property-
      -name-hive.server2.thrift.port-/name-
      -value-10000-/value-
    -/property-
    
    -property-
      -name-hive.server2.thrift.sasl.qop-/name-
      -value-auth-/value-
    -/property-
    
    -property-
      -name-hive.server2.transport.mode-/name-
      -value-binary-/value-
    -/property-
    
    -property-
      -name-hive.server2.use.SSL-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-hive.server2.zookeeper.namespace-/name-
      <value>hiveserver2</value>
    </property>

    <property>
      <name>hive.smbjoin.cache.rows</name>
      <value>10000</value>
    </property>

    <property>
      <name>hive.stats.autogather</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.stats.dbclass</name>
      <value>fs</value>
    </property>

    <property>
      <name>hive.stats.fetch.column.stats</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.stats.fetch.partition.stats</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.support.concurrency</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.tez.auto.reducer.parallelism</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.tez.container.size</name>
      <value>250</value>
    </property>

    <property>
      <name>hive.tez.cpu.vcores</name>
      <value>-1</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning.max.data.size</name>
      <value>104857600</value>
    </property>

    <property>
      <name>hive.tez.dynamic.partition.pruning.max.event.size</name>
      <value>1048576</value>
    </property>

    <property>
      <name>hive.tez.input.format</name>
      <value>org.apache.hadoop.hive.ql.io.HiveInputFormat</value>
    </property>

    <property>
      <name>hive.tez.java.opts</name>
      <value>-server -Xmx200m -Djava.net.preferIPv4Stack=true</value>
    </property>

    <property>
      <name>hive.tez.log.level</name>
      <value>INFO</value>
    </property>

    <property>
      <name>hive.tez.max.partition.factor</name>
      <value>2.0</value>
    </property>

    <property>
      <name>hive.tez.min.partition.factor</name>
      <value>0.25</value>
    </property>

    <property>
      <name>hive.tez.smb.number.waves</name>
      <value>0.5</value>
    </property>

    <property>
      <name>hive.txn.manager</name>
      <value>org.apache.hadoop.hive.ql.lockmgr.DbTxnManager</value>
    </property>

    <property>
      <name>hive.txn.max.open.batch</name>
      <value>1000</value>
    </property>

    <property>
      <name>hive.txn.timeout</name>
      <value>300</value>
    </property>

    <property>
      <name>hive.user.install.directory</name>
      <value>/user/</value>
    </property>

    <property>
      <name>hive.users.in.admin.role</name>
      <value>hue,hive</value>
    </property>

    <property>
      <name>hive.vectorized.execution.enabled</name>
      <value>true</value>
    </property>

    <property>
      <name>hive.vectorized.execution.reduce.enabled</name>
      <value>false</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.checkinterval</name>
      <value>4096</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.flush.percent</name>
      <value>0.1</value>
    </property>

    <property>
      <name>hive.vectorized.groupby.maxentries</name>
      <value>100000</value>
    </property>

    <property>
      <name>hive.zookeeper.client.port</name>
      <value>2181</value>
    </property>

    <property>
      <name>hive.zookeeper.namespace</name>
      <value>hive_zookeeper_namespace</value>
    </property>

    <property>
      <name>hive.zookeeper.quorum</name>
      <value>sandbox.hortonworks.com:2181</value>
    </property>

    <property>
      <name>hive_metastore_user_passwd</name>
      <value>hive</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionDriverName</name>
      <value>com.mysql.jdbc.Driver</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionPassword</name>
      <value>hive</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionURL</name>
      <value>jdbc:mysql://sandbox.hortonworks.com/hive?createDatabaseIfNotExist=true</value>
    </property>

    <property>
      <name>javax.jdo.option.ConnectionUserName</name>
      <value>hive</value>
    </property>

  </configuration>
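
The four javax.jdo.option.* properties above point the Hive metastore at the local MySQL instance. A minimal sanity check of those credentials, assuming the third-party mysql-connector-python package is available (not something this dump confirms):

# Sketch: verify the metastore database settings from hive-site.xml.
# Host/user/password/database come verbatim from javax.jdo.option.* above;
# TBLS is the metastore table that catalogs Hive tables.
import mysql.connector

conn = mysql.connector.connect(
    host="sandbox.hortonworks.com",
    user="hive",
    password="hive",
    database="hive",
)
cur = conn.cursor()
cur.execute("SELECT COUNT(*) FROM TBLS")
print("tables registered in the metastore:", cur.fetchone()[0])
conn.close()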

ivysettings.xml

<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
   -->
<!--This file is used by grapes to download dependencies from a maven repository.
    This is just a template and can be edited to add more repositories.
-->
<ivysettings>
  <!--name of the defaultResolver should always be 'downloadGrapes'. -->
  <settings defaultResolver="downloadGrapes"/>
  <resolvers>
    <!-- more resolvers can be added here -->
    <chain name="downloadGrapes">
      <!-- This resolver uses ibiblio to find artifacts, compatible with maven2 repository -->
      <ibiblio name="central" m2compatible="true"/>
      <!-- File resolver to add jars from the local system. -->
      <filesystem name="test" checkmodified="true">
        <artifact pattern="/tmp/[module]-[revision](-[classifier]).jar" />
      </filesystem>
    </chain>
  </resolvers>
</ivysettings>
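
The filesystem resolver's artifact pattern uses Ivy [token] placeholders, with the parenthesised classifier part optional. A sketch of how the pattern expands (the module names below are invented examples, not artifacts present on this machine):

# Illustrative only: expand Ivy's "/tmp/[module]-[revision](-[classifier]).jar"
# pattern the way the filesystem resolver does.
def expand(module, revision, classifier=None):
    suffix = f"-{classifier}" if classifier else ""  # the "(...)" part is optional
    return f"/tmp/{module}-{revision}{suffix}.jar"

print(expand("commons-lang", "2.6"))         # /tmp/commons-lang-2.6.jar
print(expand("guava", "14.0.1", "sources"))  # /tmp/guava-14.0.1-sources.jar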

mapred-site.xml

<!--Tue Jul 21 16:44:09 2015-->
    <configuration>

    <property>
      <name>io.sort.mb</name>
      <value>64</value>
    </property>

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx200m</value>
    </property>

    <property>
      <name>mapred.job.map.memory.mb</name>
      <value>250</value>
    </property>

    <property>
      <name>mapred.job.reduce.memory.mb</name>
      <value>250</value>
    </property>

    <property>
      <name>mapreduce.admin.map.child.java.opts</name>
      <value>-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value>
    </property>

    <property>
      <name>mapreduce.admin.reduce.child.java.opts</name>
      <value>-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value>
    </property>

    <property>
      <name>mapreduce.admin.user.env</name>
      <value>LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64</value>
    </property>

    <property>
      <name>mapreduce.am.max-attempts</name>
      <value>2</value>
    </property>

    <property>
      <name>mapreduce.application.classpath</name>
      <value>$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure</value>
    </property>

    <property>
      <name>mapreduce.application.framework.path</name>
      <value>/hdp/apps/${hdp.version}/mapreduce/mapreduce.tar.gz#mr-framework</value>
    </property>

    <property>
      <name>mapreduce.cluster.administrators</name>
      <value> hadoop</value>
    </property>

    <property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
    </property>

    <property>
      <name>mapreduce.job.counters.max</name>
      <value>130</value>
    </property>

    <property>
      <name>mapreduce.job.emit-timeline-data</name>
      <value>false</value>
    </property>

    <property>
      <name>mapreduce.job.reduce.slowstart.completedmaps</name>
      <value>0.05</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.address</name>
      <value>sandbox.hortonworks.com:10020</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.bind-host</name>
      <value>0.0.0.0</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.done-dir</name>
      <value>/mr-history/done</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.intermediate-done-dir</name>
      <value>/mr-history/tmp</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.recovery.enable</name>
      <value>true</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.recovery.store.class</name>
      <value>org.apache.hadoop.mapreduce.v2.hs.HistoryServerLeveldbStateStoreService</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.recovery.store.leveldb.path</name>
      <value>/hadoop/mapreduce/jhs</value>
    </property>

    <property>
      <name>mapreduce.jobhistory.webapp.address</name>
      <value>sandbox.hortonworks.com:19888</value>
    </property>

    <property>
      <name>mapreduce.map.java.opts</name>
      <value>-Xmx200m</value>
    </property>

    <property>
      <name>mapreduce.map.log.level</name>
      <value>INFO</value>
    </property>

    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>250</value>
    </property>

    <property>
      <name>mapreduce.map.output.compress</name>
      <value>false</value>
    </property>

    <property>
      <name>mapreduce.map.sort.spill.percent</name>
      <value>0.7</value>
    </property>

    <property>
      <name>mapreduce.map.speculative</name>
      <value>false</value>
    </property>

    <property>
      <name>mapreduce.output.fileoutputformat.compress</name>
      <value>false</value>
    </property>

    <property>
      <name>mapreduce.output.fileoutputformat.compress.type</name>
      <value>BLOCK</value>
    </property>

    <property>
      <name>mapreduce.reduce.input.buffer.percent</name>
      <value>0.0</value>
    </property>

    <property>
      <name>mapreduce.reduce.java.opts</name>
      <value>-Xmx200m</value>
    </property>

    <property>
      <name>mapreduce.reduce.log.level</name>
      <value>INFO</value>
    </property>

    <property>
      <name>mapreduce.reduce.memory.mb</name>
      <value>250</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.fetch.retry.enabled</name>
      <value>1</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.fetch.retry.interval-ms</name>
      <value>1000</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.fetch.retry.timeout-ms</name>
      <value>30000</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
      <value>0.7</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.merge.percent</name>
      <value>0.66</value>
    </property>

    <property>
      <name>mapreduce.reduce.shuffle.parallelcopies</name>
      <value>30</value>
    </property>

    <property>
      <name>mapreduce.reduce.speculative</name>
      <value>false</value>
    </property>

    <property>
      <name>mapreduce.shuffle.port</name>
      <value>13562</value>
    </property>

    <property>
      <name>mapreduce.task.io.sort.factor</name>
      <value>100</value>
    </property>

    <property>
      <name>mapreduce.task.io.sort.mb</name>
      <value>64</value>
    </property>

    <property>
      <name>mapreduce.task.timeout</name>
      <value>300000</value>
    </property>

    <property>
      <name>yarn.app.mapreduce.am.admin-command-opts</name>
      <value>-Dhdp.version=${hdp.version}</value>
    </property>

    <property>
      <name>yarn.app.mapreduce.am.command-opts</name>
      <value>-Xmx200m</value>
    </property>

    <property>
      <name>yarn.app.mapreduce.am.log.level</name>
      <value>INFO</value>
    </property>

    <property>
      <name>yarn.app.mapreduce.am.resource.mb</name>
      <value>250</value>
    </property>

    <property>
      <name>yarn.app.mapreduce.am.staging-dir</name>
      <value>/user</value>
    </property>

  </configuration>
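
Note the pairing that runs through this file: each 250 MB YARN allocation (mapreduce.map.memory.mb, mapreduce.reduce.memory.mb, yarn.app.mapreduce.am.resource.mb) carries a 200 MB JVM heap (-Xmx200m), leaving headroom for JVM overhead inside the container. A sketch of that sizing rule; the 0.8 factor is the customary convention, an assumption rather than a value captured in this dump:

# Heap-inside-container sizing convention implied by the settings above.
def heap_for_container(container_mb, factor=0.8):
    """Return an -Xmx value (MB) leaving ~20% of the container for JVM overhead."""
    return int(container_mb * factor)

for container in (250, 1024, 4096):
    print(f"container {container} MB -> -Xmx{heap_for_container(container)}m")
# container 250 MB -> -Xmx200m, matching mapreduce.map.memory.mb / -Xmx200m above.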

ranger-hive-audit.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
--><?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration xmlns:xi="http://www.w3.org/2001/XInclude">
	<property>
		<name>xasecure.audit.is.enabled</name>
		<value>true</value>
	</property>

	<!-- DB audit provider configuration -->
	<property>
		<name>xasecure.audit.db.is.enabled</name>
		<value>true</value>
	</property>

	<property>
		<name>xasecure.audit.db.is.async</name>
		<value>true</value>
	</property>

	<property>
		<name>xasecure.audit.db.async.max.queue.size</name>
		<value>10240</value>
	</property>
	<property>
		<name>xasecure.audit.db.async.max.flush.interval.ms</name>
		<value>30000</value>
	</property>
	<property>
		<name>xasecure.audit.db.batch.size</name>
		<value>100</value>
	</property>
	<!-- Properties whose name begin with "xasecure.audit.jpa." are used to configure JPA -->
	<property>
		<name>xasecure.audit.jpa.javax.persistence.jdbc.url</name>
		<value>jdbc:mysql://localhost/ranger_audit</value>
	</property>
	<property>
		<name>xasecure.audit.jpa.javax.persistence.jdbc.user</name>
		<value>rangerlogger</value>
	</property>
	<property>
		<name>xasecure.audit.jpa.javax.persistence.jdbc.password</name>
		<value>crypted</value>
	</property>
	<property>
		<name>xasecure.audit.jpa.javax.persistence.jdbc.driver</name>
		<value>com.mysql.jdbc.Driver</value>
	</property>
	<property>
		<name>xasecure.audit.credential.provider.file</name>
		<value>jceks://file/etc/ranger/sandbox_hive/cred.jceks</value>
	</property>
	<!-- HDFS audit provider configuration -->
	<property>
		<name>xasecure.audit.hdfs.is.enabled</name>
		<value>true</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.is.async</name>
		<value>true</value>
	</property>

	<property>
		<name>xasecure.audit.hdfs.async.max.queue.size</name>
		<value>1048576</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.async.max.flush.interval.ms</name>
		<value>30000</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.encoding</name>
		<value/>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.destination.directory</name>
		<value>hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.destination.file</name>
		<value>%hostname%-audit.log</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.destination.flush.interval.seconds</name>
		<value>900</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.destination.rollover.interval.seconds</name>
		<value>86400</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.destination.open.retry.interval.seconds</name>
		<value>60</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.buffer.directory</name>
		<value>/var/log/hive/audit/%app-type%</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.buffer.file</name>
		<value>%time:yyyyMMdd-HHmm.ss%.log</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.buffer.file.buffer.size.bytes</name>
		<value>8192</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.buffer.flush.interval.seconds</name>
		<value>60</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.buffer.rollover.interval.seconds</name>
		<value>600</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.archive.directory</name>
		<value>/var/log/hive/audit/archive/%app-type%</value>
	</property>
	<property>
		<name>xasecure.audit.hdfs.config.local.archive.max.file.count</name>
		<value>10</value>
	</property>

	<!-- Log4j audit provider configuration -->
	<property>
		<name>xasecure.audit.log4j.is.enabled</name>
		<value>false</value>
	</property>
	<property>
		<name>xasecure.audit.log4j.is.async</name>
		<value>false</value>
	</property>

	<property>
		<name>xasecure.audit.log4j.async.max.queue.size</name>
		<value>10240</value>
	</property>
	<property>
		<name>xasecure.audit.log4j.async.max.flush.interval.ms</name>
		<value>30000</value>
	</property>

	<!-- Kafka audit provider configuration -->
	<property>
		<name>xasecure.audit.kafka.is.enabled</name>
		<value>false</value>
	</property>
	<property>
		<name>xasecure.audit.kafka.async.max.queue.size</name>
		<value>1</value>
	</property>
	<property>
		<name>xasecure.audit.kafka.async.max.flush.interval.ms</name>
		<value>1000</value>
	</property>

	<property>
		<name>xasecure.audit.kafka.broker_list</name>
		<value>localhost:9092</value>
	</property>
	<property>
		<name>xasecure.audit.kafka.topic_name</name>
		<value>ranger_audits</value>
	</property>

	<!-- Ranger audit provider configuration -->
	<property>
		<name>xasecure.audit.solr.is.enabled</name>
		<value>false</value>
	</property>

	<property>
		<name>xasecure.audit.solr.async.max.queue.size</name>
		<value>1</value>
	</property>
	<property>
		<name>xasecure.audit.solr.async.max.flush.interval.ms</name>
		<value>1000</value>
	</property>

	<property>
		<name>xasecure.audit.solr.solr_url</name>
		<value>http://localhost:6083/solr/ranger_audits</value>
	</property>

	<property>
		<name>xasecure.audit.destination.solr</name>
		<value>false</value>
	</property>
	<property>
		<name>xasecure.audit.destination.solr.urls</name>
		<value>NONE</value>
	</property>
	<property>
		<name>xasecure.audit.destination.solr.user</name>
		<value>NONE</value>
	</property>
	<property>
		<name>xasecure.audit.destination.solr.password</name>
		<value>NONE</value>
	</property>
	<property>
		<name>xasecure.audit.destination.solr.zookeepers</name>
		<value>NONE</value>
	</property>
	<property>
		<name>xasecure.audit.destination.solr.batch.filespool.dir</name>
		<value>/var/log/hive/audit/solr/spool</value>
	</property>
	<property>
		<name>xasecure.audit.destination.hdfs</name>
		<value>false</value>
	</property>
	<property>
		<name>xasecure.audit.destination.hdfs.batch.filespool.dir</name>
		<value>/var/log/hive/audit/hdfs/spool</value>
	</property>
	<property>
		<name>xasecure.audit.destination.hdfs.dir</name>
		<value>hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit</value>
	</property>
</configuration>
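
The HDFS destination settings use Ranger's %...% escapes: %app-type%, %hostname%, and %time:FORMAT% (a Java date pattern). A sketch of how the audit path expands; the substitution is done by the Ranger plugin itself, Python's strftime merely stands in for the Java format, and "hiveServer2" is an assumed app-type value:

# Expand the Ranger-style placeholders from
# xasecure.audit.hdfs.config.destination.directory/.file above.
import socket
from datetime import datetime

directory = "hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%"
filename = "%hostname%-audit.log"
expanded = (directory.replace("%app-type%", "hiveServer2")
                     .replace("%time:yyyyMMdd%", datetime.now().strftime("%Y%m%d"))
            + "/" + filename.replace("%hostname%", socket.gethostname()))
print(expanded)  # e.g. .../ranger/audit/hiveServer2/20150727/sandbox-audit.log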

ranger-hive-security.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
--><?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration xmlns:xi="http://www.w3.org/2001/XInclude">
	<property>
		<name>ranger.plugin.hive.service.name</name>
		<value>sandbox_hive</value>
		<description>
			Name of the Ranger service containing policies for this Hive instance
		</description>
	</property>
	<property>
		<name>ranger.plugin.hive.policy.source.impl</name>
		<value>org.apache.ranger.admin.client.RangerAdminRESTClient</value>
		<description>
			Class to retrieve policies from the source
		</description>
	</property>
	<property>
		<name>ranger.plugin.hive.policy.rest.url</name>
		<value>http://sandbox.hortonworks.com:6080</value>
		<description>
			URL to Ranger Admin
		</description>
	</property>
	<property>
		<name>ranger.plugin.hive.policy.rest.ssl.config.file</name>
		<value>/etc/hive/conf/ranger-policymgr-ssl.xml</value>
		<description>
			Path to the file containing SSL details to contact Ranger Admin
		</description>
	</property>
	<property>
		<name>ranger.plugin.hive.policy.pollIntervalMs</name>
		<value>5000</value>
		<description>
			How often to poll for changes in policies?
		</description>
	</property>
	<property>
		<name>ranger.plugin.hive.policy.cache.dir</name>
		<value>/etc/ranger/sandbox_hive/policycache</value>
		<description>
			Directory where Ranger policies are cached after successful retrieval from the source
		</description>
	</property>
	<property>
		<name>xasecure.hive.update.xapolicies.on.grant.revoke</name>
		<value>true</value>
		<description>Should Hive plugin update Ranger policies for updates to permissions done using GRANT/REVOKE?</description>
	</property>
</configuration>
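
The plugin polls the Ranger Admin URL above every 5000 ms (policy.pollIntervalMs) and falls back to the policycache directory when the admin is down. A minimal reachability probe of the configured endpoint; this only checks that the web application answers, while the real plugin goes through RangerAdminRESTClient and a policy-download API not reproduced here:

# Probe ranger.plugin.hive.policy.rest.url using only the standard library.
import urllib.request

try:
    with urllib.request.urlopen("http://sandbox.hortonworks.com:6080", timeout=5) as resp:
        print("Ranger Admin reachable, HTTP", resp.status)
except OSError as exc:
    print("Ranger Admin unreachable:", exc)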

ranger-policymgr-ssl.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
--><?xml-stylesheet type="text/xsl" href="configuration.xsl"?><configuration xmlns:xi="http://www.w3.org/2001/XInclude">
	<!-- The following properties are used for 2-way SSL client server validation -->
	<property>
		<name>xasecure.policymgr.clientssl.keystore</name>
		<value>/etc/hive/conf/ranger-plugin-keystore.jks</value>
		<description>
			Java keystore file
		</description>
	</property>
	<property>
		<name>xasecure.policymgr.clientssl.keystore.password</name>
		<value>myKeyFilePassword</value>
		<description>
			Password for the keystore
		</description>
	</property>
	<property>
		<name>xasecure.policymgr.clientssl.truststore</name>
		<value>/etc/hive/conf/ranger-plugin-truststore.jks</value>
		<description>
			Java truststore file
		</description>
	</property>
	<property>
		<name>xasecure.policymgr.clientssl.truststore.password</name>
		<value>changeit</value>
		<description>
			Java truststore password
		</description>
	</property>
	<property>
		<name>xasecure.policymgr.clientssl.keystore.credential.file</name>
		<value>jceks://file/etc/ranger/sandbox_hive/cred.jceks</value>
		<description>
			Java keystore credential file
		</description>
	</property>
	<property>
		<name>xasecure.policymgr.clientssl.truststore.credential.file</name>
		<value>jceks://file/etc/ranger/sandbox_hive/cred.jceks</value>
		<description>
			Java truststore credential file
		</description>
	</property>
</configuration>
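
These keystore/truststore pairs are only consulted for two-way (mutual) TLS to Ranger Admin; on this sandbox the policy REST URL is plain HTTP, so they are effectively dormant. For illustration, the handshake they would enable looks like the following. Python's ssl module reads PEM rather than JKS, so this assumes the stores were exported to PEM first; the file names and the 6182 HTTPS port are assumptions, not values from this dump:

# Sketch of a mutual-TLS client using PEM exports of the JKS stores above.
import socket
import ssl

ctx = ssl.create_default_context(cafile="truststore.pem")          # trust anchor
ctx.load_cert_chain(certfile="client.pem", keyfile="client.key")   # client identity
with socket.create_connection(("sandbox.hortonworks.com", 6182)) as sock:
    with ctx.wrap_socket(sock, server_hostname="sandbox.hortonworks.com") as tls:
        print("negotiated", tls.version())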

ranger-security.xml

<ranger>
<enabled>Tue Jul 21 20:16:25 UTC 2015</enabled>
</ranger>

hue

/etc/hue/conf:
-rwxr-xr-x 1 root    root     1785 2015-07-14 16:50 hue_httpd.conf
-rw-r--r-- 1 vagrant vagrant 18469 2015-07-21 16:17 hue.ini
-rwxr-xr-x 1 root    root     1984 2015-07-14 16:50 log.conf

hue_httpd.conf

Listen 8000
WSGIPythonHome /usr/lib/hue/build/env
WSGIPythonPath /usr/lib/hue/build/env/bin/python
<VirtualHost *:8000>
  ServerName <FQDN>
  ## WSGI settings
  WSGIDaemonProcess hue_httpd display-name=hue_httpd processes=8 threads=10 user=hue
  WSGIScriptAlias / /usr/lib/hue/desktop/core/src/desktop/wsgi.py
  <Directory /usr/lib/hue/desktop/core/src/desktop>
    Order deny,allow
    Allow from all
  </Directory>
  Alias "/static/" "/usr/lib/hue/desktop/core/static/"
  Alias "/about/static/" "/usr/lib/hue/apps/about/static/"
  Alias "/beeswax/static/" "/usr/lib/hue/apps/beeswax/static/"
  Alias "/filebrowser/static/" "/usr/lib/hue/apps/filebrowser/src/filebrowser/static/"
  Alias "/hcatalog/static/" "/usr/lib/hue/apps/hcatalog/src/hcatalog/static/"
  Alias "/help/static/" "/usr/lib/hue/apps/help/src/help/static/"
  Alias "/jobbrowser/static/" "/usr/lib/hue/apps/jobbrowser/static/"
  Alias "/jobsub/static/" "/usr/lib/hue/apps/jobsub/static/"
  Alias "/oozie/static/" "/usr/lib/hue/apps/oozie/static/"
  Alias "/pig/static/" "/usr/lib/hue/apps/pig/src/pig/static/"
  Alias "/shell/static/" "/usr/lib/hue/apps/shell/src/shell/static/"
  Alias "/useradmin/static/" "/usr/lib/hue/apps/useradmin/static/"
  <IfModule mod_expires.c>
    <FilesMatch "\.(jpg|gif|png|css|js)$">
      ExpiresActive on
      ExpiresDefault "access plus 1 day"
    </FilesMatch>
  </IfModule>
  ## SSL part
  # SSLEngine on
  # SSLOptions +StrictRequire
  # SSLProtocol -all +TLSv1 +SSLv3
  # SSLCipherSuite HIGH:MEDIUM:!aNULL:+SHA1:+MD5:+HIGH:+MEDIUM
  # SSLCertificateFile /etc/ssl/hue.crt
  # SSLCertificateKeyFile /etc/ssl/hue.key
  # SSLProxyEngine off
  ## Logging
  ErrorLog /var/log/httpd/error_hue_httpd_log
  CustomLog /var/log/httpd/access_hue_httpd_log combined
</VirtualHost>
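
WSGIScriptAlias routes every request under / to the application callable in Hue's wsgi.py, executed by the WSGIDaemonProcess pool declared above. For orientation, this is the interface mod_wsgi expects; a generic stand-in, not Hue's actual wsgi.py, which exposes a full Django application object:

# Shape of the callable that WSGIScriptAlias points at.
def application(environ, start_response):
    body = b"hello from a WSGI app\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]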

hue.ini

[desktop]
  
  kredentials_dir="/tmp"
  send_dbug_messages=1
  # To show database transactions, set database_logging to 1
  database_logging=0
  # Set this to a random string, the longer the better.
  # This is used for secure hashing in the session store.
  secret_key=secretkeysecretkeysecretkeysecretkey
  # Webserver listens on this address and port
  http_host=0.0.0.0
  http_port=8000
  # Time zone name
  time_zone=America/Los_Angeles
  # Turn off debug
  django_debug_mode=1
  # Turn off backtrace for server error
  http_500_debug_mode=1
  # Server email for internal error messages
  ## django_server_email='hue@localhost.localdomain'
  # Email backend
  ## django_email_backend=django.core.mail.backends.smtp.EmailBackend
  # Set to true to use CherryPy as the webserver, set to false
  # to use Spawning as the webserver. Defaults to Spawning if
  # key is not specified.
  use_cherrypy_server=false
  # Webserver runs as this user
  server_user=hue
  server_group=hadoop
  # If set to false, runcpserver will not actually start the web server.
  # Used if Apache is being used as a WSGI container.
  ## enable_server=yes
  # Number of threads used by the CherryPy web server
  ## cherrypy_server_threads=10
  # Filename of SSL Certificate
  ## ssl_certificate=
  # Filename of SSL RSA Private Key
  ## ssl_private_key=
  # Default encoding for site data
  ## default_site_encoding=utf-8
  # Options for X_FRAME_OPTIONS header. Default is SAMEORIGIN
  x_frame_options='ALLOWALL'
  # Administrators
  # ----------------
  [[django_admins]]
    ## [[[admin1]]]
    ## name=john
    ## email=john@doe.com
  # UI customizations
  # -------------------
  [[custom]]
  # Top banner HTML code
  ## banner_top_html=
  # Top about page HTML code
  ## about_top_html='''<div><a href="/dump_config">Visit the Hue Configuration page</a></div>'''
  # Configuration options for user authentication into the web application
  # ------------------------------------------------------------------------
  [[auth]]
    # Authentication backend. Common settings are:
    # - django.contrib.auth.backends.ModelBackend (entirely Django backend)
    # - desktop.auth.backend.AllowAllBackend (allows everyone)
    # - desktop.auth.backend.AllowFirstUserDjangoBackend
    #     (Default. Relies on Django and user manager, after the first login)
    # - desktop.auth.backend.LdapBackend
    # - desktop.auth.backend.PamBackend
    # - desktop.auth.backend.SpnegoDjangoBackend
    # - desktop.auth.backend.RemoteUserDjangoBackend
    backend=desktop.auth.backend.AllowFirstUserDjangoBackend
    ## pam_service=login
    # When using the desktop.auth.backend.RemoteUserDjangoBackend, this sets
    # the normalized name of the header that contains the remote user.
    # The HTTP header in the request is converted to a key by converting
    # all characters to uppercase, replacing any hyphens with underscores
    # and adding an HTTP_ prefix to the name. So, for example, if the header
    # is called Remote-User that would be configured as HTTP_REMOTE_USER
    #
    # Defaults to HTTP_REMOTE_USER
    ## remote_user_header=HTTP_REMOTE_USER
  # Configuration options for connecting to LDAP and Active Directory
  # -------------------------------------------------------------------
  [[ldap]]
  # The search base for finding users and groups
  ## base_dn="DC=mycompany,DC=com"
  # The NT domain to connect to (only for use with Active Directory)
  ## nt_domain=mycompany.com
  # URL of the LDAP server
  ## ldap_url=ldap://auth.mycompany.com
  # Path to certificate for authentication over TLS
  ## ldap_cert=
  # Distinguished name of the user to bind as -- not necessary if the LDAP server
  # supports anonymous searches
  ## bind_dn="CN=ServiceAccount,DC=mycompany,DC=com"
  # Password of the bind user -- not necessary if the LDAP server supports
  # anonymous searches
  ## bind_password=
  # Pattern for searching for usernames -- Use <username> for the parameter
  # For use when using LdapBackend for Hue authentication
  ## ldap_username_pattern="uid=<username>,ou=People,dc=mycompany,dc=com"
  # Create users in Hue when they try to login with their LDAP credentials
  # For use when using LdapBackend for Hue authentication
  ## create_users_on_login=true
      [[[users]]]
      # Base filter for searching for users
      ## user_filter="objectclass=*"
      # The username attribute in the LDAP schema
      ## user_name_attr=sAMAccountName
      [[[groups]]]
      # Base filter for searching for groups
      ## group_filter="objectclass=*"
      # The username attribute in the LDAP schema
      ## group_name_attr=cn
  # Configuration options for specifying the Desktop Database.  For more info,
  # see http://docs.djangoproject.com/en/1.1/ref/settings/#database-engine
  # ------------------------------------------------------------------------
  [[database]]
    engine=sqlite3
    name=/var/lib/hue/desktop.db
    # Database engine is typically one of:
    # postgresql_psycopg2, mysql, or sqlite3
    #
    # Note that for sqlite3, 'name', below is a filename;
    # for other backends, it is the database name.
    ## engine=sqlite3
    ## host=
    ## port=
    ## user=
    ## password=
    ## name=
  # Configuration options for connecting to an external SMTP server
  # ------------------------------------------------------------------------
  [[smtp]]
    # The SMTP server information for email notification delivery
    host=localhost
    port=25
    user=
    password=
    # Whether to use a TLS (secure) connection when talking to the SMTP server
    tls=no
    # Default email address to use for various automated notification from Hue
    ## default_from_email=hue@localhost
  # Configuration options for Kerberos integration for secured Hadoop clusters
  # ------------------------------------------------------------------------
  [[kerberos]]
    # Path to Hue's Kerberos keytab file
    ## hue_keytab=/etc/security/keytabs/hue.service.keytab
    # Kerberos principal name for Hue
    ## hue_principal=hue/IP
    # Path to kinit
    ## kinit_path=/usr/bin/kinit
    ## Frequency in seconds with which Hue will renew its keytab. Default 1h.
    ## reinit_frequency=3600
    ## Path to keep Kerberos credentials cached.
    ## ccache_path=/tmp/hue_krb5_ccache
[hadoop]
  # Configuration for HDFS NameNode
  # ------------------------------------------------------------------------
  [[hdfs_clusters]]
    [[[default]]]
      # Enter the filesystem uri
      fs_defaultfs=hdfs://sandbox.hortonworks.com:8020
      # Use WebHdfs/HttpFs as the communication mechanism. To fallback to
      # using the Thrift plugin (used in Hue 1.x), this must be uncommented
      # and explicitly set to the empty value.
      webhdfs_url=http://sandbox.hortonworks.com:50070/webhdfs/v1/
      # security_enabled=true
      # Settings about this HDFS cluster. If you install HDFS in a
      # different location, you need to set the following.
      # Defaults to $HADOOP_HDFS_HOME or /usr/lib/hadoop-hdfs
      ## hadoop_hdfs_home=/usr/lib/hadoop/lib
      # Defaults to $HADOOP_BIN or /usr/bin/hadoop
      ## hadoop_bin=/usr/bin/hadoop
      # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
      ## hadoop_conf_dir=/etc/hadoop/conf
  # Configuration for MapReduce JobTracker
  # ------------------------------------------------------------------------
  [[mapred_clusters]]
    ## [[[default]]]
      # Enter the host on which you are running the Hadoop JobTracker
      ## jobtracker_host=sandbox.hortonworks.com
      # The port where the JobTracker IPC listens on
      ## jobtracker_port=50300
      # Thrift plug-in port for the JobTracker
      ## thrift_port=9290
      # Whether to submit jobs to this cluster
      ## submit_to=true
      # Job tracker kerberos principal
      ## jt_kerberos_principal=jt
      ## security_enabled=true
      # Settings about this MR1 cluster. If you install MR1 in a
      # different location, you need to set the following.
      # Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
      ## hadoop_mapred_home=/usr/lib/hadoop/lib
      # Defaults to $HADOOP_BIN or /usr/bin/hadoop
      ## hadoop_bin=/usr/bin/hadoop
      # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
      ## hadoop_conf_dir=/etc/hadoop/conf
  # Configuration for Yarn
  # ------------------------------------------------------------------------
  [[yarn_clusters]]
    [[[default]]]
      # Enter the host on which you are running the ResourceManager
      resourcemanager_host=sandbox.hortonworks.com
      # The port where the ResourceManager IPC listens on
      resourcemanager_port=8050
      # Whether to submit jobs to this cluster
      submit_to=true
      ## security_enabled=false
      # Settings about this MR2 cluster. If you install MR2 in a
      # different location, you need to set the following.
      # Defaults to $HADOOP_MR2_HOME or /usr/lib/hadoop-mapreduce
      ## hadoop_mapred_home=/usr/lib/hadoop-mapreduce
      # Defaults to $HADOOP_BIN or /usr/bin/hadoop
      ## hadoop_bin=/usr/bin/hadoop
      # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
      ## hadoop_conf_dir=/etc/hadoop/conf
      # URL of the ResourceManager API
      resourcemanager_api_url=http://sandbox.hortonworks.com:8088
      # URL of the ProxyServer API
      proxy_api_url=http://sandbox.hortonworks.com:8088
      # URL of the HistoryServer API
      history_server_api_url=http://sandbox.hortonworks.com:19888
      # URL of the NodeManager API
      node_manager_api_url=http://sandbox.hortonworks.com:8042
[liboozie]
  # The URL where the Oozie service runs on. This is required in order for
  # users to submit jobs.
  oozie_url=http://sandbox.hortonworks.com:11000/oozie
  ## security_enabled=true
  # Location on HDFS where the workflows/coordinator are deployed when submitted.
  ## remote_deployement_dir=/user/hue/oozie/deployments
[oozie]
  # Location on local FS where the examples are stored.
  ## local_data_dir=..../examples
  # Location on local FS where the data for the examples is stored.
  ## sample_data_dir=...thirdparty/sample_data
  # Location on HDFS where the oozie examples and workflows are stored.
  ## remote_data_dir=/user/hue/oozie/workspaces
  # Share workflows and coordinators information with all users. If set to false,
  # they will be visible only to the owner and administrators.
  ## share_jobs=true
  # Maximum number of Oozie workflows or coordinators to retrieve in one API call.
  ## oozie_jobs_count=100
[beeswax]
  # Host where Beeswax server Thrift daemon is running.
  # If Kerberos security is enabled, the fully-qualified domain name (FQDN) is
  # required, even if the Thrift daemon is running on the same host as Hue.
  ## beeswax_server_host=<FQDN of Beeswax Server>
  # Port where Beeswax Thrift server runs on.
  ## beeswax_server_port=8002
  # Host where internal metastore Thrift daemon is running.
  ## beeswax_meta_server_host=localhost
  # Configure the port the internal metastore daemon runs on.
  # Used only if hive.metastore.local is true.
  ## beeswax_meta_server_port=8003
  # Hive home directory
  ## hive_home_dir=/usr/lib/hive
  # Hive configuration directory, where hive-site.xml is located
  ## hive_conf_dir=/etc/hive/conf
  # Timeout in seconds for thrift calls to beeswax service
  ## beeswax_server_conn_timeout=120
  # Timeout in seconds for thrift calls to the hive metastore
  ## metastore_conn_timeout=10
  # Maximum Java heapsize (in megabytes) used by Beeswax Server.
  # Note that the setting of HADOOP_HEAPSIZE in $HADOOP_CONF_DIR/hadoop-env.sh
  # may override this setting.
  ## beeswax_server_heapsize=1000
  # Share saved queries with all users. If set to false, saved queries are
  # visible only to the owner and administrators.
  ## share_saved_queries=true
  # The backend to contact for queries/metadata requests
  # Choices are 'beeswax' (default), 'hiveserver2'.
  ## server_interface=beeswax
  # Option to show execution engine choice.
  show_execution_engine=True
[jobsub]
  # Location on HDFS where the jobsub examples and templates are stored.
  ## remote_data_dir=/user/hue/jobsub
  # Location on local FS where examples and template are stored.
  ## local_data_dir=..../data
  # Location on local FS where sample data is stored
  ## sample_data_dir=...thirdparty/sample_data
[jobbrowser]
  # Share submitted jobs information with all users. If set to false,
  # submitted jobs are visible only to the owner and administrators.
  ## share_jobs=true
[shell]
  # The shell_buffer_amount specifies the number of bytes of output per shell
  # that the Shell app will keep in memory. If not specified, it defaults to
  # 524288 (512 KiB).
  ## shell_buffer_amount=100
  # If you run Hue against a Hadoop cluster with Kerberos security enabled, the
  # Shell app needs to acquire delegation tokens for the subprocesses to work
  # correctly. These delegation tokens are stored as temporary files in some
  # directory. You can configure this directory here. If not specified, it
  # defaults to /tmp/hue_delegation_tokens.
  ## shell_delegation_token_dir=/tmp/hue_delegation_tokens
  [[ shelltypes ]]
    # Define and configure a new shell type "flume"
    # ------------------------------------------------------------------------
    #[[[ flume ]]]
    # nice_name="Flume Shell"
    # command="/usr/bin/flume shell"
    # help="The command-line Flume client interface."
    #  [[[[ environment ]]]]
        # You can specify environment variables for the Flume shell
        # in this section.
    # Define and configure a new shell type "pig"
    # ------------------------------------------------------------------------
    [[[ pig ]]]
      nice_name="Pig Shell (Grunt)"
      command="/usr/bin/pig -l /dev/null"
      help="The command-line interpreter for Pig"
      [[[[ environment ]]]]
        # You can specify environment variables for the Pig shell
        # in this section. Note that JAVA_HOME must be configured
        # for the Pig shell to run.
        [[[[[ JAVA_HOME ]]]]]
          value="/usr/jdk/jdk1.6.0_31/"
        [[[[[ PATH ]]]]]
          value = "/usr/local/bin:/bin:/usr/bin"
    # Define and configure a new shell type "hbase"
    # ------------------------------------------------------------------------
    [[[ hbase ]]]
      nice_name="HBase Shell"
      command="/usr/bin/hbase shell"
      help="The command-line HBase client interface."
      [[[[ environment ]]]]
        # You can configure environment variables for the HBase shell
        # in this section.
    # Define and configure a new shell type "r_shell"
    # ------------------------------------------------------------------------
    #[[[ r_shell ]]]
    #  nice_name="R shell"
    #  command="/usr/bin/R"
    #  help="The R language for Statistical Computing"
      #[[[[ environment ]]]]
        # You can configure environment variables for the R shell
        # in this section.
      #   [[[[[ JAVA_HOME ]]]]]
      #    value="/usr/jdk/jdk1.6.0_31/"
    # Define and configure a new shell type "bash" for testing only
    # ------------------------------------------------------------------------
    [[[ bash ]]]
      nice_name="Bash (Test only!!!)"
      command="/bin/bash"
      help="A shell that does not depend on Hadoop components"
[useradmin]
  # The name of the default user group that users will be a member of
  default_user_group=hadoop
  default_username=hue
  default_user_password=1111
[hcatalog]
  templeton_url="http://sandbox.hortonworks.com:50111/templeton/v1/"
  security_enabled=false
[about]
  tutorials_path="/usr/lib/tutorials/sandbox-tutorials"
  tutorials_update_script="/usr/lib/tutorials/tutorials_app/run/run.sh"
  tutorials_installed=True
  sandbox_version="2.3"
  # Tooltip title on about page
  ## about_page_title="Hue"
  # Title on about page
  ## about_title="Hue"
  sandbox=true
[pig]
  udf_path="/tmp/udfs"
[proxy]
  whitelist="(localhost|sandbox.hortonworks.com|127\.0\.0\.1):(50030|50070|50060|50075|50111)",
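
The [proxy] whitelist is a single host:port regular expression; Hue's proxy app refuses any target whose host and port do not match it. A quick check of the pattern above:

# Test host:port targets against the [proxy] whitelist regex.
import re

whitelist = re.compile(
    r"(localhost|sandbox.hortonworks.com|127\.0\.0\.1):(50030|50070|50060|50075|50111)")
for target in ("sandbox.hortonworks.com:50070", "sandbox.hortonworks.com:8088"):
    print(target, "->", "allowed" if whitelist.fullmatch(target) else "blocked")
# 50070 (NameNode web UI) is listed; 8088 is not in the port alternation.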

log.conf

[logger_root]
handlers=logfile,errorlog
[logger_access]
handlers=accesslog
qualname=access
[logger_shell_output]
handlers=shell_output_log
qualname=shell_output
[logger_shell_input]
handlers=shell_input_log
qualname=shell_input
[handler_stderr]
class=StreamHandler
formatter=default
level=INFO
args=(sys.stderr,)
[handler_accesslog]
class=handlers.RotatingFileHandler
level=INFO
propagate=True
formatter=access
args=('%LOG_DIR%/access.log', 'a', 1000000, 3)
[handler_errorlog]
class=handlers.RotatingFileHandler
level=ERROR
formatter=default
args=('%LOG_DIR%/error.log', 'a', 1000000, 3)
[handler_logfile]
class=handlers.RotatingFileHandler
level=INFO
formatter=default
args=('%LOG_DIR%/%PROC_NAME%.log', 'a', 1000000, 3)
[handler_shell_output_log]
class=handlers.RotatingFileHandler
level=INFO
formatter=default
args=('%LOG_DIR%/shell_output.log', 'a', 1000000, 3)
[handler_shell_input_log]
class=handlers.RotatingFileHandler
level=INFO
formatter=default
args=('%LOG_DIR%/shell_input.log', 'a', 1000000, 3)
[formatter_default]
format=[%(asctime)s] %(module)-12s %(levelname)-8s %(message)s
datefmt=%d/%b/%Y %H:%M:%S +0000
[formatter_access]
format=[%(asctime)s] %(levelname)-8s %(message)s
datefmt=%d/%b/%Y %H:%M:%S +0000
[loggers]
keys=root,access,shell_output,shell_input
[handlers]
keys=stderr,logfile,accesslog,errorlog,shell_output_log,shell_input_log
[formatters]
keys=default,access
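
log.conf is a standard logging.config.fileConfig file; Hue's launcher substitutes %LOG_DIR% and %PROC_NAME% before handing it to the logging module. A sketch of that consumption path, mimicking the substitution with plain string replacement (the /tmp/hue-logs directory and runcpserver name are invented for illustration):

# Render and load a fileConfig-style config like /etc/hue/conf/log.conf.
import logging.config
import pathlib
import tempfile

pathlib.Path("/tmp/hue-logs").mkdir(exist_ok=True)  # the file handlers need the log dir
raw = pathlib.Path("/etc/hue/conf/log.conf").read_text()
rendered = raw.replace("%LOG_DIR%", "/tmp/hue-logs").replace("%PROC_NAME%", "runcpserver")
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(rendered)
logging.config.fileConfig(f.name, disable_existing_loggers=False)
logging.getLogger("access").info("logging configured")  # routed to access.log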

kafka

/etc/kafka/conf:
-rw-r--r-- 1 root  root   1199 2015-07-14 14:12 consumer.properties
-rw-r--r-- 1 root  root    281 2015-07-14 14:12 kafka_client_jaas.conf
-rw-r--r-- 1 kafka root    680 2015-07-21 16:02 kafka-env.sh
-rw-r--r-- 1 kafka hadoop 3863 2015-07-21 16:02 log4j.properties
-rw-r--r-- 1 root  root   2228 2015-07-14 14:12 producer.properties
-rw-r--r-- 1 kafka hadoop 2356 2015-07-21 16:02 server.properties
-rw-r--r-- 1 root  root   3325 2015-07-14 14:12 test-log4j.properties
-rw-r--r-- 1 root  root    993 2015-07-14 14:12 tools-log4j.properties
-rw-r--r-- 1 root  root   1023 2015-07-14 14:12 zookeeper.properties

consumer.properties

zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

kafka_client_jaas.conf

KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   renewTicket=true
   serviceName="kafka";
};
Client {
   com.sun.security.auth.module.Krb5LoginModule required
   useTicketCache=true
   renewTicket=true
   serviceName="zookeeper";
};

kafka-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export PATH=$PATH:$JAVA_HOME/bin
export PID_DIR=/var/run/kafka
export LOG_DIR=/var/log/kafka
export KAFKA_KERBEROS_PARAMS=
if [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then
  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar
  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*
fi
if [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then
. /etc/kafka/conf/kafka-ranger-env.sh
fi
    

log4j.properties

kafka.logs.dir=logs
log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka=INFO, kafkaAppender
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false
   

producer.properties

metadata.broker.list=localhost:9092
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder

server.properties

    
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.id=0
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.host=sandbox.hortonworks.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=sandbox.hortonworks.com:2181
zookeeper.connection.timeout.ms=15000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
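
server.properties advertises a single plaintext listener on sandbox.hortonworks.com:6667, and auto.create.topics.enable=true means a first produce call creates its topic. A minimal producer against that listener, assuming the third-party kafka-python client is installed; "sandbox-test" is an invented topic name:

# Produce one message to the broker configured above.
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="sandbox.hortonworks.com:6667")
producer.send("sandbox-test", b"hello from the sandbox")
producer.flush()
producer.close()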
    

test-log4j.properties

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=logs/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stateChangeAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.stateChangeAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.stateChangeAppender.File=logs/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.requestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.requestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.requestAppender.File=logs/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=logs/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka.tools=DEBUG, kafkaAppender
log4j.logger.kafka.tools.ProducerPerformance$ProducerThread=DEBUG, kafkaAppender
log4j.logger.kafka=INFO, kafkaAppender
log4j.logger.kafka.network.RequestChannel$=TRACE, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
log4j.logger.kafka.request.logger=TRACE, requestAppender
log4j.additivity.kafka.request.logger=false
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false
log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

tools-log4j.properties

log4j.rootLogger=WARN, stdout 
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

zookeeper.properties

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
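
This is the standalone ZooKeeper bundled with Kafka; clientPort=2181 matches the zookeeper.connect entries in the files above. ZooKeeper answers four-letter admin commands on its client port, so liveness can be probed with a bare socket:

# ZooKeeper "ruok" liveness check; a healthy server replies "imok".
import socket

with socket.create_connection(("sandbox.hortonworks.com", 2181), timeout=5) as sock:
    sock.sendall(b"ruok")
    print(sock.recv(4).decode())  # expect: imok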

knox

/etc/knox/conf:
-rw-r--r-- 1 knox knox 2114 2015-07-21 16:01 gateway-log4j.properties
-rw-r--r-- 1 knox knox  865 2015-07-21 16:01 gateway-site.xml
-rw-r--r-- 1 root root 1485 2015-07-14 16:26 knoxcli-log4j.properties
-rw-r--r-- 1 knox knox 1724 2015-07-21 16:01 ldap-log4j.properties
-rwxr--r-- 1 knox knox 7178 2015-07-21 20:16 ranger-knox-audit.xml
-rwxr--r-- 1 knox knox 2271 2015-07-21 20:16 ranger-knox-security.xml
-rwxr--r-- 1 knox knox 2273 2015-07-21 20:16 ranger-policymgr-ssl.xml
-rw-r--r-- 1 knox knox   69 2015-07-21 20:16 ranger-security.xml
-rw-r--r-- 1 root root   91 2015-07-14 16:26 README
-rw-r--r-- 1 root root 1436 2015-07-14 16:26 shell-log4j.properties
drwxr-xr-x 2 knox knox 4096 2015-07-21 20:16 topologies
-rw-r--r-- 1 knox knox 2764 2015-07-21 16:01 users.ldif

/etc/knox/conf/topologies:
-rw-r--r-- 1 knox knox 4428 2015-07-21 20:16 admin.xml
-rw-r--r-- 1 knox knox 4584 2015-07-21 20:16 default.xml
-rwxr-xr-x 1 root root 4748 2015-07-21 20:16 knox_sample.xml
-rw-r--r-- 1 knox knox   89 2015-07-14 16:26 README

gateway-log4j.properties

app.log.dir=${launcher.dir}/../logs
app.log.file=${launcher.name}.log
app.audit.file=${launcher.name}-audit.log
log4j.rootLogger=ERROR, drfa
log4j.logger.org.apache.hadoop.gateway=INFO
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender
log4j.appender.drfa.File=${app.log.dir}/${app.log.file}
log4j.appender.drfa.DatePattern=.yyyy-MM-dd
log4j.appender.drfa.layout=org.apache.log4j.PatternLayout
log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
log4j.logger.audit=INFO, auditfile
log4j.appender.auditfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.auditfile.File=${app.log.dir}/${app.audit.file}
log4j.appender.auditfile.Append = true
log4j.appender.auditfile.DatePattern = '.'yyyy-MM-dd
log4j.appender.auditfile.layout = org.apache.hadoop.gateway.audit.log4j.layout.AuditLayout

gateway-site.xml

<!--Tue Jul 21 16:01:08 2015-->
    <configuration>

    <property>
      <name>gateway.gateway.conf.dir</name>
      <value>deployments</value>
    </property>

    <property>
      <name>gateway.hadoop.kerberos.secured</name>
      <value>false</value>
    </property>

    <property>
      <name>gateway.path</name>
      <value>gateway</value>
    </property>

    <property>
      <name>gateway.port</name>
      <value>8443</value>
    </property>

    <property>
      <name>java.security.auth.login.config</name>
      <value>/etc/knox/conf/krb5JAASLogin.conf</value>
    </property>

    <property>
      <name>java.security.krb5.conf</name>
      <value>/etc/knox/conf/krb5.conf</value>
    </property>

    <property>
      <name>sun.security.krb5.debug</name>
      <value>true</value>
    </property>

  </configuration>
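
gateway.path and gateway.port define the URL layout Knox exposes: https://host:8443/gateway/<topology>/<service>. A probe of WebHDFS through the stock "default" topology; the guest/guest-password demo-LDAP credentials are the usual sandbox convention, assumed here rather than read from the topology files listed above:

# Call WebHDFS through the Knox gateway (third-party requests package).
import requests

r = requests.get(
    "https://sandbox.hortonworks.com:8443/gateway/default/webhdfs/v1/?op=LISTSTATUS",
    auth=("guest", "guest-password"),
    verify=False,  # the sandbox gateway ships a self-signed certificate
)
print(r.status_code)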

knoxcli-log4j.properties

app.log.dir=${launcher.dir}/../logs
app.log.file=${launcher.name}.log
log4j.rootLogger=ERROR, drfa
log4j.logger.org.apache.hadoop.gateway=INFO
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender
log4j.appender.drfa.File=${app.log.dir}/${app.log.file}
log4j.appender.drfa.DatePattern=.yyyy-MM-dd
log4j.appender.drfa.layout=org.apache.log4j.PatternLayout
log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n

ldap-log4j.properties

        # Licensed to the Apache Software Foundation (ASF) under one
        # or more contributor license agreements.  See the NOTICE file
        # distributed with this work for additional information
        # regarding copyright ownership.  The ASF licenses this file
        # to you under the Apache License, Version 2.0 (the
        # "License"); you may not use this file except in compliance
        # with the License.  You may obtain a copy of the License at
        #
        #     http://www.apache.org/licenses/LICENSE-2.0
        #
        # Unless required by applicable law or agreed to in writing, software
        # distributed under the License is distributed on an "AS IS" BASIS,
        # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
        # See the License for the specific language governing permissions and
        # limitations under the License.
        app.log.dir=${launcher.dir}/../logs
        app.log.file=${launcher.name}.log
        log4j.rootLogger=ERROR, drfa
        log4j.logger.org.apache.directory.server.ldap.LdapServer=INFO
        log4j.logger.org.apache.directory=WARN
        log4j.appender.stdout=org.apache.log4j.ConsoleAppender
        log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
        log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
        log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender
        log4j.appender.drfa.File=${app.log.dir}/${app.log.file}
        log4j.appender.drfa.DatePattern=.yyyy-MM-dd
        log4j.appender.drfa.layout=org.apache.log4j.PatternLayout
        log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
    

ranger-knox-audit.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <property>
        <name>xasecure.audit.is.enabled</name>
        <value>true</value>
    </property>

    <!-- DB audit provider configuration -->
    <property>
        <name>xasecure.audit.db.is.enabled</name>
        <value>true</value>
    </property>

    <property>
        <name>xasecure.audit.db.is.async</name>
        <value>true</value>
    </property>

    <property>
        <name>xasecure.audit.db.async.max.queue.size</name>
        <value>10240</value>
    </property>
    <property>
        <name>xasecure.audit.db.async.max.flush.interval.ms</name>
        <value>30000</value>
    </property>
    <property>
        <name>xasecure.audit.db.batch.size</name>
        <value>100</value>
    </property>
    <!-- Properties whose name begin with "xasecure.audit.jpa." are used to configure JPA -->
    <property>
        <name>xasecure.audit.jpa.javax.persistence.jdbc.url</name>
        <value>jdbc:mysql://localhost/ranger_audit</value>
    </property>
    <property>
        <name>xasecure.audit.jpa.javax.persistence.jdbc.user</name>
        <value>rangerlogger</value>
    </property>
    <property>
        <name>xasecure.audit.jpa.javax.persistence.jdbc.password</name>
        <value>crypted</value>
    </property>
    <property>
        <name>xasecure.audit.jpa.javax.persistence.jdbc.driver</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>xasecure.audit.credential.provider.file</name>
        <value>jceks://file/etc/ranger/sandbox_knox/cred.jceks</value>
    </property>
    <!-- HDFS audit provider configuration -->
    <property>
        <name>xasecure.audit.hdfs.is.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.is.async</name>
        <value>true</value>
    </property>

    <property>
        <name>xasecure.audit.hdfs.async.max.queue.size</name>
        <value>1048576</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.async.max.flush.interval.ms</name>
        <value>30000</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.encoding</name>
        <value/>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.destination.directory</name>
        <value>hdfs://sandbox.hortonworks.com:8020/ranger/audit/%app-type%/%time:yyyyMMdd%</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.destination.file</name>
        <value>%hostname%-audit.log</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.destination.flush.interval.seconds</name>
        <value>900</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.destination.rollover.interval.seconds</name>
        <value>86400</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.destination.open.retry.interval.seconds</name>
        <value>60</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.buffer.directory</name>
        <value>/var/log/knox/audit</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.buffer.file</name>
        <value>%time:yyyyMMdd-HHmm.ss%.log</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.buffer.file.buffer.size.bytes</name>
        <value>8192</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.buffer.flush.interval.seconds</name>
        <value>60</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.buffer.rollover.interval.seconds</name>
        <value>600</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.archive.directory</name>
        <value>/var/log/knox/audit/archive</value>
    </property>
    <property>
        <name>xasecure.audit.hdfs.config.local.archive.max.file.count</name>
        <value>10</value>
    </property>
    <!-- Log4j audit provider configuration -->
    <property>
        <name>xasecure.audit.log4j.is.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>xasecure.audit.log4j.is.async</name>
        <value>false</value>
    </property>

    <property>
        <name>xasecure.audit.log4j.async.max.queue.size</name>
        <value>10240</value>
    </property>
    <property>
        <name>xasecure.audit.log4j.async.max.flush.interval.ms</name>
        <value>30000</value>
    </property>

    <!-- Kafka audit provider configuration -->
    <property>
        <name>xasecure.audit.kafka.is.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>xasecure.audit.kafka.async.max.queue.size</name>
        <value>1</value>
    </property>
    <property>
        <name>xasecure.audit.kafka.async.max.flush.interval.ms</name>
        <value>1000</value>
    </property>

    <property>
        <name>xasecure.audit.kafka.broker_list</name>
        <value>localhost:9092</value>
    </property>
    <property>
        <name>xasecure.audit.kafka.topic_name</name>
        <value>ranger_audits</value>
    </property>

    <!-- Ranger audit provider configuration -->
    <property>
        <name>xasecure.audit.solr.is.enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>xasecure.audit.solr.async.max.queue.size</name>
        <value>1</value>
    </property>
    <property>
        <name>xasecure.audit.solr.async.max.flush.interval.ms</name>
        <value>1000</value>
    </property>

    <property>
        <name>xasecure.audit.solr.solr_url</name>
        <value>http://localhost:6083/solr/ranger_audits</value>
    </property>

    <property>
        <name>xasecure.audit.destination.solr</name>
        <value>false</value>
    </property>
    <property>
        <name>xasecure.audit.destination.solr.urls</name>
        <value>NONE</value>
    </property>
    <property>
        <name>xasecure.audit.destination.solr.user</name>
        <value>NONE</value>
    </property>
    <property>
        <name>xasecure.audit.destination.solr.password</name>
        <value>NONE</value>
    </property>
    <property>
        <name>xasecure.audit.destination.solr.zookeepers</name>
        <value>NONE</value>
    </property>
    <property>
        <name>xasecure.audit.destination.solr.batch.filespool.dir</name>
        <value>/var/log/hive/audit/solr/spool</value>
    </property>
    <property>
        <name>xasecure.audit.destination.hdfs</name>
        <value>false</value>
    </property>
    <property>
        <name>xasecure.audit.destination.hdfs.batch.filespool.dir</name>
        <value>/var/log/hive/audit/hdfs/spool</value>
    </property>
    <property>
        <name>xasecure.audit.destination.hdfs.dir</name>
        <value>hdfs://__REPLACE__NAME_NODE_HOST:8020/ranger/audit</value>
    </property>
</configuration>
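
Note: the JDBC password above is not stored in the clear; xasecure.audit.credential.provider.file points the plugin at a JCEKS credential store. A minimal sketch of reading an alias back out of that store via Hadoop's generic CredentialProvider API follows; the alias name "auditDBCred" is an assumption about what the Ranger installer stored, not something recorded in this dump.

// Sketch only: resolve a password alias from the cred.jceks store above.
import org.apache.hadoop.conf.Configuration;

public class CredStoreSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("hadoop.security.credential.provider.path",
                 "jceks://file/etc/ranger/sandbox_knox/cred.jceks");
        // getPassword() consults the provider path first, then falls back
        // to any clear-text config value ("crypted" in the XML above).
        char[] pw = conf.getPassword("auditDBCred"); // assumed alias name
        System.out.println(pw == null ? "alias not found" : "alias resolved");
    }
}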

ranger-knox-security.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <property>
        <name>ranger.plugin.knox.service.name</name>
        <value>sandbox_knox</value>
        <description>
            Name of the Ranger service containing policies for this Knox instance
        </description>
    </property>
    <property>
        <name>ranger.plugin.knox.policy.source.impl</name>
        <value>org.apache.ranger.admin.client.RangerAdminJersey2RESTClient</value>
        <description>
            Class to retrieve policies
        </description>
    </property>
    <property>
        <name>ranger.plugin.knox.policy.rest.url</name>
        <value>http://sandbox.hortonworks.com:6080</value>
        <description>
            URL to Ranger Admin
        </description>
    </property>
    <property>
        <name>ranger.plugin.knox.policy.rest.ssl.config.file</name>
        <value>/etc/knox/conf/ranger-policymgr-ssl.xml</value>
        <description>
            Path to the file containing SSL details to contact Ranger Admin
        </description>
    </property>
    <property>
        <name>ranger.plugin.knox.policy.pollIntervalMs</name>
        <value>5000</value>
        <description>
            How often to poll for changes in policies, in milliseconds
        </description>
    </property>
    <property>
        <name>ranger.plugin.knox.policy.cache.dir</name>
        <value>/etc/ranger/sandbox_knox/policycache</value>
        <description>
            Directory where Ranger policies are cached after successful retrieval from the source
        </description>
    </property>
</configuration>
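
For context, the plugin polls policy.rest.url every pollIntervalMs and caches the result under policy.cache.dir so Knox can keep enforcing policies if Ranger Admin is down. A hedged sketch of that fetch follows; the REST path used here is an assumption about this Ranger version's plugin download API, not something recorded in this dump.

// Sketch: fetch the sandbox_knox policies the way the plugin would.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PolicyPollSketch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://sandbox.hortonworks.com:6080"
                + "/service/plugins/policies/download/sandbox_knox"); // assumed path
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            // The plugin would parse this JSON and persist it under
            // /etc/ranger/sandbox_knox/policycache for offline use.
            System.out.println(in.readLine());
        }
    }
}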

ranger-policymgr-ssl.xml

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
      http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration xmlns:xi="http://www.w3.org/2001/XInclude">
    <!-- The following properties are used for 2-way SSL client server validation -->
    <property>
        <name>xasecure.policymgr.clientssl.keystore</name>
        <value>/etc/knox/conf/ranger-plugin-keystore.jks</value>
        <description>
            Java keystore file
        </description>
    </property>
    <property>
        <name>xasecure.policymgr.clientssl.keystore.password</name>
        <value>myKeyFilePassword</value>
        <description>
            Password for the keystore
        </description>
    </property>
    <property>
        <name>xasecure.policymgr.clientssl.truststore</name>
        <value>/etc/knox/conf/ranger-plugin-truststore.jks</value>
        <description>
            Java truststore file
        </description>
    </property>
    <property>
        <name>xasecure.policymgr.clientssl.truststore.password</name>
        <value>changeit</value>
        <description>
            Java truststore password
        </description>
    </property>
    <property>
        <name>xasecure.policymgr.clientssl.keystore.credential.file</name>
        <value>jceks://file/etc/ranger/sandbox_knox/cred.jceks</value>
        <description>
            Java keystore credential file
        </description>
    </property>
    <property>
        <name>xasecure.policymgr.clientssl.truststore.credential.file</name>
        <value>jceks://file/etc/ranger/sandbox_knox/cred.jceks</value>
        <description>
            Java truststore credential file
        </description>
    </property>
</configuration>
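
A quick sanity check for this file is to confirm that the configured passwords actually open the two JKS stores. Minimal sketch, assuming the paths and passwords above are the ones in effect on this host:

// Open the plugin keystore/truststore with java.security.KeyStore to
// verify that the configured passwords unlock them.
import java.io.FileInputStream;
import java.security.KeyStore;

public class SslStoreCheckSketch {
    public static void main(String[] args) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in =
                new FileInputStream("/etc/knox/conf/ranger-plugin-keystore.jks")) {
            ks.load(in, "myKeyFilePassword".toCharArray());
        }
        System.out.println("keystore entries: " + ks.size());

        KeyStore ts = KeyStore.getInstance("JKS");
        try (FileInputStream in =
                new FileInputStream("/etc/knox/conf/ranger-plugin-truststore.jks")) {
            ts.load(in, "changeit".toCharArray());
        }
        System.out.println("truststore entries: " + ts.size());
    }
}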

ranger-security.xml

<ranger>
<enabled>Tue Jul 21 20:16:35 UTC 2015</enabled>
</ranger>

README

THIS IS THE DIRECTORY WHERE YOU PLACE, COPY, OR SAVE THE gateway-site.xml AND users.ldif FILES

shell-log4j.properties

app.log.dir=${launcher.dir}/../logs
app.log.file=${launcher.name}.log
log4j.rootLogger=ERROR, drfa
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
log4j.appender.drfa=org.apache.log4j.DailyRollingFileAppender
log4j.appender.drfa.File=${app.log.dir}/${app.log.file}
log4j.appender.drfa.DatePattern=.yyyy-MM-dd
log4j.appender.drfa.layout=org.apache.log4j.PatternLayout
log4j.appender.drfa.layout.ConversionPattern=%d{ISO8601} %-5p %c{2} (%F:%M(%L)) - %m%n
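
These shell/ldap log4j files rely on log4j 1.x substituting ${launcher.dir} and ${launcher.name} from system properties set by the Knox launcher. A minimal sketch of how that resolution works; the class name and property values are illustrative, not taken from this host:

// How log4j 1.x would pick up shell-log4j.properties.
import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class Log4jBootstrapSketch {
    public static void main(String[] args) {
        // log4j 1.x substitutes ${...} from system properties, so
        // app.log.dir=${launcher.dir}/../logs resolves against these
        // (illustrative values; the real launcher sets them itself):
        System.setProperty("launcher.dir", "/usr/hdp/current/knox-server/bin");
        System.setProperty("launcher.name", "shell");

        PropertyConfigurator.configure("shell-log4j.properties");

        Logger log = Logger.getLogger(Log4jBootstrapSketch.class);
        // rootLogger is ERROR, so only this line reaches the daily rolling file.
        log.error("written to ../logs/shell.log via the drfa appender");
        log.info("suppressed: below the ERROR threshold");
    }
}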

topologies


users.ldif

version: 1

dn: dc=hadoop,dc=apache,dc=org
objectclass: organization
objectclass: dcObject
o: Hadoop
dc: hadoop

dn: ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: organizationalUnit
ou: people

dn: uid=guest,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Guest
sn: User
uid: guest
userPassword: guest-password

dn: uid=admin,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: Admin
sn: Admin
uid: admin
userPassword: admin-password

dn: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: sam
sn: sam
uid: sam
userPassword: sam-password

dn: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: person
objectclass: organizationalPerson
objectclass: inetOrgPerson
cn: tom
sn: tom
uid: tom
userPassword: tom-password

dn: ou=groups,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: organizationalUnit
ou: groups
description: generic groups branch

dn: cn=analyst,ou=groups,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: groupofnames
cn: analyst
description: analyst group
member: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org
member: uid=tom,ou=people,dc=hadoop,dc=apache,dc=org

dn: cn=scientist,ou=groups,dc=hadoop,dc=apache,dc=org
objectclass: top
objectclass: groupofnames
cn: scientist
description: scientist group
member: uid=sam,ou=people,dc=hadoop,dc=apache,dc=org
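
These entries back the Knox demo LDAP. A hedged sketch of verifying one of them with a plain JNDI bind; port 33389 is an assumption (the usual Knox demo-LDAP default), so adjust if the ApacheDS instance here listens elsewhere:

// Simple bind as uid=sam from users.ldif; throws AuthenticationException
// if the LDIF password does not match.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapBindSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://sandbox.hortonworks.com:33389"); // assumed port
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "uid=sam,ou=people,dc=hadoop,dc=apache,dc=org");
        env.put(Context.SECURITY_CREDENTIALS, "sam-password");
        new InitialDirContext(env).close();
        System.out.println("bind ok");
    }
}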

oozie

/etc/oozie/conf:
drwxr-xr-x 3 oozie hadoop   4096 2015-07-21 16:05 action-conf
-rw-r--r-- 1 oozie hadoop    940 2015-07-21 15:59 adminusers.txt
drwxr-xr-x 2 root  root     4096 2015-07-21 15:58 hadoop-conf
-rw-r--r-- 1 oozie hadoop   1409 2015-07-14 15:09 hadoop-config.xml
-rw-r--r-- 1 oozie hadoop      0 2015-07-21 15:59 oozie-default.xml
-rwxr-xr-x 1 root  root   107091 2015-07-14 15:09 oozie-default.xml.reference
-rw-r--r-- 1 root  root     2215 2015-07-14 15:09 oozie-env.cmd
-rw-r--r-- 1 oozie root     1812 2015-07-21 16:44 oozie-env.sh
-rw-r--r-- 1 oozie hadoop   3248 2015-07-21 15:59 oozie-log4j.properties
-rw-rw-r-- 1 oozie hadoop   9661 2015-07-21 16:44 oozie-site.xml

/etc/oozie/conf/action-conf:
drwxr-xr-x 2 oozie hadoop 4096 2015-07-21 16:05 hive
-rw-r--r-- 1 oozie hadoop 1113 2015-07-14 15:09 hive.xml

/etc/oozie/conf/action-conf/hive:
-rw-r--r-- 1 oozie hadoop 19099 2015-07-21 16:44 hive-site.xml
-rw-rw-r-- 1 oozie hadoop  6533 2015-07-21 16:44 tez-site.xml

/etc/oozie/conf/hadoop-conf:
-rw-r--r-- 1 root root 1409 2015-07-14 15:09 core-site.xml

action-conf


adminusers.txt

oozie
oozie-admin

hadoop-conf


hadoop-config.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
    <property>
        <name>mapreduce.jobtracker.kerberos.principal</name>
        <value>mapred/_HOST@LOCALREALM</value>
    </property>
    <property>
        <name>yarn.resourcemanager.principal</name>
        <value>yarn/_HOST@LOCALREALM</value>
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>hdfs/_HOST@LOCALREALM</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
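
The _HOST placeholder in these principals is expanded by Hadoop at runtime, so one config file serves every node. A minimal sketch of that expansion using Hadoop's SecurityUtil:

// How hdfs/_HOST@LOCALREALM becomes a concrete per-node principal.
import org.apache.hadoop.security.SecurityUtil;

public class PrincipalSketch {
    public static void main(String[] args) throws Exception {
        String p = SecurityUtil.getServerPrincipal(
                "hdfs/_HOST@LOCALREALM", "sandbox.hortonworks.com");
        System.out.println(p); // hdfs/sandbox.hortonworks.com@LOCALREALM
    }
}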

oozie-default.xml


oozie-default.xml.reference

-?xml version="1.0"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
-configuration-
    -!-- ************************** VERY IMPORTANT  ************************** ---
    -!-- This file is in the Oozie configuration directory only for reference. ---
    -!-- It is not loaded by Oozie, Oozie uses its own privatecopy.            ---
    -!-- ************************** VERY IMPORTANT  ************************** ---
    -property-
        -name-oozie.output.compression.codec-/name-
        -value-gz-/value-
        -description-
            The name of the compression codec to use.
            The implementation class for the codec needs to be specified through another property oozie.compression.codecs.
            You can specify a comma separated list of 'Codec_name'='Codec_class' for oozie.compression.codecs
            where codec class implements the interface org.apache.oozie.compression.CompressionCodec.
            If oozie.compression.codecs is not specified, gz codec implementation is used by default.
        -/description-
    -/property-
    -property-
        -name-oozie.action.mapreduce.uber.jar.enable-/name-
        -value-false-/value-
        -description-
            If true, enables the oozie.mapreduce.uber.jar mapreduce workflow configuration property, which is used to specify an
            uber jar in HDFS.  Submitting a workflow with an uber jar requires at least Hadoop 2.2.0 or 1.2.0.  If false, workflows
            which specify the oozie.mapreduce.uber.jar configuration property will fail.
        -/description-
    -/property-
    -property-
        -name-oozie.processing.timezone-/name-
        -value-UTC-/value-
        -description-
            Oozie server timezone. Valid values are UTC and GMT(+/-)####, for example 'GMT+0530' would be India
            timezone. All dates parsed and genered dates by Oozie Coordinator/Bundle will be done in the specified
            timezone. The default value of 'UTC' should not be changed under normal circumtances. If for any reason
            is changed, note that GMT(+/-)#### timezones do not observe DST changes.
        -/description-
    -/property-
    -!-- Base Oozie URL: -SCHEME-://-HOST-:-PORT-/-CONTEXT- ---
    -property-
        -name-oozie.base.url-/name-
        -value-http://localhost:8080/oozie-/value-
        -description-
             Base Oozie URL.
        -/description-
    -/property-
    -!-- Services ---
    -property-
        -name-oozie.system.id-/name-
        -value-oozie-${user.name}-/value-
        -description-
            The Oozie system ID.
        -/description-
    -/property-
    -property-
        -name-oozie.systemmode-/name-
        -value-NORMAL-/value-
        -description-
            System mode for  Oozie at startup.
        -/description-
    -/property-
    -property-
        -name-oozie.delete.runtime.dir.on.shutdown-/name-
        -value-true-/value-
        -description-
            If the runtime directory should be kept after Oozie shutdowns down.
        -/description-
    -/property-
    -property-
        -name-oozie.services-/name-
        -value-
            org.apache.oozie.service.SchedulerService,
            org.apache.oozie.service.InstrumentationService,
            org.apache.oozie.service.MemoryLocksService,
            org.apache.oozie.service.UUIDService,
            org.apache.oozie.service.ELService,
            org.apache.oozie.service.AuthorizationService,
            org.apache.oozie.service.UserGroupInformationService,
            org.apache.oozie.service.HadoopAccessorService,
            org.apache.oozie.service.JobsConcurrencyService,
            org.apache.oozie.service.URIHandlerService,
            org.apache.oozie.service.DagXLogInfoService,
            org.apache.oozie.service.SchemaService,
            org.apache.oozie.service.LiteWorkflowAppService,
            org.apache.oozie.service.JPAService,
            org.apache.oozie.service.StoreService,
            org.apache.oozie.service.SLAStoreService,
            org.apache.oozie.service.DBLiteWorkflowStoreService,
            org.apache.oozie.service.CallbackService,
            org.apache.oozie.service.ActionService,
            org.apache.oozie.service.ShareLibService,
            org.apache.oozie.service.CallableQueueService,
            org.apache.oozie.service.ActionCheckerService,
            org.apache.oozie.service.RecoveryService,
            org.apache.oozie.service.PurgeService,
            org.apache.oozie.service.CoordinatorEngineService,
            org.apache.oozie.service.BundleEngineService,
            org.apache.oozie.service.DagEngineService,
            org.apache.oozie.service.CoordMaterializeTriggerService,
            org.apache.oozie.service.StatusTransitService,
            org.apache.oozie.service.PauseTransitService,
            org.apache.oozie.service.GroupsService,
            org.apache.oozie.service.ProxyUserService,
            org.apache.oozie.service.XLogStreamingService,
            org.apache.oozie.service.JvmPauseMonitorService,
            org.apache.oozie.service.SparkConfigurationService
        -/value-
        -description-
            All services to be created and managed by Oozie Services singleton.
            Class names must be separated by commas.
        -/description-
    -/property-
    -property-
        -name-oozie.services.ext-/name-
        -value- -/value-
        -description-
            To add/replace services defined in 'oozie.services' with custom implementations.
            Class names must be separated by commas.
        -/description-
    -/property-
    -property-
        -name-oozie.service.XLogStreamingService.buffer.len-/name-
        -value-4096-/value-
        -description-4K buffer for streaming the logs progressively-/description-
    -/property-
 -!-- HCatAccessorService ---
   -property-
        -name-oozie.service.HCatAccessorService.jmsconnections-/name-
        -value-
        default=java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory
        -/value-
        -description-
        Specify the map  of endpoints to JMS configuration properties. In general, endpoint
        identifies the HCatalog server URL. "default" is used if no endpoint is mentioned
        in the query. If some JMS property is not defined, the system will use the property
        defined jndi.properties. jndi.properties files is retrieved from the application classpath.
        Mapping rules can also be provided for mapping Hcatalog servers to corresponding JMS providers.
        hcat://${1}.${2}.server.com:8020=java.naming.factory.initial#Dummy.Factory;java.naming.provider.url#tcp://broker.${2}:61616
        -/description-
   -/property-
    -!-- TopicService ---
   -property-
        -name-oozie.service.JMSTopicService.topic.name-/name-
        -value-
        default=${username}
        -/value-
        -description-
        Topic options are ${username} or ${jobId} or a fixed string which can be specified as default or for a
        particular job type.
        For e.g To have a fixed string topic for workflows, coordinators and bundles,
        specify in the following comma-separated format: {jobtype1}={some_string1}, {jobtype2}={some_string2}
        where job type can be WORKFLOW, COORDINATOR or BUNDLE.
        e.g. Following defines topic for workflow job, workflow action, coordinator job, coordinator action,
        bundle job and bundle action
        WORKFLOW=workflow,
        COORDINATOR=coordinator,
        BUNDLE=bundle
        For jobs with no defined topic, default topic will be ${username}
        -/description-
    -/property-
    -!-- JMS Producer connection ---
    -property-
        -name-oozie.jms.producer.connection.properties-/name-
        -value-java.naming.factory.initial#org.apache.activemq.jndi.ActiveMQInitialContextFactory;java.naming.provider.url#tcp://localhost:61616;connectionFactoryNames#ConnectionFactory-/value-
    -/property-
 -!-- JMSAccessorService ---
    -property-
        -name-oozie.service.JMSAccessorService.connectioncontext.impl-/name-
        -value-
        org.apache.oozie.jms.DefaultConnectionContext
        -/value-
        -description-
        Specifies the Connection Context implementation
        -/description-
    -/property-
    -!-- ConfigurationService ---
    -property-
        -name-oozie.service.ConfigurationService.ignore.system.properties-/name-
        -value-
            oozie.service.AuthorizationService.security.enabled
        -/value-
        -description-
            Specifies "oozie.*" properties to cannot be overriden via Java system properties.
            Property names must be separted by commas.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ConfigurationService.verify.available.properties-/name-
        -value-true-/value-
        -description-
            Specifies whether the available configurations check is enabled or not.
        -/description-
    -/property-
    -!-- SchedulerService ---
    -property-
        -name-oozie.service.SchedulerService.threads-/name-
        -value-10-/value-
        -description-
            The number of threads to be used by the SchedulerService to run deamon tasks.
            If maxed out, scheduled daemon tasks will be queued up and delayed until threads become available.
        -/description-
    -/property-
    -!--  AuthorizationService ---
    
    -property-
        -name-oozie.service.AuthorizationService.authorization.enabled-/name-
        -value-false-/value-
        -description-
            Specifies whether security (user name/admin role) is enabled or not.
            If disabled any user can manage Oozie system and manage any job.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AuthorizationService.default.group.as.acl-/name-
        -value-false-/value-
        -description-
            Enables old behavior where the User's default group is the job's ACL.
        -/description-
    -/property-
    -!-- InstrumentationService ---
    -property-
        -name-oozie.service.InstrumentationService.logging.interval-/name-
        -value-60-/value-
        -description-
            Interval, in seconds, at which instrumentation should be logged by the InstrumentationService.
            If set to 0 it will not log instrumentation data.
        -/description-
    -/property-
    -!-- PurgeService ---
    -property-
        -name-oozie.service.PurgeService.older.than-/name-
        -value-30-/value-
        -description-
            Completed workflow jobs older than this value, in days, will be purged by the PurgeService.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.PurgeService.coord.older.than-/name-
        -value-7-/value-
        -description-
            Completed coordinator jobs older than this value, in days, will be purged by the PurgeService.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.PurgeService.bundle.older.than-/name-
        -value-7-/value-
        -description-
            Completed bundle jobs older than this value, in days, will be purged by the PurgeService.
        -/description-
    -/property-
    -property-
        -name-oozie.service.PurgeService.purge.old.coord.action-/name-
        -value-false-/value-
        -description-
            Whether to purge completed workflows and their corresponding coordinator actions
            of long running coordinator jobs if the completed workflow jobs are older than the value
            specified in oozie.service.PurgeService.older.than.
        -/description-
    -/property-
    
    -property-
		-name-oozie.service.PurgeService.purge.limit-/name-
		-value-100-/value-
		-description-
			Completed Actions purge - limit each purge to this value
        -/description-
	-/property-
	
    -property-
        -name-oozie.service.PurgeService.purge.interval-/name-
        -value-3600-/value-
        -description-
            Interval at which the purge service will run, in seconds.
        -/description-
    -/property-
    
    -!-- RecoveryService ---
    -property-
        -name-oozie.service.RecoveryService.wf.actions.older.than-/name-
        -value-120-/value-
        -description-
            Age of the actions which are eligible to be queued for recovery, in seconds.
        -/description-
    -/property-
    -property-
        -name-oozie.service.RecoveryService.wf.actions.created.time.interval-/name-
        -value-7-/value-
        -description-
        Created time period of the actions which are eligible to be queued for recovery in days.
        -/description-
    -/property-
    -property-
        -name-oozie.service.RecoveryService.callable.batch.size-/name-
        -value-10-/value-
        -description-
            This value determines the number of callable which will be batched together
            to be executed by a single thread.
        -/description-
    -/property-
    -property-
        -name-oozie.service.RecoveryService.push.dependency.interval-/name-
        -value-200-/value-
        -description-
            This value determines the delay for push missing dependency command queueing
            in Recovery Service
        -/description-
    -/property-
    -property-
        -name-oozie.service.RecoveryService.interval-/name-
        -value-60-/value-
        -description-
            Interval at which the RecoverService will run, in seconds.
        -/description-
    -/property-
    -property-
        -name-oozie.service.RecoveryService.coord.older.than-/name-
        -value-600-/value-
        -description-
            Age of the Coordinator jobs or actions which are eligible to be queued for recovery, in seconds.
        -/description-
    -/property-
	-property-
        -name-oozie.service.RecoveryService.bundle.older.than-/name-
        -value-600-/value-
        -description-
            Age of the Bundle jobs which are eligible to be queued for recovery, in seconds.
        -/description-
    -/property-
    -!-- CallableQueueService ---
    -property-
        -name-oozie.service.CallableQueueService.queue.size-/name-
        -value-10000-/value-
        -description-Max callable queue size-/description-
    -/property-
    -property-
        -name-oozie.service.CallableQueueService.threads-/name-
        -value-10-/value-
        -description-Number of threads used for executing callables-/description-
    -/property-
    -property-
        -name-oozie.service.CallableQueueService.callable.concurrency-/name-
        -value-3-/value-
        -description-
            Maximum concurrency for a given callable type.
            Each command is a callable type (submit, start, run, signal, job, jobs, suspend,resume, etc).
            Each action type is a callable type (Map-Reduce, Pig, SSH, FS, sub-workflow, etc).
            All commands that use action executors (action-start, action-end, action-kill and action-check) use
            the action type as the callable type.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.CallableQueueService.callable.next.eligible-/name-
        -value-true-/value-
        -description-
            If true, when a callable in the queue has already reached max concurrency,
            Oozie continuously find next one which has not yet reach max concurrency.
        -/description-
    -/property-
    -property-
        -name-oozie.service.CallableQueueService.InterruptMapMaxSize-/name-
        -value-500-/value-
        -description-
            Maximum Size of the Interrupt Map, the interrupt element will not be inserted in the map if exceeded the size.
        -/description-
    -/property-
    -property-
        -name-oozie.service.CallableQueueService.InterruptTypes-/name-
        -value-kill,resume,suspend,bundle_kill,bundle_resume,bundle_suspend,coord_kill,coord_change,coord_resume,coord_suspend-/value-
        -description-
            Getting the types of XCommands that are considered to be of Interrupt type
        -/description-
    -/property-
    -!--  CoordMaterializeTriggerService ---
    -property-
        -name-oozie.service.CoordMaterializeTriggerService.lookup.interval
        -/name-
        -value-300-/value-
        -description- Coordinator Job Lookup interval.(in seconds).
        -/description-
    -/property-
    -!-- Enable this if you want different scheduling interval for CoordMaterializeTriggerService.
    By default it will use lookup interval as scheduling interval
    -property-
        -name-oozie.service.CoordMaterializeTriggerService.scheduling.interval
        -/name-
        -value-300-/value-
        -description- The frequency at which the CoordMaterializeTriggerService will run.-/description-
    -/property-
    ---
    -property-
        -name-oozie.service.CoordMaterializeTriggerService.materialization.window
        -/name-
        -value-3600-/value-
        -description- Coordinator Job Lookup command materialized each
            job for this next "window" duration
        -/description-
    -/property-
    -property-
        -name-oozie.service.CoordMaterializeTriggerService.callable.batch.size-/name-
        -value-10-/value-
        -description-
            This value determines the number of callable which will be batched together
            to be executed by a single thread.
        -/description-
    -/property-
    -property-
        -name-oozie.service.CoordMaterializeTriggerService.materialization.system.limit-/name-
        -value-50-/value-
        -description-
            This value determines the number of coordinator jobs to be materialized at a given time.
        -/description-
    -/property-
    -property-
        -name-oozie.service.coord.normal.default.timeout
        -/name-
        -value-120-/value-
        -description-Default timeout for a coordinator action input check (in minutes) for normal job.
            -1 means infinite timeout-/description-
	-/property-
	-property-
		-name-oozie.service.coord.default.max.timeout
		-/name-
		-value-86400-/value-
		-description-Default maximum timeout for a coordinator action input check (in minutes). 86400= 60days
        -/description-
	-/property-
	-property-
		-name-oozie.service.coord.input.check.requeue.interval
		-/name-
		-value-60000-/value-
		-description-Command re-queue interval for coordinator data input check (in millisecond).
        -/description-
	-/property-
    -property-
        -name-oozie.service.coord.push.check.requeue.interval
        -/name-
        -value-600000-/value-
        -description-Command re-queue interval for push dependencies (in millisecond).
        -/description-
    -/property-
    -property-
		-name-oozie.service.coord.default.concurrency
		-/name-
		-value-1-/value-
		-description-Default concurrency for a coordinator job to determine how many maximum action should
		be executed at the same time. -1 means infinite concurrency.-/description-
	-/property-
    -property-
		-name-oozie.service.coord.default.throttle
		-/name-
		-value-12-/value-
		-description-Default throttle for a coordinator job to determine how many maximum action should 
		be in WAITING state at the same time.-/description-
	-/property-
	-property-
		-name-oozie.service.coord.materialization.throttling.factor
		-/name-
		-value-0.05-/value-
		-description-Determine how many maximum actions should be in WAITING state for a single job at any time. The value is calculated by 
		this factor X the total queue size.-/description-
	-/property-
    -property-
        -name-oozie.service.coord.check.maximum.frequency-/name-
        -value-true-/value-
        -description-
            When true, Oozie will reject any coordinators with a frequency faster than 5 minutes.  It is not recommended to disable
            this check or submit coordinators with frequencies faster than 5 minutes: doing so can cause unintended behavior and
            additional system stress.
        -/description-
    -/property-
	-!-- ELService ---
    -!--  List of supported groups for ELService ---
	-property-
        -name-oozie.service.ELService.groups-/name-
        -value-job-submit,workflow,wf-sla-submit,coord-job-submit-freq,coord-job-submit-nofuncs,coord-job-submit-data,coord-job-submit-instances,coord-sla-submit,coord-action-create,coord-action-create-inst,coord-sla-create,coord-action-start,coord-job-wait-timeout-/value-
        -description-List of groups for different ELServices-/description-
    -/property-
    -property-
        -name-oozie.service.ELService.constants.job-submit-/name-
        -value-
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.job-submit-/name-
        -value-
        -/value-
        -description-
          EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.job-submit-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions without having to include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.job-submit-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions without having to include all the built in ones.
        -/description-
    -/property-
-!-- Workflow specifics ---
    -property-
        -name-oozie.service.ELService.constants.workflow-/name-
        -value-
            KB=org.apache.oozie.util.ELConstantsFunctions#KB,
            MB=org.apache.oozie.util.ELConstantsFunctions#MB,
            GB=org.apache.oozie.util.ELConstantsFunctions#GB,
            TB=org.apache.oozie.util.ELConstantsFunctions#TB,
            PB=org.apache.oozie.util.ELConstantsFunctions#PB,
            RECORDS=org.apache.oozie.action.hadoop.HadoopELFunctions#RECORDS,
            MAP_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_IN,
            MAP_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#MAP_OUT,
            REDUCE_IN=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_IN,
            REDUCE_OUT=org.apache.oozie.action.hadoop.HadoopELFunctions#REDUCE_OUT,
            GROUPS=org.apache.oozie.action.hadoop.HadoopELFunctions#GROUPS
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.workflow-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.workflow-/name-
        -value-
            firstNotNull=org.apache.oozie.util.ELConstantsFunctions#firstNotNull,
            concat=org.apache.oozie.util.ELConstantsFunctions#concat,
            replaceAll=org.apache.oozie.util.ELConstantsFunctions#replaceAll,
            appendAll=org.apache.oozie.util.ELConstantsFunctions#appendAll,
            trim=org.apache.oozie.util.ELConstantsFunctions#trim,
            timestamp=org.apache.oozie.util.ELConstantsFunctions#timestamp,
            urlEncode=org.apache.oozie.util.ELConstantsFunctions#urlEncode,
            toJsonStr=org.apache.oozie.util.ELConstantsFunctions#toJsonStr,
            toPropertiesStr=org.apache.oozie.util.ELConstantsFunctions#toPropertiesStr,
            toConfigurationStr=org.apache.oozie.util.ELConstantsFunctions#toConfigurationStr,
            wf:id=org.apache.oozie.DagELFunctions#wf_id,
            wf:name=org.apache.oozie.DagELFunctions#wf_name,
            wf:appPath=org.apache.oozie.DagELFunctions#wf_appPath,
            wf:conf=org.apache.oozie.DagELFunctions#wf_conf,
            wf:user=org.apache.oozie.DagELFunctions#wf_user,
            wf:group=org.apache.oozie.DagELFunctions#wf_group,
            wf:callback=org.apache.oozie.DagELFunctions#wf_callback,
            wf:transition=org.apache.oozie.DagELFunctions#wf_transition,
            wf:lastErrorNode=org.apache.oozie.DagELFunctions#wf_lastErrorNode,
            wf:errorCode=org.apache.oozie.DagELFunctions#wf_errorCode,
            wf:errorMessage=org.apache.oozie.DagELFunctions#wf_errorMessage,
            wf:run=org.apache.oozie.DagELFunctions#wf_run,
            wf:actionData=org.apache.oozie.DagELFunctions#wf_actionData,
            wf:actionExternalId=org.apache.oozie.DagELFunctions#wf_actionExternalId,
            wf:actionTrackerUri=org.apache.oozie.DagELFunctions#wf_actionTrackerUri,
            wf:actionExternalStatus=org.apache.oozie.DagELFunctions#wf_actionExternalStatus,
            hadoop:counters=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_counters,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf,
            fs:exists=org.apache.oozie.action.hadoop.FsELFunctions#fs_exists,
            fs:isDir=org.apache.oozie.action.hadoop.FsELFunctions#fs_isDir,
            fs:dirSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_dirSize,
            fs:fileSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_fileSize,
            fs:blockSize=org.apache.oozie.action.hadoop.FsELFunctions#fs_blockSize,
            hcat:exists=org.apache.oozie.coord.HCatELFunctions#hcat_exists
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.WorkflowAppService.WorkflowDefinitionMaxLength-/name-
        -value-100000-/value-
        -description-
            The maximum length of the workflow definition in bytes
            An error will be reported if the length exceeds the given maximum
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.workflow-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -!-- Resolve SLA information during Workflow job submission ---
	-property-
        -name-oozie.service.ELService.constants.wf-sla-submit-/name-
        -value-
            MINUTES=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_MINUTES,
            HOURS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_HOURS,
            DAYS=org.apache.oozie.util.ELConstantsFunctions#SUBMIT_DAYS
            -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.wf-sla-submit-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.wf-sla-submit-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.wf-sla-submit-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
-!-- Coordinator specifics ---l
-!-- Phase 1 resolution during job submission ---
-!-- EL Evalautor setup to resolve mainly frequency tags ---
    -property-
        -name-oozie.service.ELService.constants.coord-job-submit-freq-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-job-submit-freq-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-job-submit-freq-/name-
        -value-
            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,
            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,
            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,
            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,
            coord:endOfDays=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfDays,
            coord:endOfMonths=org.apache.oozie.coord.CoordELFunctions#ph1_coord_endOfMonths,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-job-submit-freq-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.constants.coord-job-wait-timeout-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-job-wait-timeout-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions without having to include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-job-wait-timeout-/name-
        -value-
            coord:days=org.apache.oozie.coord.CoordELFunctions#ph1_coord_days,
            coord:months=org.apache.oozie.coord.CoordELFunctions#ph1_coord_months,
            coord:hours=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hours,
            coord:minutes=org.apache.oozie.coord.CoordELFunctions#ph1_coord_minutes,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-job-wait-timeout-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions without having to include all the built in ones.
        -/description-
    -/property-
-!-- EL Evalautor setup to resolve mainly all constants/variables - no EL functions is resolved ---
    -property-
        -name-oozie.service.ELService.constants.coord-job-submit-nofuncs-/name-
        -value-
            MINUTE=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTE,
            HOUR=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOUR,
            DAY=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAY,
            MONTH=org.apache.oozie.coord.CoordELConstants#SUBMIT_MONTH,
            YEAR=org.apache.oozie.coord.CoordELConstants#SUBMIT_YEAR
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-job-submit-nofuncs-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-job-submit-nofuncs-/name-
        -value-
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-job-submit-nofuncs-/name-
        -value- -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
-!-- EL Evalautor setup to **check** whether instances/start-instance/end-instances are valid
 no EL functions will be resolved ---
    -property-
        -name-oozie.service.ELService.constants.coord-job-submit-instances-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-job-submit-instances-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-job-submit-instances-/name-
        -value-
            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph1_coord_hoursInDay_echo,
            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph1_coord_daysInMonth_echo,
            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_tzOffset_echo,
            coord:current=org.apache.oozie.coord.CoordELFunctions#ph1_coord_current_echo,
            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_currentRange_echo,
            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_offset_echo,
            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latest_echo,
            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_latestRange_echo,
            coord:future=org.apache.oozie.coord.CoordELFunctions#ph1_coord_future_echo,
            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph1_coord_futureRange_echo,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph1_coord_absolute_echo,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-job-submit-instances-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
-!-- EL Evalautor setup to **check** whether dataIn and dataOut are valid
 no EL functions will be resolved ---
    -property-
        -name-oozie.service.ELService.constants.coord-job-submit-data-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-job-submit-data-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-job-submit-data-/name-
        -value-
            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataIn_echo,
            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,
            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,
            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,
            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,
            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,
            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,
            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseIn_echo,
            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,
            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableIn_echo,
            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,
            coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionFilter_echo,
            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMin_echo,
            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitionMax_echo,
            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataInPartitions_echo,
            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,
            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-job-submit-data-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -!-- Resolve SLA information during Coordinator job submission ---
    -property-
        -name-oozie.service.ELService.constants.coord-sla-submit-/name-
        -value-
            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,
            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,
            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
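    Note: the MINUTES/HOURS/DAYS constants above are intended for SLA EL expressions; a minimal sketch
    of such an expression in a coordinator SLA definition:
        ${30 * MINUTES}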
    -property-
        -name-oozie.service.ELService.ext.constants.coord-sla-submit-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-sla-submit-/name-
        -value-
            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dataOut_echo,
            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed,
            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actualTime_echo_wrap,
            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,
            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateTzOffset_echo,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,
            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph1_coord_actionId_echo,
            coord:name=org.apache.oozie.coord.CoordELFunctions#ph1_coord_name_echo,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_databaseOut_echo,
            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph1_coord_tableOut_echo,
            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitions_echo,
            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph1_coord_dataOutPartitionValue_echo,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-sla-submit-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -!-- Action creation for coordinator ---
    -property-
        -name-oozie.service.ELService.constants.coord-action-create-/name-
        -value-
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-action-create-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-action-create-/name-
        -value-
            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,
            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,
            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,
            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current,
            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange,
            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset,
            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,
            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,
            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,
            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,
            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,
            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,
            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-action-create-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -!-- Action creation for coordinator used to only evaluate instance numbers like ${current(daysInMonth())}; 'current' will be echoed ---
    -property-
        -name-oozie.service.ELService.constants.coord-action-create-inst-/name-
        -value-
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-action-create-inst-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-action-create-inst-/name-
        -value-
            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph2_coord_hoursInDay,
            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph2_coord_daysInMonth,
            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_tzOffset,
            coord:current=org.apache.oozie.coord.CoordELFunctions#ph2_coord_current_echo,
            coord:currentRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_currentRange_echo,
            coord:offset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_offset_echo,
            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,
            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latestRange_echo,
            coord:future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,
            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_futureRange_echo,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:absolute=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_echo,
            coord:absoluteRange=org.apache.oozie.coord.CoordELFunctions#ph2_coord_absolute_range,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-action-create-inst-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    
    -!-- Resolve SLA information during Action creation/materialization ---
    -property-
        -name-oozie.service.ELService.constants.coord-sla-create-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-sla-create-/name-
        -value-
            MINUTES=org.apache.oozie.coord.CoordELConstants#SUBMIT_MINUTES,
            HOURS=org.apache.oozie.coord.CoordELConstants#SUBMIT_HOURS,
            DAYS=org.apache.oozie.coord.CoordELConstants#SUBMIT_DAYS
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-sla-create-/name-
        -value-
            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,
            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,
            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actualTime,
            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateOffset,
            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph2_coord_dateTzOffset,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,
            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph2_coord_actionId,
            coord:name=org.apache.oozie.coord.CoordELFunctions#ph2_coord_name,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,
            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,
            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,
            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-sla-create-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -!-- Action start for coordinator ---
    -property-
        -name-oozie.service.ELService.constants.coord-action-start-/name-
        -value-
        -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.constants.coord-action-start-/name-
        -value- -/value-
        -description-
            EL constant declarations, separated by commas, format is [PREFIX:]NAME=CLASS#CONSTANT.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.functions.coord-action-start-/name-
        -value-
            coord:hoursInDay=org.apache.oozie.coord.CoordELFunctions#ph3_coord_hoursInDay,
            coord:daysInMonth=org.apache.oozie.coord.CoordELFunctions#ph3_coord_daysInMonth,
            coord:tzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_tzOffset,
            coord:latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest,
            coord:latestRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latestRange,
            coord:future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future,
            coord:futureRange=org.apache.oozie.coord.CoordELFunctions#ph3_coord_futureRange,
            coord:dataIn=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataIn,
            coord:dataOut=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dataOut,
            coord:nominalTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime,
            coord:actualTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actualTime,
            coord:dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset,
            coord:dateTzOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateTzOffset,
            coord:formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime,
            coord:actionId=org.apache.oozie.coord.CoordELFunctions#ph3_coord_actionId,
            coord:name=org.apache.oozie.coord.CoordELFunctions#ph3_coord_name,
            coord:conf=org.apache.oozie.coord.CoordELFunctions#coord_conf,
            coord:user=org.apache.oozie.coord.CoordELFunctions#coord_user,
            coord:databaseIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseIn,
            coord:databaseOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_databaseOut,
            coord:tableIn=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableIn,
            coord:tableOut=org.apache.oozie.coord.HCatELFunctions#ph3_coord_tableOut,
            coord:dataInPartitionFilter=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionFilter,
            coord:dataInPartitionMin=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMin,
            coord:dataInPartitionMax=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitionMax,
            coord:dataInPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataInPartitions,
            coord:dataOutPartitions=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitions,
            coord:dataOutPartitionValue=org.apache.oozie.coord.HCatELFunctions#ph3_coord_dataOutPartitionValue,
            hadoop:conf=org.apache.oozie.action.hadoop.HadoopELFunctions#hadoop_conf
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.ext.functions.coord-action-start-/name-
        -value-
        -/value-
        -description-
            EL functions declarations, separated by commas, format is [PREFIX:]NAME=CLASS#METHOD.
            This property is a convenience property to add extensions to the built in executors without having to
            include all the built in ones.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ELService.latest-el.use-current-time-/name-
        -value-false-/value-
        -description-
            Determines whether the latest dependency is resolved using the current time or the action creation time.
            This is for backward compatibility with older Oozie behavior.
        -/description-
    -/property-
    -!-- UUIDService ---
    -property-
        -name-oozie.service.UUIDService.generator-/name-
        -value-counter-/value-
        -description-
            random : generated UUIDs will be random strings.
            counter: generated UUIDs will be a counter postfixed with the system startup time.
        -/description-
    -/property-
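    Note: with the 'counter' generator, job ids take the illustrative form
    {sequence}-{server-startup-timestamp}-oozie-{user}-{W|C|B}, e.g. (a made-up id):
        0000004-150727153639012-oozie-oozi-W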
    -!-- DBLiteWorkflowStoreService ---
    -property-
        -name-oozie.service.DBLiteWorkflowStoreService.status.metrics.collection.interval-/name-
        -value-5-/value-
        -description- Workflow Status metrics collection interval in minutes.-/description-
    -/property-
    -property-
        -name-oozie.service.DBLiteWorkflowStoreService.status.metrics.window-/name-
        -value-3600-/value-
        -description-
            Workflow Status metrics collection window in seconds. Workflow status will be instrumented for the window.
        -/description-
    -/property-
    -!-- DB Schema Info, used by DBLiteWorkflowStoreService ---
    -property-
        -name-oozie.db.schema.name-/name-
        -value-oozie-/value-
        -description-
            Oozie database name.
        -/description-
    -/property-
   -!-- StoreService ---
    -property-
        -name-oozie.service.JPAService.create.db.schema-/name-
        -value-false-/value-
        -description-
            Creates the Oozie DB.
            If set to true, the DB schema is created if it does not exist; if the schema already exists, this is a NOP.
            If set to false, the DB schema is not created, and startup fails if the schema does not exist.
        -/description-
    -/property-
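    Note: in practice the schema is usually created out-of-band with the ooziedb.sh tool rather than
    by enabling this flag, e.g.:
        ooziedb.sh create -sqlfile oozie.sql -run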
    -property-
        -name-oozie.service.JPAService.validate.db.connection-/name-
        -value-true-/value-
        -description-
            Validates DB connections from the DB connection pool.
            If the 'oozie.service.JPAService.create.db.schema' property is set to true, this property is ignored.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.JPAService.validate.db.connection.eviction.interval-/name-
        -value-300000-/value-
        -description-
            Used when validating DB connections from the DB connection pool.
            When the validate-db-connection 'TestWhileIdle' setting is true, this is the number of milliseconds to sleep
            between runs of the idle object evictor thread.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.JPAService.validate.db.connection.eviction.num-/name-
        -value-10-/value-
        -description-
            Used when validating DB connections from the DB connection pool.
            When the validate-db-connection 'TestWhileIdle' setting is true, this is the number of objects to examine during
            each run of the idle object evictor thread.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.connection.data.source-/name-
        -value-org.apache.commons.dbcp.BasicDataSource-/value-
        -description-
            DataSource to be used for connection pooling.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.connection.properties-/name-
        -value- -/value-
        -description-
            DataSource connection properties.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.jdbc.driver-/name-
        -value-org.apache.derby.jdbc.EmbeddedDriver-/value-
        -description-
            JDBC driver class.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.jdbc.url-/name-
        -value-jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true-/value-
        -description-
            JDBC URL.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.jdbc.username-/name-
        -value-sa-/value-
        -description-
            DB user name.
        -/description-
    -/property-
    -property-
        -name-oozie.service.JPAService.jdbc.password-/name-
        -value- -/value-
        -description-
            DB user password.
            IMPORTANT: if the password is empty, leave a 1-space string; the service trims the value,
                       and if it is empty, Configuration assumes it is NULL.
            IMPORTANT: if the StoreServicePasswordService is active, it will reset this value with the value given in
                       the console.
        -/description-
    -/property-
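    Note: a minimal sketch of pointing the JDBC properties above at an external MySQL database
    instead of the embedded Derby default (host and credentials are illustrative):
        oozie.service.JPAService.jdbc.driver=com.mysql.jdbc.Driver
        oozie.service.JPAService.jdbc.url=jdbc:mysql://db.example.com:3306/oozie
        oozie.service.JPAService.jdbc.username=oozie
        oozie.service.JPAService.jdbc.password=oozie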
    -property-
        -name-oozie.service.JPAService.pool.max.active.conn-/name-
        -value-10-/value-
        -description-
             Max number of connections.
        -/description-
    -/property-
   -!-- SchemaService ---
    -property-
        -name-oozie.service.SchemaService.wf.schemas-/name-
        -value-
            oozie-workflow-0.1.xsd,oozie-workflow-0.2.xsd,oozie-workflow-0.2.5.xsd,oozie-workflow-0.3.xsd,oozie-workflow-0.4.xsd,
            oozie-workflow-0.4.5.xsd,oozie-workflow-0.5.xsd,
            shell-action-0.1.xsd,shell-action-0.2.xsd,shell-action-0.3.xsd,
            email-action-0.1.xsd,email-action-0.2.xsd,
            hive-action-0.2.xsd,hive-action-0.3.xsd,hive-action-0.4.xsd,hive-action-0.5.xsd,
            sqoop-action-0.2.xsd,sqoop-action-0.3.xsd,sqoop-action-0.4.xsd,
            ssh-action-0.1.xsd,ssh-action-0.2.xsd,
            distcp-action-0.1.xsd,distcp-action-0.2.xsd,
            oozie-sla-0.1.xsd,oozie-sla-0.2.xsd,
            hive2-action-0.1.xsd,
            spark-action-0.1.xsd
        -/value-
        -description-
            List of schemas for workflows (separated by commas).
        -/description-
    -/property-
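    Note: a workflow selects one of these schema versions via the xmlns on its root element, e.g.
    (the workflow name is illustrative):
        workflow-app name="example-wf" xmlns="uri:oozie:workflow:0.5"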
    -property-
        -name-oozie.service.SchemaService.wf.ext.schemas-/name-
        -value- -/value-
        -description-
            List of additional schemas for workflows (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.coord.schemas-/name-
        -value-
            oozie-coordinator-0.1.xsd,oozie-coordinator-0.2.xsd,oozie-coordinator-0.3.xsd,oozie-coordinator-0.4.xsd,
            oozie-sla-0.1.xsd,oozie-sla-0.2.xsd
        -/value-
        -description-
            List of schemas for coordinators (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.coord.ext.schemas-/name-
        -value- -/value-
        -description-
            List of additional schemas for coordinators (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.bundle.schemas-/name-
        -value-
            oozie-bundle-0.1.xsd,oozie-bundle-0.2.xsd
        -/value-
        -description-
            List of schemas for bundles (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.bundle.ext.schemas-/name-
        -value- -/value-
        -description-
            List of additional schemas for bundles (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.sla.schemas-/name-
        -value-
            gms-oozie-sla-0.1.xsd,oozie-sla-0.2.xsd
        -/value-
        -description-
            List of schemas for semantic validation for GMS SLA (separated by commas).
        -/description-
    -/property-
    -property-
        -name-oozie.service.SchemaService.sla.ext.schemas-/name-
        -value- -/value-
        -description-
            List of additional schemas for semantic validation for GMS SLA (separated by commas).
        -/description-
    -/property-
    -!-- CallbackService ---
    -property-
        -name-oozie.service.CallbackService.base.url-/name-
        -value-${oozie.base.url}/callback-/value-
        -description-
             Base callback URL used by ActionExecutors.
        -/description-
    -/property-
    -property-
        -name-oozie.service.CallbackService.early.requeue.max.retries-/name-
        -value-5-/value-
        -description-
            If Oozie receives a callback too early (while the action is in PREP state), it will requeue the command this many times
            to give the action time to transition to RUNNING.
        -/description-
    -/property-
    -!-- CallbackServlet ---
    -property-
        -name-oozie.servlet.CallbackServlet.max.data.len-/name-
        -value-2048-/value-
        -description-
            Max size in characters for the action completion data output.
        -/description-
    -/property-
    -!-- External stats---
    -property-
        -name-oozie.external.stats.max.size-/name-
        -value--1-/value-
        -description-
            Max size in bytes for action stats. -1 means infinite value.
        -/description-
    -/property-
    -!-- JobCommand ---
    -property-
        -name-oozie.JobCommand.job.console.url-/name-
        -value-${oozie.base.url}?job=-/value-
        -description-
             Base console URL for a workflow job.
        -/description-
    -/property-
    -!-- ActionService ---
    -property-
        -name-oozie.service.ActionService.executor.classes-/name-
        -value-
            org.apache.oozie.action.decision.DecisionActionExecutor,
            org.apache.oozie.action.hadoop.JavaActionExecutor,
            org.apache.oozie.action.hadoop.FsActionExecutor,
            org.apache.oozie.action.hadoop.MapReduceActionExecutor,
            org.apache.oozie.action.hadoop.PigActionExecutor,
            org.apache.oozie.action.hadoop.HiveActionExecutor,
            org.apache.oozie.action.hadoop.ShellActionExecutor,
            org.apache.oozie.action.hadoop.SqoopActionExecutor,
            org.apache.oozie.action.hadoop.DistcpActionExecutor,
            org.apache.oozie.action.hadoop.Hive2ActionExecutor,
            org.apache.oozie.action.ssh.SshActionExecutor,
            org.apache.oozie.action.oozie.SubWorkflowActionExecutor,
            org.apache.oozie.action.email.EmailActionExecutor,
            org.apache.oozie.action.hadoop.SparkActionExecutor
        -/value-
        -description-
            List of ActionExecutors classes (separated by commas).
            Only action types with associated executors can be used in workflows.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ActionService.executor.ext.classes-/name-
        -value- -/value-
        -description-
            List of ActionExecutors extension classes (separated by commas). Only action types with associated
            executors can be used in workflows. This property is a convenience property to add extensions to the built
            in executors without having to include all the built in ones.
        -/description-
    -/property-
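    Note: a custom action type would be registered through the ext property above; a sketch with a
    hypothetical executor class:
        oozie.service.ActionService.executor.ext.classes=com.example.oozie.MyActionExecutor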
    -!-- ActionCheckerService ---
    -property-
        -name-oozie.service.ActionCheckerService.action.check.interval-/name-
        -value-60-/value-
        -description-
            The frequency at which the ActionCheckService will run.
        -/description-
    -/property-
     -property-
        -name-oozie.service.ActionCheckerService.action.check.delay-/name-
        -value-600-/value-
        -description-
            The time, in seconds, between an ActionCheck for the same action.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ActionCheckerService.callable.batch.size-/name-
        -value-10-/value-
        -description-
            This value determines the number of actions which will be batched together
            to be executed by a single thread.
        -/description-
    -/property-
    -!-- StatusTransitService ---
    -property-
        -name-oozie.service.StatusTransitService.statusTransit.interval-/name-
        -value-60-/value-
        -description-
            The frequency in seconds at which the StatusTransitService will run.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.StatusTransitService.backward.support.for.coord.status-/name-
        -value-false-/value-
        -description-
            Set to true if coordinator jobs submitted with 'uri:oozie:coordinator:0.1' should keep the Oozie 2.x status transitions.
            If set to true:
            1. SUCCEEDED state in a coordinator job means materialization is done.
            2. No DONEWITHERROR state in a coordinator job.
            3. No PAUSED or PREPPAUSED state in a coordinator job.
            4. PREPSUSPENDED becomes SUSPENDED in a coordinator job.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.StatusTransitService.backward.support.for.states.without.error-/name-
        -value-true-/value-
        -description-
            Set to true to keep the Oozie 3.2 status transitions.
            Change it to false for Oozie 4.x releases.
            If set to true, there are no states like RUNNINGWITHERROR, SUSPENDEDWITHERROR and PAUSEDWITHERROR
            for coordinators and bundles.
        -/description-
    -/property-
    -!-- PauseTransitService ---
    -property-
        -name-oozie.service.PauseTransitService.PauseTransit.interval-/name-
        -value-60-/value-
        -description-
            The frequency in seconds at which the PauseTransitService will run.
        -/description-
    -/property-
    -!-- LauncherMapper ---
    -property-
        -name-oozie.action.max.output.data-/name-
        -value-2048-/value-
        -description-
            Max size in characters for output data.
        -/description-
    -/property-
    -property-
        -name-oozie.action.fs.glob.max-/name-
        -value-1000-/value-
        -description-
            Maximum number of globbed files.
        -/description-
    -/property-
    -!-- JavaActionExecutor ---
    -!-- This is common to the subclasses of action executors for Java (e.g. map-reduce, pig, hive, java, etc) ---
    -property-
        -name-oozie.action.launcher.mapreduce.job.ubertask.enable-/name-
        -value-false-/value-
        -description-
            Enables Uber Mode for the launcher job in YARN/Hadoop 2 (no effect in Hadoop 1) for all action types by default.
            This can be overridden on a per-action-type basis by setting
            oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site.xml (where #action-type# is the action
            type; for example, "pig").  And that can be overridden on a per-action basis by setting
            oozie.launcher.mapreduce.job.ubertask.enable in an action's configuration section in a workflow.  In summary, the
            priority is this:
            1. action's configuration section in a workflow
            2. oozie.action.#action-type#.launcher.mapreduce.job.ubertask.enable in oozie-site
            3. oozie.action.launcher.mapreduce.job.ubertask.enable in oozie-site
        -/description-
    -/property-
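    Note: the per-action override (priority 1 above) is simply a property set in the action's
    configuration section of the workflow definition, e.g.:
        oozie.launcher.mapreduce.job.ubertask.enable=true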
    -property-
        -name-oozie.action.shell.launcher.mapreduce.job.ubertask.enable-/name-
        -value-false-/value-
        -description-
            The Shell action may have issues with the $PATH environment when using Uber Mode, and so Uber Mode is disabled by
            default for it.  See oozie.action.launcher.mapreduce.job.ubertask.enable
        -/description-
    -/property-
    -property-
        -name-oozie.action.launcher.yarn.timeline-service.enabled-/name-
        -value-false-/value-
        -description-
            Enables/disables getting delegation tokens for ATS for the launcher job in
            YARN/Hadoop 2.6 (no effect in Hadoop 1) for all action types by default if tez-site.xml is present in
            distributed cache.
            This can be overridden on a per-action basis by setting
            oozie.launcher.yarn.timeline-service.enabled in an action's configuration section in a workflow.
        -/description-
    -/property-
    -!-- HadoopActionExecutor ---
    -!-- This is common to the subclasses of action executors for map-reduce and pig ---
    -property-
        -name-oozie.action.retries.max-/name-
        -value-3-/value-
        -description-
           The number of retries for executing an action in case of failure
        -/description-
    -/property-
    -property-
        -name-oozie.action.retry.interval-/name-
        -value-10-/value-
        -description-
            The interval between retries of an action in case of failure
        -/description-
    -/property-
    -property-
        -name-oozie.action.retry.policy-/name-
        -value-periodic-/value-
        -description-
            Retry policy of an action in case of failure. Possible values are periodic/exponential
        -/description-
    -/property-
    -!-- SshActionExecutor ---
    -property-
        -name-oozie.action.ssh.delete.remote.tmp.dir-/name-
        -value-true-/value-
        -description-
            If set to true, it will delete temporary directory at the end of execution of ssh action.
        -/description-
    -/property-
    -property-
        -name-oozie.action.ssh.http.command-/name-
        -value-curl-/value-
        -description-
            Command used for callbacks to Oozie, normally 'curl' or 'wget'.
            The command must be available in the PATH environment variable of the USER@HOST box shell.
        -/description-
    -/property-
    -property-
        -name-oozie.action.ssh.http.command.post.options-/name-
        -value---data-binary @#stdout --request POST --header "content-type:text/plain"-/value-
        -description-
            The callback command POST options.
            Used when the output of the ssh action is captured.
        -/description-
    -/property-
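    Note: with the defaults above, the callback issued from the remote host expands to roughly the
    following (URL and temp path are illustrative; #stdout stands for the file holding the captured output):
        curl --data-binary @/tmp/oozie-ssh-123/stdout --request POST --header "content-type:text/plain" http://oozie-host:11000/oozie/callback?id=...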
    
    -property-
        -name-oozie.action.ssh.allow.user.at.host-/name-
        -value-true-/value-
        -description-
            Specifies whether the user specified by the ssh action is allowed, or is to be replaced
            by the job user.
        -/description-
    -/property-
    -!-- SubworkflowActionExecutor ---
    -property-
        -name-oozie.action.subworkflow.max.depth-/name-
        -value-50-/value-
        -description-
            The maximum depth for subworkflows.  For example, if set to 3, then a workflow can start subwf1, which can start subwf2,
            which can start subwf3; but if subwf3 tries to start subwf4, then the action will fail.  This is helpful in preventing
            errant workflows from starting infinitely recursive subworkflows.
        -/description-
    -/property-
    -!-- HadoopAccessorService ---
    -property-
        -name-oozie.service.HadoopAccessorService.kerberos.enabled-/name-
        -value-false-/value-
        -description-
            Indicates if Oozie is configured to use Kerberos.
        -/description-
    -/property-
    -property-
        -name-local.realm-/name-
        -value-LOCALHOST-/value-
        -description-
            Kerberos Realm used by Oozie and Hadoop. Using 'local.realm' to be aligned with Hadoop configuration
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.keytab.file-/name-
        -value-${user.home}/oozie.keytab-/value-
        -description-
            Location of the Oozie user keytab file.
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.kerberos.principal-/name-
        -value-${user.name}/localhost@${local.realm}-/value-
        -description-
            Kerberos principal for Oozie service.
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.jobTracker.whitelist-/name-
        -value- -/value-
        -description-
            Whitelisted job tracker for Oozie service.
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.nameNode.whitelist-/name-
        -value- -/value-
        -description-
            Whitelisted NameNode for Oozie service.
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.hadoop.configurations-/name-
        -value-*=hadoop-conf-/value-
        -description-
            Comma separated AUTHORITY=HADOOP_CONF_DIR, where AUTHORITY is the HOST:PORT of
            the Hadoop service (JobTracker, YARN, HDFS). The wildcard '*' configuration is
            used when there is no exact match for an authority. The HADOOP_CONF_DIR contains
            the relevant Hadoop *-site.xml files. If the path is relative, it is looked up within
            the Oozie configuration directory; the path can also be absolute (i.e. pointing
            to Hadoop client conf/ directories in the local filesystem).
        -/description-
    -/property-
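    Note: a sketch of mixing the wildcard with an explicit authority mapping (host, port, and path
    are illustrative):
        *=hadoop-conf,sandbox.hortonworks.com:8050=/etc/hadoop/conf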
    -property-
        -name-oozie.service.HadoopAccessorService.action.configurations-/name-
        -value-*=action-conf-/value-
        -description-
            Comma separated AUTHORITY=ACTION_CONF_DIR, where AUTHORITY is the HOST:PORT of
            the Hadoop MapReduce service (JobTracker, YARN). The wildcard '*' configuration is
            used when there is no exact match for an authority. The ACTION_CONF_DIR may contain
            ACTION.xml files where ACTION is the action type ('java', 'map-reduce', 'pig',
            'hive', 'sqoop', etc.). If the ACTION.xml file exists, its properties will be used
            as default properties for the action. If the path is relative, it is looked up within
            the Oozie configuration directory; the path can also be absolute (i.e. pointing
            to Hadoop client conf/ directories in the local filesystem).
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.action.configurations.load.default.resources-/name-
        -value-true-/value-
        -description-
            true means that the Hadoop default and site xml files (core-default, core-site,
            hdfs-default, hdfs-site, mapred-default, mapred-site, yarn-default, yarn-site)
            are parsed into actionConf on the Oozie server. false means that the site xml files are
            not loaded on the server and are instead loaded on the launcher node.
            This applies only to pig and hive actions, which handle loading those files
            automatically from the classpath on the launcher task. It defaults to true.
        -/description-
    -/property-
    -!-- Credentials ---
    -property-
        -name-oozie.credentials.credentialclasses-/name-
        -value- -/value-
        -description-
            A list of credential class mapping for CredentialsProvider
        -/description-
    -/property-
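    Note: a typical value on a secure cluster maps credential names to provider classes, e.g.
    (assumed from common Oozie usage):
        hcat=org.apache.oozie.action.hadoop.HCatCredentials,hive2=org.apache.oozie.action.hadoop.Hive2Credentials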
    -property-
        -name-oozie.actions.main.classnames-/name-
        -value-distcp=org.apache.hadoop.tools.DistCp-/value-
        -description-
            A list of class name mapping for Action classes
        -/description-
    -/property-
    -property-
        -name-oozie.service.WorkflowAppService.system.libpath-/name-
        -value-/user/${user.name}/share/lib-/value-
        -description-
            System library path to use for workflow applications.
            This path is added to workflow application if their job properties sets
            the property 'oozie.use.system.libpath' to true.
        -/description-
    -/property-
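    Note: a job opts into this path from its job.properties:
        oozie.use.system.libpath=true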
    -property-
        -name-oozie.command.default.lock.timeout-/name-
        -value-5000-/value-
        -description-
            Default timeout (in milliseconds) for commands for acquiring an exclusive lock on an entity.
        -/description-
    -/property-
    -property-
        -name-oozie.command.default.requeue.delay-/name-
        -value-10000-/value-
        -description-
            Default time (in milliseconds) for commands that are requeued for delayed execution.
        -/description-
    -/property-
   -!-- LiteWorkflowStoreService, Workflow Action Automatic Retry ---
    -property-
        -name-oozie.service.LiteWorkflowStoreService.user.retry.max-/name-
        -value-3-/value-
        -description-
            The default maximum automatic retry count for a workflow action is 3.
        -/description-
    -/property-
    -property-
        -name-oozie.service.LiteWorkflowStoreService.user.retry.inteval-/name-
        -value-10-/value-
        -description-
            The automatic retry interval for a workflow action, in minutes; the default value is 10 minutes.
        -/description-
    -/property-
    -property-
        -name-oozie.service.LiteWorkflowStoreService.user.retry.error.code-/name-
        -value-JA008,JA009,JA017,JA018,JA019,FS009,FS008,FS014-/value-
        -description-
            Automatic retry of a workflow action is attempted for these specified error codes:
            FS009 and FS008 are 'file exists' errors when using chmod in an fs action.
            FS014 is a permission error in an fs action.
            JA018 is an 'output directory exists' error in a workflow map-reduce action.
            JA019 is an error while executing a distcp action.
            JA017 is a 'job does not exist' error in an action executor.
            JA008 is a FileNotFoundException in an action executor.
            JA009 is an IOException in an action executor.
            ALL matches any kind of error in an action executor.
        -/description-
    -/property-
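    Note: the retry count and interval can also be set per action via attributes on the action
    element in the workflow definition, e.g. (action name illustrative):
        action name="mr-node" retry-max="3" retry-interval="10"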
    
    -property-
        -name-oozie.service.LiteWorkflowStoreService.user.retry.error.code.ext-/name-
        -value- -/value-
        -description-
            Automatic retry of a workflow action is also attempted for these specified extra error codes:
            ALL matches any kind of error in an action executor.
        -/description-
    -/property-
    
    -property-
        -name-oozie.service.LiteWorkflowStoreService.node.def.version-/name-
        -value-_oozie_inst_v_1-/value-
        -description-
            NodeDef default version, _oozie_inst_v_0 or _oozie_inst_v_1
        -/description-
    -/property-
    -!-- Oozie Authentication ---
    -property-
        -name-oozie.authentication.type-/name-
        -value-simple-/value-
        -description-
            Defines authentication used for Oozie HTTP endpoint.
            Supported values are: simple | kerberos | #AUTHENTICATION_HANDLER_CLASSNAME#
        -/description-
    -/property-
    -property-
        -name-oozie.server.authentication.type-/name-
        -value-${oozie.authentication.type}-/value-
        -description-
            Defines authentication used for Oozie server communicating to other Oozie server over HTTP(s).
            Supported values are: simple | kerberos | #AUTHENTICATOR_CLASSNAME#
        -/description-
    -/property-
    -property-
        -name-oozie.authentication.token.validity-/name-
        -value-36000-/value-
        -description-
            Indicates how long (in seconds) an authentication token is valid before it has
            to be renewed.
        -/description-
    -/property-
    -property-
      -name-oozie.authentication.cookie.domain-/name-
      -value- -/value-
      -description-
        The domain to use for the HTTP cookie that stores the authentication token.
        In order for authentication to work correctly across multiple hosts
        the domain must be correctly set.
      -/description-
    -/property-
    -property-
        -name-oozie.authentication.simple.anonymous.allowed-/name-
        -value-true-/value-
        -description-
            Indicates if anonymous requests are allowed when using 'simple' authentication.
        -/description-
    -/property-
    -property-
        -name-oozie.authentication.kerberos.principal-/name-
        -value-HTTP/localhost@${local.realm}-/value-
        -description-
            Indicates the Kerberos principal to be used for HTTP endpoint.
            The principal MUST start with 'HTTP/' as per Kerberos HTTP SPNEGO specification.
        -/description-
    -/property-
    -property-
        -name-oozie.authentication.kerberos.keytab-/name-
        -value-${oozie.service.HadoopAccessorService.keytab.file}-/value-
        -description-
            Location of the keytab file with the credentials for the principal.
            Referring to the same keytab file Oozie uses for its Kerberos credentials for Hadoop.
        -/description-
    -/property-
    -property-
        -name-oozie.authentication.kerberos.name.rules-/name-
        -value-DEFAULT-/value-
        -description-
            The Kerberos name rules used to resolve Kerberos principal names; refer to Hadoop's
            KerberosName for more details.
        -/description-
    -/property-
    -!-- Coordinator "NONE" execution order default time tolerance ---
    -property-
        -name-oozie.coord.execution.none.tolerance-/name-
        -value-1-/value-
        -description-
            Default time tolerance in minutes after action nominal time for an action to be skipped
            when execution order is "NONE"
        -/description-
    -/property-
    -!-- Coordinator Actions default length ---
    -property-
        -name-oozie.coord.actions.default.length-/name-
        -value-1000-/value-
        -description-
            Default number of coordinator actions to be retrieved by the info command
        -/description-
    -/property-
    -!-- ForkJoin validation ---
    -property-
        -name-oozie.validate.ForkJoin-/name-
        -value-true-/value-
        -description-
            If true, fork and join should be validated at wf submission time.
        -/description-
    -/property-
    -property-
        -name-oozie.coord.action.get.all.attributes-/name-
        -value-false-/value-
        -description-
            Setting this to true is not recommended, as coord job/action info will bring all columns of the action into memory.
            Set it to true only if backward compatibility for action/job info is required.
        -/description-
    -/property-
    -property-
        -name-oozie.service.HadoopAccessorService.supported.filesystems-/name-
        -value-hdfs,hftp,webhdfs-/value-
        -description-
            Enlist the different filesystems supported for federation. If the wildcard "*" is specified,
            then ALL file schemes will be allowed.
        -/description-
    -/property-
    -property-
        -name-oozie.service.URIHandlerService.uri.handlers-/name-
        -value-org.apache.oozie.dependency.FSURIHandler-/value-
        -description-
                Enlist the different uri handlers supported for data availability checks.
        -/description-
    -/property-
    -!-- Oozie HTTP Notifications ---
    -property-
        -name-oozie.notification.url.connection.timeout-/name-
        -value-10000-/value-
        -description-
            Defines the timeout, in milliseconds, for Oozie HTTP notification callbacks. Oozie does
            HTTP notifications for workflow jobs which set the 'oozie.wf.action.notification.url',
            'oozie.wf.workflow.notification.url' and/or 'oozie.coord.action.notification.url'
            properties in their job.properties. Refer to section '5 Oozie Notifications' in the
            Workflow specification for details.
        -/description-
    -/property-
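    Note: a job enables these callbacks from its job.properties; $jobId and $status are substituted
    by Oozie at notification time (the URL is illustrative):
        oozie.wf.workflow.notification.url=http://monitor.example.com/oozie-callback?id=$jobId&status=$status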
    -!-- Enable Distributed Cache workaround for Hadoop 2.0.2-alpha (MAPREDUCE-4820) ---
    -property-
        -name-oozie.hadoop-2.0.2-alpha.workaround.for.distributed.cache-/name-
        -value-false-/value-
        -description-
            Due to a bug in Hadoop 2.0.2-alpha, MAPREDUCE-4820, launcher jobs fail to set
            the distributed cache for the action job because the local JARs are implicitly
            included triggering a duplicate check.
            This flag removes the distributed cache files for the action as they'll be
            included from the local JARs of the JobClient (MRApps) submitting the action
            job from the launcher.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.filter.app.types-/name-
        -value-workflow_job, coordinator_action-/value-
        -description-
            The app-types among workflow/coordinator/bundle job/action for which
            the events system is enabled.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.event.queue-/name-
        -value-org.apache.oozie.event.MemoryEventQueue-/value-
        -description-
            The implementation for EventQueue in use by the EventHandlerService.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.event.listeners-/name-
        -value-org.apache.oozie.jms.JMSJobEventListener-/value-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.queue.size-/name-
        -value-10000-/value-
        -description-
            Maximum number of events to be contained in the event queue.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.worker.interval-/name-
        -value-30-/value-
        -description-
            The default interval (seconds) at which the worker threads will be scheduled to run
            and process events.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.batch.size-/name-
        -value-10-/value-
        -description-
            The batch size for batched draining per thread from the event queue.
        -/description-
    -/property-
    -property-
        -name-oozie.service.EventHandlerService.worker.threads-/name-
        -value-3-/value-
        -description-
            Number of worker threads to be scheduled to run and process events.
        -/description-
    -/property-
    -property-
        -name-oozie.sla.service.SLAService.capacity-/name-
        -value-5000-/value-
        -description-
             Maximum number of sla records to be contained in the memory structure.
        -/description-
    -/property-
    -property-
        -name-oozie.sla.service.SLAService.alert.events-/name-
        -value-END_MISS-/value-
        -description-
             Default types of SLA events for being alerted of.
        -/description-
    -/property-
    -property-
        -name-oozie.sla.service.SLAService.calculator.impl-/name-
        -value-org.apache.oozie.sla.SLACalculatorMemory-/value-
        -description-
             The implementation for SLACalculator in use by the SLAService.
        -/description-
    -/property-
    -property-
        -name-oozie.sla.service.SLAService.job.event.latency-/name-
        -value-90000-/value-
        -description-
             Time in milliseconds to account of latency of getting the job status event
             to compare against and decide sla miss/met
        -/description-
    -/property-
    -property-
        -name-oozie.sla.service.SLAService.check.interval-/name-
        -value-30-/value-
        -description-
             Time interval, in seconds, at which SLA Worker will be scheduled to run
        -/description-
    -/property-
    -property-
        -name-oozie.sla.disable.alerts.older.than-/name-
        -value-48-/value-
        -description-
             Time threshold, in HOURS, for disabling SLA alerting for jobs whose
             nominal time is older than this.
        -/description-
    -/property-
    -!-- ZooKeeper configuration ---
    -property-
        -name-oozie.zookeeper.connection.string-/name-
        -value-localhost:2181-/value-
        -description-
            Comma-separated values of host:port pairs of the ZooKeeper servers.
        -/description-
    -/property-
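    Note: for an actual ZooKeeper ensemble this would list every server, e.g. (hosts illustrative):
        zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181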
    -property-
        -name-oozie.zookeeper.namespace-/name-
        -value-oozie-/value-
        -description-
            The namespace to use.  All of the Oozie Servers that are planning on talking to each other should have the same
            namespace.
        -/description-
    -/property-
    -property-
        -name-oozie.zookeeper.connection.timeout-/name-
        -value-180-/value-
        -description-
            Default ZK connection timeout (in sec). If the connection is lost for more than the timeout, the Oozie server will
            shut itself down if oozie.zookeeper.server.shutdown.ontimeout is true.
        -/description-
    -/property-
    -property-
        -name-oozie.zookeeper.server.shutdown.ontimeout-/name-
        -value-true-/value-
        -description-
            If true, the Oozie server will shut itself down on ZK
            connection timeout.
        -/description-
    -/property-
    -property-
        -name-oozie.http.hostname-/name-
        -value-localhost-/value-
        -description-
            Oozie server host name.
        -/description-
    -/property-
    -property-
        -name-oozie.http.port-/name-
        -value-11000-/value-
        -description-
            Oozie server port.
        -/description-
    -/property-
    -property-
        -name-oozie.instance.id-/name-
        -value-${oozie.http.hostname}-/value-
        -description-
            Each Oozie server should have its own unique instance id. The default is the system property
            ${OOZIE_HTTP_HOSTNAME} (i.e. the hostname).
        -/description-
    -/property-
    -!-- Log HA configuration ---
    -property-
        -name-oozie.service.XLogCopyService.interval-/name-
        -value-300-/value-
        -description-
            The interval at which the XLogCopyService runs. The service copies the contents of oozie.log onto HDFS in batches.
        -/description-
    -/property-
    -property-
        -name-oozie.service.XLogCopyService.hdfs.log.dir-/name-
        -value-/user/${user.name}/oozie-logs-/value-
        -description-
            The default HDFS directory to which oozie.log will be copied.
        -/description-
    -/property-
    -property-
        -name-oozie.service.XLogCopyService.purge.enable-/name-
        -value-false-/value-
        -description-
            if true, a command will run alongside with PurgeService to delete jobs logs in hdfs that are older than
            a configured interval.
            That means, if true,  the bundle, coordinator, or workflow jobs that are going to be purged from the
                database will also see their corresponding logs in hdfs being deleted.
            -/description-
    -/property-
    -!-- Sharelib Configuration ---
    -property-
        -name-oozie.service.ShareLibService.mapping.file-/name-
        -value- -/value-
        -description-
            The sharelib mapping file contains a list of key=value entries,
            where key is the sharelib name for the action and value is a comma-separated list of
            DFS directories or jar files.
            Example:
            oozie.pig_10=hdfs:///share/lib/pig/pig-0.10.1/lib/
            oozie.pig=hdfs:///share/lib/pig/pig-0.11.1/lib/
            oozie.distcp=hdfs:///share/lib/hadoop-2.2.0/share/hadoop/tools/lib/hadoop-distcp-2.2.0.jar
        -/description-
    -/property-
    -property-
        -name-oozie.service.ShareLibService.fail.fast.on.startup-/name-
        -value-false-/value-
        -description-
            Fails server startup if sharelib initialization fails.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ShareLibService.purge.interval-/name-
        -value-1-/value-
        -description-
            How often, in days, Oozie should check for old ShareLibs and LauncherLibs to purge from HDFS.
        -/description-
    -/property-
    -property-
        -name-oozie.service.ShareLibService.temp.sharelib.retention.days-/name-
        -value-7-/value-
        -description-
            ShareLib retention time in days.
        -/description-
    -/property-
    -property-
        -name-oozie.action.ship.launcher.jar-/name-
        -value-false-/value-
        -description-
            Specifies whether the launcher jar is shipped or not.
        -/description-
    -/property-
    -property-
        -name-oozie.action.jobinfo.enable-/name-
        -value-false-/value-
        -description-
        JobInfo will contain information about the bundle, coordinator, workflow and actions. If enabled, the hadoop job will have
        a property (oozie.job.info) whose value is multiple key/value pairs separated by ",". This information can be used for
        analytics, such as how many oozie jobs were submitted in a particular period, or what the total number of failed pig jobs
        is, from the mapreduce job history logs and configuration.
        Users can also add custom workflow properties to jobinfo by adding properties prefixed with "oozie.job.info."
        Eg.
        oozie.job.info="bundle.id=,bundle.name=,coord.name=,coord.nominal.time=,coord.name=,wf.id=,
        wf.name=,action.name=,action.type=,launcher=true"
        -/description-
    -/property-
    -property-
        -name-oozie.service.XLogStreamingService.max.log.scan.duration-/name-
        -value--1-/value-
        -description-
        Max log scan duration in hours. If log scan request end_date - start_date > value,
        then an exception is thrown to reduce the scan duration. -1 indicates no limit.
        -/description-
    -/property-
    -property-
        -name-oozie.service.XLogStreamingService.actionlist.max.log.scan.duration-/name-
        -value--1-/value-
        -description-
        Max log scan duration in hours for a coordinator job when a list of actions is specified.
        If log streaming request end_date - start_date > value, then an exception is thrown to reduce the scan duration.
        -1 indicates no limit.
        This setting is separate from max.log.scan.duration as we want to allow higher durations when actions are specified.
        -/description-
    -/property-
    -!-- JvmPauseMonitorService Configuration ---
    -property-
        -name-oozie.service.JvmPauseMonitorService.warn-threshold.ms-/name-
        -value-10000-/value-
        -description-
            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate
            that the JVM or host machine is overloaded or has other problems.  This thread sleeps for 500ms; if it sleeps for
            significantly longer, then there is likely a problem.  This property specifies the threshold for when Oozie should log
            a WARN level message; there is also a counter named "jvm.pause.warn-threshold".
        -/description-
    -/property-
    -property-
        -name-oozie.service.JvmPauseMonitorService.info-threshold.ms-/name-
        -value-1000-/value-
        -description-
            The JvmPauseMonitorService runs a thread that repeatedly tries to detect when the JVM pauses, which could indicate
            that the JVM or host machine is overloaded or has other problems.  This thread sleeps for 500ms; if it sleeps for
            significantly longer, then there is likely a problem.  This property specifies the threshold for when Oozie should log
            an INFO level message; there is also a counter named "jvm.pause.info-threshold".
        -/description-
    -/property-
    -property-
        -name-oozie.service.ZKLocksService.locks.reaper.threshold-/name-
        -value-300-/value-
        -description-
            The frequency, in seconds, at which the ChildReaper will run.
            The default is 300 seconds (5 minutes).
        -/description-
    -/property-
    -property-
        -name-oozie.service.ZKLocksService.locks.reaper.threads-/name-
        -value-2-/value-
        -description-
            Number of fixed threads used by ChildReaper to
            delete empty locks.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AbandonedCoordCheckerService.check.interval-/name-
        -value-1440-/value-
        -description-
            Interval, in minutes, at which AbandonedCoordCheckerService should run.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AbandonedCoordCheckerService.check.delay-/name-
        -value-60-/value-
        -description-
            Initial delay, in minutes, before AbandonedCoordCheckerService runs.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AbandonedCoordCheckerService.failure.limit-/name-
        -value-25-/value-
        -description-
            Failure limit. A job is considered to be abandoned/faulty if the total number of actions in the
            failed/timedout/suspended states >= "Failure limit" and there are no succeeded actions.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AbandonedCoordCheckerService.kill.jobs-/name-
        -value-false-/value-
        -description-
            If true, AbandonedCoordCheckerService will kill abandoned coords.
        -/description-
    -/property-
    -property-
        -name-oozie.service.AbandonedCoordCheckerService.job.older.than-/name-
        -value-2880-/value-
        -description-
            A job will be considered abandoned/faulty if it is older than this value, in minutes.
        -/description-
    -/property-
    -property-
        -name-oozie.notification.proxy-/name-
        -value--/value-
        -description-
         System level proxy setting for job notifications.
        -/description-
    -/property-
    -property-
        -name-oozie.wf.rerun.disablechild-/name-
        -value-false-/value-
        -description-
            If this option is set, rerun of a workflow is disabled when a parent workflow or coordinator exists;
            the workflow can then only be rerun through its parent.
        -/description-
    -/property-
    -property-
        -name-oozie.use.system.libpath-/name-
        -value-false-/value-
        -description-
            Default value of oozie.use.system.libpath. If the user hasn't specified =oozie.use.system.libpath=
            in job.properties and this value is true, Oozie will include the sharelib jars for the workflow.
        -/description-
    -/property-
    -property-
        -name-oozie.service.PauseTransitService.callable.batch.size-/name-
        -value-10-/value-
        -description-
            This value determines the number of callables that will be batched together
            to be executed by a single thread.
        -/description-
    -/property-
    -!-- XConfiguration ---
    -property-
        -name-oozie.configuration.substitute.depth-/name-
        -value-20-/value-
        -description-
            This value determines the depth of substitution in configurations.
            If set to -1, there is no limit on substitution.
        -/description-
    -/property-
    -property-
        -name-oozie.service.SparkConfigurationService.spark.configurations-/name-
        -value-*=spark-conf-/value-
        -description-
            Comma separated AUTHORITY=SPARK_CONF_DIR, where AUTHORITY is the HOST:PORT of
            the ResourceManager of a YARN cluster. The wildcard '*' configuration is
            used when there is no exact match for an authority. The SPARK_CONF_DIR contains
            the relevant spark-defaults.conf properties file. If the path is relative, it is looked up within
            the Oozie configuration directory; the path can also be absolute.  This is only used
            when the Spark master is set to either "yarn-client" or "yarn-cluster".
        -/description-
    -/property-
    -property-
        -name-oozie.email.attachment.enabled-/name-
        -value-true-/value-
        -description-
            This value determines whether to support email attachment of a file on HDFS.
            Set it to false if there is any security concern.
        -/description-
    -/property-
-/configuration-
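
The defaults above are usually overridden per job rather than edited in this file. A minimal sketch (the job.properties values and HDFS paths below are hypothetical, not taken from this host):

# Write a job.properties that opts in to the system sharelib
cat > job.properties <<'EOF'
nameNode=hdfs://sandbox.hortonworks.com:8020
oozie.wf.application.path=${nameNode}/user/hue/myapp
oozie.use.system.libpath=true
EOF
# Submit and start the workflow against the Oozie server on this host
oozie job -oozie http://sandbox.hortonworks.com:11000/oozie -config job.properties -run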

oozie-env.cmd

set CATALINA_OPTS=%CATALINA_OPTS% -Xmx1024m

oozie-env.sh

if [ -d "/usr/lib/bigtop-tomcat" ]; then
  export OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}
  export CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}
  export CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}
  export OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat
fi
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export JRE_HOME=${JAVA_HOME}
export CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m -XX:MaxPermSize=512m"
export OOZIE_LOG=/var/log/oozie
export CATALINA_PID=/var/run/oozie/oozie.pid
export OOZIE_DATA=/hadoop/oozie/data
export OOZIE_HTTP_PORT=11000
export OOZIE_ADMIN_PORT=11001
export JAVA_LIBRARY_PATH=/usr/hdp/current/hadoop-client/lib/native/Linux-amd64-64
export OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "
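
Given OOZIE_HTTP_PORT exported above, a quick smoke test of the server (a sketch; assumes the oozie client is on the PATH):

# Should report the server's system mode (e.g. NORMAL)
oozie admin -oozie http://sandbox.hortonworks.com:11000/oozie -status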
    

oozie-log4j.properties

log4j.appender.oozie=org.apache.log4j.DailyRollingFileAppender
log4j.appender.oozie.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.oozie.File=${oozie.log.dir}/oozie.log
log4j.appender.oozie.Append=true
log4j.appender.oozie.layout=org.apache.log4j.PatternLayout
log4j.appender.oozie.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - SERVER[${oozie.instance.id}] %m%n
log4j.appender.oozieops=org.apache.log4j.DailyRollingFileAppender
log4j.appender.oozieops.DatePattern='.'yyyy-MM-dd
log4j.appender.oozieops.File=${oozie.log.dir}/oozie-ops.log
log4j.appender.oozieops.Append=true
log4j.appender.oozieops.layout=org.apache.log4j.PatternLayout
log4j.appender.oozieops.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n
log4j.appender.oozieinstrumentation=org.apache.log4j.DailyRollingFileAppender
log4j.appender.oozieinstrumentation.DatePattern='.'yyyy-MM-dd
log4j.appender.oozieinstrumentation.File=${oozie.log.dir}/oozie-instrumentation.log
log4j.appender.oozieinstrumentation.Append=true
log4j.appender.oozieinstrumentation.layout=org.apache.log4j.PatternLayout
log4j.appender.oozieinstrumentation.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n
log4j.appender.oozieaudit=org.apache.log4j.DailyRollingFileAppender
log4j.appender.oozieaudit.DatePattern='.'yyyy-MM-dd
log4j.appender.oozieaudit.File=${oozie.log.dir}/oozie-audit.log
log4j.appender.oozieaudit.Append=true
log4j.appender.oozieaudit.layout=org.apache.log4j.PatternLayout
log4j.appender.oozieaudit.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n
log4j.appender.openjpa=org.apache.log4j.DailyRollingFileAppender
log4j.appender.openjpa.DatePattern='.'yyyy-MM-dd
log4j.appender.openjpa.File=${oozie.log.dir}/oozie-jpa.log
log4j.appender.openjpa.Append=true
log4j.appender.openjpa.layout=org.apache.log4j.PatternLayout
log4j.appender.openjpa.layout.ConversionPattern=%d{ISO8601} %5p %c{1}:%L - %m%n
log4j.logger.openjpa=INFO, openjpa
log4j.logger.oozieops=INFO, oozieops
log4j.logger.oozieinstrumentation=ALL, oozieinstrumentation
log4j.logger.oozieaudit=ALL, oozieaudit
log4j.logger.org.apache.oozie=INFO, oozie
log4j.logger.org.apache.hadoop=WARN, oozie
log4j.logger.org.mortbay=WARN, oozie
log4j.logger.org.hsqldb=WARN, oozie
log4j.logger.org.apache.hadoop.security.authentication.server=INFO, oozie
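
For reference, the oozie appender's ConversionPattern above produces lines of roughly this shape (class name, line number and message are illustrative only):

2015-07-27 15:36:39,123  INFO ActionStartXCommand:59 - SERVER[sandbox.hortonworks.com] Start action ...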
    

oozie-site.xml

-!--Tue Jul 21 16:44:25 2015---
    -configuration-
    
    -property-
      -name-oozie.authentication.kerberos.name.rules-/name-
      -value-
    -/value-
    -/property-
    
    -property-
      -name-oozie.authentication.simple.anonymous.allowed-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-oozie.authentication.type-/name-
      -value-simple-/value-
    -/property-
    
    -property-
      -name-oozie.base.url-/name-
      -value-http://sandbox.hortonworks.com:11000/oozie-/value-
    -/property-
    
    -property-
      -name-oozie.credentials.credentialclasses-/name-
      -value-hcat=org.apache.oozie.action.hadoop.HCatCredentials,hive2=org.apache.oozie.action.hadoop.Hive2Credentials-/value-
    -/property-
    
    -property-
      -name-oozie.db.schema.name-/name-
      -value-oozie-/value-
    -/property-
    
    -property-
      -name-oozie.service.AuthorizationService.authorization.enabled-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-oozie.service.AuthorizationService.security.enabled-/name-
      -value-true-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-action-create-/name-
      -value-
      now=org.apache.oozie.extensions.OozieELExtensions#ph2_now,
      today=org.apache.oozie.extensions.OozieELExtensions#ph2_today,
      yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday,
      currentWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_currentWeek,
      lastWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_lastWeek,
      currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth,
      lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth,
      currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear,
      lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear,
      latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,
      future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,
      formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-action-create-inst-/name-
      -value-
      now=org.apache.oozie.extensions.OozieELExtensions#ph2_now_inst,
      today=org.apache.oozie.extensions.OozieELExtensions#ph2_today_inst,
      yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday_inst,
      currentWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_currentWeek_inst,
      lastWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_lastWeek_inst,
      currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth_inst,
      lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth_inst,
      currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear_inst,
      lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear_inst,
      latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,
      future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo,
      formatTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_formatTime,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-action-start-/name-
      -value-
      now=org.apache.oozie.extensions.OozieELExtensions#ph2_now,
      today=org.apache.oozie.extensions.OozieELExtensions#ph2_today,
      yesterday=org.apache.oozie.extensions.OozieELExtensions#ph2_yesterday,
      currentWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_currentWeek,
      lastWeek=org.apache.oozie.extensions.OozieELExtensions#ph2_lastWeek,
      currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_currentMonth,
      lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph2_lastMonth,
      currentYear=org.apache.oozie.extensions.OozieELExtensions#ph2_currentYear,
      lastYear=org.apache.oozie.extensions.OozieELExtensions#ph2_lastYear,
      latest=org.apache.oozie.coord.CoordELFunctions#ph3_coord_latest,
      future=org.apache.oozie.coord.CoordELFunctions#ph3_coord_future,
      dataIn=org.apache.oozie.extensions.OozieELExtensions#ph3_dataIn,
      instanceTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_nominalTime,
      dateOffset=org.apache.oozie.coord.CoordELFunctions#ph3_coord_dateOffset,
      formatTime=org.apache.oozie.coord.CoordELFunctions#ph3_coord_formatTime,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-job-submit-data-/name-
      -value-
      now=org.apache.oozie.extensions.OozieELExtensions#ph1_now_echo,
      today=org.apache.oozie.extensions.OozieELExtensions#ph1_today_echo,
      yesterday=org.apache.oozie.extensions.OozieELExtensions#ph1_yesterday_echo,
      currentWeek=org.apache.oozie.extensions.OozieELExtensions#ph1_currentWeek_echo,
      lastWeek=org.apache.oozie.extensions.OozieELExtensions#ph1_lastWeek_echo,
      currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_currentMonth_echo,
      lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_lastMonth_echo,
      currentYear=org.apache.oozie.extensions.OozieELExtensions#ph1_currentYear_echo,
      lastYear=org.apache.oozie.extensions.OozieELExtensions#ph1_lastYear_echo,
      dataIn=org.apache.oozie.extensions.OozieELExtensions#ph1_dataIn_echo,
      instanceTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_wrap,
      formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,
      dateOffset=org.apache.oozie.coord.CoordELFunctions#ph1_coord_dateOffset_echo,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-job-submit-instances-/name-
      -value-
      now=org.apache.oozie.extensions.OozieELExtensions#ph1_now_echo,
      today=org.apache.oozie.extensions.OozieELExtensions#ph1_today_echo,
      yesterday=org.apache.oozie.extensions.OozieELExtensions#ph1_yesterday_echo,
      currentWeek=org.apache.oozie.extensions.OozieELExtensions#ph1_currentWeek_echo,
      lastWeek=org.apache.oozie.extensions.OozieELExtensions#ph1_lastWeek_echo,
      currentMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_currentMonth_echo,
      lastMonth=org.apache.oozie.extensions.OozieELExtensions#ph1_lastMonth_echo,
      currentYear=org.apache.oozie.extensions.OozieELExtensions#ph1_currentYear_echo,
      lastYear=org.apache.oozie.extensions.OozieELExtensions#ph1_lastYear_echo,
      formatTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_formatTime_echo,
      latest=org.apache.oozie.coord.CoordELFunctions#ph2_coord_latest_echo,
      future=org.apache.oozie.coord.CoordELFunctions#ph2_coord_future_echo-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-sla-create-/name-
      -value-
      instanceTime=org.apache.oozie.coord.CoordELFunctions#ph2_coord_nominalTime,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.ELService.ext.functions.coord-sla-submit-/name-
      -value-
      instanceTime=org.apache.oozie.coord.CoordELFunctions#ph1_coord_nominalTime_echo_fixed,
      user=org.apache.oozie.coord.CoordELFunctions#coord_user-/value-
    -/property-
    
    -property-
      -name-oozie.service.HadoopAccessorService.hadoop.configurations-/name-
      -value-*=/etc/hadoop/conf-/value-
    -/property-
    
    -property-
      -name-oozie.service.HadoopAccessorService.kerberos.enabled-/name-
      -value-false-/value-
    -/property-
    
    -property-
      -name-oozie.service.HadoopAccessorService.supported.filesystems-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-oozie.service.JPAService.jdbc.driver-/name-
      -value-org.apache.derby.jdbc.EmbeddedDriver-/value-
    -/property-
    
    -property-
      -name-oozie.service.JPAService.jdbc.password-/name-
      -value-oozie-/value-
    -/property-
    
    -property-
      -name-oozie.service.JPAService.jdbc.url-/name-
      -value-jdbc:derby:${oozie.data.dir}/${oozie.db.schema.name}-db;create=true-/value-
    -/property-
    
    -property-
      -name-oozie.service.JPAService.jdbc.username-/name-
      -value-oozie-/value-
    -/property-
    
    -property-
      -name-oozie.service.ProxyUserService.proxyuser.falcon.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-oozie.service.ProxyUserService.proxyuser.falcon.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-oozie.service.ProxyUserService.proxyuser.hue.groups-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-oozie.service.ProxyUserService.proxyuser.hue.hosts-/name-
      -value-*-/value-
    -/property-
    
    -property-
      -name-oozie.service.URIHandlerService.uri.handlers-/name-
      -value-org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler-/value-
    -/property-
    
    -property-
      -name-oozie.services.ext-/name-
      -value-org.apache.oozie.service.JMSAccessorService,org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService-/value-
    -/property-
    
  -/configuration-
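
The oozie.base.url above is also the root of the Oozie web services API; a hedged smoke test against it:

# v1 admin/status returns a small JSON document with the system mode
curl http://sandbox.hortonworks.com:11000/oozie/v1/admin/status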

pig

/etc/pig/conf:
-rw-r--r-- 1 hdfs hadoop  1152 2015-07-21 15:59 log4j.properties
-rwxr-xr-x 1 hdfs root     203 2015-07-21 15:59 pig-env.sh
-rw-r--r-- 1 hdfs hadoop 23761 2015-07-21 15:59 pig.properties

log4j.properties

log4j.logger.org.apache.pig=info, A
log4j.appender.A=org.apache.log4j.ConsoleAppender
log4j.appender.A.layout=org.apache.log4j.PatternLayout
log4j.appender.A.layout.ConversionPattern=%-4r [%t] %-5p %c %x - %m%n
    

pig-env.sh

JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
if [ -d "/usr/lib/tez" ]; then
  PIG_OPTS="$PIG_OPTS -Dmapreduce.framework.name=yarn"
fi
    

pig.properties

pig.location.check.strict=false
hcat.bin=/usr/local/hcat/bin/hcat
pig.tez.auto.parallelism=true
pig.tez.grace.parallelism=true
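
With the Tez auto/grace parallelism settings above in effect, a script can be run on the Tez engine directly (the script path below is hypothetical):

# -x selects the execution engine (mapreduce or tez)
pig -x tez /tmp/myscript.pig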
    

slider

/etc/slider/conf:
-rw-r--r-- 1 root root 2380 2015-07-21 15:59 log4j.properties
-rw-r--r-- 1 root root 3025 2015-07-14 16:23 log4j-server.properties
-rw-r--r-- 1 root root   75 2015-07-21 16:41 slider-client.xml
-rwxr-xr-x 1 root root  530 2015-07-21 15:59 slider-env.sh
-rw-r--r-- 1 root root 2286 2015-07-14 16:23 slider-server.xml

log4j.properties

log4j.rootLogger=INFO,stdout
log4j.threshhold=ALL
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
log4j.appender.subprocess=org.apache.log4j.ConsoleAppender
log4j.appender.subprocess.layout=org.apache.log4j.PatternLayout
log4j.appender.subprocess.layout.ConversionPattern=[%c{1}]: %m%n
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
log4j.logger.org.apache.hadoop.hdfs=WARN
log4j.logger.org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor=WARN
log4j.logger.org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl=WARN
log4j.logger.org.apache.zookeeper=WARN
    

log4j-server.properties

log4j.rootLogger=INFO, amlog
log4j.threshhold=ALL
log4j.appender.amlog=org.apache.log4j.RollingFileAppender
log4j.appender.amlog.layout=org.apache.log4j.PatternLayout
log4j.appender.amlog.File=${LOG_DIR}/slider.log
log4j.appender.amlog.MaxFileSize=1MB
log4j.appender.amlog.MaxBackupIndex=10
log4j.appender.amlog.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.Target=System.err
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} - %m%n
log4j.appender.subprocess=org.apache.log4j.ConsoleAppender
log4j.appender.subprocess.layout=org.apache.log4j.PatternLayout
log4j.appender.subprocess.layout.ConversionPattern=[%c{1}]: %m%n
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
log4j.logger.org.apache.hadoop.hdfs=WARN
log4j.logger.org.apache.hadoop.hdfs.shortcircuit=ERROR
log4j.logger.org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor=WARN
log4j.logger.org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl=WARN
log4j.logger.org.apache.zookeeper=WARN
log4j.logger.org.apache.curator.framework.state=ERROR
log4j.logger.org.apache.curator.framework.imps=WARN

slider-client.xml

-!--Tue Jul 21 16:41:27 2015---
    -configuration-
    
  -/configuration-

slider-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export HADOOP_CONF_DIR=/usr/hdp/current/hadoop-client/conf
    

slider-server.xml

-?xml version="1.0"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at
       http://www.apache.org/licenses/LICENSE-2.0
   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
---
-!--
  This is an optional configuration file.
  Properties set here are picked up in the slider Application Master, 
  supplementing configuration options in core-site.xml and yarn-site.xml.
  
  These options are NOT read in the client
---
-configuration-
  -!--
    -property-
      -name-slider.metrics.ganglia.enabled-/name-
      -value-true-/value-
      -description-Boolean to enable Ganglia metrics reporting-/description-
    -/property-
    -property-
      -name-slider.metrics.ganglia.host-/name-
      -value-localhost-/value-
      -description-Ganglia hostname-/description-
    -/property-
    
    -property-
      -name-slider.metrics.ganglia.port-/name-
      -value-8649-/value-
      -description-Ganglia port-/description-
    -/property-
    
    -property-
      -name-slider.metrics.ganglia.version-31-/name-
      -value-true-/value-
      -description-protocol version true=v3.1, false=v3.0-/description-
    -/property-
  ---
  -!--
  Options to enable metrics to slf4j
  
  -property-
    -name-slider.metrics.logging.enabled-/name-
    -value-true-/value-
    -description-Boolean to enable SLF4J metrics reporting at
    scheduled intervals-/description-
  -/property-
  -property-
    -name-slider.metrics.logging.log.name-/name-
    -value-org.apache.slider.metrics.log-/value-
    -description-name of log-/description-
  -/property-
  
  ---
-/configuration-

spark

/etc/spark/conf:
-rw-r--r-- 1 root  root   303 2015-07-14 15:33 fairscheduler.xml.template
-rw-r--r-- 1 spark spark  209 2015-07-21 16:03 hive-site.xml
-rw-r--r-- 1 spark spark   28 2015-07-21 16:00 java-opts
-rw-r--r-- 1 spark spark  626 2015-07-21 16:00 log4j.properties
-rw-r--r-- 1 root  root   620 2015-07-14 15:33 log4j.properties.template
-rw-r--r-- 1 spark spark 4962 2015-07-21 16:00 metrics.properties
-rw-r--r-- 1 root  root  5567 2015-07-14 15:33 metrics.properties.template
-rw-r--r-- 1 root  root    80 2015-07-14 15:33 slaves.template
-rw-r--r-- 1 root  root   844 2015-07-21 16:03 spark-defaults.conf
-rw-r--r-- 1 root  root   507 2015-07-14 15:33 spark-defaults.conf.template
-rwxr-xr-x 1 spark spark 1828 2015-07-21 16:03 spark-env.sh
-rwxr-xr-x 1 root  root  3217 2015-07-14 15:33 spark-env.sh.template

fairscheduler.xml.template

-?xml version="1.0"?-
-allocations-
  -pool name="production"-
    -schedulingMode-FAIR-/schedulingMode-
    -weight-1-/weight-
    -minShare-2-/minShare-
  -/pool-
  -pool name="test"-
    -schedulingMode-FIFO-/schedulingMode-
    -weight-2-/weight-
    -minShare-3-/minShare-
  -/pool-
-/allocations-

hive-site.xml

-!--Tue Jul 21 16:03:10 2015---
    -configuration-
    
    -property-
      -name-hive.metastore.uris-/name-
      -value-thrift://sandbox.hortonworks.com:9083-/value-
    -/property-
    
  -/configuration-

java-opts

  -Dhdp.version=2.3.0.0-2557

log4j.properties

log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO
    

log4j.properties.template

log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.org.eclipse.jetty=WARN
log4j.logger.org.eclipse.jetty.util.component.AbstractLifeCycle=ERROR
log4j.logger.org.apache.spark.repl.SparkIMain$exprTyper=INFO
log4j.logger.org.apache.spark.repl.SparkILoop$SparkILoopInterpreter=INFO

metrics.properties

    

metrics.properties.template


slaves.template

localhost

spark-defaults.conf

    
spark.driver.extraJavaOptions -Dhdp.version=2.3.0.0-2557
spark.history.kerberos.keytab none
spark.history.kerberos.principal none
spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port 18080
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.0.0-2557
spark.yarn.applicationMaster.waitTries 10
spark.yarn.containerLauncherMaxThreads 25
spark.yarn.driver.memoryOverhead 384
spark.yarn.executor.memoryOverhead 384
spark.yarn.historyServer.address sandbox.hortonworks.com:18080
spark.yarn.max.executor.failures 3
spark.yarn.preserve.staging.files false
spark.yarn.queue default
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
spark.yarn.submit.file.replication 3
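
These defaults (the -Dhdp.version java options, YARN history server, queue, etc.) are picked up automatically by spark-submit. A minimal sketch, assuming the stock examples jar ships at this path on HDP:

# Run SparkPi on YARN; spark-defaults.conf above supplies -Dhdp.version and the history settings
spark-submit --master yarn-client \
  --class org.apache.spark.examples.SparkPi \
  /usr/hdp/current/spark-client/lib/spark-examples-*.jar 10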
    

spark-defaults.conf.template


spark-env.sh

export SPARK_CONF_DIR=${SPARK_HOME:-/usr/hdp/current/spark-historyserver}/conf
export SPARK_LOG_DIR=/var/log/spark
export SPARK_PID_DIR=/var/run/spark
SPARK_IDENT_STRING=$USER
SPARK_NICENESS=0
export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hadoop-client}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/usr/hdp/current/hadoop-client/conf}
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
if [ -d "/etc/tez/conf/" ]; then
  export TEZ_CONF_DIR=/etc/tez/conf
else
  export TEZ_CONF_DIR=
fi

spark-env.sh.template



sqoop

/etc/sqoop/conf:
-rwxr-xr-x 1 root  root   3895 2015-07-14 14:57 oraoop-site-template.xml
-rw-r--r-- 1 sqoop hadoop  699 2015-07-21 16:00 sqoop-env.sh
-rwxr-xr-x 1 root  root   1368 2015-07-14 14:57 sqoop-env-template.cmd
-rwxr-xr-x 1 sqoop hadoop 1345 2015-07-14 14:57 sqoop-env-template.sh
-rwxr-xr-x 1 sqoop hadoop 5531 2015-07-14 14:57 sqoop-site-template.xml
-rwxr-xr-x 1 sqoop hadoop 5531 2015-07-14 14:58 sqoop-site.xml

oraoop-site-template.xml

-?xml version="1.0"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied.  See the License for the
  specific language governing permissions and limitations
  under the License.
---
-!-- Put OraOop-specific properties in this file. ---
-configuration-
  -property-
    -name-oraoop.oracle.session.initialization.statements-/name-
    -value-alter session disable parallel query;
           alter session set "_serial_direct_read"=true;
           alter session set tracefile_identifier=oraoop;
           --alter session set events '10046 trace name context forever, level 8';
    -/value-
    -description-A semicolon-delimited list of Oracle statements that are executed, in order, to initialize each Oracle session.
                 Use {[property_name]|[default_value]} characters to refer to a Sqoop/Hadoop configuration property.
                 If the property does not exist, the specified default value will be used.
                 E.g. {oracle.sessionTimeZone|GMT} will equate to the value of the property named "oracle.sessionTimeZone" or
                 to "GMT" if this property has not been set.
    -/description-
  -/property-
  -property-
    -name-mapred.map.tasks.speculative.execution-/name-
    -value-false-/value-
    -description-Speculative execution is disabled to prevent redundant load on the Oracle database.
    -/description-
  -/property-
  -property-
    -name-oraoop.import.hint-/name-
    -value-NO_INDEX(t)-/value-
    -description-Hint to add to the SELECT statement for an IMPORT job.
                 The table will have an alias of t which can be used in the hint.
                 By default the NO_INDEX hint is applied to stop the use of an index.
                 To override this in oraoop-site.xml set the value to a blank string.
    -/description-
  -/property-
-!--
  -property-
    -name-oraoop.block.allocation-/name-
    -value-ROUNDROBIN-/value-
    -description-Supported values are: ROUNDROBIN or SEQUENTIAL or RANDOM.
                 Refer to the OraOop documentation for more details.
    -/description-
  -/property-
---
-!--
  -property-
    -name-oraoop.import.omit.lobs.and.long-/name-
    -value-false-/value-
    -description-If true, OraOop will omit BLOB, CLOB, NCLOB and LONG columns during an Import.
    -/description-
  -/property-
---
-!--
  -property-
    -name-oraoop.table.import.where.clause.location-/name-
    -value-SUBSPLIT-/value-
    -description-Supported values are: SUBSPLIT or SPLIT.
                 Refer to the OraOop documentation for more details.
    -/description-
  -/property-
---
-!--
  -property-
    -name-oraoop.oracle.append.values.hint.usage-/name-
    -value-AUTO-/value-
    -description-Supported values are: AUTO or ON or OFF.
                 ON:
                     OraOop will use the APPEND_VALUES Oracle hint during a Sqoop export, when inserting
                     data into an Oracle table.
                 OFF:
                     OraOop will not use the APPEND_VALUES Oracle hint during a Sqoop export.
                 AUTO:
                     For OraOop 1.1, the AUTO setting will not use the APPEND_VALUES hint.
    -/description-
  -/property-
---
-/configuration-

sqoop-env.sh

export HADOOP_HOME=${HADOOP_HOME:-/usr/hdp/current/hbase-client}
export HBASE_HOME=${HBASE_HOME:-/usr/hdp/current/hbase-client}
export HIVE_HOME=${HIVE_HOME:-/usr/hdp/current/hive-client}
export ZOOCFGDIR=${ZOOCFGDIR:-/etc/zookeeper/conf}
export SQOOP_USER_CLASSPATH="`ls ${HIVE_HOME}/lib/libthrift-*.jar 2> /dev/null`:${SQOOP_USER_CLASSPATH}"
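
Given the environment above, a hedged example of exercising Sqoop against the local MySQL instance (the connect string and user below are hypothetical for this host):

# -P prompts for the password instead of putting it on the command line
sqoop list-databases --connect jdbc:mysql://sandbox.hortonworks.com/ --username root -P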
    

sqoop-env-template.cmd

@echo off
:: Licensed to the Apache Software Foundation (ASF) under one or more
:: contributor license agreements.  See the NOTICE file distributed with
:: this work for additional information regarding copyright ownership.
:: The ASF licenses this file to You under the Apache License, Version 2.0
:: (the "License"); you may not use this file except in compliance with
:: the License.  You may obtain a copy of the License at
::
::     http://www.apache.org/licenses/LICENSE-2.0
::
:: Unless required by applicable law or agreed to in writing, software
:: distributed under the License is distributed on an "AS IS" BASIS,
:: WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
:: See the License for the specific language governing permissions and
:: limitations under the License.
:: included in all the hadoop scripts with source command
:: should not be executable directly
:: also should not be passed any arguments, since we need original $*
:: Set Hadoop-specific environment variables here.
::Set path to where bin/hadoop is available
::set HADOOP_COMMON_HOME=
::Set path to where hadoop-*-core.jar is available
::set HADOOP_MAPRED_HOME=
::set the path to where bin/hbase is available
::set HBASE_HOME=
::Set the path to where bin/hive is available
::set HIVE_HOME=
::Set the path for where zookeeper config dir is
::set ZOOCFGDIR=

sqoop-env-template.sh


sqoop-site-template.xml

-?xml version="1.0"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
---
-!-- Put Sqoop-specific properties in this file. ---
-configuration-
  -!--
    Set the value of this property to explicitly enable third-party
    ManagerFactory plugins.
    If this is not used, you can alternately specify a set of ManagerFactories
    in the $SQOOP_CONF_DIR/managers.d/ subdirectory.  Each file should contain
    one or more lines like:
      manager.class.name[=/path/to/containing.jar]
    Files will be consulted in lexicographical order only if this property
    is unset.
  ---
  -!--
  -property-
    -name-sqoop.connection.factories-/name-
    -value-com.cloudera.sqoop.manager.DefaultManagerFactory-/value-
    -description-A comma-delimited list of ManagerFactory implementations
      which are consulted, in order, to instantiate ConnManager instances
      used to drive connections to databases.
    -/description-
  -/property-
  ---
  -!--
    Set the value of this property to enable third-party tools.
    If this is not used, you can alternately specify a set of ToolPlugins
    in the $SQOOP_CONF_DIR/tools.d/ subdirectory.  Each file should contain
    one or more lines like:
      plugin.class.name[=/path/to/containing.jar]
    Files will be consulted in lexicographical order only if this property
    is unset.
  ---
  -!--
  -property-
    -name-sqoop.tool.plugins-/name-
    -value--/value-
    -description-A comma-delimited list of ToolPlugin implementations
      which are consulted, in order, to register SqoopTool instances which
      allow third-party tools to be used.
    -/description-
  -/property-
  ---
  -!--
    By default, the Sqoop metastore will auto-connect to a local embedded
    database stored in ~/.sqoop/. To disable metastore auto-connect, uncomment
    this next property.
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.enable.autoconnect-/name-
    -value-false-/value-
    -description-If true, Sqoop will connect to a local metastore
      for job management when no other metastore arguments are
      provided.
    -/description-
  -/property-
  ---
  -!--
    The auto-connect metastore is stored in ~/.sqoop/. Uncomment
    these next arguments to control the auto-connect process with
    greater precision.
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.autoconnect.url-/name-
    -value-jdbc:hsqldb:file:/tmp/sqoop-meta/meta.db;shutdown=true-/value-
    -description-The connect string to use when connecting to a
      job-management metastore. If unspecified, uses ~/.sqoop/.
      You can specify a different path here.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.client.autoconnect.username-/name-
    -value-SA-/value-
    -description-The username to bind to the metastore.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.client.autoconnect.password-/name-
    -value--/value-
    -description-The password to bind to the metastore.
    -/description-
  -/property-
  ---
  -!--
    For security reasons, by default your database password will not be stored in
    the Sqoop metastore. When executing a saved job, you will need to
    reenter the database password. Uncomment this setting to enable saved
    password storage. (INSECURE!)
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.record.password-/name-
    -value-true-/value-
    -description-If true, allow saved passwords in the metastore.
    -/description-
  -/property-
  ---
  -!--
    Enabling this option will instruct Sqoop to put all options that
    were used in the invocation into the created mapreduce job(s). This
    becomes handy when one needs to investigate what exact options were
    used in the Sqoop invocation.
  ---
  -!--
  -property-
    -name-sqoop.jobbase.serialize.sqoopoptions-/name-
    -value-true-/value-
    -description-If true, then all options will be serialized into job.xml
    -/description-
  -/property-
  ---
  -!--
    SERVER CONFIGURATION: If you plan to run a Sqoop metastore on this machine,
    you should uncomment and set these parameters appropriately.
    You should then configure clients with:
       sqoop.metastore.client.autoconnect.url =
       jdbc:hsqldb:hsql://<server-name>:<port>/sqoop
  ---
  -!--
  -property-
    -name-sqoop.metastore.server.location-/name-
    -value-/tmp/sqoop-metastore/shared.db-/value-
    -description-Path to the shared metastore database files.
    If this is not set, it will be placed in ~/.sqoop/.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.server.port-/name-
    -value-16000-/value-
    -description-Port that this metastore should listen on.
    -/description-
  -/property-
  ---
-/configuration-

sqoop-site.xml

-?xml version="1.0"?-
-?xml-stylesheet type="text/xsl" href="configuration.xsl"?-
-!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
---
-!-- Put Sqoop-specific properties in this file. ---
-configuration-
  -!--
    Set the value of this property to explicitly enable third-party
    ManagerFactory plugins.
    If this is not used, you can alternately specify a set of ManagerFactories
    in the $SQOOP_CONF_DIR/managers.d/ subdirectory.  Each file should contain
    one or more lines like:
      manager.class.name[=/path/to/containing.jar]
    Files will be consulted in lexicographical order only if this property
    is unset.
  ---
  -!--
  -property-
    -name-sqoop.connection.factories-/name-
    -value-com.cloudera.sqoop.manager.DefaultManagerFactory-/value-
    -description-A comma-delimited list of ManagerFactory implementations
      which are consulted, in order, to instantiate ConnManager instances
      used to drive connections to databases.
    -/description-
  -/property-
  ---
  -!--
    Set the value of this property to enable third-party tools.
    If this is not used, you can alternately specify a set of ToolPlugins
    in the $SQOOP_CONF_DIR/tools.d/ subdirectory.  Each file should contain
    one or more lines like:
      plugin.class.name[=/path/to/containing.jar]
    Files will be consulted in lexicographical order only if this property
    is unset.
  ---
  -!--
  -property-
    -name-sqoop.tool.plugins-/name-
    -value--/value-
    -description-A comma-delimited list of ToolPlugin implementations
      which are consulted, in order, to register SqoopTool instances which
      allow third-party tools to be used.
    -/description-
  -/property-
  ---
  -!--
    By default, the Sqoop metastore will auto-connect to a local embedded
    database stored in ~/.sqoop/. To disable metastore auto-connect, uncomment
    this next property.
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.enable.autoconnect-/name-
    -value-false-/value-
    -description-If true, Sqoop will connect to a local metastore
      for job management when no other metastore arguments are
      provided.
    -/description-
  -/property-
  ---
  -!--
    The auto-connect metastore is stored in ~/.sqoop/. Uncomment
    these next arguments to control the auto-connect process with
    greater precision.
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.autoconnect.url-/name-
    -value-jdbc:hsqldb:file:/tmp/sqoop-meta/meta.db;shutdown=true-/value-
    -description-The connect string to use when connecting to a
      job-management metastore. If unspecified, uses ~/.sqoop/.
      You can specify a different path here.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.client.autoconnect.username-/name-
    -value-SA-/value-
    -description-The username to bind to the metastore.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.client.autoconnect.password-/name-
    -value--/value-
    -description-The password to bind to the metastore.
    -/description-
  -/property-
  ---
  -!--
    For security reasons, by default your database password will not be stored in
    the Sqoop metastore. When executing a saved job, you will need to
    reenter the database password. Uncomment this setting to enable saved
    password storage. (INSECURE!)
  ---
  -!--
  -property-
    -name-sqoop.metastore.client.record.password-/name-
    -value-true-/value-
    -description-If true, allow saved passwords in the metastore.
    -/description-
  -/property-
  ---
  -!--
    Enabling this option will instruct Sqoop to put all options that
    were used in the invocation into the created mapreduce job(s). This
    becomes handy when one needs to investigate what exact options were
    used in the Sqoop invocation.
  ---
  -!--
  -property-
    -name-sqoop.jobbase.serialize.sqoopoptions-/name-
    -value-true-/value-
    -description-If true, then all options will be serialized into job.xml
    -/description-
  -/property-
  ---
  -!--
    SERVER CONFIGURATION: If you plan to run a Sqoop metastore on this machine,
    you should uncomment and set these parameters appropriately.
    You should then configure clients with:
       sqoop.metastore.client.autoconnect.url =
       jdbc:hsqldb:hsql://<server-name>:<port>/sqoop
  ---
  -!--
  -property-
    -name-sqoop.metastore.server.location-/name-
    -value-/tmp/sqoop-metastore/shared.db-/value-
    -description-Path to the shared metastore database files.
    If this is not set, it will be placed in ~/.sqoop/.
    -/description-
  -/property-
  -property-
    -name-sqoop.metastore.server.port-/name-
    -value-16000-/value-
    -description-Port that this metastore should listen on.
    -/description-
  -/property-
  ---
-/configuration-

storm

/etc/storm/conf:
-rw-r--r-- 1 storm hadoop 1211 2015-07-21 15:48 config.yaml
-rw-r--r-- 1 root  root   1128 2015-07-14 14:40 storm_env.ini
-rw-r--r-- 1 storm root    272 2015-07-21 16:04 storm-env.sh
-rw-r--r-- 1 storm hadoop   85 2015-07-21 15:48 storm-metrics2.properties
-rw-r--r-- 1 root  root   1024 2015-07-14 14:40 storm-slider-env.sh
-rw-r--r-- 1 storm hadoop 4576 2015-07-21 15:48 storm.yaml

config.yaml

nimbusHost: None
nimbusPort: 6627
http:
  # The port on which the HTTP server listens for service requests.
  port: 8745
  # The port on which the HTTP server listens for administrative requests.
  adminPort: 8746
enableGanglia: False
ganglia:
  reportInterval: 60
enableMetricsSink: True
metrics_collector:
  reportInterval: 60
  host: "sandbox.hortonworks.com"
  port: 6188
  appId: "nimbus"

storm_env.ini

[environment]

storm-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export STORM_CONF_DIR=/usr/hdp/current/storm-supervisor/conf
export STORM_HOME=/usr/hdp/current/storm-supervisor
    

storm-metrics2.properties

collector=sandbox.hortonworks.com
port=6188
maxRowCacheSize=10000
sendInterval=59000

storm-slider-env.sh

export JAVA_HOME=${JAVA_HOME}
export SLIDER_HOME=${SLIDER_HOME}

storm.yaml

dev.zookeeper.path : '/tmp/dev-storm-zookeeper'
drpc.childopts : '-Xmx220m'
drpc.invocations.port : 3773
drpc.port : 3772
drpc.queue.size : 128
drpc.request.timeout.secs : 600
drpc.worker.threads : 64
java.library.path : '/usr/local/lib:/opt/local/lib:/usr/lib:/usr/hdp/current/storm-client/lib'
logviewer.appender.name : 'A1'
logviewer.childopts : '-Xmx128m '
logviewer.port : 8005
nimbus.childopts : '-Xmx220m -javaagent:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=sandbox.hortonworks.com,port=8649,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-client/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Nimbus_JVM'
nimbus.cleanup.inbox.freq.secs : 600
nimbus.file.copy.expiration.secs : 600
nimbus.inbox.jar.expiration.secs : 3600
nimbus.monitor.freq.secs : 10
nimbus.reassign : true
nimbus.seeds : [sandbox.hortonworks.com]
nimbus.supervisor.timeout.secs : 60
nimbus.task.launch.secs : 120
nimbus.task.timeout.secs : 30
nimbus.thrift.max_buffer_size : 1048576
nimbus.thrift.port : 6627
nimbus.topology.validator : 'backtype.storm.nimbus.DefaultTopologyValidator'
storm.cluster.mode : 'distributed'
storm.local.dir : '/hadoop/storm'
storm.local.mode.zmq : false
storm.log.dir : '/var/log/storm'
storm.messaging.netty.buffer_size : 5242880
storm.messaging.netty.client_worker_threads : 1
storm.messaging.netty.max_retries : 30
storm.messaging.netty.max_wait_ms : 1000
storm.messaging.netty.min_wait_ms : 100
storm.messaging.netty.server_worker_threads : 1
storm.messaging.transport : 'backtype.storm.messaging.netty.Context'
storm.thrift.transport : 'backtype.storm.security.auth.SimpleTransportPlugin'
storm.zookeeper.connection.timeout : 15000
storm.zookeeper.port : 2181
storm.zookeeper.retry.interval : 1000
storm.zookeeper.retry.intervalceiling.millis : 30000
storm.zookeeper.retry.times : 5
storm.zookeeper.root : '/storm'
storm.zookeeper.servers : ['sandbox.hortonworks.com']
storm.zookeeper.session.timeout : 20000
supervisor.childopts : '-Xmx256m  -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=56431 -javaagent:/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-supervisor/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Supervisor_JVM'
supervisor.heartbeat.frequency.secs : 5
supervisor.monitor.frequency.secs : 3
supervisor.slots.ports : [6700, 6701]
supervisor.worker.start.timeout.secs : 120
supervisor.worker.timeout.secs : 30
task.heartbeat.frequency.secs : 3
task.refresh.poll.secs : 10
topology.acker.executors : null
topology.builtin.metrics.bucket.size.secs : 60
topology.debug : false
topology.disruptor.wait.strategy : 'com.lmax.disruptor.BlockingWaitStrategy'
topology.enable.message.timeouts : true
topology.error.throttle.interval.secs : 10
topology.executor.receive.buffer.size : 1024
topology.executor.send.buffer.size : 1024
topology.fall.back.on.java.serialization : true
topology.kryo.factory : 'backtype.storm.serialization.DefaultKryoFactory'
topology.max.error.report.per.interval : 5
topology.max.replication.wait.time.sec : 60
topology.max.spout.pending : null
topology.max.task.parallelism : null
topology.message.timeout.secs : 30
topology.min.replication.count : 1
topology.optimize : true
topology.receiver.buffer.size : 8
topology.skip.missing.kryo.registrations : false
topology.sleep.spout.wait.strategy.time.ms : 1
topology.spout.wait.strategy : 'backtype.storm.spout.SleepSpoutWaitStrategy'
topology.state.synchronization.timeout.secs : 60
topology.stats.sample.rate : 0.05
topology.tick.tuple.freq.secs : null
topology.transfer.buffer.size : 1024
topology.trident.batch.emit.interval.millis : 500
topology.tuple.serializer : 'backtype.storm.serialization.types.ListDelegateSerializer'
topology.worker.childopts : null
topology.worker.shared.thread.pool.size : 4
topology.workers : 1
transactional.zookeeper.port : null
transactional.zookeeper.root : '/transactional'
transactional.zookeeper.servers : null
ui.childopts : '-Xmx220m'
ui.filter : null
ui.port : 8744
worker.childopts : '-Xmx768m  -javaagent:/usr/hdp/current/storm-client/contrib/storm-jmxetric/lib/jmxetric-1.0.4.jar=host=localhost,port=8650,wireformat31x=true,mode=multicast,config=/usr/hdp/current/storm-client/contrib/storm-jmxetric/conf/jmxetric-conf.xml,process=Worker_%ID%_JVM'
worker.heartbeat.frequency.secs : 1
zmq.hwm : 0
zmq.linger.millis : 5000
zmq.threads : 1
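
With nimbus.seeds and the thrift port configured above, the storm client on this host can query the cluster, for example:

# Lists running topologies and their status via Nimbus
storm list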

tez

/etc/tez/conf:
-r-xr-xr-x 1 tez root    276 2015-07-21 16:00 tez-env.sh
-rw-rw-r-- 1 tez hadoop 6533 2015-07-21 16:41 tez-site.xml

tez-env.sh

export TEZ_CONF_DIR=/etc/tez/2.3.0.0-2557/0
export HADOOP_HOME=${HADOOP_HOME:-/usr}
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
    

tez-site.xml

<!--Tue Jul 21 16:41:29 2015-->
<configuration>

  <property>
    <name>tez.am.am-rm.heartbeat.interval-ms.max</name>
    <value>250</value>
  </property>

  <property>
    <name>tez.am.container.idle.release-timeout-max.millis</name>
    <value>20000</value>
  </property>

  <property>
    <name>tez.am.container.idle.release-timeout-min.millis</name>
    <value>10000</value>
  </property>

  <property>
    <name>tez.am.container.reuse.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.am.container.reuse.locality.delay-allocation-millis</name>
    <value>250</value>
  </property>

  <property>
    <name>tez.am.container.reuse.non-local-fallback.enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>tez.am.container.reuse.rack-fallback.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.am.java.opts</name>
    <value>-server -Xmx200m -Djava.net.preferIPv4Stack=true -XX:+UseNUMA -XX:+UseParallelGC</value>
  </property>

  <property>
    <name>tez.am.launch.cluster-default.cmd-opts</name>
    <value>-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value>
  </property>

  <property>
    <name>tez.am.launch.cmd-opts</name>
    <value>-XX:+PrintGCDetails -verbose:gc -XX:+PrintGCTimeStamps -XX:+UseNUMA -XX:+UseParallelGC</value>
  </property>

  <property>
    <name>tez.am.launch.env</name>
    <value>LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64</value>
  </property>

  <property>
    <name>tez.am.log.level</name>
    <value>INFO</value>
  </property>

  <property>
    <name>tez.am.max.app.attempts</name>
    <value>2</value>
  </property>

  <property>
    <name>tez.am.maxtaskfailures.per.node</name>
    <value>10</value>
  </property>

  <property>
    <name>tez.am.resource.memory.mb</name>
    <value>250</value>
  </property>

  <property>
    <name>tez.am.tez-ui.history-url.template</name>
    <value>__HISTORY_URL_BASE__?viewPath=%2F%23%2Ftez-app%2F__APPLICATION_ID__</value>
  </property>

  <property>
    <name>tez.am.view-acls</name>
    <value>*</value>
  </property>

  <property>
    <name>tez.cluster.additional.classpath.prefix</name>
    <value>/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure</value>
  </property>

  <property>
    <name>tez.counters.max</name>
    <value>2000</value>
  </property>

  <property>
    <name>tez.counters.max.groups</name>
    <value>1000</value>
  </property>

  <property>
    <name>tez.dag.am.resource.memory.mb</name>
    <value>250</value>
  </property>

  <property>
    <name>tez.generate.debug.artifacts</name>
    <value>false</value>
  </property>

  <property>
    <name>tez.grouping.max-size</name>
    <value>1073741824</value>
  </property>

  <property>
    <name>tez.grouping.min-size</name>
    <value>16777216</value>
  </property>

  <property>
    <name>tez.grouping.split-waves</name>
    <value>1.7</value>
  </property>

  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>

  <property>
    <name>tez.lib.uris</name>
    <value>/hdp/apps/${hdp.version}/tez/tez.tar.gz</value>
  </property>

  <property>
    <name>tez.runtime.compress</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.runtime.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>

  <property>
    <name>tez.runtime.convert.user-payload.to.history-text</name>
    <value>false</value>
  </property>

  <property>
    <name>tez.runtime.io.sort.mb</name>
    <value>150</value>
  </property>

  <property>
    <name>tez.runtime.optimize.local.fetch</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.runtime.pipelined.sorter.sort.threads</name>
    <value>2</value>
  </property>

  <property>
    <name>tez.runtime.sorter.class</name>
    <value>PIPELINED</value>
  </property>

  <property>
    <name>tez.runtime.unordered.output.buffer.size-mb</name>
    <value>100</value>
  </property>

  <property>
    <name>tez.session.am.dag.submit.timeout.secs</name>
    <value>300</value>
  </property>

  <property>
    <name>tez.session.client.timeout.secs</name>
    <value>-1</value>
  </property>

  <property>
    <name>tez.shuffle-vertex-manager.max-src-fraction</name>
    <value>0.4</value>
  </property>

  <property>
    <name>tez.shuffle-vertex-manager.min-src-fraction</name>
    <value>0.2</value>
  </property>

  <property>
    <name>tez.staging-dir</name>
    <value>/tmp/${user.name}/staging</value>
  </property>

  <property>
    <name>tez.task.am.heartbeat.counter.interval-ms.max</name>
    <value>4000</value>
  </property>

  <property>
    <name>tez.task.generate.counters.per.io</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.task.get-task.sleep.interval-ms.max</name>
    <value>200</value>
  </property>

  <property>
    <name>tez.task.launch.cluster-default.cmd-opts</name>
    <value>-server -Djava.net.preferIPv4Stack=true -Dhdp.version=${hdp.version}</value>
  </property>

  <property>
    <name>tez.task.launch.cmd-opts</name>
    <value>-Xmx256m</value>
  </property>

  <property>
    <name>tez.task.launch.env</name>
    <value>LD_LIBRARY_PATH=/usr/hdp/${hdp.version}/hadoop/lib/native:/usr/hdp/${hdp.version}/hadoop/lib/native/Linux-amd64-64</value>
  </property>

  <property>
    <name>tez.task.max-events-per-heartbeat</name>
    <value>500</value>
  </property>

  <property>
    <name>tez.task.resource.memory.mb</name>
    <value>1536</value>
  </property>

  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>false</value>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx200m</value>
  </property>

</configuration>
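
tez-site.xml uses the standard Hadoop configuration format, so its values can be read back with the stock Configuration API. A minimal sketch (the class name is illustrative; note that ${hdp.version} references only resolve if the property is set, as the launch scripts do via -Dhdp.version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class TezSiteDump {
    public static void main(String[] args) {
        // Resolve ${hdp.version} the same way the HDP launch scripts do.
        System.setProperty("hdp.version", "2.3.0.0-2557");

        Configuration conf = new Configuration(false); // skip core-default.xml etc.
        conf.addResource(new Path("/etc/tez/conf/tez-site.xml"));

        System.out.println(conf.get("tez.am.resource.memory.mb")); // 250
        System.out.println(conf.get("tez.lib.uris")); // ${hdp.version} expanded
    }
}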

zookeeper

/etc/zookeeper/conf:
-rw-r--r-- 1 zookeeper hadoop  548 2015-07-21 16:00 configuration.xsl
-rw-r--r-- 1 zookeeper hadoop 2449 2015-07-21 16:00 log4j.properties
-rw-r--r-- 1 zookeeper hadoop  978 2015-07-21 16:00 zoo.cfg
-rw-r--r-- 1 root      root   1175 2015-07-14 12:49 zookeeper-env.cmd
-rw-r--r-- 1 zookeeper hadoop  331 2015-07-21 16:43 zookeeper-env.sh
-rw-r--r-- 1 zookeeper hadoop  922 2015-07-14 12:49 zoo_sample.cfg

configuration.xsl

<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="html"/>
<xsl:template match="configuration">
<html>
<body>
<table border="1">
<tr>
 <td>name</td>
 <td>value</td>
 <td>description</td>
</tr>
<xsl:for-each select="property">
  <tr>
     <td><a name="{name}"><xsl:value-of select="name"/></a></td>
     <td><xsl:value-of select="value"/></td>
     <td><xsl:value-of select="description"/></td>
  </tr>
</xsl:for-each>
</table>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
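
This is the stock stylesheet Hadoop ships for rendering a *-site.xml file as an HTML table of name/value/description rows. Any XSLT 1.0 processor can apply it; a minimal sketch with the JDK's built-in transformer, using file paths taken from the listings above (the class name and output file are illustrative):

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RenderSiteXml {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("/etc/zookeeper/conf/configuration.xsl"));
        // Render any Hadoop-style configuration file as an HTML table.
        t.transform(new StreamSource("/etc/tez/conf/tez-site.xml"),
                    new StreamResult("tez-site.html"));
    }
}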

log4j.properties

log4j.rootLogger=INFO, CONSOLE
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=INFO
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=DEBUG
log4j.appender.ROLLINGFILE.File=zookeeper.log
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=zookeeper_trace.log
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} - %-5p [%t:%C{1}@%L][%x] - %m%n
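
Note that only CONSOLE is attached to the root logger; the ROLLINGFILE and TRACEFILE appenders are defined but stay inactive unless added to log4j.rootLogger. A minimal sketch of loading this file with the log4j 1.x API (the class name is illustrative):

import org.apache.log4j.Logger;
import org.apache.log4j.PropertyConfigurator;

public class ZkLogDemo {
    private static final Logger LOG = Logger.getLogger(ZkLogDemo.class);

    public static void main(String[] args) {
        // Load the ZooKeeper log4j configuration shown above.
        PropertyConfigurator.configure("/etc/zookeeper/conf/log4j.properties");
        LOG.info("goes to CONSOLE, the only appender on the root logger");
        LOG.debug("dropped: the root logger level is INFO");
    }
}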

zoo.cfg

clientPort=2181
initLimit=10
autopurge.purgeInterval=24
syncLimit=5
tickTime=2000
dataDir=/hadoop/zookeeper
autopurge.snapRetainCount=30
server.1=sandbox.hortonworks.com:2888:3888
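
With tickTime=2000, initLimit=10 and syncLimit=5 translate to 20 s and 10 s respectively, and clients connect to clientPort=2181 on the single server.1 host. A minimal connectivity check with the ZooKeeper Java client (the class name and the 5000 ms session timeout are arbitrary choices for this sketch):

import org.apache.zookeeper.ZooKeeper;

public class ZkPing {
    public static void main(String[] args) throws Exception {
        // Host and port taken from server.1 and clientPort in zoo.cfg above.
        ZooKeeper zk = new ZooKeeper("sandbox.hortonworks.com:2181", 5000, event -> {});
        System.out.println(zk.getChildren("/", false)); // list root-level znodes
        zk.close();
    }
}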

zookeeper-env.cmd

@echo off
set JVMFLAGS=-Djava.net.preferIPv4Stack=true

zookeeper-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
export ZOOKEEPER_HOME=/usr/hdp/current/zookeeper-server
export ZOO_LOG_DIR=/var/log/zookeeper
export ZOOPIDFILE=/var/run/zookeeper/zookeeper_server.pid
export SERVER_JVMFLAGS=-Xmx1024m
export JAVA=$JAVA_HOME/bin/java
export CLASSPATH=$CLASSPATH:/usr/share/zookeeper/*

zoo_sample.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181

Go to the top



Generated Files

sandbox.hortonworks.com.3306.htm, sandbox.hortonworks.com.postgres.5432.ambarirca.htm, sandbox.hortonworks.com.postgres.5432.ambari.htm, sandbox.hortonworks.com.postgres.5432.postgres.htm

UX2HTML - Unix Configuration Report in HTML format
Copyright (C) 1995-2015 Meo Bogliolo
Statistics generated on: Mon Jul 27 15:37:30 UTC 2015

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

Sources: Sourceforge. Documentation: Meo's online technical documentation.