[Linux-ha-jp] PostgreSQL service does not start automatically from heartbeat


Akamatsu akamatsu_hiroshi_b1****@lab*****
Mon, 10 Jun 2013 18:24:54 JST


To: O.N-san

 This is Akamatsu.

> The other day I reported that the PostgreSQL service cannot be started
> from Heartbeat while F-Secure is running; I have now confirmed that the
> PostgreSQL service can be started by restarting Heartbeat.
> However, I do not understand why Heartbeat has to be restarted.
> If you know of a likely cause or a countermeasure, please let me know.
> 
 The behaviour above is exactly what I would expect.

 As I wrote in my mail of June 7, 19:21, stopping Heartbeat also stops
 F-Secure's PostgreSQL, because it is Heartbeat that stops it.
 (And that is most likely not a desirable behaviour for F-Secure.)

 It is also visible in the log you posted this time.

=====
-----------------------------------------------------------------
 Log after stopping heartbeat and running /etc/init.d/heartbeat start
-----------------------------------------------------------------
...
Jun 10 16:54:07 SEVER2 IPaddr[5824]: INFO:  Success
Jun 10 16:54:07 SEVER2 ResourceManager[5730]: info: Running /etc/init.d/postgresql  stop  <-- Here!
Jun 10 16:54:08 SEVER2 ResourceManager[5730]: info: Running /etc/init.d/httpd  stop
...
=====

 So when Heartbeat is started again, PostgreSQL is already stopped, which
 is why it can then be started.

 That settles one of the two issues.
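
 (As an aside, if you want a quick way to check which of the two PostgreSQL
 instances is actually up at any given moment, something like the following
 should do. This is purely illustrative; the ports are the ones from your
 earlier mail, 5432 for the cluster instance and 28078 for F-Secure.)

 ---
 # listening PostgreSQL sockets and the processes that own them
 netstat -tlnp | grep -E ':(5432|28078)'
 # or look at the postmaster processes directly
 ps -ef | grep '[p]ostmaster'
 ---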

 What remains is the problem that PostgreSQL cannot be started from the
 hand-written RA.

 If you can provide the files I asked for in my earlier mail, we may be
 able to learn something from them.

 That is all for now.


> Dear Akamatsu-san,
> Dear members of the Linux-HA-Japan mailing list,
> 
> This is O.N.
> 
>  The other day I reported that the PostgreSQL service cannot be started
> from Heartbeat while F-Secure is running; I have now confirmed that the
> PostgreSQL service can be started by restarting Heartbeat.
>  However, I do not understand why Heartbeat has to be restarted.
> If you know of a likely cause or a countermeasure, please let me know.
> 
> 1. Symptom: When the server boots, the PostgreSQL service is not started
>    automatically by heartbeat. However, once the PostgreSQL service has been
>    started manually and heartbeat has been stopped and started again, the
>    PostgreSQL service does start from heartbeat.
>    No configuration has been changed.
> 
> 2. The commands executed and their results are as follows.
>  1) When the server (SERVER2) boots, the PostgreSQL service is not started
>     automatically by heartbeat.
>  2) Run the service postgresql start command.
>  3) Run the service heartbeat stop command.
>  4) Run the /etc/init.d/heartbeat start command; the PostgreSQL service is
>     then started automatically.
> 
> 
> 3. Output from /var/log/messages (excerpt)
> -----------------------------------------------------
>  Log from the initial server boot
> -----------------------------------------------------
> Jun 10 16:50:50 SEVER2 heartbeat: [3376]: info: AUTH: i=1: key = 0x9779118, 
> auth=0x567c80, authname=crc
> Jun 10 16:50:51 SEVER2 heartbeat: [3376]: info: Version 2 support: false
> Jun 10 16:50:51 SEVER2 heartbeat: [3376]: WARN: Logging daemon is disabled -
> -enabling logging daemon is recommended
> Jun 10 16:50:51 SEVER2 heartbeat: [3376]: info: **************************
> Jun 10 16:50:51 SEVER2 heartbeat: [3376]: info: Configuration validated. 
> Starting heartbeat 2.1.4
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: heartbeat: version 2.1.4
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: Heartbeat generation: 
> 1369315584
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: glib: ucast: write socket 
> priority set to IPTOS_LOWDELAY on eth1
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: glib: ucast: bound send 
> socket to device: eth1
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: glib: ucast: bound receive 
> socket to device: eth1
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: glib: ucast: started on port 
> 694 interface eth1 to 10.10.10.11
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: glib: ping heartbeat started.
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: G_main_add_TriggerHandler: 
> Added signal manual handler
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: G_main_add_TriggerHandler: 
> Added signal manual handler
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: notice: Using watchdog device: /
> dev/watchdog
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: G_main_add_SignalHandler: 
> Added signal handler for signal 17
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: Local status now set to: 
> 'up'
> Jun 10 16:50:51 SEVER2 heartbeat: [3378]: info: Managed write_hostcachedata 
> process 3429 exited with return code 0.
> Jun 10 16:50:51 SEVER2 gpm[3432]: *** info [startup.c(95)]: 
> Jun 10 16:50:51 SEVER2 gpm[3432]: Started gpm successfully. Entered daemon 
> mode.
> Jun 10 16:50:52 SEVER2 rhnsd[3524]: Red Hat Network Services Daemon starting 
> up.
> Jun 10 16:50:52 SEVER2 heartbeat: [3378]: info: Link 192.168.0.1:192.168.0.1 
> up.
> Jun 10 16:50:52 SEVER2 heartbeat: [3378]: info: Status update for node 192.
> 168.0.1: status ping
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Found user 'avahi' (UID 70) and 
> group 'avahi' (GID 70).
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Successfully dropped root 
> privileges.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: avahi-daemon 0.6.16 starting up.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: WARNING: No NSS support for mDNS 
> detected, consider installing nss-mdns!
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Successfully called chroot().
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Successfully dropped remaining 
> capabilities.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: No service found in /etc/avahi/
> services.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: New relevant interface eth1.IPv6 
> for mDNS.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Joining mDNS multicast group on 
> interface eth1.IPv6 with address fe80::96de:80ff:fe60:e509.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: New relevant interface eth1.IPv4 
> for mDNS.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Joining mDNS multicast group on 
> interface eth1.IPv4 with address 10.10.10.10.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: New relevant interface eth0.IPv6 
> for mDNS.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Joining mDNS multicast group on 
> interface eth0.IPv6 with address fe80::6a05:caff:fe13:24e6.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: New relevant interface eth0.IPv4 
> for mDNS.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Joining mDNS multicast group on 
> interface eth0.IPv4 with address 192.168.0.121.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Network interface enumeration 
> completed.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Registering new address record 
> for fe80::96de:80ff:fe60:e509 on eth1.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Registering new address record 
> for 10.10.10.10 on eth1.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Registering new address record 
> for fe80::6a05:caff:fe13:24e6 on eth0.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Registering new address record 
> for 192.168.0.121 on eth0.
> Jun 10 16:50:52 SEVER2 avahi-daemon[3552]: Registering HINFO record with 
> values 'I686'/'LINUX'.
> Jun 10 16:50:53 SEVER2 F-Secure Management Agent[3584]: fsma: Writing log to 
> /var/opt/f-secure/fsma/log/fsma.log 
> Jun 10 16:50:53 SEVER2 avahi-daemon[3552]: Server startup complete. Host 
> name is SEVER2.local. Local service cookie is 1192449128.
> Jun 10 16:50:54 SEVER2 smartd[3807]: smartd version 5.38 [i686-redhat-linux-
> gnu] Copyright (C) 2002-8 Bruce Allen 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Home page is http://smartmontools.
> sourceforge.net/  
> Jun 10 16:50:54 SEVER2 smartd[3807]: Opened configuration file /etc/smartd.
> conf 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Configuration file /etc/smartd.conf was 
> parsed, found DEVICESCAN, scanning devices 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Problem creating device name scan list 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Device: /dev/sda, opened 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Device /dev/sda: using '-d sat' for ATA 
> disk behind SAT layer. 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Device: /dev/sda, opened 
> Jun 10 16:50:54 SEVER2 smartd[3807]: Device: /dev/sda, not found in smartd 
> database. 
> Jun 10 16:50:55 SEVER2 smartd[3807]: Device: /dev/sda, is SMART capable. 
> Adding to "monitor" list. 
> Jun 10 16:50:55 SEVER2 smartd[3807]: Device: /dev/sdb, opened 
> Jun 10 16:50:55 SEVER2 smartd[3807]: Device /dev/sdb: using '-d sat' for ATA 
> disk behind SAT layer. 
> Jun 10 16:50:55 SEVER2 smartd[3807]: Device: /dev/sdb, opened 
> Jun 10 16:50:56 SEVER2 smartd[3807]: Device: /dev/sdb, not found in smartd 
> database. 
> Jun 10 16:50:57 SEVER2 smartd[3807]: Device: /dev/sdb, is SMART capable. 
> Adding to "monitor" list. 
> Jun 10 16:50:57 SEVER2 smartd[3807]: Device: /dev/sdc, opened 
> Jun 10 16:50:57 SEVER2 heartbeat: [3378]: WARN: Gmain_timeout_dispatch: 
> Dispatch function for hb_pop_deadtime took too long to execute: 750 ms (> 
> 100 ms) (GSource: 0x9782368)
> Jun 10 16:50:57 SEVER2 smartd[3807]: Device: /dev/sdc, IE (SMART) not 
> enabled, skip device Try 'smartctl -s on /dev/sdc' to turn on SMART features 
> Jun 10 16:50:57 SEVER2 smartd[3807]: Monitoring 0 ATA and 2 SCSI devices 
> Jun 10 16:50:58 SEVER2 smartd[3901]: smartd has fork()ed into background 
> mode. New PID=3901. 
> Jun 10 16:50:58 SEVER2 heartbeat: [3378]: WARN: G_CH_dispatch_int: Dispatch 
> function for read child took too long to execute: 810 ms (> 50 ms) (GSource: 
> 0x977f8d8)
> Jun 10 16:51:03 SEVER2 heartbeat: [3378]: WARN: Gmain_timeout_dispatch: 
> Dispatch function for send local status took too long to execute: 2330 ms (> 
> 50 ms) (GSource: 0x9783570)
> Jun 10 16:51:27 SEVER2 gconfd (root-4329): starting (version 2.14.0), PID 4329 user 'root'
> Jun 10 16:51:27 SEVER2 gconfd (root-4329): Resolved address "xml:readonly:/etc/gconf/gconf.xml.mandatory" (entry 0) for a read-only configuration source
> Jun 10 16:51:27 SEVER2 gconfd (root-4329): Resolved address "xml:readwrite:/root/.gconf" (entry 1) for a writable configuration source
> Jun 10 16:51:27 SEVER2 gconfd (root-4329): Resolved address "xml:readonly:/etc/gconf/gconf.xml.defaults" (entry 2) for a read-only configuration source
> Jun 10 16:51:30 SEVER2 hcid[2904]: Default passkey agent (:1.5, /org/bluez/
> applet) registered
> Jun 10 16:51:31 SEVER2 pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 
> 0 Not Found
> Jun 10 16:51:31 SEVER2 last message repeated 2 times
> Jun 10 16:51:32 SEVER2 heartbeat: [3378]: WARN: G_CH_dispatch_int: Dispatch 
> function for read child took too long to execute: 60 ms (> 50 ms) (GSource: 
> 0x977f8d8)
> Jun 10 16:51:32 SEVER2 gconfd (root-4329): Resolved address "xml:readwrite:/root/.gconf" (entry 0) for a writable configuration source
> Jun 10 16:51:32 SEVER2 nm-system-settings: Loaded plugin ifcfg-rh: (c) 2007 
> - 2008 Red Hat, Inc.  To report bugs please use the NetworkManager mailing 
> list.
> Jun 10 16:51:32 SEVER2 nm-system-settings:    ifcfg-rh: parsing /etc/
> sysconfig/network-scripts/ifcfg-eth0 ... 
> Jun 10 16:51:32 SEVER2 nm-system-settings:    ifcfg-rh:     read connection 
> 'System eth0'
> Jun 10 16:51:32 SEVER2 nm-system-settings:    ifcfg-rh: parsing /etc/
> sysconfig/network-scripts/ifcfg-lo ... 
> Jun 10 16:51:32 SEVER2 nm-system-settings:    ifcfg-rh: parsing /etc/
> sysconfig/network-scripts/ifcfg-eth1 ... 
> Jun 10 16:51:32 SEVER2 nm-system-settings:    ifcfg-rh:     read connection 
> 'System eth1'
> Jun 10 16:51:34 SEVER2 pcscd: winscard.c:304:SCardConnect() Reader E-Gate 0 
> 0 Not Found
> Jun 10 16:52:51 SEVER2 heartbeat: [3378]: WARN: node SEVER1.domain: is dead
> Jun 10 16:52:51 SEVER2 heartbeat: [3378]: info: Comm_now_up(): updating 
> status to active
> Jun 10 16:52:51 SEVER2 heartbeat: [3378]: info: Local status now set to: 
> 'active'
> Jun 10 16:52:51 SEVER2 heartbeat: [3378]: info: Starting child client "/usr/
> lib/heartbeat/ipfail" (200,200)
> Jun 10 16:52:52 SEVER2 heartbeat: [4693]: info: Starting "/usr/lib/heartbeat
> /ipfail" as uid 200  gid 200 (pid 4693)
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: WARN: No STONITH device configured.
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: WARN: Shared disks are not 
> protected.
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: Resources being acquired 
> from SEVER1.domain.
> Jun 10 16:52:52 SEVER2 heartbeat: [4695]: info: No local resources [/usr/
> share/heartbeat/ResourceManager listkeys SEVER2.domain] to acquire.
> Jun 10 16:52:52 SEVER2 harc[4694]: info: Running /etc/ha.d/rc.d/status 
> status
> Jun 10 16:52:52 SEVER2 heartbeat: [4695]: info: Writing type [resource] 
> message to FIFO
> Jun 10 16:52:52 SEVER2 heartbeat: [4695]: info: FIFO message [type resource] 
> written rc=79
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: WARN: G_WC_dispatch: Dispatch 
> function for client registration took too long to execute: 40 ms (> 20 ms) 
> (GSource: 0x9793470)
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: Managed req_our_resources 
> process 4695 exited with return code 0.
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 0, reason 'req_our_resources' (0))
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES' (0))
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: Initial resource acquisition 
> complete (T_RESOURCES)
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (1))
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: info: STATE 1 => 3
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: WARN: G_CH_dispatch_int: Dispatch 
> function for FIFO took too long to execute: 70 ms (> 50 ms) (GSource: 
> 0x977c088)
> Jun 10 16:52:52 SEVER2 mach_down[4723]: info: Taking over resource group 
> drbddisk
> Jun 10 16:52:52 SEVER2 ResourceManager[4751]: info: Acquiring resource 
> group: SEVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 httpd 
> postgresql IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::
> SEVER_FailOver
> Jun 10 16:52:52 SEVER2 heartbeat: [3378]: WARN: G_CH_dispatch_int: Dispatch 
> function for API client took too long to execute: 230 ms (> 100 ms) 
> (GSource: 0x9791958)
> Jun 10 16:52:52 SEVER2 ResourceManager[4751]: info: Running /etc/ha.d/
> resource.d/drbddisk  start
> Jun 10 16:52:52 SEVER2 kernel: drbd0: role( Secondary -> Primary ) 
> Jun 10 16:52:53 SEVER2 Filesystem[4808]: INFO:  Resource is stopped
> Jun 10 16:52:53 SEVER2 ResourceManager[4751]: info: Running /etc/ha.d/
> resource.d/Filesystem /dev/drbd0 /usr1 ext3 start
> Jun 10 16:52:53 SEVER2 Filesystem[4909]: INFO: Running start for /dev/drbd0 
> on /usr1
> Jun 10 16:52:54 SEVER2 kernel: kjournald starting.  Commit interval 5 
> seconds
> Jun 10 16:52:54 SEVER2 kernel: EXT3-fs warning: maximal mount count reached, 
> running e2fsck is recommended
> Jun 10 16:52:54 SEVER2 kernel: EXT3 FS on drbd0, internal journal
> Jun 10 16:52:54 SEVER2 kernel: EXT3-fs: mounted filesystem with ordered data 
> mode.
> Jun 10 16:52:54 SEVER2 Filesystem[4898]: INFO:  Success
> Jun 10 16:52:54 SEVER2 ResourceManager[4751]: info: Running /etc/init.d/
> httpd  start
> ===============================================================================
> ↑ After this step the PostgreSQL service should have been started, but it was not.
> ===============================================================================
> Jun 10 16:52:55 SEVER2 IPaddr[5055]: INFO:  Resource is stopped
> Jun 10 16:52:55 SEVER2 ResourceManager[4751]: info: Running /etc/ha.d/
> resource.d/IPaddr 192.168.0.110/24/eth0 start
> Jun 10 16:52:56 SEVER2 IPaddr[5161]: INFO: Using calculated netmask for 192.
> 168.0.110: 255.255.255.0
> Jun 10 16:52:56 SEVER2 IPaddr[5161]: INFO: eval ifconfig eth0:0 192.168.0.
> 110 netmask 255.255.255.0 broadcast 192.168.0.255
> Jun 10 16:52:56 SEVER2 avahi-daemon[3552]: Registering new address record 
> for 192.168.0.110 on eth0.
> Jun 10 16:52:56 SEVER2 IPaddr[5132]: INFO:  Success
> Jun 10 16:52:56 SEVER2 MailTo[5268]: INFO:  Resource is stopped
> Jun 10 16:52:56 SEVER2 ResourceManager[4751]: info: Running /etc/ha.d/
> resource.d/MailTo test****@yahoo***** SEVER_FailOver start
> Jun 10 16:52:57 SEVER2 MailTo[5314]: INFO:  Success
> Jun 10 16:52:57 SEVER2 mach_down[4723]: info: /usr/share/heartbeat/
> mach_down: nice_failback: foreign resources acquired
> Jun 10 16:52:57 SEVER2 mach_down[4723]: info: mach_down takeover complete 
> for node SEVER1.domain.
> Jun 10 16:52:57 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (1))
> Jun 10 16:52:57 SEVER2 heartbeat: [3378]: info: mach_down takeover complete.
> Jun 10 16:52:57 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'mach_down' (1))
> Jun 10 16:52:57 SEVER2 heartbeat: [3378]: WARN: G_CH_dispatch_int: Dispatch 
> function for FIFO took too long to execute: 70 ms (> 50 ms) (GSource: 
> 0x977c088)
> Jun 10 16:52:57 SEVER2 heartbeat: [3378]: info: Managed status process 4694 
> exited with return code 0.
> Jun 10 16:53:02 SEVER2 heartbeat: [3378]: info: Local Resource acquisition 
> completed. (none)
> Jun 10 16:53:02 SEVER2 heartbeat: [3378]: info: local resource transition 
> completed.
> Jun 10 16:53:02 SEVER2 heartbeat: [3378]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (1))
> -----------------------------------------------------------------
>  Log after stopping heartbeat and running /etc/init.d/heartbeat start
> -----------------------------------------------------------------
> Jun 10 16:54:06 SEVER2 heartbeat: [5717]: info: Giving up all HA resources.
> Jun 10 16:54:06 SEVER2 ResourceManager[5730]: info: Releasing resource 
> group: SEVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 httpd 
> postgresql IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::
> SEVER_FailOver
> Jun 10 16:54:06 SEVER2 ResourceManager[5730]: info: Running /etc/ha.d/
> resource.d/MailTo test****@yahoo***** SEVER_FailOver stop
> Jun 10 16:54:06 SEVER2 MailTo[5768]: INFO:  Success
> Jun 10 16:54:06 SEVER2 ResourceManager[5730]: info: Running /etc/ha.d/
> resource.d/IPaddr 192.168.0.110/24/eth0 stop
> Jun 10 16:54:07 SEVER2 IPaddr[5853]: INFO: ifconfig eth0:0 down
> Jun 10 16:54:07 SEVER2 avahi-daemon[3552]: Withdrawing address record for 
> 192.168.0.110 on eth0.
> Jun 10 16:54:07 SEVER2 IPaddr[5824]: INFO:  Success
> Jun 10 16:54:07 SEVER2 ResourceManager[5730]: info: Running /etc/init.d/
> postgresql  stop
> Jun 10 16:54:08 SEVER2 ResourceManager[5730]: info: Running /etc/init.d/
> httpd  stop
> Jun 10 16:54:09 SEVER2 ResourceManager[5730]: info: Running /etc/ha.d/
> resource.d/Filesystem /dev/drbd0 /usr1 ext3 stop
> Jun 10 16:54:09 SEVER2 Filesystem[6237]: INFO: Running stop for /dev/drbd0 
> on /usr1
> Jun 10 16:54:09 SEVER2 Filesystem[6237]: INFO: Trying to unmount /usr1
> Jun 10 16:54:09 SEVER2 Filesystem[6237]: INFO: unmounted /usr1 successfully
> Jun 10 16:54:09 SEVER2 Filesystem[6206]: INFO:  Success
> Jun 10 16:54:09 SEVER2 ResourceManager[5730]: info: Running /etc/ha.d/
> resource.d/drbddisk  stop
> Jun 10 16:54:09 SEVER2 kernel: drbd0: role( Primary -> Secondary ) 
> Jun 10 16:54:09 SEVER2 heartbeat: [5717]: info: All HA resources 
> relinquished.
> Jun 10 16:54:09 SEVER2 heartbeat: [5717]: info: Writing type [shutdone] 
> message to FIFO
> Jun 10 16:54:09 SEVER2 heartbeat: [5717]: info: FIFO message [type shutdone] 
> written rc=27
> Jun 10 16:54:11 SEVER2 heartbeat: [3378]: info: killing /usr/lib/heartbeat/
> ipfail process group 4693 with signal 15
> Jun 10 16:54:12 SEVER2 heartbeat: [3378]: WARN: G_SIG_dispatch: Dispatch 
> function for SIGCHLD was delayed 230 ms (> 100 ms) before being called 
> (GSource: 0x977fa20)
> Jun 10 16:54:12 SEVER2 heartbeat: [3378]: info: G_SIG_dispatch: started at 
> 429432739 should have started at 429432716
> Jun 10 16:54:12 SEVER2 heartbeat: [3378]: WARN: G_SIG_dispatch: Dispatch 
> function for SIGCHLD took too long to execute: 50 ms (> 30 ms) (GSource: 
> 0x977fa20)
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: killing HBREAD process 3421 
> with signal 15
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: killing HBFIFO process 3403 
> with signal 15
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: killing HBWRITE process 3404 
> with signal 15
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: killing HBREAD process 3405 
> with signal 15
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: killing HBWRITE process 3420 
> with signal 15
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: WARN: Gmain_timeout_dispatch: 
> Dispatch function for shutdown phase 2 took too long to execute: 140 ms (> 
> 100 ms) (GSource: 0x979f5a0)
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: WARN: G_SIG_dispatch: Dispatch 
> function for SIGCHLD was delayed 130 ms (> 100 ms) before being called 
> (GSource: 0x977fa20)
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: G_SIG_dispatch: started at 
> 429432861 should have started at 429432848
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: Core process 3403 exited. 5 
> remaining
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: Core process 3404 exited. 4 
> remaining
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: Core process 3405 exited. 3 
> remaining
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: Core process 3420 exited. 2 
> remaining
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: Core process 3421 exited. 1 
> remaining
> Jun 10 16:54:13 SEVER2 heartbeat: [3378]: info: SEVER2.domain Heartbeat 
> shutdown complete.
> Jun 10 16:54:14 SEVER2 logd: [6329]: info: Waiting for pid=3321 to exit
> Jun 10 16:54:14 SEVER2 logd: [3322]: info: logd_term_write_action: received 
> SIGTERM
> Jun 10 16:54:14 SEVER2 logd: [3322]: info: Exiting write process
> Jun 10 16:54:15 SEVER2 logd: [6329]: info: Pid 3321 exited
> Jun 10 16:54:48 SEVER2 logd: [6342]: info: logd started with default 
> configuration.
> Jun 10 16:54:48 SEVER2 logd: [6350]: info: G_main_add_SignalHandler: Added 
> signal handler for signal 15
> Jun 10 16:54:48 SEVER2 logd: [6342]: info: G_main_add_SignalHandler: Added 
> signal handler for signal 15
> Jun 10 16:54:48 SEVER2 heartbeat: [6386]: info: AUTH: i=1: key = 0x9826920, 
> auth=0x155c80, authname=crc
> Jun 10 16:54:48 SEVER2 heartbeat: [6386]: info: Version 2 support: false
> Jun 10 16:54:48 SEVER2 heartbeat: [6386]: WARN: Logging daemon is disabled -
> -enabling logging daemon is recommended
> Jun 10 16:54:48 SEVER2 heartbeat: [6386]: info: **************************
> Jun 10 16:54:48 SEVER2 heartbeat: [6386]: info: Configuration validated. 
> Starting heartbeat 2.1.4
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: heartbeat: version 2.1.4
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: Heartbeat generation: 
> 1369315585
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: glib: ucast: write socket 
> priority set to IPTOS_LOWDELAY on eth1
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: glib: ucast: bound send 
> socket to device: eth1
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: glib: ucast: bound receive 
> socket to device: eth1
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: glib: ucast: started on port 
> 694 interface eth1 to 10.10.10.11
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: glib: ping heartbeat started.
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: G_main_add_TriggerHandler: 
> Added signal manual handler
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: G_main_add_TriggerHandler: 
> Added signal manual handler
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: notice: Using watchdog device: /
> dev/watchdog
> Jun 10 16:54:48 SEVER2 heartbeat: [6387]: info: G_main_add_SignalHandler: 
> Added signal handler for signal 17
> Jun 10 16:54:49 SEVER2 heartbeat: [6387]: info: Local status now set to: 
> 'up'
> Jun 10 16:54:49 SEVER2 heartbeat: [6387]: info: Managed write_hostcachedata 
> process 6395 exited with return code 0.
> Jun 10 16:54:50 SEVER2 heartbeat: [6387]: info: Link 192.168.0.1:192.168.0.1 
> up.
> Jun 10 16:54:50 SEVER2 heartbeat: [6387]: info: Status update for node 192.
> 168.0.1: status ping
> Jun 10 16:54:50 SEVER2 heartbeat: [6387]: WARN: G_CH_dispatch_int: Dispatch 
> function for read child took too long to execute: 250 ms (> 50 ms) (GSource: 
> 0x982d1d8)
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: node SEVER1.domain: is dead
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Comm_now_up(): updating 
> status to active
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Local status now set to: 
> 'active'
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Starting child client "/usr/
> lib/heartbeat/ipfail" (200,200)
> Jun 10 16:56:49 SEVER2 heartbeat: [6895]: info: Starting "/usr/lib/heartbeat
> /ipfail" as uid 200  gid 200 (pid 6895)
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: No STONITH device configured.
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: Shared disks are not 
> protected.
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Resources being acquired 
> from SEVER1.domain.
> Jun 10 16:56:49 SEVER2 harc[6896]: info: Running /etc/ha.d/rc.d/status 
> status
> Jun 10 16:56:49 SEVER2 heartbeat: [6902]: info: No local resources [/usr/
> share/heartbeat/ResourceManager listkeys SEVER2.domain] to acquire.
> Jun 10 16:56:49 SEVER2 heartbeat: [6902]: info: Writing type [resource] 
> message to FIFO
> Jun 10 16:56:49 SEVER2 heartbeat: [6902]: info: FIFO message [type resource] 
> written rc=79
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 0, 
> foreign 1, reason 'T_RESOURCES' (0))
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (0))
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Initial resource acquisition 
> complete (T_RESOURCES(us))
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: STATE 1 => 3
> Jun 10 16:56:49 SEVER2 mach_down[6925]: info: Taking over resource group 
> drbddisk
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: G_CH_dispatch_int: Dispatch 
> function for FIFO took too long to execute: 80 ms (> 50 ms) (GSource: 
> 0x9829960)
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: G_SIG_dispatch: Dispatch 
> function for SIGCHLD was delayed 110 ms (> 100 ms) before being called 
> (GSource: 0x982d320)
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: G_SIG_dispatch: started at 
> 429448463 should have started at 429448452
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: Managed req_our_resources 
> process 6902 exited with return code 0.
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'req_our_resources' (1))
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: G_SIG_dispatch: Dispatch 
> function for SIGCHLD took too long to execute: 70 ms (> 30 ms) (GSource: 
> 0x982d320)
> Jun 10 16:56:49 SEVER2 ResourceManager[6951]: info: Acquiring resource 
> group: SEVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 httpd 
> postgresql IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::
> SEVER_FailOver
> Jun 10 16:56:49 SEVER2 heartbeat: [6387]: WARN: G_WC_dispatch: Dispatch 
> function for client registration took too long to execute: 40 ms (> 20 ms) 
> (GSource: 0x98404a0)
> Jun 10 16:56:49 SEVER2 ResourceManager[6951]: info: Running /etc/ha.d/
> resource.d/drbddisk  start
> Jun 10 16:56:50 SEVER2 kernel: drbd0: role( Secondary -> Primary ) 
> Jun 10 16:56:50 SEVER2 heartbeat: [6387]: WARN: G_CH_dispatch_int: Dispatch 
> function for API client took too long to execute: 230 ms (> 100 ms) 
> (GSource: 0x9841e98)
> Jun 10 16:56:50 SEVER2 Filesystem[7007]: INFO:  Resource is stopped
> Jun 10 16:56:50 SEVER2 ResourceManager[6951]: info: Running /etc/ha.d/
> resource.d/Filesystem /dev/drbd0 /usr1 ext3 start
> Jun 10 16:56:50 SEVER2 Filesystem[7088]: INFO: Running start for /dev/drbd0 
> on /usr1
> Jun 10 16:56:50 SEVER2 kernel: kjournald starting.  Commit interval 5 
> seconds
> Jun 10 16:56:50 SEVER2 kernel: EXT3-fs warning: maximal mount count reached, 
> running e2fsck is recommended
> Jun 10 16:56:50 SEVER2 kernel: EXT3 FS on drbd0, internal journal
> Jun 10 16:56:50 SEVER2 kernel: EXT3-fs: mounted filesystem with ordered data 
> mode.
> Jun 10 16:56:50 SEVER2 Filesystem[7077]: INFO:  Success
> Jun 10 16:56:51 SEVER2 ResourceManager[6951]: info: Running /etc/init.d/
> httpd  start
> Jun 10 16:56:51 SEVER2 ResourceManager[6951]: info: Running /etc/init.d/
> postgresql  start
> ===============================================================================
> ↑ The PostgreSQL service was started automatically this time.
> ===============================================================================
> Jun 10 16:56:54 SEVER2 IPaddr[7311]: INFO:  Resource is stopped
> Jun 10 16:56:54 SEVER2 ResourceManager[6951]: info: Running /etc/ha.d/
> resource.d/IPaddr 192.168.0.110/24/eth0 start
> Jun 10 16:56:54 SEVER2 IPaddr[7409]: INFO: Using calculated netmask for 192.
> 168.0.110: 255.255.255.0
> Jun 10 16:56:55 SEVER2 IPaddr[7409]: INFO: eval ifconfig eth0:0 192.168.0.
> 110 netmask 255.255.255.0 broadcast 192.168.0.255
> Jun 10 16:56:55 SEVER2 avahi-daemon[3552]: Registering new address record 
> for 192.168.0.110 on eth0.
> Jun 10 16:56:55 SEVER2 IPaddr[7380]: INFO:  Success
> Jun 10 16:56:55 SEVER2 MailTo[7516]: INFO:  Resource is stopped
> Jun 10 16:56:55 SEVER2 ResourceManager[6951]: info: Running /etc/ha.d/
> resource.d/MailTo test****@yahoo***** SEVER_FailOver start
> Jun 10 16:56:55 SEVER2 MailTo[7561]: INFO:  Success
> Jun 10 16:56:55 SEVER2 mach_down[6925]: info: /usr/share/heartbeat/
> mach_down: nice_failback: foreign resources acquired
> Jun 10 16:56:55 SEVER2 mach_down[6925]: info: mach_down takeover complete 
> for node SEVER1.domain.
> Jun 10 16:56:55 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (1))
> Jun 10 16:56:55 SEVER2 heartbeat: [6387]: info: mach_down takeover complete.
> Jun 10 16:56:55 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'mach_down' (1))
> Jun 10 16:56:55 SEVER2 heartbeat: [6387]: WARN: G_CH_dispatch_int: Dispatch 
> function for FIFO took too long to execute: 60 ms (> 50 ms) (GSource: 
> 0x9829960)
> Jun 10 16:56:55 SEVER2 heartbeat: [6387]: info: Managed status process 6896 
> exited with return code 0.
> Jun 10 16:56:59 SEVER2 heartbeat: [6387]: info: Local Resource acquisition 
> completed. (none)
> Jun 10 16:56:59 SEVER2 heartbeat: [6387]: info: local resource transition 
> completed.
> Jun 10 16:56:59 SEVER2 heartbeat: [6387]: info: AnnounceTakeover(local 1, 
> foreign 1, reason 'T_RESOURCES(us)' (1))
> 
> That is all.
> Thank you very much in advance for your help.
>         
> ----- Original Message -----
> > From: "delta_syste****@yahoo*****" <delta_syste****@yahoo*****>
> > To: Akamatsu <akamatsu_hiroshi_b1****@lab*****>; "linux-ha-japan @ lists.sourceforge.jp" <linux****@lists*****>
> > Cc: 
> > Date: 2013/6/10, Mon 15:00
> > Subject: Re: [Linux-ha-jp] PostgreSQL service does not start automatically from heartbeat
> > 
> > Dear Akamatsu-san,
> >  
> > This is O.N.
> > Thank you very much for all of your advice.
> >  
> >  I tried the hand-written RA, but the symptom did not change: while
> > F-Secure is running, the PostgreSQL service could not be started.
> >  
> >>>   In other words, in your environment I would expect that stopping
> >>>   Heartbeat on SERVER1 also stops F-Secure's PostgreSQL. Is that the case?
> >>>   (That would not be desirable either.)
> > Even when I stop Heartbeat, F-Secure's PostgreSQL does not stop.
> > Running service postgresql stop does not stop F-Secure's PostgreSQL either.
> >  
> > Is there some way to check, for example from a script, the ResourceManager
> > behaviour that appears in the log?
> >  
> > I am sorry to keep asking. If there is any other lead towards a solution,
> > please let me know.
> > Thank you very much in advance.
> > 
> > 
> > ----- Original Message -----
> >>  From: Akamatsu <akamatsu_hiroshi_b1****@lab*****>
> >>  To: delta_syste****@yahoo*****; linux****@lists*****
> >>  Cc: 
> >>  Date: 2013/6/7, Fri 19:21
> >>  Subject: Re: [Linux-ha-jp] PostgreSQL service does not start automatically from heartbeat
> >> 
> >> To: O.N-san
> >> 
> >>   My name is Akamatsu.
> >> 
> >>   I am not very familiar with v1 mode, but when I tried this in a simple
> >>   environment (Heartbeat 3.0.5) the same symptom occurred.
> >>   At startup, a resource that is already running is apparently simply
> >>   skipped. (Incidentally, v2 and Pacemaker behave differently here.)
> >> 
> >>   For example, with the haresources below, if snmpd is already running
> >>   when heartbeat is started, the start of snmpd is skipped after httpd and
> >>   the start of ntpd is run next (a rough way to reproduce this is sketched
> >>   right after the example).
> >>   ---
> >>   node1 httpd snmpd ntpd
> >>   ---
> >> 
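> >>   Purely as an illustration (not taken from the original setup), one way
> >>   to reproduce that behaviour on a test box with the haresources above
> >>   would be:
> >> 
> >>   ---
> >>   # start snmpd by hand first, then start heartbeat
> >>   /etc/init.d/snmpd start
> >>   /etc/init.d/heartbeat start
> >>   # ha-log should show "Running ... start" for httpd and ntpd,
> >>   # but no start entry for the already-running snmpd
> >>   grep -E 'httpd|snmpd|ntpd' /var/log/ha-log
> >>   ---
> >> 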
> >>   However, when resources are pushed off the node, for example with the
> >>   standby command, snmpd is stopped.
> >>   In other words, in your environment I would expect that stopping
> >>   Heartbeat on SERVER1 also stops F-Secure's PostgreSQL. Is that the case?
> >>   (That would not be desirable either.)
> >> 
> >> 
> >>   As for a workaround, I still think a hand-written RA is what you need.
> >>   At a quick glance, /etc/ha.d/resource.d/Filesystem is probably the best
> >>   thing to use as a reference.
> >> 
> >>   Reading through it (it is plain bash, nothing complicated), it declares
> >>   the arguments for the RA with export and finally calls something named
> >>   ra_execocf. This ra_execocf hands the operation (start, stop and so on)
> >>   over to another RA (under /usr/lib/ocf/resource.d/heartbeat).
> >> 
> >>   In that other RA directory there is an RA called pgsql for controlling
> >>   PostgreSQL, so let us use that.
> >> 
> >>   If the parameters are already fixed, you can simply write them all out
> >>   like this...
> >> 
> >>  ---
> >>  #!/bin/sh
> >>  . /etc/ha.d/resource.d/hto-mapfuncs
> >>  OCF_TYPE=pgsql
> >>  export OCF_RESKEY_pgctl="<full path to pg_ctl>"
> >>  export OCF_RESKEY_start_opt="-p <port number, e.g. 5432>"
> >>  export OCF_RESKEY_psql="<full path to psql>"
> >>  export OCF_RESKEY_pgdata="<database directory>"
> >>  export OCF_RESKEY_pgport="<port number, e.g. 5432>"
> >>  ra_execocf $1
> >>  ---
> >> 
> >>   Roughly that set of parameters should be enough to start with.
> >>   Create the above as an RA at /etc/ha.d/resource.d/ONpgsql, make it
> >>   executable (# chmod 755 /etc/ha.d/resource.d/ONpgsql), and change
> >>   "postgresql" to "ONpgsql" in haresources; I think that should work.
> >>   (This is entirely on paper, so my apologies if it does not.)
> >>   Of course, set this up on both nodes.
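> >> 
> >>   Putting that together, the steps would look roughly like this (purely
> >>   illustrative; fill in the real paths and the cluster instance's port,
> >>   5432, yourself):
> >> 
> >>   ---
> >>   # on both SERVER1 and SERVER2
> >>   vi /etc/ha.d/resource.d/ONpgsql    # paste the script above, fill in the paths
> >>   chmod 755 /etc/ha.d/resource.d/ONpgsql
> >>   # then, in /etc/ha.d/haresources, replace "postgresql" with "ONpgsql"
> >>   ---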
> >> 
> >>   One more thing: every line ending inside the RA must be a bare 0x0a (LF).
> >>   If you write the RA on Windows, for example in Notepad, and transfer it
> >>   as-is, it will not start.
> >>   Deal with this, for example by saving it as UTF-8 with LF line endings
> >>   before transferring it.
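> >> 
> >>   For example (generic commands, assuming the ONpgsql file name above):
> >> 
> >>   ---
> >>   # "with CRLF line terminators" in the output means the file is broken
> >>   file /etc/ha.d/resource.d/ONpgsql
> >>   # strip any carriage returns in place
> >>   sed -i 's/\r$//' /etc/ha.d/resource.d/ONpgsql
> >>   ---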
> >> 
> >>   Well then, I hope this works out for you...
> >> 
> >> 
> >>   By the way, the ucast issue in ha.cf does not appear to be fixed yet.
> >>   Please set ha.cf on both nodes as follows.
> >> 
> >>   ---
> >>    ucast eth1 10.10.10.10  <-- real IP of eth1 on server1
> >>    ucast eth1 10.10.10.11  <-- real IP of eth1 on server2
> >>   ---
> >> 
> >> 
> >>>   My name is O.N.
> >>>    
> >>>   I cannot find a lead towards a solution and am at a loss.
> >>>   This may not be the right place to ask, but I am posting here in the
> >>>   hope of finding at least some lead towards a solution.
> >>>    
> >>>   1. Problem
> >>>   Red Hat Enterprise Linux 5.5 and Heartbeat-2.1.4-1 are installed on
> >>>   physical servers in a cluster configuration for httpd and PostgreSQL.
> >>>   After a PostgreSQL version upgrade, the PostgreSQL service is no longer
> >>>   started automatically from heartbeat.
> >>>   Even in the debug log there is no trace of the ResourceManager starting
> >>>   the PostgreSQL service.
> >>>   Please tell me what the likely cause could be.
> >>> 
> >>>   2. Symptom (reproducible)
> >>>   F-Secure Linux Security Full Edition 9.20 is installed as the security
> >>>   software. The security software also uses postgresql, and when it is
> >>>   set to start automatically, the PostgreSQL service cannot be started
> >>>   from heartbeat.
> >>>   When the security software's automatic start is disabled, the
> >>>   PostgreSQL service is started automatically from heartbeat.
> >>>   The two PostgreSQL instances use separate directories and port numbers.
> >>>   According to F-Secure's web site, they do not interfere with each other
> >>>   as long as the directories and port numbers differ.
> >>> 
> >>>   If possible, I would like to start and manage F-Secure's PostgreSQL and
> >>>   heartbeat's PostgreSQL service as separate services, rather than
> >>>   controlling both with a service-monitoring script or the like.
> >>> 
> >>>   3. Environment
> >>>   Red Hat Enterprise Linux 5.5
> >>>   heartbeat-2.1.4-1
> >>>   SERVER1 (physical, eth0) 192.168.0.120
> >>>   SERVER2 (physical, eth0) 192.168.0.121
> >>>   VIP 192.168.0.110
> >>>   SERVER1 (physical, eth1) 10.10.10.10
> >>>   SERVER2 (physical, eth1) 10.10.10.11
> >>>   postgresql is started from heartbeat
> >>>    
> >>>   4. PostgreSQL versions
> >>>   postgresql-devel-8.1.23-6.el5_8
> >>>   postgresql-libs-8.1.23-6.el5_8
> >>>   f-secure-postgresql-8.1.9-13
> >>>   postgresql-server-8.1.23-6.el5_8
> >>>   postgresql-python-8.1.23-6.el5_8
> >>>   postgresql-8.1.23-6.el5_8
> >>>    
> >>>   5. PostgreSQL port numbers
> >>>   postgresql 5432
> >>>   F-Secure 28078
> >>>    
> >>>   6. Excerpt from /etc/ha.d/haresources
> >>>   SERVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 \
> >>>   postgresql httpd IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::server_FailOver
> >>> 
> >>>   7. Excerpt from /etc/ha.d/ha.cf
> >>>   debugfile  /var/log/ha-debug
> >>>   logfile  /var/log/ha-log
> >>>   logfacility local0
> >>>   keepalive 10
> >>>   deadtime 60
> >>>   warntime 30
> >>>   initdead 120
> >>>   udpport 694
> >>>   ucast eth1 10.10.10.11
> >>>   auto_failback off
> >>>   node SEVER1.domain SEVER2.domain
> >>>   ping 192.168.0.1
> >>>   respawn hacluster /usr/lib/heartbeat/ipfail
> >>>   apiauth ipfail gid=haclient uid=hacluster
> >>>   debug 3
> >>> 
> >>>   8. Excerpt from ha-debug
> >>>   The logs below are the result of running stand-alone.
> >>>   =================================================
> >>>   Normal: heartbeat run without F-Secure started
> >>>   =================================================
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 info: Acquiring resource 
> > group: 
> >>>   SEVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 postgresql 
> > httpd 
> >> 
> >>>   IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::sever_FailOver
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/drbddisk  start
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 debug: Starting /etc/ha.d/
> >>>   resource.d/drbddisk  start
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 debug: 
> > /etc/ha.d/resource.d/
> >>>   drbddisk  start done. RC=0
> >>>   Filesystem[4042]: 2013/06/07_14:13:41 INFO:  Resource is stopped
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/Filesystem /dev/drbd0 /usr1 ext3 start
> >>>   ResourceManager[3986]: 2013/06/07_14:13:41 debug: Starting /etc/ha.d/
> >>>   resource.d/Filesystem /dev/drbd0 /usr1 ext3 start
> >>>   Filesystem[4123]: 2013/06/07_14:13:41 INFO: Running start for 
> > /dev/drbd0 on 
> >> 
> >>>   /usr1
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: hb_rsc_isstable: 
> >>>   ResourceMgmt_child_count: 1, other_is_stable: 1, takeover_in_progress: 
> > 1, 
> >>>   going_standby: 0, standby running(ms): 0, resourcestate: 3
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:42 debug: [We are SEVER2.domain]
> >>>   Filesystem[4112]: 2013/06/07_14:13:42 INFO:  Success
> >>>   INFO:  Success
> >>>   ResourceManager[3986]: 2013/06/07_14:13:42 debug: 
> > /etc/ha.d/resource.d/
> >>>   Filesystem /dev/drbd0 /usr1 ext3 start done. RC=0
> >>>   ResourceManager[3986]: 2013/06/07_14:13:42 info: Running /etc/init.d/
> >>>   postgresql  start
> >>>   ResourceManager[3986]: 2013/06/07_14:13:42 debug: Starting 
> > /etc/init.d/
> >>>   postgresql  start
> >>>   Starting postgresql service: heartbeat[3378]: 2013/06/07_14:13:42 
> > debug: 
> >>>   APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:42 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:42 debug: auto_failback -> 0 (off)
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:43 debug: Setting message filter mode
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:43 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:43 debug: Starting node walk
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:44 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:44 debug: Cluster node: 192.168.0.1: 
> > status: 
> >> 
> >>>   ping
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug:  return TRUE;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: return 1;
> >>>   heartbeat[3378]: 2013/06/07_14:13:45 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ipfail[3930]: 2013/06/07_14:13:45 debug: Cluster node: SEVER2.domain: 
> >>>   status: active
> >>>    
> >>>   =================================================
> >>>   Abnormal: heartbeat run with F-Secure started
> >>>   =================================================
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 info: Acquiring resource 
> > group: 
> >> 
> >>>   SEVER1.domain drbddisk Filesystem::/dev/drbd0::/usr1::ext3 postgresql 
> > httpd 
> >> 
> >>>   IPaddr::192.168.0.110/24/eth0 MailTo::test****@yahoo*****::sever_FailOver
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[2] : [from_id=ipfail]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[3] : [to_id=ipfail]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[4] : 
> > [src=SEVER2.domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[5] : [info=signon]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[6] : [client_gen=0]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[7] : 
> > [(1)srcuuid=0x9624c60
> >>>   (36 27)]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[8] : [seq=17]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[9] : [hg=519e18e3]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[10] : [ts=51b16da3]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[11] : [ld=2.29 1.36 
> > 0.54 2/
> >>>   272 15746]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[12] : [ttl=4]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: MSG[13] : [auth=1 
> > 781ac7ff]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > create_seq_snapshot_table:no 
> >>>   missing packets found for node SEVER2.domain
> >>>   heartbeat[3436]: 2013/06/07_14:20:36 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: Signing on API client 
> > 15691 
> >>>   (ipfail)
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 WARN: G_CH_dispatch_int: Dispatch 
> > 
> >>>   function for API client took too long to execute: 230 ms (> 100 ms) 
> > 
> >>>   (GSource: 0x9620f98)
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/drbddisk  start
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: hb_rsc_isstable: 
> >>>   ResourceMgmt_child_count: 1, other_is_stable: 1, takeover_in_progress: 
> > 1, 
> >>>   going_standby: 0, standby running(ms): 0, resourcestate: 3
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: [We are SEVER2.domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 debug: Starting /etc/ha.d/
> >>>   resource.d/drbddisk  start
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: auto_failback -> 0 (off)
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Setting message filter mode
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 debug: 
> > /etc/ha.d/resource.d/
> >>>   drbddisk  start done. RC=0
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Starting node walk
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Cluster node: 192.168.0.1: 
> >>  status: 
> >>>   ping
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Cluster node: SEVER2.domain: 
> > 
> >>>   status: active
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Cluster node: SEVER1.domain: 
> > 
> >>>   status: dead
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: [They are SEVER1.domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Setting message signal
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > APIclients_input_dispatch() {
> >>>   Filesystem[15803]: 2013/06/07_14:20:36 INFO:  Resource is stopped
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: ProcessAnAPIRequest() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug:  return TRUE;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: Waiting for messages...
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: }/*ProcessAnAPIRequest*/;
> >>>   ipfail[15691]: 2013/06/07_14:20:36 debug: 
> > G_main_IPC_Channel_constructor
> >>>   (sock=4,4)
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: return 1;
> >>>   heartbeat[3374]: 2013/06/07_14:20:36 debug: 
> > }/*APIclients_input_dispatch*/;
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/Filesystem /dev/drbd0 /usr1 ext3 start
> >>>   ResourceManager[15747]: 2013/06/07_14:20:36 debug: Starting /etc/ha.d/
> >>>   resource.d/Filesystem /dev/drbd0 /usr1 ext3 start
> >>>   Filesystem[15884]: 2013/06/07_14:20:36 INFO: Running start for 
> > /dev/drbd0 
> >>  on 
> >>>   /usr1
> >>>   Filesystem[15873]: 2013/06/07_14:20:37 INFO:  Success
> >>>   INFO:  Success
> >>>   ResourceManager[15747]: 2013/06/07_14:20:37 debug: 
> > /etc/ha.d/resource.d/
> >>>   Filesystem /dev/drbd0 /usr1 ext3 start done. RC=0
> >>>   ResourceManager[15747]: 2013/06/07_14:20:37 info: Running 
> >>  /etc/init.d/httpd  
> >>>   start
> >>>   ResourceManager[15747]: 2013/06/07_14:20:37 debug: Starting 
> > /etc/init.d/
> >>>   httpd  start
> >>>   Starting httpd: [  OK  ]
> >>>   ResourceManager[15747]: 2013/06/07_14:20:38 debug: /etc/init.d/httpd  
> > start 
> >> 
> >>>   done. RC=0
> >>>   IPaddr[16010]: 2013/06/07_14:20:39 INFO:  Resource is stopped
> >>>   ResourceManager[15747]: 2013/06/07_14:20:39 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/IPaddr 192.168.0.110/24/eth0 start
> >>>   ResourceManager[15747]: 2013/06/07_14:20:39 debug: Starting /etc/ha.d/
> >>>   resource.d/IPaddr 192.168.0.110/24/eth0 start
> >>>   IPaddr[16116]: 2013/06/07_14:20:39 INFO: Using calculated netmask for 
> > 192.
> >>>   168.0.110: 255.255.255.0
> >>>   IPaddr[16116]: 2013/06/07_14:20:39 DEBUG: Using calculated broadcast 
> > for 
> >>  192.
> >>>   168.0.110: 192.168.0.255
> >>>   IPaddr[16116]: 2013/06/07_14:20:39 INFO: eval ifconfig eth0:0 
> > 192.168.0.110 
> >> 
> >>>   netmask 255.255.255.0 broadcast 192.168.0.255
> >>>   IPaddr[16116]: 2013/06/07_14:20:39 DEBUG: Sending Gratuitous Arp for 
> >>  192.168.
> >>>   0.110 on eth0:0 [eth0]
> >>>   IPaddr[16087]: 2013/06/07_14:20:39 INFO:  Success
> >>>   INFO:  Success
> >>>   ResourceManager[15747]: 2013/06/07_14:20:39 debug: 
> > /etc/ha.d/resource.d/
> >>>   IPaddr 192.168.0.110/24/eth0 start done. RC=0
> >>>   MailTo[16223]: 2013/06/07_14:20:40 INFO:  Resource is stopped
> >>>   ResourceManager[15747]: 2013/06/07_14:20:40 info: Running 
> >>  /etc/ha.d/resource.
> >>>   d/MailTo test****@yahoo***** sever_FailOver start
> >>>   ResourceManager[15747]: 2013/06/07_14:20:40 debug: Starting /etc/ha.d/
> >>>   resource.d/MailTo test****@yahoo***** sever_FailOver start
> >>>   MailTo[16268]: 2013/06/07_14:20:40 INFO:  Success
> >>>   INFO:  Success
> >>>   ResourceManager[15747]: 2013/06/07_14:20:40 debug: 
> > /etc/ha.d/resource.d/
> >>>   MailTo test****@yahoo***** sever_FailOver start done. RC=0
> >>>   mach_down[15721]: 2013/06/07_14:20:40 info: 
> > /usr/share/heartbeat/mach_down: 
> >> 
> >>>   nice_failback: foreign resources acquired
> >>>   heartbeat[3406]: 2013/06/07_14:20:40 debug: fifo_child message:
> >>>   heartbeat[3406]: 2013/06/07_14:20:40 debug: MSG: Dumping message with 
> > 3 
> >>>   fields
> >>>   heartbeat[3406]: 2013/06/07_14:20:40 debug: MSG[0] : [t=resource]
> >>>   heartbeat[3406]: 2013/06/07_14:20:40 debug: MSG[1] : 
> > [rsc_hold=foreign]
> >>>   heartbeat[3406]: 2013/06/07_14:20:40 debug: MSG[2] : [info=mach_down]
> >>>   mach_down[15721]: 2013/06/07_14:20:40 info: mach_down takeover 
> > complete for 
> >> 
> >>>   node SEVER1.domain.
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: FIFO_child_msg_dispatch() 
> > {
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 info: AnnounceTakeover(local 1, 
> >>  foreign 
> >>>   1, reason 'T_RESOURCES(us)' (1))
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 info: mach_down takeover 
> > complete.
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: process_resources(3):  
> > other 
> >>  now 
> >>>   stable
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 info: AnnounceTakeover(local 1, 
> >>  foreign 
> >>>   1, reason 'mach_down' (1))
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: hb_rsc_isstable: 
> >>>   ResourceMgmt_child_count: 1, other_is_stable: 1, takeover_in_progress: 
> > 0, 
> >>>   going_standby: 0, standby running(ms): 0, resourcestate: 3
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: 
> > }/*FIFO_child_msg_dispatch*/;
> >>>   heartbeat[3436]: 2013/06/07_14:20:40 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 WARN: G_CH_dispatch_int: Dispatch 
> > 
> >>>   function for FIFO took too long to execute: 60 ms (> 50 ms) 
> > (GSource: 
> >>>   0x9609088)
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 info: Managed status process 
> > 15692 
> >>>   exited with return code 0.
> >>>   heartbeat[3374]: 2013/06/07_14:20:40 debug: RscMgmtProc 
> > 'status' 
> >>  exited code 
> >>>   0
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: hb_send_local_status() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: PID 3374: Sending local 
> > status 
> >>>   curnode = 807aaec status: active
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: }/*hb_send_local_status*/;
> >>>   heartbeat[3436]: 2013/06/07_14:20:45 debug: Packet authenticated
> >>>   heartbeat[3437]: 2013/06/07_14:20:45 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: read_child_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: process_clustermsg: node 
> > [192.
> >>>   168.0.1]
> >>>   heartbeat[3374]: 2013/06/07_14:20:45 debug: }/*read_child_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 info: Local Resource acquisition 
> >>>   completed. (none)
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 info: local resource transition 
> >>>   completed.
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 debug: Sending hold resources 
> > msg: 
> >>  all, 
> >>>   stable=1 # <none>
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 info: AnnounceTakeover(local 1, 
> >>  foreign 
> >>>   1, reason 'T_RESOURCES(us)' (1))
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 debug: hb_rsc_isstable: 
> >>>   ResourceMgmt_child_count: 0, other_is_stable: 1, takeover_in_progress: 
> > 0, 
> >>>   going_standby: 0, standby running(ms): 0, resourcestate: 4
> >>>   heartbeat[3374]: 2013/06/07_14:20:46 debug: hb_rsc_isstable: 
> >>>   ResourceMgmt_child_count: 0, other_is_stable: 1, takeover_in_progress: 
> > 0, 
> >>>   going_standby: 0, standby running(ms): 0, resourcestate: 4
> >>>   heartbeat[3436]: 2013/06/07_14:20:46 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: hb_send_local_status() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: PID 3374: Sending local 
> > status 
> >>>   curnode = 807aaec status: active
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: }/*hb_send_local_status*/;
> >>>   heartbeat[3436]: 2013/06/07_14:20:55 debug: Packet authenticated
> >>>   heartbeat[3437]: 2013/06/07_14:20:55 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: read_child_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: process_clustermsg: node 
> > [192.
> >>>   168.0.1]
> >>>   heartbeat[3374]: 2013/06/07_14:20:55 debug: }/*read_child_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: hb_send_local_status() {
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: PID 3374: Sending local 
> > status 
> >>>   curnode = 807aaec status: active
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: process_clustermsg: node 
> >>  [SEVER2.
> >>>   domain]
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: }/*hb_send_local_status*/;
> >>>   heartbeat[3436]: 2013/06/07_14:21:05 debug: Packet authenticated
> >>>   heartbeat[3436]: 2013/06/07_14:21:05 debug: Packet authenticated
> >>>   heartbeat[3437]: 2013/06/07_14:21:05 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: read_child_dispatch() {
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: Packet authenticated
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: process_clustermsg: node 
> > [192.
> >>>   168.0.1]
> >>>   heartbeat[3374]: 2013/06/07_14:21:05 debug: }/*read_child_dispatch*/;
> >>>   heartbeat[3374]: 2013/06/07_14:21:15 debug: hb_send_local_status() {
> >>> 
> >>>   That is all.
> >>>   Thank you very much in advance for your help.
> >>> 
> >>>   _______________________________________________
> >>>   Linux-ha-japan mailing list
> >>>   Linux****@lists*****
> >>>   http://lists.sourceforge.jp/mailman/listinfo/linux-ha-japan
> >>   
> > 
> > _______________________________________________
> > Linux-ha-japan mailing list
> > Linux****@lists*****
> > http://lists.sourceforge.jp/mailman/listinfo/linux-ha-japan
> > 




