Please help me beat this problem. There is a VPN server to which I need to establish a PPTP connection from FreeBSD.
I installed pptpclient; we connect, authenticate, the interface gets an IP address,
and 10 seconds later the link drops. I look at the logs and can't make much sense of them:
----------------------------ppp.log-------------------------------------
Sep 30 16:50:09 ns ppp[22118]: Phase: Using interface: tun0
Sep 30 16:50:09 ns ppp[22118]: Phase: deflink: Created in closed state
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: set authname MyUserName
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: set authkey ********
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: set timeout 0
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: set ifaddr 0 0
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: add XXX.XXX.XXX.XXX HISADDR
Sep 30 16:50:09 ns ppp[22118]: tun0: Command: rp: disable ipv6cp
Sep 30 16:50:09 ns ppp[22118]: tun0: Phase: PPP Started (direct mode).
Sep 30 16:50:09 ns ppp[22118]: tun0: Phase: bundle: Establish
Sep 30 16:50:09 ns ppp[22118]: tun0: Phase: deflink: closed -> opening
Sep 30 16:50:09 ns ppp[22118]: tun0: Phase: deflink: Connected!
Sep 30 16:50:09 ns ppp[22118]: tun0: Phase: deflink: opening -> carrier
Sep 30 16:50:10 ns ppp[22118]: tun0: Phase: deflink: /dev/pts/3: CD detected
Sep 30 16:50:10 ns ppp[22118]: tun0: Phase: deflink: carrier -> lcp
Sep 30 16:50:10 ns ppp[22118]: tun0: LCP: FSM: Using "deflink" as a transport
Sep 30 16:50:10 ns ppp[22118]: tun0: LCP: deflink: State change Initial --> Closed
Sep 30 16:50:10 ns ppp[22118]: tun0: LCP: deflink: State change Closed --> Stopped
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: LayerStart
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: SendConfigReq(1) state = Stopped
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACFCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: PROTOCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACCMAP[6] 0x00000000
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRU[4] 1500
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MAGICNUM[6] 0x9f83b927
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: State change Stopped --> Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: RecvConfigReq(0) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRU[4] 1400
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: AUTHPROTO[5] 0xc223 (CHAP 0x81)
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MAGICNUM[6] 0x6bd94d69
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: PROTOCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACFCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: CALLBACK[3] CBCP
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRRU[4] 1614
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ENDDISC[23] Local Addr: \xa1M-^RT\xa3M-^S^WM\xcfM-^KM- ^XW\xce\xe9\xe9e
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: LDBACP[4] 6085
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: SendConfigRej(0) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: CALLBACK[3] CBCP
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRRU[4] 1614
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: LDBACP[4] 6085
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: RecvConfigAck(1) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACFCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: PROTOCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACCMAP[6] 0x00000000
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRU[4] 1500
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MAGICNUM[6] 0x9f83b927
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: State change Req-Sent --> Ack-Rcvd
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: RecvConfigReq(1) state = Ack-Rcvd
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRU[4] 1400
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: AUTHPROTO[5] 0xc223 (CHAP 0x81)
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MAGICNUM[6] 0x6bd94d69
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: PROTOCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACFCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ENDDISC[23] Local Addr: \xa1M-^RT\xa3M-^S^WM\xcfM-^KM- ^XW\xce\xe9\xe9e
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: SendConfigAck(1) state = Ack-Rcvd
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MRU[4] 1400
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: AUTHPROTO[5] 0xc223 (CHAP 0x81)
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: MAGICNUM[6] 0x6bd94d69
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: PROTOCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ACFCOMP[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: ENDDISC[23] Local Addr: \xa1M-^RT\xa3M-^S^WM\xcfM-^KM- ^XW\xce\xe9\xe9e
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: State change Ack-Rcvd --> Opened
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: deflink: LayerUp
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: bundle: Authenticate
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: deflink: his = CHAP 0x81, mine = none
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: Chap Input: CHALLENGE (16 bytes from AIS)
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: Chap Output: RESPONSE (MyUserName)
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: Chap Input: SUCCESS (S=98B2FFD393F45525D19B77494CCEBC54319C9414)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: FSM: Using "deflink" as a transport
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: State change Initial --> Closed
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: LayerStart.
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: SendConfigReq(1) state = Closed
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: DEFLATE[4] win 15
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: PRED1[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x000000e0 (128/56/40 bits, stateful)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: State change Closed --> Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: deflink: lcp -> open
Sep 30 16:50:11 ns ppp[22118]: tun0: Phase: bundle: Network
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: FSM: Using "deflink" as a transport
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: State change Initial --> Closed
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: LayerStart.
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: SendConfigReq(1) state = Closed
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 0.0.0.0
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: State change Closed --> Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: RecvConfigReq(3) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x010000e1 (128/56/40 bits, stateless, compressed)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: SendConfigNak(3) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x01000040 (128 bits, stateless)
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: RecvConfigReq(4) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 212.24.51.1
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: SendConfigAck(4) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 212.24.51.1
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: State change Req-Sent --> Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: RecvConfigRej(1) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: DEFLATE[4] win 15
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: PRED1[2]
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: SendConfigReq(2) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: RecvConfigRej(1) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: COMPPROTO[6] 16 VJ slots with slot compression
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: SendConfigReq(2) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 0.0.0.0
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: RecvConfigReq(5) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x01000040 (128 bits, stateless)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: SendConfigAck(5) state = Req-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x01000040 (128 bits, stateless)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: State change Req-Sent --> Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: RecvConfigNak(2) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x00000040 (128 bits, stateful)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: SendConfigReq(3) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x00000040 (128 bits, stateful)
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: RecvConfigNak(2) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 212.24.51.9
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] changing address: 0.0.0.0 --> 212.24.51.9
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: SendConfigReq(3) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 212.24.51.9
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: RecvConfigAck(3) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE[6] value 0x00000040 (128 bits, stateful)
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: State change Ack-Sent --> Opened
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: LayerUp.
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE: Input channel initiated
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: MPPE: Output channel initiated
Sep 30 16:50:11 ns ppp[22118]: tun0: CCP: deflink: Out = MPPE[18], In = MPPE[18]
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: Reducing MTU from 1400 to 1398 (CCP requirement)
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: RecvConfigAck(3) state = Ack-Sent
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: IPADDR[6] 212.24.51.9
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: State change Ack-Sent --> Opened
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: deflink: LayerUp.
Sep 30 16:50:11 ns ppp[22118]: tun0: IPCP: myaddr 212.24.51.9 hisaddr = 212.24.51.1
Sep 30 16:50:11 ns ppp[22118]: tun0: LCP: Reducing MTU from 1400 to 1398 (CCP requirement)
Sep 30 16:50:42 ns ppp[22118]: tun0: Phase: Signal 15, terminate.
Sep 30 16:50:42 ns ppp[22118]: tun0: IPCP: deflink: LayerDown: 212.24.51.9
Sep 30 16:50:42 ns ppp[22118]: tun0: IPCP: deflink: SendTerminateReq(4) state = Opened
Sep 30 16:50:42 ns ppp[22118]: tun0: IPCP: deflink: State change Opened --> Closing
Sep 30 16:50:42 ns ppp[22118]: tun0: Phase: Signal 15, terminate.
Sep 30 16:50:45 ns ppp[22118]: tun0: IPCP: deflink: SendTerminateReq(4) state = Closing
Sep 30 16:50:54 ns last message repeated 3 times
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: deflink: LayerFinish.
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: Connect time: 46 secs: 0 octets in, 0 octets out
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: 0 packets in, 0 packets out
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: total 0 bytes/sec, peak 0 bytes/sec on Thu Sep 30 16:50:11 2010
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: deflink: State change Closing --> Closed
Sep 30 16:50:57 ns ppp[22118]: tun0: Phase: bundle: Terminate
Sep 30 16:50:57 ns ppp[22118]: tun0: CCP: deflink: LayerDown.
Sep 30 16:50:57 ns ppp[22118]: tun0: CCP: deflink: State change Opened --> Starting
Sep 30 16:50:57 ns ppp[22118]: tun0: CCP: deflink: LayerFinish.
Sep 30 16:50:57 ns ppp[22118]: tun0: CCP: deflink: State change Starting --> Initial
Sep 30 16:50:57 ns ppp[22118]: tun0: LCP: deflink: LayerDown
Sep 30 16:50:57 ns ppp[22118]: tun0: LCP: deflink: SendTerminateReq(2) state = Opened
Sep 30 16:50:57 ns ppp[22118]: tun0: LCP: deflink: State change Opened --> Closing
Sep 30 16:50:57 ns ppp[22118]: tun0: Phase: deflink: open -> lcp
Sep 30 16:50:57 ns ppp[22118]: tun0: IPCP: deflink: State change Closed --> Initial
Sep 30 16:51:00 ns ppp[22118]: tun0: LCP: deflink: SendTerminateReq(2) state = Closing
Sep 30 16:51:09 ns last message repeated 3 times
Sep 30 16:51:12 ns ppp[22118]: tun0: LCP: deflink: LayerFinish
Sep 30 16:51:12 ns ppp[22118]: tun0: LCP: deflink: State change Closing --> Closed
Sep 30 16:51:12 ns ppp[22118]: tun0: LCP: deflink: State change Closed --> Initial
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: deflink: Disconnected!
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: deflink: Connect time: 63 secs: 542 octets in, 655 octets out
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: deflink: 12 packets in, 23 packets out
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: total 19 bytes/sec, peak 260 bytes/sec on Thu Sep 30 16:50:12 2010
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: deflink: lcp -> closed
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: bundle: Dead
Sep 30 16:51:12 ns ppp[22118]: tun0: Phase: PPP Terminated (normal).
----------------------------ppp.log-------------------------------------
I googled it; many people have this problem, but I found no solutions, only suggestions to give up and switch to mpd.
I installed mpd (both 4 and 5). Different trouble: we connect, authenticate, but the interface is not assigned an IP address:
----------------------------mpd.log-------------------------------------
Sep 30 17:11:52 ns mpd: process 22917 started, version 5.5 (root@ns 10:53 30-Sep-2010)
Sep 30 17:11:52 ns mpd: Label 'startup' not found
Sep 30 17:11:52 ns mpd: [B1] Bundle: Interface ng1 created
Sep 30 17:11:52 ns mpd: [L1] Link: OPEN event
Sep 30 17:11:52 ns mpd: [L1] LCP: Open event
Sep 30 17:11:52 ns mpd: [L1] LCP: state change Initial --> Starting
Sep 30 17:11:52 ns mpd: [L1] LCP: LayerStart
Sep 30 17:11:53 ns mpd: [L1] PPTP call successful
Sep 30 17:11:53 ns mpd: [L1] Link: UP event
Sep 30 17:11:53 ns mpd: [L1] LCP: Up event
Sep 30 17:11:53 ns mpd: [L1] LCP: state change Starting --> Req-Sent
Sep 30 17:11:53 ns mpd: [L1] LCP: SendConfigReq #1
Sep 30 17:11:53 ns mpd: [L1] ACFCOMP
Sep 30 17:11:53 ns mpd: [L1] PROTOCOMP
Sep 30 17:11:53 ns mpd: [L1] ACCMAP 0x000a0000
Sep 30 17:11:53 ns mpd: [L1] MRU 1500
Sep 30 17:11:53 ns mpd: [L1] MAGICNUM 01ee1bf7
Sep 30 17:11:53 ns mpd: [L1] LCP: rec'd Configure Request #0 (Req-Sent)
Sep 30 17:11:53 ns mpd: [L1] MRU 1400
Sep 30 17:11:53 ns mpd: [L1] AUTHPROTO CHAP MSOFTv2
Sep 30 17:11:53 ns mpd: [L1] MAGICNUM 479947c9
Sep 30 17:11:53 ns mpd: [L1] PROTOCOMP
Sep 30 17:11:53 ns mpd: [L1] ACFCOMP
Sep 30 17:11:53 ns mpd: [L1] CALLBACK 6
Sep 30 17:11:53 ns mpd: [L1] MP MRRU 1614
Sep 30 17:11:53 ns mpd: [L1] ENDPOINTDISC [LOCAL] a1 92 54 a3 93 17 4d cf 8b 89 18 57 ce e9 e9 65 00 00 0
Sep 30 17:11:53 ns mpd: [L1] BACP
Sep 30 17:11:53 ns mpd: [L1] Not supported
Sep 30 17:11:53 ns mpd: [L1] LCP: SendConfigRej #0
Sep 30 17:11:53 ns mpd: [L1] CALLBACK 6
Sep 30 17:11:53 ns mpd: [L1] MP MRRU 1614
Sep 30 17:11:53 ns mpd: [L1] BACP
Sep 30 17:11:53 ns mpd: [L1] LCP: rec'd Configure Ack #1 (Req-Sent)
Sep 30 17:11:53 ns mpd: [L1] ACFCOMP
Sep 30 17:11:53 ns mpd: [L1] PROTOCOMP
Sep 30 17:11:53 ns mpd: [L1] ACCMAP 0x000a0000
Sep 30 17:11:53 ns mpd: [L1] MRU 1500
Sep 30 17:11:53 ns mpd: [L1] MAGICNUM 01ee1bf7
Sep 30 17:11:53 ns mpd: [L1] LCP: state change Req-Sent --> Ack-Rcvd
Sep 30 17:11:53 ns mpd: [L1] LCP: rec'd Configure Request #1 (Ack-Rcvd)
Sep 30 17:11:53 ns mpd: [L1] MRU 1400
Sep 30 17:11:53 ns mpd: [L1] AUTHPROTO CHAP MSOFTv2
Sep 30 17:11:53 ns mpd: [L1] MAGICNUM 479947c9
Sep 30 17:11:53 ns mpd: [L1] PROTOCOMP
Sep 30 17:11:53 ns mpd: [L1] ACFCOMP
Sep 30 17:11:53 ns mpd: [L1] ENDPOINTDISC [LOCAL] a1 92 54 a3 93 17 4d cf 8b 89 18 57 ce e9 e9 65 00 00 0
Sep 30 17:11:53 ns mpd: [L1] LCP: SendConfigAck #1
Sep 30 17:11:53 ns mpd: [L1] MRU 1400
Sep 30 17:11:53 ns mpd: [L1] AUTHPROTO CHAP MSOFTv2
Sep 30 17:11:53 ns mpd: [L1] MAGICNUM 479947c9
Sep 30 17:11:53 ns mpd: [L1] PROTOCOMP
Sep 30 17:11:53 ns mpd: [L1] ACFCOMP
Sep 30 17:11:53 ns mpd: [L1] ENDPOINTDISC [LOCAL] a1 92 54 a3 93 17 4d cf 8b 89 18 57 ce e9 e9 65 00 00 0
Sep 30 17:11:53 ns mpd: [L1] LCP: state change Ack-Rcvd --> Opened
Sep 30 17:11:53 ns mpd: [L1] LCP: auth: peer wants CHAP, I want nothing
Sep 30 17:11:53 ns mpd: [L1] LCP: LayerUp
Sep 30 17:11:53 ns mpd: [L1] CHAP: rec'd CHALLENGE #0 len: 24
Sep 30 17:11:53 ns mpd: [L1] Name: "AIS"
Sep 30 17:11:53 ns mpd: [L1] CHAP: Using authname "MyUserNamev"
Sep 30 17:11:53 ns mpd: [L1] CHAP: sending RESPONSE #0 len: 63
Sep 30 17:11:53 ns mpd: [L1] CHAP: rec'd SUCCESS #0 len: 46
Sep 30 17:11:53 ns mpd: [L1] MESG: S=7E95E1ECAC9DF53CC8A134982180CDCECD111545
Sep 30 17:11:53 ns mpd: [L1] LCP: authorization successful
Sep 30 17:11:53 ns mpd: [L1] Link: Matched action 'bundle "B1" ""'
Sep 30 17:11:53 ns mpd: [L1] Link: Join bundle "B1"
Sep 30 17:11:53 ns mpd: [B1] Bundle: Status update: up 1 link, total bandwidth 64000 bps
Sep 30 17:11:53 ns mpd: [B1] IPCP: Open event
Sep 30 17:11:53 ns mpd: [B1] IPCP: state change Initial --> Starting
Sep 30 17:11:53 ns mpd: [B1] IPCP: LayerStart
Sep 30 17:11:53 ns mpd: [B1] IPCP: Up event
Sep 30 17:11:53 ns mpd: [B1] IPCP: state change Starting --> Req-Sent
Sep 30 17:11:53 ns mpd: [B1] IPCP: SendConfigReq #1
Sep 30 17:11:53 ns mpd: [B1] IPADDR 0.0.0.0
Sep 30 17:11:53 ns mpd: [B1] COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Sep 30 17:11:53 ns mpd: [L1] rec'd unexpected protocol CCP, rejecting
Sep 30 17:11:53 ns mpd: [B1] IPCP: rec'd Configure Request #4 (Req-Sent)
Sep 30 17:11:53 ns mpd: [B1] IPADDR 212.24.51.1
Sep 30 17:11:53 ns mpd: [B1] 212.24.51.1 is OK
Sep 30 17:11:53 ns mpd: [B1] IPCP: SendConfigAck #4
Sep 30 17:11:53 ns mpd: [B1] IPADDR 212.24.51.1
Sep 30 17:11:53 ns mpd: [B1] IPCP: state change Req-Sent --> Ack-Sent
Sep 30 17:11:53 ns mpd: [B1] IPCP: rec'd Configure Reject #1 (Ack-Sent)
Sep 30 17:11:53 ns mpd: [B1] COMPPROTO VJCOMP, 16 comp. channels, no comp-cid
Sep 30 17:11:53 ns mpd: [B1] IPCP: SendConfigReq #2
Sep 30 17:11:53 ns mpd: [B1] IPADDR 0.0.0.0
Sep 30 17:11:53 ns mpd: [L1] LCP: rec'd Terminate Request #5 (Opened)
Sep 30 17:11:53 ns mpd: [L1] LCP: state change Opened --> Stopping
Sep 30 17:11:53 ns mpd: [L1] Link: Leave bundle "B1"
Sep 30 17:11:53 ns mpd: [B1] Bundle: Status update: up 0 links, total bandwidth 9600 bps
Sep 30 17:11:53 ns mpd: [B1] IPCP: Close event
Sep 30 17:11:53 ns mpd: [B1] IPCP: state change Ack-Sent --> Closing
Sep 30 17:11:53 ns mpd: [B1] IPCP: SendTerminateReq #3
Sep 30 17:11:53 ns mpd: [B1] IPCP: Down event
Sep 30 17:11:53 ns mpd: [B1] IPCP: LayerFinish
Sep 30 17:11:53 ns mpd: [B1] Bundle: No NCPs left. Closing links...
Sep 30 17:11:53 ns mpd: [B1] IPCP: state change Closing --> Initial
Sep 30 17:11:53 ns mpd: [L1] LCP: SendTerminateAck #2
Sep 30 17:11:53 ns mpd: [L1] LCP: LayerDown
Sep 30 17:11:55 ns mpd: [L1] LCP: rec'd Terminate Request #6 (Stopping)
Sep 30 17:11:55 ns mpd: [L1] LCP: SendTerminateAck #3
Sep 30 17:11:55 ns mpd: [L1] LCP: state change Stopping --> Stopped
Sep 30 17:11:55 ns mpd: [L1] LCP: LayerFinish
Sep 30 17:11:55 ns mpd: [L1] LCP: state change Stopping --> Stopped
Sep 30 17:11:55 ns mpd: [L1] LCP: LayerFinish
Sep 30 17:11:55 ns mpd: [L1] PPTP call terminated
Sep 30 17:11:55 ns mpd: [L1] Link: DOWN event
Sep 30 17:11:55 ns mpd: [L1] LCP: Down event
Sep 30 17:11:55 ns mpd: [L1] LCP: state change Stopped --> Starting
Sep 30 17:11:55 ns mpd: [L1] LCP: LayerStart
Sep 30 17:11:55 ns mpd: [L1] Link: reconnection attempt 1 in 1 seconds
Sep 30 17:11:56 ns mpd: [L1] Link: reconnection attempt 1
----------------------------mpd.log-------------------------------------
I set up a local PPTP server on a Cisco for testing; everything is OK there with both pptpclient and mpd.
Connecting to the target VPN server with a Cisco as the PPTP client: everything is OK.
Connecting to the target VPN server with the Windows XP PPTP client: everything is OK.
So the problem is clearly on my end; please help me straighten it out.
Thanks.
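For reference, the ppp.conf profile that produces the "rp:" commands in the log above would look roughly like this. This is a reconstruction from the logged commands, not the poster's actual file, with the password and the routed host masked the same way as in the log:
--------------ppp.conf---------------------
rp:
 # credentials used for the CHAP response seen in the log
 set authname MyUserName
 set authkey MyPassword
 # never drop the link because of inactivity
 set timeout 0
 # let IPCP negotiate both the local and the peer address
 set ifaddr 0 0
 # route only the single target host (masked) via the tunnel peer
 add XXX.XXX.XXX.XXX HISADDR
 disable ipv6cp
--------------ppp.conf---------------------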
You need a route to the PPTP server.
Before the connection is established that route exists (it is the default route), but after the connection comes up the default route is replaced by a new one, so the PPTP server becomes unreachable.
This is the standard newbie pitfall.
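A minimal sketch of what that advice means in practice, with placeholder addresses that do not come from the thread (203.0.113.5 standing in for the PPTP server, 198.51.100.1 for the existing default gateway):
-------------------------------------------------------------------
# Pin a host route to the PPTP server via the physical default gateway
# before dialing, so the GRE and TCP/1723 traffic keeps flowing even if
# the tunnel later replaces the default route.
route add -host 203.0.113.5 198.51.100.1
-------------------------------------------------------------------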
> You need a route to the PPTP server.
> Before the connection is established that route exists (it is the default route), but after
> the connection comes up the default route is replaced by a new one, so the PPTP server
> becomes unreachable.
> This is the standard newbie pitfall.
Thanks for the reply.
That is not what happens in my case. Only the single /32 host I need is routed via the tun0 interface.
I masked it with X's in the logs.
My default route does not change.
>> You need a route to the PPTP server.
>> Before the connection is established that route exists (it is the default route), but after
>> the connection comes up the default route is replaced by a new one, so the PPTP server
>> becomes unreachable.
>> This is the standard newbie pitfall.
> Thanks for the reply.
> That is not what happens in my case. Only the single /32 host I need is routed
> via the tun0 interface.
> I masked it with X's in the logs.
> My default route does not change.
Post the output of
netstat -rn
before the connection, right after it comes up, and after it drops.
Mask any IPs you don't want published as you see fit.
Lines that are identical in all three outputs can be omitted.
Routing table before the connection:
-------------------------------------------------------------------
Destination Gateway Flags Refs Use Netif Expire
default XXX.12X.4.129 UGS 24 4810120 alc0
10.0.10.1/32 192.168.150.2 UGS 0 0 vlan40
10.0.11.1/32 192.168.150.2 UGS 0 0 vlan40
10.0.26.1/32 192.168.150.2 UGS 0 0 vlan40
10.0.150.1/32 192.168.150.2 UGS 0 0 vlan40
10.0.170.1/32 192.168.150.2 UGS 0 0 vlan40
10.9.1.0/29 link#7 U 0 0 vlan27
10.9.1.6 link#7 UHS 0 0 lo0
10.9.1.8/29 10.9.1.1 UGS 1 259 vlan27
10.10.0.1 link#9 UHS 0 0 lo0
10.10.0.2 link#9 UH 0 0 gre0
10.20.0.1 link#10 UHS 0 0 lo0
10.20.0.2 link#10 UH 0 0 gre1
10.30.0.1 link#11 UHS 0 0 lo0
10.30.0.2 link#11 UH 0 0 gre2
10.40.0.1 link#12 UHS 0 0 lo0
10.40.0.2 link#12 UH 0 0 gre3
10.50.0.1 link#13 UHS 0 0 lo0
10.50.0.2 link#13 UH 0 0 gre4
XXX.12X.3.32/30 10.10.0.2 UGS 0 0 gre0
XXX.12X.3.36/30 10.20.0.2 UGS 0 0 gre1
XXX.12X.3.40/30 10.30.0.2 UGS 0 0 gre2
XXX.12X.3.44/30 10.40.0.2 UGS 0 0 gre3
XXX.12X.3.48/29 link#3 U 1 4406 wb1
XXX.12X.3.49 link#3 UHS 0 0 lo0
XXX.12X.3.60/30 10.50.0.2 UGS 0 0 gre4
XXX.12X.4.128/29 link#1 U 0 4 alc0
XXX.12X.4.130 link#1 UHS 0 2377411 lo0
XXX.12X.4.131 link#1 UHS 0 0 lo0 =>
XXX.12X.4.131/32 link#1 U 0 0 alc0
XXX.12X.4.132 link#1 UHS 0 0 lo0 =>
XXX.12X.4.132/32 link#1 U 0 0 alc0
127.0.0.1 link#5 UH 0 31 lo0
192.1.1.2/32 192.168.150.2 UGS 0 457 vlan40
192.168.0.0/24 link#2 U 2 1963 wb0
192.168.0.1 link#2 UHS 0 238 lo0
192.168.3.0/30 link#8 U 0 0 vlan28
192.168.3.1 link#8 UHS 0 0 lo0
192.168.5.0/24 link#3 U 0 4351 wb1
192.168.5.1 link#3 UHS 0 0 lo0
192.168.7.0/24 192.168.3.2 UGS 1 203 vlan28
192.168.8.0/24 link#3 U 1 195 wb1
192.168.8.1 link#3 UHS 0 0 lo0
192.168.9.0/24 link#3 U 1 466 wb1
192.168.9.1 link#3 UHS 0 0 lo0
192.168.10.0/24 link#3 U 1 67 wb1
192.168.10.1 link#3 UHS 0 0 lo0
192.168.150.0/24 link#6 U 0 0 vlan40
192.168.150.32 link#6 UHS 0 0 lo0
192.168.152.0/24 10.10.0.2 UGS 0 79 gre0
192.168.153.0/24 10.20.0.2 UGS 0 327 gre1
192.168.154.0/24 10.30.0.2 UGS 0 270 gre2
192.168.156.0/24 10.40.0.2 UGS 0 333 gre3
192.168.157.0/24 10.50.0.2 UGS 0 67 gre4
192.168.167.224/27 link#3 U 0 212 wb1
192.168.167.225 link#3 UHS 0 0 lo0
192.168.167.228/30 10.20.0.2 UGS 0 0 gre1
192.168.167.233/32 10.10.0.2 UGS 0 116 gre0
192.168.167.234/32 10.40.0.2 UGS 0 133 gre3
192.168.167.235/32 10.20.0.2 UGS 0 0 gre1
-------------------------------------------------------------------
After the pptpclient connection is up, where 212.53.XXX.XXX is the host I need to reach
through the PPTP tunnel and everything else goes via the default gateway:
-------------------------------------------------------------------
Destination Gateway Flags Refs Use Netif Expire
default XXX.12X.4.129 UGS 11 4891922 alc0
skip------------------------- skip ----------------------------skip
212.24.51.1 link#15 UHS 0 0 tun0
212.24.51.29 link#15 UHS 0 0 lo0
212.53.XXX.XXX 212.24.51.1 UGS 0 0 tun0
-------------------------------------------------------------------
Right after the connection drops:
-------------------------------------------------------------------
default XXX.12X.4.129 UGS 17 4918468 alc0
skip------------------------- skip ----------------------------skip
And what does the first output look like without the lines that were removed from the second and third?
P.S. I meant: remove the lines that are identical in all three outputs from all three outputs.
P.P.S. Except for the default routes and the routes that matter in the context of this discussion.
The same as the third output, that is, only the default gateway.
I removed the rest, as requested, for readability.
> P.S. I meant: remove the lines that are identical in all three outputs
> from all three outputs.
> P.P.S. Except for the default routes and the routes that matter in the context of this discussion.
Routing table before the connection:
-------------------------------------------------------------------
Destination Gateway Flags Refs Use Netif Expire
default XXX.12X.4.129 UGS 24 4810120 alc0
skip------------------------- skip ----------------------------skip
After the pptpclient connection is up, where 212.53.XXX.XXX is the host I need to reach
through the PPTP tunnel and everything else goes via the default gateway:
-------------------------------------------------------------------
Destination Gateway Flags Refs Use Netif Expire
default XXX.12X.4.129 UGS 11 4891922 alc0
skip------------------------- skip ----------------------------skip
212.24.51.1 link#15 UHS 0 0 tun0
212.24.51.29 link#15 UHS 0 0 lo0
212.53.XXX.XXX 212.24.51.1 UGS 0 0 tun0
-------------------------------------------------------------------
Right after the connection drops:
-------------------------------------------------------------------
default XXX.12X.4.129 UGS 17 4918468 alc0
skip------------------------- skip ----------------------------skip
Everything that is identical has been skip'ed.
>[overquoting removed]
> 212.53.XXX.XXX 212.24.51.1 UGS 0 0 tun0
> -------------------------------------------------------------------
> Right after the connection drops:
> -------------------------------------------------------------------
> default XXX.12X.4.129 UGS 17 4918468 alc0
> skip------------------------- skip ----------------------------skip
> Everything that is identical has been skip'ed.
http://www.freebsd.org/doc/ru/books/faq/ppp.html#CONNECTION-...
Might this help? (set timeout 0)
P.S. There is a lot of other interesting material there too.
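The ppp.conf side of that suggestion, for reference. The set timeout 0 line is what the FAQ entry refers to, and it is already visible in the log above; the LQR lines are only a guess at the kind of keep-alive tuning that might additionally apply, not something confirmed by the thread:
--------------ppp.conf---------------------
 set timeout 0     # disable the idle timer so ppp never drops an idle link
 enable lqr        # assumption: use LCP Link Quality Reports as a keep-alive
 set lqrperiod 30  # assumption: send an LQR every 30 seconds
--------------ppp.conf---------------------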
> http://www.freebsd.org/doc/ru/books/faq/ppp.html#CONNECTION-...
> Might this help? (set timeout 0)
I'll check it now and report back.
What puzzles me is that when I connect to my own PPTP server, which I set up on a router for this experiment, everything connects and works fine,
with both pptpclient and mpd as the client.
>> http://www.freebsd.org/doc/ru/books/faq/ppp.html#CONNECTION-...
>> Might this help? (set timeout 0)
No, it does not help; same thing, the connection drops after 10-20 seconds.
Fine, forget pptpclient. Why does the interface not get an IP address when I use the mpd client?
>>> http://www.freebsd.org/doc/ru/books/faq/ppp.html#CONNECTION-...
>>> Might this help? (set timeout 0)
> No, it does not help; same thing, the connection drops after 10-20 seconds.
> Fine, forget pptpclient.
> Why does the interface not get an IP address when I use the mpd client?
Which version, and which article did you follow for the setup?
I tried both version 4 and version 5.
The result is the same.
And again, connecting to my own PPTP server works fine.
The setup is based on a pile of material, both from here and from other sources.
Here is the config:
--------------mpd.conf---------------------
create bundle static B1
# set iface route default
set ipcp ranges 0.0.0.0/0 0.0.0.0/0

create link static L1 pptp
set link action bundle B1
set auth authname MyUserName
set auth password MyPasswd
set link max-redial 0
set link mtu 1460
set link keep-alive 20 75
set pptp peer VPN-SERVER
set pptp disable windowing
open
--------------mpd.conf---------------------
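One detail worth noting in the mpd log above: shortly after mpd prints "rec'd unexpected protocol CCP, rejecting", the server sends an LCP Terminate-Request. The ppp.log shows the same server negotiating MPPE over CCP, so one guess (an assumption, not a verified fix) is that the server requires MPPE and tears the link down when mpd refuses CCP. In mpd 5 that would be enabled in the bundle section roughly like this:
--------------mpd.conf---------------------
# added to the existing B1 bundle section (assumption: server requires MPPE)
set bundle enable compression
set ccp yes mppc
set mppc yes e40
set mppc yes e128
set mppc yes stateless
--------------mpd.conf---------------------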