Using free SSL certificates


The company has acquired a pile of forums that all need SSL; buying a certificate for each one would be a significant cost.
So the plan is to use Let's Encrypt's free certificates for all of them.

wget -O -  https://get.acme.sh | sh
cd .acme.sh/
#Make sure the content under /var/www/yemaosheng/htdocs/.well-known/ is reachable via the domain
./acme.sh --issue -d yemaosheng.com -d www.yemaosheng.com -w /var/www/yemaosheng/htdocs
[Tue Mar  7 21:19:34 CST 2017] Multi domain='DNS:www.yemaosheng.com'
[Tue Mar  7 21:19:34 CST 2017] Getting domain auth token for each domain
[Tue Mar  7 21:19:34 CST 2017] Getting webroot for domain='yemaosheng.com'
[Tue Mar  7 21:19:34 CST 2017] Getting new-authz for domain='yemaosheng.com'
[Tue Mar  7 21:19:36 CST 2017] The new-authz request is ok.
[Tue Mar  7 21:19:36 CST 2017] Getting webroot for domain='www.yemaosheng.com'
[Tue Mar  7 21:19:36 CST 2017] Getting new-authz for domain='www.yemaosheng.com'
[Tue Mar  7 21:19:36 CST 2017] The new-authz request is ok.
[Tue Mar  7 21:19:37 CST 2017] yemaosheng.com is already verified, skip http-01.
[Tue Mar  7 21:19:37 CST 2017] Verifying:www.yemaosheng.com
[Tue Mar  7 21:19:39 CST 2017] Success
[Tue Mar  7 21:19:39 CST 2017] Verify finished, start to sign.
[Tue Mar  7 21:19:40 CST 2017] Cert success.
-----BEGIN CERTIFICATE-----
MIIFFzCCA............FlYV3RaDYYpw=
-----END CERTIFICATE-----
[Tue Mar  7 21:19:40 CST 2017] Your cert is in  /root/.acme.sh/yemaosheng.com/yemaosheng.com.cer 
[Tue Mar  7 21:19:40 CST 2017] Your cert key is in  /root/.acme.sh/yemaosheng.com/yemaosheng.com.key 
[Tue Mar  7 21:19:40 CST 2017] The intermediate CA cert is in  /root/.acme.sh/yemaosheng.com/ca.cer 
[Tue Mar  7 21:19:40 CST 2017] And the full chain certs is there:  /root/.acme.sh/yemaosheng.com/fullchain.cer
 
crontab -l
7 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null
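The Apache config below points straight at the files under /root/.acme.sh/, which acme.sh treats as its internal storage. An alternative is to let acme.sh copy the certs out to a stable location and reload Apache after every renewal; a minimal sketch, assuming a recent acme.sh (older releases spell the options --installcert / --certpath / --keypath / --fullchainpath) and hypothetical target paths under /etc/httpd/ssl/:

./acme.sh --install-cert -d yemaosheng.com \
  --cert-file      /etc/httpd/ssl/yemaosheng.com.cer \
  --key-file       /etc/httpd/ssl/yemaosheng.com.key \
  --fullchain-file /etc/httpd/ssl/fullchain.cer \
  --reloadcmd      "apachectl graceful"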
 
vi /etc/httpd/conf.d/ssl.conf
...
<VirtualHost *:443>
        DocumentRoot "/var/www/yemaosheng/htdocs"
        ServerName yemaosheng.com
 
        SSLEngine on
        SSLProtocol all -SSLv2 -SSLv3
        SSLCipherSuite "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH EDH+aRSA !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"
        SSLCertificateFile "/root/.acme.sh/yemaosheng.com/yemaosheng.com.cer"
        SSLCertificateKeyFile "/root/.acme.sh/yemaosheng.com/yemaosheng.com.key"
        SSLCertificateChainFile "/root/.acme.sh/yemaosheng.com/fullchain.cer"
        ...
</VirtualHost>
...

Converting a hosts file into MikroTik config with a Sublime regex replace


Thanks to 老D博客 for compiling and keeping these GFW-bypass hosts entries up to date.

When I push them onto a MikroTik router the format has to change a little.
Honestly, I have to look up regex syntax every single time I use it; it is hard to remember.

MikroTik:
[admin@MikroTik] ip dns static> add address=10.0.0.1 name=www.example.com

Sublime:
Find What: (\d+\.\d+\.\d+\.\d+)\s+([A-Za-z0-9.-]+)
Replace With: add address=$1 name=$2
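If you would rather not open an editor at all, roughly the same transformation works with sed (a sketch; it assumes the hosts file contains only plain "IP hostname" lines, no comments):

sed -E 's/^([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)[[:space:]]+([A-Za-z0-9.-]+).*$/add address=\1 name=\2/' hosts.txt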

How to clone an Azure VM


Run this on your sample VM:

waagent -deprovision+user

Run these in your azure-cli environment:

$rgName = "VMTestGroup"
$template = "Template-test.json"
$vmName = "VMTest"
$vhdName = "VHDTest"
 
azure vm deallocate -g $rgName -n $vmName
azure vm generalize $rgName -n $vmName
azure vm capture $rgName $vmName $vhdName -t $template
 
# The $template should look like this, and you have to change 'newvmname' before using it.
...
         "storageProfile": {
          "dataDisks": [
            {
              "caching": "ReadOnly",
              "vhd": {
                "uri": "https://yourdiskname.blob.core.windows.net/vhds/dataDisk-0.newvmname.vhd"
              },
              "image": {
                "uri": "https://yourdiskname.blob.core.windows.net/system/Microsoft.Compute/Images/vhds/yourcapturedvmname-dataDisk-0.ff60129b-...3cf59bf9315a.vhd"
              },
              "createOption": "FromImage",
              "name": "yourcapturedvmname-dataDisk-0.ff60129b-4ec5-4dcd-ae97-3cf59bf9315a.vhd",
              "lun": 0
            }
          ],
          "osDisk": {
            "caching": "ReadWrite",
            "vhd": {
              "uri": "https://yourdiskname.blob.core.windows.net/vhds/osDisk.newvmname.vhd"
            },
            "image": {
              "uri": "https://yourdiskname.blob.core.windows.net/system/Microsoft.Compute/Images/vhds/yourcapturedvmname-osDisk.ff60129b-...3cf59bf9315a.vhd"
            },
            "createOption": "FromImage",
            "name": "yourcapturedvmname-osDisk.ff60129b-4ec5-4dcd-ae97-3cf59bf9315a.vhd",
            "osType": "Linux"
          }
        },
...
 
 
azure group deployment create $rgName MyDeployment -f Template-test-modified.json
    info:    Executing command group deployment create
    info:    Supply values for the following parameters
    vmName: NewVmName
    adminUserName: username
    adminPassword: password
    networkInterfaceId: /subscriptions/61719d1b-...ab74b6f77865/resourceGroups/VMTestGroup/providers/Microsoft.Network/networkInterfaces/YourNetworkInterfaceName
 
# If you do not have an existing network interface, you need to create one first.
azure network nic create $rgName YourNetworkInterfaceName -k default -m YourSubnetVnetName  -l "westus2"
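For reference, the deallocate/generalize part maps onto the newer "az" CLI roughly like this (an assumption about your tooling, not part of the original workflow; the capture and redeploy steps are done differently there, via managed images):

az vm deallocate --resource-group VMTestGroup --name VMTest
az vm generalize --resource-group VMTestGroup --name VMTest
az image create --resource-group VMTestGroup --name VMTest-image --source VMTest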

Percona Monitoring Plugins for Zabbix3

#Install
apt-get install percona-zabbix-templates
cp /var/lib/zabbix/percona/templates/userparameter_percona_mysql.conf /etc/zabbix/zabbix_agentd.conf.d/
#Configure 
vi /var/lib/zabbix/percona/scripts/ss_get_mysql_stats.php
...
$mysql_user = 'uid';
$mysql_pass = 'pwd';
...
#Test
/var/lib/zabbix/percona/scripts/get_mysql_stats_wrapper.sh gg
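After copying the UserParameter file, restart the agent so it loads the new keys, then optionally query one item from the Zabbix server side; the exact key name below is an assumption, check the keys defined in the imported template:

service zabbix-agent restart
zabbix_get -s <agent-ip> -k MySQL.running-slave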

[Screenshot: zbx_percona_mysql_template]

A ton of data to import with a shell


schema_import.sh

#!/bin/bash
for f in *schema*.gz; do
  DBName=$(echo $f | cut -d - -f 5)
  echo "Create "${DBName}
  echo "create database if not exists ${DBName}" | /usr/bin/mysql -u root -pXXXXXX
  zcat ${f} | /usr/bin/mysql -u root -pXXXXXX ${DBName}
  echo "Created"
done

data_import.sh

#!/bin/bash
for f in *data*.gz; do
  DBName=$(echo $f | cut -d - -f 5)
  echo "Import "${DBName}
  zcat ${f} | /usr/bin/mysql -u root -pXXXXXX ${DBName}
  echo "Imported"
done

The big pile of files:

/nfs/dbs/31/mysql-slave-31.lololololol.com-schema-user_cluster_1-11-27-16.sql.gz
/nfs/dbs/31/mysql-slave-31.lololololol.com-schema-user_cluster_2-11-27-16.sql.gz
...
 
/nfs/dbs/31/mysql-slave-31.lololololol.com-data-user_cluster_1-11-27-16.sql.gz
/nfs/dbs/31/mysql-slave-31.lololololol.com-data-user_cluster_2-11-27-16.sql.gz
...

ps:
The company acquired a US forum service provider, and a single person was left behind for the handover.
They pointed us at an NFS share; I logged in and had a look... wow. Every machine's directory held thousands of small backup files.
I asked why and the answer was "long story..."
Fine... at least the file names follow a pattern.
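The whole trick in both scripts is that the database name is the fifth dash-separated field of the file name, which is what the cut call extracts; for example:

echo "mysql-slave-31.lololololol.com-schema-user_cluster_1-11-27-16.sql.gz" | cut -d - -f 5
# prints: user_cluster_1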

dump.sh

#!/bin/bash
MYSQL_PORT=$1
FILE_NAME=$2
MYSQL_IP=x.x.x.x
MYSQL_USER=uid
MYSQL_PASS=pwd
MYSQL_CONN="-u${MYSQL_USER} -p${MYSQL_PASS} -h${MYSQL_IP} -P${MYSQL_PORT}"
#
# Collect all database names except for
# mysql, information_schema, and performance_schema
#
SQL="SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN"
SQL="${SQL} ('mysql','information_schema','performance_schema')"
 
DBLISTFILE=/tmp/DatabasesToDump-${FILE_NAME}.txt
mysql ${MYSQL_CONN} -ANe"${SQL}" > ${DBLISTFILE}
 
DBLIST=""
for DB in `cat ${DBLISTFILE}` ; do DBLIST="${DBLIST} ${DB}" ; done
 
MYSQLDUMP_OPTIONS="--routines --triggers --single-transaction"
mysqldump ${MYSQL_CONN} ${MYSQLDUMP_OPTIONS} --databases ${DBLIST} | pv | gzip > db-${FILE_NAME}.sql.gz
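dump.sh just takes the MySQL port and a tag used in the output file name, so a run looks something like this (values are hypothetical; pv and the mysql client tools need to be installed on the box):

./dump.sh 3306 slave-31
# produces db-slave-31.sql.gz in the current directory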
wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb && dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb && apt-get update && apt-get -y upgrade && apt-get -y install lvm2 xfsprogs percona-server-server-5.7
 
# add 2 x 128 ssd and 1 x 256 hdd
# ssd n enter enter enter enter t enter 8e enter w enter
fdisk /dev/sdc
fdisk /dev/sdd
 
# hdd
fdisk /dev/sde
 
pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 && vgcreate mysql /dev/sdc1 /dev/sdd1 && vgcreate data /dev/sde1 && lvcreate --name data --size 240G mysql && lvcreate --name backups --size 240G data
 
mkfs.xfs /dev/mapper/mysql-data && mkfs.xfs /dev/mapper/data-backups
 
edit /etc/fstab
/dev/mapper/mysql-data /var/lib/mysql auto  defaults,nobarrier 0 2
/dev/mapper/data-backups /data auto  defaults,nobarrier 0 2
 
service mysql stop && cd /var/lib && mv mysql mysql.old && mkdir mysql && mkdir /data && mount -a && mv mysql.old/* mysql/  && chown mysql:mysql mysql
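A quick sanity check that the logical volumes are mounted where MySQL and the backups expect them:

lvs
df -h /var/lib/mysql /data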
 
vi /etc/security/limits.d/91-mysql.conf
mysql   soft    nofile 400000
mysql   hard    nofile 400000 
 
/etc/sysctl.conf
fs.file-max = 20000000
net.ipv4.tcp_fin_timeout = 10
kernel.pid_max = 65535
kernel.randomize_va_space = 1
net.core.netdev_max_backlog=32768
net.core.rmem_max = 8388608
net.core.somaxconn = 16384
net.core.wmem_max = 8388608
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.default.log_martians = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.secure_redirects = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.ip_forward=0
net.ipv4.ip_local_port_range = 2000 65000
net.ipv4.tcp_max_syn_backlog=8192
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 4096 87380 8388608
vm.overcommit_memory = 1
vm.swappiness = 0
fs.aio-max-nr = 500000
 
sysctl -p
 
echo session required pam_limits.so >> /etc/pam.d/common-session
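After MySQL has been restarted, you can confirm the nofile limit really applied to the running process:

cat /proc/$(pidof mysqld)/limits | grep 'open files'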
 
vi /etc/mysql/percona-server.conf.d/mysqld.cnf 
[mysqld]
user   = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket   = /var/run/mysqld/mysqld.sock
port   = 3306
basedir    = /usr
datadir    = /var/lib/mysql
tmpdir   = /tmp
lc-messages-dir  = /usr/share/mysql
explicit_defaults_for_timestamp
innodb_use_native_aio=1
max_connections=4096
wait_timeout=5
interactive_timeout=120
myisam_sort_buffer_size=1024M
sort_buffer_size=1024M
innodb_file_per_table=ON
skip-name-resolve
default-storage-engine=InnoDB
max_allowed_packet=64M
# keep binlogs for 3 days
expire_logs_days = 3
server-id              = 108
innodb_data_file_path = ibdata1:10M:autoextend
#innodb_buffer_pool_size = 28000M
innodb_flush_method = O_DIRECT
innodb_file_per_table
bind-address = 0.0.0.0
log-error    = /var/log/mysql/error.log
#log-error = /dev/null
log_error_verbosity=3
# Recommended in standard MySQL setup
#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_ALL_TABLES
sql_mode=""
symbolic-links=0
 
# Forum requires this
lower_case_table_names  = 1
master-info-repository=TABLE
relay-log-info-repository=TABLE
replicate-do-table              = account.identity_data
replicate-do-table              = account.signature_data
replicate-do-table              = account.account_usertypes
replicate-do-table              = account.banned_words
replicate-wild-do-table         = forum\_skeleton.%
replicate-wild-do-table = domain\_%.%
replicate-wild-do-table = user\_cluster\_%.%
slave-skip-errors               = 1062
open-files-limit=400000
myisam-recover-options=FORCE,BACKUP
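Once the slave is up, it is worth checking that the file limit and the replication filters actually took effect (a sketch, assuming the usual client credentials are configured):

mysql -e "SHOW VARIABLES LIKE 'open_files_limit'"
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Replicate_(Wild_)?Do_Table'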

Installing SoftEther VPN Server

apt-get update;
apt-get install build-essential;
 
cd;
wget http://www.softether-download.com/files/softether/v4.21-9613-beta-2016.04.24-tree/Linux/SoftEther_VPN_Server/64bit_-_Intel_x64_or_AMD64/softether-vpnserver-v4.21-9613-beta-2016.04.24-linux-x64-64bit.tar.gz;
tar zxf softether-vpnserver-v4.21-9613-beta-2016.04.24-linux-x64-64bit.tar.gz;
 
cd vpnserver/;
make;
 
cd;
mv vpnserver /usr/local/;
 
vi /etc/init.d/vpnserver;
#!/bin/sh
DAEMON=/usr/local/vpnserver/vpnserver
LOCK=/var/lock/subsys/vpnserver
test -x $DAEMON || exit 0
case "$1" in
start)
$DAEMON start
touch $LOCK
;;
stop)
$DAEMON stop
rm $LOCK
;;
restart)
$DAEMON stop
sleep 3
$DAEMON start
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
esac
exit 0
 
chmod +x /etc/init.d/vpnserver;
/etc/init.d/vpnserver start;
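To have the service come up at boot with this sysvinit-style script (assuming a Debian/Ubuntu host, which is what the apt-get calls above suggest):

update-rc.d vpnserver defaults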

Using SaltStack to deploy Auto-scaling EC2


SaltStack Master: 172.66.1.100

Create an AMI from a stock VM:

root@ip-x.x.x.x:~# cat /etc/rc.local
/root/PkgInit.sh;
/root/SaltMinionInit.sh;
/root/SaltCall.sh;
 
root@ip-x.x.x.x:~# cat /root/PkgInit.sh 
add-apt-repository ppa:saltstack/salt -y;
apt-get update;
apt-get install salt-minion -y;
apt-get install awscli -y;
 
root@ip-x.x.x.x:~# cat /root/SaltMinionInit.sh
INSTANCE_ID=$(ec2metadata --instance-id);
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | grep region | awk -F\" '{print $4}');
TAG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region=$REGION --output=text --max-items=1 | cut -f5);
/bin/echo -e "master: 172.66.1.100\ngrains:\n  roles:\n    - "$TAG > /etc/salt/minion;
service salt-minion restart;
 
root@ip-x.x.x.x:~# cat /root/SaltCall.sh
sleep 15s;
salt-call state.highstate;
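SaltMinionInit.sh calls aws ec2 describe-tags, so the instance needs permission for that API call; attaching an instance profile whose policy allows ec2:DescribeTags is one way to provide it (the policy below is an assumption, not taken from the original setup):

{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "ec2:DescribeTags", "Resource": "*" }
  ]
}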

Set up the SaltStack Master:

root@ip-172-66-1-100:~# add-apt-repository ppa:saltstack/salt
root@ip-172-66-1-100:~# apt-get update
root@ip-172-66-1-100:~# apt-get install salt-master
root@ip-172-66-1-100:~# cat /etc/salt/master | grep -v '^#' | grep -v '^$'
file_roots:
  base:
    - /srv/salt
pillar_roots:
  base:
    - /srv/pillar
reactor:
  - 'salt/auth':
    - /srv/reactor/auth-pending.sls
 
# Automating Key Acceptance
# salt-run state.event pretty=True
root@ip-172-66-1-100:~# cat /srv/reactor/auth-pending.sls
{% if 'act' in data and data['act'] == 'pend' and data['id'].startswith('ip-172') %}
minion_add:
  wheel.key.accept:
    - match: {{ data['id'] }}
{% endif %}
 
# Get grains item
root@ip-172-66-1-100:~# salt '*' grains.item os
ip-172-66-2-214:
    ----------
    os:
        Ubuntu
ip-172-66-4-93:
    ----------
    os:
        Ubuntu
root@ip-172-66-1-100:/srv/reactor# salt '*' grains.item roles
ip-172-66-2-214:
    ----------
    roles:
        - YeMaosheng_com
ip-172-66-4-93:
    ----------
    roles:
        - YeMaosheng_com
 
root@ip-172-66-1-100:~# salt -G 'roles:YeMaosheng_com' state.highstate -t 60 test=True
root@ip-172-66-1-100:~# salt -G 'roles:YeMaosheng_com' state.highstate

Installation and deployment configuration for the website's EC2 instances

├── pillar
│   ├── yemaosheng_com
│   │   ├── nginx.sls
│   │   ├── php56.sls
│   │   └── website.sls
│   └── top.sls
├── reactor
│   └── auth-pending.sls
└── salt
    ├── crontab
    │   └── init.sls
    ├── mysql-client
    │   └── init.sls
    ├── nginx
    │   ├── configs
    │   │   └── yemaosheng_com
    │   │       ├── blockrules.conf
    │   │       ├── nginx.conf
    │   │       └── sites-enabled
    │   │           └── yemaosheng.com
    │   └── init.sls
    ├── php56
    │   ├── configs
    │   │   └── yemaosheng_com
    │   │       └── php5-fpm
    │   │           └── www.conf
    │   └── init.sls
    ├── top.sls
    ├── website
    │   ├── configs
    │   │   └── yemaosheng_com
    │   │       ├── dhparam.pem
    │   │       └── sslkey
    │   └── init.sls
    └── websitefiles
        └── yemaosheng_com -> /var/www/yemaosheng_com
 
cat /srv/salt/top.sls 
base:
 'roles:yemaosheng_com':
 - match: grain
 - mysql-client
 - php56
 - nginx
 - website
 - crontab
 
cat /srv/pillar/top.sls 
base:
 'roles:yemaosheng_com':
 - match: grain
 - yemaosheng_com.nginx
 - yemaosheng_com.php56
 - yemaosheng_com.website
 
cat /srv/pillar/yemaosheng_com/nginx.sls 
nginx_conf: nginx/configs/yemaosheng_com/nginx.conf
nginx_site-enable: nginx/configs/yemaosheng_com/sites-enabled
 
cat /srv/salt/nginx/init.sls 
{% set site_name = pillar['site_name'] %}
 
nginx:
  pkg:
    - name: nginx
    - installed
 
nginx_conf:
  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - file: /etc/nginx/*
  file.managed:
    - name: /etc/nginx/nginx.conf
    - source: salt://{{ pillar['nginx_conf'] }}
    - user: root
    - group: root
    - mode: '0640'
    - require:
      - pkg: nginx
 
{% if site_name == 'yemaosheng_com' %}
upload_sslkey_to_nginx:
  file.recurse:
    - name: /srv/ssl
    - user: root
    - group: root
    - file_mode: '0644'
    - source: salt://website/configs/yemaosheng_com/sslkey
    - include_empty: True
 
upload_dhparam_to_nginx:
  file.managed:
    - name: /etc/nginx/dhparam.pem
    - source: salt://website/configs/yemaosheng_com/dhparam.pem
    - user: root
    - group: root
    - mode: '0644'
    - require:
      - pkg: nginx
{% endif %}
 
/etc/nginx/sites-enabled:
  service.running:
    - name: nginx
    - enable: True
    - reload: True
    - watch:
      - file: /etc/nginx/sites-enabled
  file.recurse:
    - name: /etc/nginx/sites-enabled
    - user: root
    - group: root
    - dir_mode: 2775
    - file_mode: '0644'
    - source: salt://{{ pillar['nginx_site-enable'] }}
    - include_empty: True
    - clean: True
    - require:
      - pkg: nginx
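Before running highstate against a new role it can help to confirm that the pillar keys render and the right states are assigned (a sketch):

salt -G 'roles:yemaosheng_com' pillar.item nginx_conf nginx_site-enable site_name
salt -G 'roles:yemaosheng_com' state.show_top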

AWS VPC point-to-point with a GRE tunnel


Related: connecting AWS VPCs in different Regions via IPsec

AWS China EC2:

root@ip-10-33-30-103:/home/ubuntu# cat /etc/network/interfaces.d/gre1.cfg
auto gre1
iface gre1 inet tunnel
  mode gre
  netmask 255.255.255.255
  address 10.0.0.2
  dstaddr 10.0.0.1
  endpoint 52.63.189.251
  local 10.33.30.103
  ttl 255
 
root@ip-10-33-30-103:/home/ubuntu# route add -net 172.33.0.0 netmask 255.255.0.0 gw 10.0.0.2

AWS Sydney EC2:

root@ip-172-33-1-190:/home/ubuntu# cat /etc/network/interfaces.d/gre1.cfg 
auto gre1
iface gre1 inet tunnel
  mode gre
  netmask 255.255.255.255
  address 10.0.0.1
  dstaddr 10.0.0.2
  endpoint 54.222.193.171
  local 172.33.1.190
  ttl 255
 
root@ip-172-33-1-190:/home/ubuntu# route add -net 10.33.0.0 netmask 255.255.0.0 gw 10.0.0.1
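A quick check from either side that the tunnel itself is up before relying on the routes (sketch, run from the China instance):

ip tunnel show gre1
ping -c 3 10.0.0.1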

Mikrotik L2TP with IPsec for mobile clients


Reposted from: http://www.firstdigest.com/2015/01/mikrotik-l2tp-with-ipsec-for-mobile-clients/

1. Add a new pool

GUI
IP > Pool
Name: L2TP-Pool
Addresses: 172.31.86.1-172.31.86.14
Next Pool: None
 
CLI
/ip pool add name="L2TP-Pool" ranges=172.31.86.1-172.31.86.14

L2TP Configuration

1. Configure L2TP Profile

GUI
PPP > Profiles
Name: l2tp-profile
Local Address: L2TP-Pool
Remote Address: L2TP-Pool
DNS Server: 8.8.8.8
Change TCP MSS: yes
Use Encryption: required
 
CLI
/ppp profile add name=l2tp-profile local-address=L2TP-Pool remote-address=L2TP-Pool use-encryption=required change-tcp-mss=yes dns-server=8.8.8.8

2. Add an L2TP Server

GUI
PPP > Interface > L2TP Server
Enabled: Checked
Max MTU: 1460
Max MRU: 1460
Keepalive Timeout: 30
Default Profile: l2tp-profile
Authentication: mschap2
Use IPsec: Checked
IPsec Secret: MYKEY
 
CLI
/interface l2tp-server server set authentication=mschap2 default-profile=l2tp-profile enabled=yes ipsec-secret=MYKEY max-mru=1460 max-mtu=1460 use-ipsec=yes

3. Add PPP Secrets

GUI
PPP > Secrets
Enabled: Checked
Name: MYUSER
Password: MYPASSWORD
Service: l2tp
Profile: l2tp-profile
 
CLI
/ppp secret add name=MYUSER password=MYPASSWORD service=l2tp profile=l2tp-profile

IPsec Configuration

1. IPsec Proposals

GUI
IPsec > Proposals
Enabled: Checked
Name: L2TP-Proposal
Auth. Algorithm: sha1
Encr. Algorithm: 3des, aes-256 cbc
PFS Group: none
 
CLI
/ip ipsec proposal add name=L2TP-Proposal auth-algorithms=sha1 enc-algorithms=3des,aes-256-cbc pfs-group=none

2. IPsec Peers

GUI
IPsec > Peers
Enabled: Checked
Address: 0.0.0.0
Auth. Method: pre shared key
Secret: MYKEY
Policy Template Group: default
Exchange Mode: main l2tp
Send Initial Contact: Checked
NAT Traversal: Checked
My ID: auto
Proposal check: obey
Hash Algorithm: sha1
Encryption Algorithm: 3des, aes-256
DH Group: modp1024
Generate policy: port override
 
CLI
/ip ipsec peer add address=0.0.0.0/0 port=500 auth-method=pre-shared-key secret="MYKEY" generate-policy=port-override exchange-mode=main-l2tp send-initial-contact=yes nat-traversal=yes hash-algorithm=sha1 enc-algorithm=3des,aes-256 dh-group=modp1024

3. IPsec Policies

GUI
Enabled: Checked
Src. Address: ::/0
Dst. Address: ::/0
Protocol: 255(all)
Template: Checked
Group: default
Action: encrypt
Level: require
IPsec Protocols: esp
Tunnel: Not checked
SA Src. Address: 0.0.0.0
SA Dst. Address: 0.0.0.0
Proposal: L2TP-Proposal
 
CLI
/ip ipsec policy add src-address=::/0 dst-address=::/0 protocol=all template=yes group=default action=encrypt level=require ipsec-protocols=esp tunnel=no sa-src-address=0.0.0.0 sa-dst-address=0.0.0.0 proposal=L2TP-Proposal
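If the router's input chain is locked down, the L2TP/IPsec ports also have to be allowed; something like the following works (an assumption about your firewall layout, place the rules before any drop rule):

/ip firewall filter add chain=input protocol=udp dst-port=500,1701,4500 action=accept comment="L2TP/IPsec"
/ip firewall filter add chain=input protocol=ipsec-esp action=accept comment="IPsec ESP"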

PS:
Colleagues' connections from home to VPNs abroad are not very stable, so this is used to hop through the IP the business park assigned to the company.
For details see the earlier post: http://yemaosheng.com/?p=1587