

# FastForward [![GoDoc](https://godoc.org/github.com/nofdev/fastforward?status.svg)](https://godoc.org/github.com/nofdev/fastforward) [![Build Status](https://travis-ci.org/nofdev/fastforward.svg?branch=master)](https://travis-ci.org/nofdev/fastforward)

FastForward is the ultimate DevOps platform.



* Hosts have at least two NICs (external and internal)
* We assume you have Ceph installed; the Cinder backend uses Ceph by default, and running instances use Ceph as local storage by default. For Ceph, see http://docs.ceph.com/docs/master/rbd/rbd-openstack/ or the (optional) Ceph guide below.
* For resizing instances, the nova user must be able to log in to every compute node via passwordless SSH (including sudo), and all compute nodes need libvirt-bin restarted to enable live migration
* The FastForward node is the same as the ceph-deploy node and can log in to every OpenStack node with passwordless SSH and passwordless sudo
* The FastForward node logs in to remote servers with the ~/.ssh/id_rsa SSH private key by default
* You need to restart the `nova-compute`, `cinder-volume` (if you chose Ceph as the backend) and `glance-api` services to finalize the installation
* FastForward supports consistency groups for future use, but the default LVM and Ceph drivers do not support consistency groups yet, because consistency is not supported at the storage level
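Per the assumptions above, the FastForward node must reach every node over passwordless SSH and passwordless sudo before anything else will work. A minimal preflight sketch (the host list is an example; it only prints the probe commands, so pipe the output to `sh` to actually run them):

```shell
# Print one passwordless-SSH probe per node (hypothetical host list).
# BatchMode makes ssh fail fast instead of prompting for a password,
# and 'sudo -n true' additionally verifies passwordless sudo.
HOSTS="controller1 controller2 compute1 compute2"
for h in $HOSTS; do
  echo "ssh -o BatchMode=yes -i ~/.ssh/id_rsa ubuntu@$h 'sudo -n true'"
done
```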

    ff --user ubuntu --hosts haproxy1,\
    haproxy2,\
    controller1,\
    controller2,\
    compute1,\
    compute2,\
    compute3,\
    compute4,\
    compute5,\
    compute6,\
    compute7,\
    compute8,\
    compute9,\
    compute10 \
    environment \
    prepare-host --public-interface eth1
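The long `--hosts` list above can be generated instead of hand-written with line continuations; a small sketch using the host names from this guide:

```shell
# Build the comma-separated --hosts argument; seq expands compute1..compute10.
HOSTS="haproxy1,haproxy2,controller1,controller2"
for i in $(seq 1 10); do
  HOSTS="$HOSTS,compute$i"
done
echo "ff --user ubuntu --hosts $HOSTS environment prepare-host --public-interface eth1"
```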

## MySQL HA


deploy to controller1

    ff --user ubuntu --hosts controller1 openstack mysql install
    ff --user ubuntu --hosts controller1 openstack mysql config --wsrep-cluster-address "gcomm://controller1,controller2" --wsrep-node-name="galera1" --wsrep-node-address="controller1"


deploy to controller2

    ff --user ubuntu --hosts controller2 openstack mysql install
    ff --user ubuntu --hosts controller2 openstack mysql config --wsrep-cluster-address "gcomm://controller1,controller2" --wsrep-node-name="galera2" --wsrep-node-address="controller2"


start the cluster

    ff --user ubuntu --hosts controller1 openstack mysql manage --wsrep-new-cluster
    ff --user ubuntu --hosts controller2 openstack mysql manage --start
    ff --user ubuntu --hosts controller1 openstack mysql manage --change-root-password changeme

show the cluster status




## HAProxy HA

deploy to haproxy1

    ff --user ubuntu --hosts haproxy1 openstack haproxy install


deploy to haproxy2

    ff --user ubuntu --hosts haproxy2 openstack haproxy install



Generate the haproxy configuration and upload it to the target hosts (don't forget to edit the generated configuration)

    ff --user ubuntu --hosts haproxy1,haproxy2 openstack haproxy config --upload-conf haproxy.cfg


configure keepalived

    ff --user ubuntu --hosts haproxy1 openstack haproxy config --configure-keepalived --router-id lb1 --priority 150 --state MASTER --interface eth0 --vip controller_vip
    ff --user ubuntu --hosts haproxy2 openstack haproxy config --configure-keepalived --router-id lb2 --priority 100 --state SLAVE --interface eth0 --vip controller_vip

## RabbitMQ HA


deploy to controller1 and controller2

    ff --user ubuntu --hosts controller1,controller2 openstack rabbitmq install --erlang-cookie changemechangeme --rabbit-user openstack --rabbit-pass changeme

create the cluster (make sure controller2 can access controller1 by hostname)


## Keystone HA

create the keystone database


install keystone on controller1 and controller2

    ff --user ubuntu --hosts controller1 openstack keystone install --admin-token changeme --connection mysql+pymysql://keystone:changeme@controller_vip/keystone --memcached-servers controller1:11211,controller2:11211 --populate
    ff --user ubuntu --hosts controller2 openstack keystone install --admin-token changeme --connection mysql+pymysql://keystone:changeme@controller_vip/keystone --memcached-servers controller1:11211,controller2:11211


Create the service entity and API endpoints

    --demo-pass changeme

(Optional) You need to create the OpenStack client environment scripts

admin-openrc.sh

    export OS_PROJECT_DOMAIN_NAME=default
    export OS_USER_DOMAIN_NAME=default
    export OS_PROJECT_NAME=admin
    export OS_TENANT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=changeme
    export OS_AUTH_URL=http://controller_vip:35357/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    export OS_AUTH_VERSION=3

demo-openrc.sh

    export OS_PROJECT_DOMAIN_NAME=default
    export OS_USER_DOMAIN_NAME=default
    export OS_PROJECT_NAME=demo
    export OS_TENANT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=changeme
    export OS_AUTH_URL=http://controller_vip:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
    export OS_AUTH_VERSION=3
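To check such a script, you can write it out, source it, and inspect the `OS_*` variables the openstack client reads; a sketch using the placeholder values from this guide:

```shell
# Write the demo client-environment script (placeholder credentials from
# this guide, not real ones), source it, and show what the CLI would see.
cat > demo-openrc.sh <<'EOF'
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=changeme
export OS_AUTH_URL=http://controller_vip:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
. ./demo-openrc.sh
echo "$OS_USERNAME $OS_AUTH_URL"
```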

## Glance HA

create service credentials


install glance on controller1 and controller2

    ff --user ubuntu --hosts controller1 openstack glance install --connection mysql+pymysql://glance:glance@controller_vip/glance --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --glance-pass changeme --memcached-servers controller1:11211,controller2:11211 --populate
    ff --user ubuntu --hosts controller2 openstack glance install --connection mysql+pymysql://glance:glance@controller_vip/glance --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --glance-pass changeme --memcached-servers controller1:11211,controller2:11211

## Nova HA

create service credentials

    ff --user ubuntu --hosts controller1 openstack nova create-service-credentials --os-password changeme --os-auth-url http://controller_vip:35357/v3 --nova-pass changeme --public-endpoint 'http://controller_vip:8774/v2.1/%\(tenant_id\)s' --internal-endpoint 'http://controller_vip:8774/v2.1/%\(tenant_id\)s' --admin-endpoint 'http://controller_vip:8774/v2.1/%\(tenant_id\)s'


install nova on controller1

    ff --user ubuntu --hosts controller1 openstack nova install --connection mysql+pymysql://nova:nova_pass@controller_vip/nova --api-connection mysql+pymysql://nova:nova_pass@controller_vip/nova_api --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --nova-pass changeme --my-ip management_ip --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --glance-api-servers http://controller_vip:9292 --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --metadata-proxy-shared-secret changeme --populate

install nova on controller2

    ff --user ubuntu --hosts controller2 openstack nova install --connection mysql+pymysql://nova:nova_pass@controller_vip/nova --api-connection mysql+pymysql://nova:nova_pass@controller_vip/nova_api --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --nova-pass changeme --my-ip management_ip --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --glance-api-servers http://controller_vip:9292 --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --metadata-proxy-shared-secret changeme

install nova-compute on the compute nodes (the --rbd-secret-uuid is the ceph uuid)

    ff --user ubuntu --hosts compute1 openstack nova-compute install --my-ip management_ip --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --nova-pass changeme --novncproxy-base-url http://controller_vip:6080/vnc_auto.html --glance-api-servers http://controller_vip:9292 --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --rbd-secret-uuid changeme --memcached-servers controller1:11211,controller2:11211
    ff --user ubuntu --hosts compute2 openstack nova-compute install --my-ip management_ip --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --nova-pass changeme --novncproxy-base-url http://controller_vip:6080/vnc_auto.html --glance-api-servers http://controller_vip:9292 --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --rbd-secret-uuid changeme --memcached-servers controller1:11211,controller2:11211

libvirt uses Ceph as shared storage by default, and the ceph pool for running instances is vms. If you do not use Ceph as the backend, you must remove the following parameters:

    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=changeme
    disk_cachemodes="network=writeback"
    live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"




## Neutron HA

create the neutron database

create service credentials

install neutron for self-service

    ff --user ubuntu --hosts controller1 openstack neutron install --connection mysql+pymysql://neutron:neutron_pass@controller_vip/neutron --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --neutron-pass changeme --nova-url http://controller_vip:8774/v2.1 --nova-pass changeme --public-interface eth1 --local-ip management_interface_ip --nova-metadata-ip controller_vip --metadata-proxy-shared-secret changeme --memcached-servers controller1:11211,controller2:11211 --populate
    ff --user ubuntu --hosts controller2 openstack neutron install --connection mysql+pymysql://neutron:neutron_pass@controller_vip/neutron --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --neutron-pass changeme --nova-url http://controller_vip:8774/v2.1 --nova-pass changeme --public-interface eth1 --local-ip management_interface_ip --nova-metadata-ip controller_vip --metadata-proxy-shared-secret changeme --memcached-servers controller1:11211,controller2:11211



## Neutron agent

install the neutron agent on the compute nodes

    ff --user ubuntu --hosts compute1 openstack neutron-agent install --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --neutron-pass changeme --public-interface eth1 --local-ip management_interface_ip --memcached-servers controller1:11211,controller2:11211
    ff --user ubuntu --hosts compute2 openstack neutron-agent install --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --neutron-pass changeme --public-interface eth1 --local-ip management_interface_ip --memcached-servers controller1:11211,controller2:11211



## Horizon HA

install horizon on the controller nodes

    ff --user ubuntu --hosts controller1,controller2 openstack horizon install --openstack-host controller_vip --memcached-servers controller1:11211 --time-zone Asia/Shanghai



## Cinder HA

create cinder service credentials

install cinder-api and cinder-volume on the controller nodes; the volume backend defaults to ceph (ceph must be installed)

    ff --user ubuntu --hosts controller1 openstack cinder install --connection mysql+pymysql://cinder:cinder_pass@controller_vip/cinder --rabbit-user openstack --rabbit-pass changeme --rabbit-hosts controller1,controller2 --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --cinder-pass changeme --my-ip management_ip --glance-api-servers http://controller_vip:9292 --rbd-secret-uuid changeme --memcached-servers controller1:11211,controller2:11211 --populate
    ff --user ubuntu --hosts controller2 openstack cinder install --connection mysql+pymysql://cinder:cinder_pass@controller_vip/cinder --rabbit-user openstack --rabbit-pass changeme --rabbit-hosts controller1,controller2 --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --cinder-pass changeme --my-ip management_ip --glance-api-servers http://controller_vip:9292 --rbd-secret-uuid changeme --memcached-servers controller1:11211,controller2:11211

## Swift proxy HA

create the Identity service credentials

install the swift proxy

    ff --user ubuntu --hosts controller1,controller2 openstack swift install --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --swift-pass changeme --memcached-servers controller1:11211,controller2:11211



## Swift storage

install swift storage on the storage nodes

    ff --user ubuntu --hosts object1 openstack swift-storage install --address management_interface_ip --bind-ip management_interface_ip
    ff --user ubuntu --hosts object2 openstack swift-storage install --address management_interface_ip --bind-ip management_interface_ip


create the account ring on a controller node

    ff --user ubuntu --hosts controller1 openstack swift-storage create-account-builder-file --partitions 10 --replicas 3 --moving 1
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object1_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object1_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object1_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object1_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object2_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object2_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object2_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder add --region 1 --zone 1 --ip object2_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage account-builder rebalance

create the container ring on a controller node

    ff --user ubuntu --hosts controller1 openstack swift-storage create-container-builder-file --partitions 10 --replicas 3 --moving 1
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object1_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object1_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object1_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object1_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object2_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object2_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object2_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder add --region 1 --zone 1 --ip object2_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage container-builder rebalance


create the object ring on a controller node

    ff --user ubuntu --hosts controller1 openstack swift-storage create-object-builder-file --partitions 10 --replicas 3 --moving 1
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object1_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object1_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object1_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object1_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object2_management_ip --device sdb --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object2_management_ip --device sdc --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object2_management_ip --device sdd --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder add --region 1 --zone 1 --ip object2_management_ip --device sde --weight 100
    ff --user ubuntu --hosts controller1 openstack swift-storage object-builder rebalance


Sync the builder files from the controller node to every storage node and any other proxy nodes

    ff --user ubuntu --hosts controller1 openstack swift-storage sync-builder-file --to controller2,object1,object2


Finalize the installation on all nodes

    ff --user ubuntu --hosts controller1,controller2,object1,object2 openstack swift finalize-install --swift-hash-path-suffix changeme --swift-hash-path-prefix changeme
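The repetitive builder-add lines above (three rings, two storage nodes, four disks each) can be generated with a loop; a sketch assuming the subcommand names used in this guide:

```shell
# Emit every swift-storage builder-add command for the account, container
# and object rings across both storage nodes and their four disks.
CMDS=$(
  for ring in account container object; do
    for node_ip in object1_management_ip object2_management_ip; do
      for dev in sdb sdc sdd sde; do
        echo "ff --user ubuntu --hosts controller1 openstack swift-storage ${ring}-builder add --region 1 --zone 1 --ip ${node_ip} --device ${dev} --weight 100"
      done
    done
  done
)
echo "$CMDS"
```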


## Ceph guide

> For details about the ceph backend, see:

[Preflight](http://docs.ceph.com/docs/jewel/start/quick-start-preflight/)

[Cinder and Glance drivers](http://docs.ceph.com/docs/jewel/rbd/rbd-openstack/)


On Xenial please use ceph-deploy version 1.5.34

install ceph-deploy (1.5.34)

    wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
    echo deb http://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update && sudo apt-get install ceph-deploy

create the ceph cluster directory

    mkdir ceph-cluster
    cd ceph-cluster

create the cluster and add the initial monitors to ceph.conf

    ceph-deploy new controller1 controller2 compute1 compute2 block1 block2
    echo "osd pool default size = 2" | tee -a ceph.conf

install the ceph client (optionally use "--release jewel" to install the jewel release; the default for ceph-deploy 1.5.34 is jewel), and you can use "--repo-url http://your-local-repo.example.org/mirror/download.ceph.com/debian-jewel" to specify a local repository.

    ceph-deploy install playback-node controller1 controller2 compute1 compute2 block1 block2
    ceph-deploy mon create-initial

If you want to add additional monitors, do so:

    ceph-deploy mon add {additional-monitor}


add ceph osds

    ceph-deploy osd create --zap-disk block1:/dev/sdb
    ceph-deploy osd create --zap-disk block1:/dev/sdc
    ceph-deploy osd create --zap-disk block2:/dev/sdb
    ceph-deploy osd create --zap-disk block2:/dev/sdc

sync the admin key

    ceph-deploy admin playback-node controller1 controller2 compute1 compute2 block1 block2
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring (on all ceph client nodes)

create the pools

    ceph osd pool create volumes 512
    ceph osd pool create vms 512
    ceph osd pool create images 512


set up ceph client authentication

    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Add the keyrings for `client.cinder` and `client.glance` to the appropriate nodes and change their ownership

    ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring (on all cinder-volume nodes)
    sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring (on all cinder-volume nodes)
    ceph auth get-or-create client.glance | sudo tee /etc/ceph/ceph.client.glance.keyring (on all glance-api nodes)
    sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring (on all glance-api nodes)

Nodes running `nova-compute` need the keyring file for the `nova-compute` process

    ceph auth get-or-create client.cinder | sudo tee /etc/ceph/ceph.client.cinder.keyring (on all nova-compute nodes)

They also need to store the secret key of the `client.cinder` user in `libvirt`; the libvirt process needs it to access the cluster while attaching a block device from Cinder.
Create a temporary copy of the secret key on the nodes running `nova-compute`:

    ceph auth get-key client.cinder | tee client.cinder.key

Then, on the compute nodes, add the secret key to `libvirt` and remove the temporary copy of the key (the uuid is the same as the --rbd-secret-uuid option; you must save the uuid for later use):

    uuidgen
    457eb676-33da-42ec-9a8c-9293d545c337

    cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF
    sudo virsh secret-define --file secret.xml
    Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
    sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
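The secret.xml step above can be scripted; a sketch with the example UUID fixed so the output is reproducible (normally you would take the output of `uuidgen`):

```shell
# Generate the libvirt secret definition for the client.cinder ceph key.
UUID=457eb676-33da-42ec-9a8c-9293d545c337
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>$UUID</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
# The same UUID must later be passed to ff as --rbd-secret-uuid.
echo "$UUID"
```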

(Optional) Now edit the ceph configuration file on every compute node and add the client sections

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd concurrent management ops = 20

    [client.cinder]
    keyring = /etc/ceph/ceph.client.cinder.keyring


(Optional) Edit the ceph configuration file on every glance-api node and add the client section

    [client.glance]
    keyring = /etc/ceph/ceph.client.glance.keyring

(Optional) If you want to remove an osd

    sudo stop ceph-mon-all && sudo stop ceph-osd-all (on the osd node)
    ceph osd out {osd-num}
    ceph osd crush remove osd.{osd-num}
    ceph auth del osd.{osd-num}
    ceph osd rm {osd-num}
    ceph osd crush remove {host}
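The removal sequence above is destructive, so it helps to print it for review first; a sketch with a placeholder OSD number and host:

```shell
# Print the removal sequence for one OSD (OSD_NUM and HOST are placeholders)
# so it can be reviewed, or piped to sh, before touching the cluster.
OSD_NUM=3
HOST=block1
CMDS="ceph osd out $OSD_NUM
ceph osd crush remove osd.$OSD_NUM
ceph auth del osd.$OSD_NUM
ceph osd rm $OSD_NUM
ceph osd crush remove $HOST"
echo "$CMDS"
```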

(Optional) If you want to remove a monitor

    ceph mon remove {mon-id}

NOTE: you need to restart the `nova-compute`, `cinder-volume` and `glance-api` services to finalize the installation.


## Manila HA

create the manila service credentials

    ff --user ubuntu --hosts controller1 openstack manila create-service-credentials --os-password changeme --os-auth-url http://controller_vip:35357/v3 --manila-pass changeme --public-endpoint-v1 'http://controller_vip:8786/v1/%\(tenant_id\)s' --internal-endpoint-v1 'http://controller_vip:8786/v1/%\(tenant_id\)s' --admin-endpoint-v1 'http://controller_vip:8786/v1/%\(tenant_id\)s' --public-endpoint-v2 'http://controller_vip:8786/v2/%\(tenant_id\)s' --internal-endpoint-v2 'http://controller_vip:8786/v2/%\(tenant_id\)s' --admin-endpoint-v2 'http://controller_vip:8786/v2/%\(tenant_id\)s'


install manila on the controller nodes

    ff --user ubuntu --hosts controller1 openstack manila install --connection mysql+pymysql://manila:changeme@controller_vip/manila --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --manila-pass changeme --my-ip controller1 --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --populate
    ff --user ubuntu --hosts controller2 openstack manila install --connection mysql+pymysql://manila:changeme@controller_vip/manila --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --manila-pass changeme --my-ip controller2 --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme

install manila-share on the controller nodes

    ff --user ubuntu --hosts controller1 openstack manila-share install --connection mysql+pymysql://manila:changeme@controller_vip/manila --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --manila-pass changeme --my-ip controller1 --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --nova-pass changeme --cinder-pass changeme
    ff --user ubuntu --hosts controller2 openstack manila-share install --connection mysql+pymysql://manila:changeme@controller_vip/manila --auth-uri http://controller_vip:5000 --auth-url http://controller_vip:35357 --manila-pass changeme --my-ip controller2 --memcached-servers controller1:11211,controller2:11211 --rabbit-hosts controller1,controller2 --rabbit-user openstack --rabbit-pass changeme --neutron-endpoint http://controller_vip:9696 --neutron-pass changeme --nova-pass changeme --cinder-pass changeme

create shares

http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-manila.html


create shares with share servers management support

http://docs.openstack.org/mitaka/install-guide-ubuntu/launch-instance-manila-dhss-true-option2.html
