yugabyte cloud native db: a basic trial
2022-05-12 14:39:14

Note:

  This trial installs and runs YugabyteDB with Docker in a test environment.

1. Installation

a. download

mkdir ~/yugabyte && cd ~/yugabyte
wget https://downloads.yugabyte.com/yb-docker-ctl && chmod +x yb-docker-ctl

b. install

Check that Docker is running and Python is available (yb-docker-ctl is a Python script), then pull the image:

docker ps && python --version
docker pull yugabytedb/yugabyte
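
Before creating the cluster it is worth confirming that the prerequisites actually hold. A minimal pre-flight sketch; the exact commands here are mine, only the checks themselves are implied by the steps above:

docker info > /dev/null 2>&1 && echo "docker: ok" || echo "docker daemon not reachable"
python --version || echo "python not found (yb-docker-ctl needs it)"
docker images yugabytedb/yugabyte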

2. Create a database cluster

Note: the cluster is created with yb-docker-ctl.

a. create

./yb-docker-ctl create

The operation log looks like this:

docker run --name yb-master-n1 --privileged -p 7000:7000 --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --replication_factor=3 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n1:7100
Adding node yb-master-n1
docker run --name yb-master-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --replication_factor=3 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n2:7100
Adding node yb-master-n2
docker run --name yb-master-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-master --fs_data_dirs=/mnt/disk0,/mnt/disk1 --replication_factor=3 --master_addresses=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-master-n3:7100
Adding node yb-master-n3
docker run --name yb-tserver-n1 --privileged -p 9000:9000 -p 9042:9042 -p 6379:6379 --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n1:9100 --yb_num_shards_per_tserver=2
Adding node yb-tserver-n1
docker run --name yb-tserver-n2 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n2:9100 --yb_num_shards_per_tserver=2
Adding node yb-tserver-n2
docker run --name yb-tserver-n3 --privileged --net yb-net --detach yugabytedb/yugabyte:latest /home/yugabyte/bin/yb-tserver --fs_data_dirs=/mnt/disk0,/mnt/disk1 --tserver_master_addrs=yb-master-n1:7100,yb-master-n2:7100,yb-master-n3:7100 --rpc_bind_addresses=yb-tserver-n3:9100 --yb_num_shards_per_tserver=2
Adding node yb-tserver-n3

PID   Type     Node           URL                       Status   Started At
9666  tserver  yb-tserver-n3  http://192.168.16.7:9000  Running  2018-02-11T02:23:04.064743772Z
9498  tserver  yb-tserver-n2  http://192.168.16.6:9000  Running  2018-02-11T02:23:03.799704303Z
9368  tserver  yb-tserver-n1  http://192.168.16.5:9000  Running  2018-02-11T02:23:03.537778672Z
9231  master   yb-master-n3   http://192.168.16.4:9000  Running  2018-02-11T02:23:03.2530083Z
9135  master   yb-master-n2   http://192.168.16.3:9000  Running  2018-02-11T02:23:03.003740203Z
9053  master   yb-master-n1   http://192.168.16.2:9000  Running  2018-02-11T02:23:02.746672273Z

Note the port mappings: the CQL service is exposed on the host at localhost:9042 and the Redis service at localhost:6379 (the -p flags on yb-tserver-n1).

b. check cluster status

./yb-docker-ctl status

PID   Type     Node           URL                       Status   Started At
9666  tserver  yb-tserver-n3  http://192.168.16.7:9000  Running  2018-02-11T02:23:04.064743772Z
9498  tserver  yb-tserver-n2  http://192.168.16.6:9000  Running  2018-02-11T02:23:03.799704303Z
9368  tserver  yb-tserver-n1  http://192.168.16.5:9000  Running  2018-02-11T02:23:03.537778672Z
9231  master   yb-master-n3   http://192.168.16.4:9000  Running  2018-02-11T02:23:03.2530083Z
9135  master   yb-master-n2   http://192.168.16.3:9000  Running  2018-02-11T02:23:03.003740203Z
9053  master   yb-master-n1   http://192.168.16.2:9000  Running  2018-02-11T02:23:02.746672273Z
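
yb-docker-ctl also has subcommands for scaling the cluster and tearing it down; a sketch based on the same quick-start (the subcommand names come from that doc, so verify them against ./yb-docker-ctl --help on your version):

./yb-docker-ctl add_node        # add one more tserver node to the cluster
./yb-docker-ctl remove_node 4   # remove the node with the given id, as shown by status
./yb-docker-ctl destroy         # stop and delete all cluster containers and data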
 

Refer to the management interfaces: the admin ui and the master ui (screenshots omitted).
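Given the -p mappings in the create log (7000:7000 on yb-master-n1, 9000:9000 on yb-tserver-n1), both UIs should answer on the host; a quick check:

# master admin UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7000
# tserver UI
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000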

3. Connecting to the database

a. cql

I. connect

docker exec -it yb-tserver-n3 /home/yugabyte/bin/cqlsh

Connected to local cluster at 127.0.0.1:9042.
[cqlsh 5.0.1 | Cassandra 3.9-SNAPSHOT | CQL spec 3.4.2 | Native protocol v4]
Use HELP for help.
cqlsh> describe keyspaces;

system_schema  system_auth  system

cqlsh>
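
Since every node sits on the yb-net Docker network created above, the shell can also be started from a throwaway container pointed at any node, instead of exec-ing into a running one; a sketch:

docker run --rm -it --net yb-net yugabytedb/yugabyte:latest /home/yugabyte/bin/cqlsh yb-tserver-n1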
II. create a table

CREATE KEYSPACE myapp;

CREATE TABLE myapp.stock_market (
    stock_symbol text,
    ts text,
    current_price float,
    PRIMARY KEY (stock_symbol, ts)
);

In CQL, the first PRIMARY KEY column (stock_symbol) is the partition key and ts is the clustering column, which is why the query in step IV can filter on stock_symbol alone.

III. insert data

INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('AAPL','2017-10-26 09:00:00',157.41);
INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('AAPL','2017-10-26 10:00:00',157);
INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('FB','2017-10-26 09:00:00',170.63);
INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('FB','2017-10-26 10:00:00',170.1);
INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('GOOG','2017-10-26 09:00:00',972.56);
INSERT INTO myapp.stock_market (stock_symbol,ts,current_price) VALUES ('GOOG','2017-10-26 10:00:00',971.91);

IV. query data

SELECT * FROM myapp.stock_market WHERE stock_symbol = 'AAPL';

 stock_symbol | ts                  | current_price
--------------+---------------------+---------------
         AAPL | 2017-10-26 09:00:00 |        157.41
         AAPL | 2017-10-26 10:00:00 |           157
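
For scripting, the same queries can also be run non-interactively with cqlsh's standard -e/--execute flag:

docker exec yb-tserver-n3 /home/yugabyte/bin/cqlsh -e "SELECT stock_symbol, current_price FROM myapp.stock_market WHERE stock_symbol = 'GOOG';"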
b. redis

I. connect

docker exec -it yb-tserver-n3 /home/yugabyte/bin/redis-cli

II. operations

127.0.0.1:6379> set name dalong
OK
127.0.0.1:6379> get name
"dalong"
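
Because yb-tserver-n1 maps 6379:6379 to the host (see the create log above), any ordinary Redis client can also connect from outside the containers, e.g. a locally installed redis-cli:

redis-cli -h 127.0.0.1 -p 6379 get name
# "dalong"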
 
 

The results of the creation can also be viewed in the UI (screenshots omitted).

4. Similar tools

TiDB, CockroachDB, Vitess.

In short, databases are changing fast: the manual sharding (splitting databases and tables) we used to have to plan for is now largely handled by the platform itself, so we only need to focus on implementing the business logic.

The project also provides official configuration scripts for running on Kubernetes; a sketch follows.
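
A minimal sketch of the Kubernetes path. The manifest URL below is the one the quick-start of that era pointed at, so treat it as an assumption and check the current docs:

# create the cluster from the published StatefulSet manifest
kubectl apply -f https://raw.githubusercontent.com/yugabyte/yugabyte-db/master/cloud/kubernetes/yugabyte-statefulset.yaml
# watch the yb-master / yb-tserver pods come up
kubectl get pods -w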
 
 

5. References

https://docs.yugabyte.com/quick-start/create-local-cluster/