Release Notes

1.1.1.2.4. Post Implementation Checks


Accessing the Kubernetes cluster

Once the playbook has finished, you should be able to access the cluster using your client of choice and the kubeconfig file from the deployment.
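
As a quick check, here is a minimal sketch using kubectl (the kubeconfig path below is a placeholder; point it at the file produced by your deployment):

    # Point kubectl (or your client of choice) at the deployment's kubeconfig
    export KUBECONFIG=/path/to/deployment/kubeconfig
    kubectl cluster-info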

Once you have access to the cluster, you should be able to see all of its details, including nodes, pods, deployments, etc.
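
For example, the following kubectl commands list those resources across all namespaces:

    kubectl get nodes
    kubectl get pods --all-namespaces
    kubectl get deployments --all-namespaces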

CEPH

In order to successfully implement the Data fabric or App fabric playbooks on top of an on-prem installation, you will need to create the Ceph RBD and CephFS pools for the persistent volumes.

In both cases you will need to access the Ceph Dashboard from inside the cluster.

  • Find the Ceph dashboard NodePort listed under “Services” in Lens, or use kubectl (see the sketch after this list).

  • In your browser, navigate to the Ceph dashboard by entering the IP address of one of the nodes followed by the Ceph dashboard NodePort, for example:

    • nodeip:nodeport
  • Log in with the admin user. You can find the admin user’s password in Kubernetes secrets with Lens.
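
A minimal kubectl sketch of both lookups, assuming a Rook-Ceph deployment in the rook-ceph namespace (the service and secret names follow Rook defaults and may differ in your installation):

    # Find the NodePort exposed by the Ceph dashboard service
    kubectl -n rook-ceph get services | grep dashboard

    # Decode the admin user's password from the Kubernetes secret
    kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
      -o jsonpath='{.data.password}' | base64 --decode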

Node Sizes

The discussion of node sizes is very similar to the discussion of VM sizes in the old microservices (like Leopard). Baseline resource information can be supplied by the platform team, but it covers only the node capacity needed to run the platform itself and does not include the size of data queries, etc. That is a separate discussion that needs to be had with the client. While scaling horizontally in Kubernetes is easy, scaling vertically is more involved. As a concrete example, if the client has 5 nodes with 12 GB of memory each, they will be unable to pull a data query larger than 12 GB, because the available memory on any single node is too small. In that situation the nodes would need to be replaced, with workloads cordoned off and drained onto the new nodes (see the sketch below); none of this is automated.
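
A sketch of the manual replacement process described above (the node name is a placeholder):

    # Stop scheduling new pods onto the undersized node, then move its workloads off
    kubectl cordon worker-node-1
    kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data

    # Once a larger replacement node has joined the cluster, remove the old one
    kubectl delete node worker-node-1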

This is something that ALWAYS needs to be discussed with the client prior to installation.

CEPH RBD

For RBD storage, you will need to create a pool called ‘replicapool’ with the rbd application tag.
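
A minimal sketch using the Ceph CLI, assuming you can reach the cluster from a shell (for example a Rook toolbox pod); the pool can equally be created through the dashboard UI:

    # Create the pool (older Ceph releases may require an explicit pg_num, e.g. 32)
    ceph osd pool create replicapool
    # Tag the pool for RBD use
    ceph osd pool application enable replicapool rbd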

CEPH FS

For FS storage, you will need to create a pool called ‘fraxses-fs’ with the cephfs application tag.
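
The equivalent Ceph CLI sketch for the filesystem pool:

    ceph osd pool create fraxses-fs
    ceph osd pool application enable fraxses-fs cephfs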
