Multi-stage build support for Docker images was introduced in Docker v17.05 in 2017. This post summarizes the practical points that can improve the development experience, protect sensitive data, and reduce the Docker image size.
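As a minimal sketch of the technique (the base images and project layout below are hypothetical, not from the post), a multi-stage Dockerfile runs the build in a full SDK image and copies only the output into a slim runtime image:

```dockerfile
# Stage 1: build with the full toolchain (hypothetical Node.js app layout)
FROM node:10 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: copy only the build output into a slim runtime image;
# build tools, caches, and source never reach the final image
FROM node:10-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]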
It is the last quarter of 2019. A large number of organizations have already deployed Docker containerized applications in production, and the services are usually orchestrated with Kubernetes or OpenShift. As the well-known saying goes, merely moving applications to the cloud is not being cloud native; we are still midway on the journey to the cloud. This post is also a retrospective on the issues discovered this year while migrating traditional technology stacks to the cloud.
(This post is still in progress)
A journey of carrying .NET experience over to Spring Boot: build a demo app from scratch, deploy it to Kubernetes, and explain the technical points along the way, together with cloud native practice notes.
After a few weeks of setting up and working with Python 3 on my MacBook Pro, `brew update` failed with an `aws: command not found` error.
> brew update
The fix is straightforward. Since the AWS CLI is not found, a step was missed when migrating the Mac development environment from Python 2 to Python 3: the AWS CLI was not properly reinstalled for Python 3.
My Python environment is managed via pyenv. When a new Python version is installed, globally installed packages are not carried over (they are not tracked in a requirements.txt), so re-enabling the AWS CLI requires a manual step.
> pip3 install awscli --upgrade
For a new app, or a repo with a near-ideal level of code coverage, the popular approach of checking a coverage-percentage threshold works well. However, for a legacy or low-coverage repo, checking the coverage percentage alone is not enough. This post describes an idea for Node.js repos: diff the coverage JSON between builds with istanbul-diff.
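The idea can be sketched in plain Node.js (the data shape mirrors istanbul's coverage summary; the file names and tolerance below are hypothetical, and the istanbul-diff npm package provides a more complete version of this comparison): compare two coverage summaries and flag any file whose line coverage dropped, instead of checking only the overall percentage.

```javascript
// Minimal sketch: detect per-file line-coverage regressions between two
// istanbul coverage summaries (shape: { "<file>": { lines: { pct } } }).
// File names and tolerance are hypothetical.
function coverageRegressions(before, after, tolerance = 0) {
  const regressions = [];
  for (const [file, beforeMetrics] of Object.entries(before)) {
    const afterMetrics = after[file];
    if (!afterMetrics) continue; // file removed; nothing to compare
    const delta = afterMetrics.lines.pct - beforeMetrics.lines.pct;
    if (delta < -tolerance) {
      regressions.push({ file, delta });
    }
  }
  return regressions;
}

// Example with inline data instead of reading coverage-summary.json:
const before = { 'src/a.js': { lines: { pct: 80 } }, 'src/b.js': { lines: { pct: 50 } } };
const after  = { 'src/a.js': { lines: { pct: 75 } }, 'src/b.js': { lines: { pct: 55 } } };
console.log(coverageRegressions(before, after));
// → [ { file: 'src/a.js', delta: -5 } ]
```

In a CI job, a non-empty result would fail the build: this catches a file losing coverage even when the repo-wide percentage happens to rise.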
Cheers! Completed the Deeplearning.ai course Convolutional Neural Networks in TensorFlow.
Following the roadmap, this is the 4th certificate earned on Coursera.org along the Machine Learning path.
Hurray! Completed the Deeplearning.ai course Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning and earned the certificate on Coursera!
After upgrading Hexo and its dependencies in the local repo's package.json, regenerating the GitHub Pages site and pushing to the remote repo caused the custom domain to start responding with 404.
According to GitHub's documentation, a custom domain is configured by adding a CNAME file with one domain per line. If a user manually configures the domain on the GitHub settings tab, GitHub creates the CNAME file automatically. However, that manually created CNAME file will be wiped on the next deploy if Hexo is not configured correctly.
Searching the Hexo documentation, the place to keep this CNAME file is not the local repo root folder but the root folder of the Hexo theme. In my case, that is ./themes/next-wuxubj-5.0.2/. If your Hexo site uses another theme, substitute the corresponding folder name. This way, the CNAME file is preserved across deployments.
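For reference, the CNAME file itself is just a plain-text list of custom domains, one per line (the domain below is a placeholder, not the one from this site):

```
blog.example.com
```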
This post is a quick note on how to build a Docker image for OpenShift/Kubernetes that allows debugging a .NET Core app with LLDB live inside the container environment.
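As a hedged sketch of such a debug image (the base image tag and paths are assumptions, not from the post), lldb and the SOS managed-debugging extension can be layered on top of the .NET Core SDK image:

```dockerfile
# Hypothetical debug image: .NET Core SDK plus lldb with the SOS plugin.
FROM mcr.microsoft.com/dotnet/core/sdk:2.2
RUN apt-get update \
    && apt-get install -y --no-install-recommends lldb \
    && rm -rf /var/lib/apt/lists/*
# dotnet-sos wires the SOS extension into lldb for managed stack inspection
RUN dotnet tool install --global dotnet-sos \
    && /root/.dotnet/tools/dotnet-sos install
```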
While participating in a project to migrate a web service and its full pipeline to OpenShift, it was worthwhile to continue AWS study to the Professional level and compare the AWS hybrid solution with the on-premise PaaS on OpenShift.
These two certificates were achieved during the above project.