DevOps in Practice – Architecting a CI/CD pipeline in Red Hat OpenShift Online

A few months ago, I had the opportunity to provide guidance to a small agile development team getting started with Red Hat OpenShift Online.

The team needed to quickly stand up a few environments (development, test, staging, production) and set up a CI/CD pipeline (Git, Jenkins, SonarQube, Docker, Postman, and Kubernetes) to deliver a six-week MVP (minimum viable product): a microservices-based application built on container orchestration technology.

In a nutshell, it all boiled down to three keywords: IoT, Microservices, and Kubernetes.

In a previous post, Getting Started with Red Hat OpenShift, I provided an overview of OpenShift and Kubernetes and showed an example of an application built with a microservices architecture.

In this post, we’re going to dive deeper and focus on how a development team can quickly set up and use a continuous integration / continuous delivery (CI/CD) pipeline to automatically build, deploy, and test microservices across a set of environments (think Kubernetes namespaces) as the solution evolves and is promoted from one stage to the next (from development to integration and finally to production).

Screen Shot 2019-08-02 at 8.46.40 AM

So let’s get started with a quick review of microservices

Containerized Microservices

A microservices architecture consists of a collection of small, autonomous services. Each service is self-contained and should implement a single business capability.

In a microservices architecture, services are small, independent, and loosely coupled. They are also:

  • small enough that a single small team of developers can write and maintain them
  • deployable independently, without rebuilding and redeploying the entire application
  • able to communicate with each other using well-defined APIs, keeping their internal implementation details hidden from other services

Here is the example of a containerized microservice-based application composed of several microservices that we shared in our earlier blog.

Screen Shot 2019-05-16 at 6.11.07 PM

In this case, the application has been divided into several independent microservices. In addition, each microservice has:

  • a business focus, or a specific problem it is trying to solve (such as account profile, order, or shipping)
  • its own development team working on it
  • its own life and release cycle
  • independence from the other services, with few if any runtime dependencies on them
  • only a small shared ecosystem to depend on (such as Zuul, AAA, logging, etc.)

While many people know the benefits of using microservices, such as shorter development time, decentralized governance, and independent releases, these same characteristics introduce challenges with versioning, testing, and configuration control.

To overcome several of these challenges, we decided to use Red Hat OpenShift Online because it:

  1. Made it easy to get started with container-based development
  2. Is cloud-based, so it supported our geographically distributed development teams
  3. Provides an extensive set of container templates that we could leverage

Red Hat OpenShift Online

Red Hat OpenShift is an enterprise-grade container platform that can be run on-premises or in the cloud. OpenShift® Online is Red Hat’s public cloud container platform, providing on-demand access to OpenShift to build, deploy, and manage scalable containerized applications.

openshift_logical_architecture_overview

It is a self-service environment that lets you use the languages and tools you want, and it comes with a set of pre-created container images and templates that allow you to build and deploy your favorite application runtimes, frameworks, databases, and more in one click.

We decided to use Red Hat OpenShift Online for a few key reasons:

  1. Easy to get started
  2. Support for team-based development
  3. An extensive set of container templates that we could leverage

Pipeline for a Single Microservice

From a team standpoint, the pipeline must let them quickly build, test, and deploy their microservice without disrupting other teams or destabilizing the application as a whole.

Here is a typical high-level workflow that many small development teams use to promote their work from one namespace (i.e., project in OpenShift) to another.

Screen Shot 2019-08-01 at 11.01.10 PM.png — Source: cicd-for-containerised-microservices

The design principles used for building the pipeline are as follows:

  1. Each development team has its own build pipeline that they can use to build, test and deploy their services independently.
  2. Code changes committed to the “develop” branch are automatically built and deployed to a production-like namespace (or project in OpenShift)
  3. Quality gates are used to enforce pipeline quality.
  4. A new version of the microservice can be deployed side by side with the previous version

Builds

By default, OpenShift provides support for Docker builds, Source-to-Image (S2I) builds, and Custom builds. Using these strategies, developers can quickly produce runnable images. The relationship between containers, images, and registries is depicted in the following diagram:

Screen Shot 2019-08-02 at 8.33.07 AM.png
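For teams getting started, the S2I strategy is usually the quickest path from source code to a runnable image. Purely as an illustration (not necessarily the exact commands the team used), here is a minimal Python helper that drives the oc CLI; the Git URL, application name, and “dev” project are hypothetical placeholders.

    import subprocess

    def run(cmd):
        """Run an oc command and fail loudly if it returns a non-zero exit code."""
        subprocess.run(cmd, check=True)

    # First time: create the application from source; OpenShift selects an S2I
    # builder image based on the contents of the repository.
    run(["oc", "new-app", "https://github.com/example/account-profile.git",
         "--name=account-profile", "-n", "dev"])

    # Later (e.g., from a CI job): re-run the build from the existing BuildConfig.
    run(["oc", "start-build", "account-profile", "-n", "dev", "--follow"])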

Jenkins Pipeline

In addition, OpenShift has extensive support for Jenkins and provides an out-of-the-box containerized Jenkins template that you can quickly deploy.

OpenShift’s Pipeline build strategy executes pipelines using the Jenkins Pipeline plugin. Pipeline workflows are defined in a Jenkinsfile, either embedded directly in the build configuration or supplied in a Git repository and referenced by the build configuration.

Here is a typical pipeline for a single containerized microservice with no dependencies, relying on a simple versioning strategy (the version is taken from the build descriptor, such as a pom.xml or package.json).

Screen Shot 2019-08-02 at 8.13.11 AM.png

Here are the detailed steps:

  1. The developer makes code changes and commits them to the local Git repo.
  2. Code changes are pushed to the “develop” branch, where a trigger has been set up to kick off an automated build.
  3. Unit and component tests are run during the build procedure.
  4. Static code analysis is also performed so the source code quality gate can be passed.
  5. Build artifacts are published to an artifact repository, such as Nexus or Artifactory.
  6. The build image is pushed to the image repo.
  7. Jenkins deploys the image to the “Dev” namespace (called a “Project” in OpenShift), where any automated test cases are kicked off. Developers can also perform manual/ad-hoc testing (teams often use tools such as Postman).
  8. If the test cases pass, the image is tagged and Jenkins promotes it (deploys the tagged image) to the Integration project (i.e., namespace) for integration testing with the other microservices; a small promotion script along these lines is sketched after this list.
  9. If the integration tests pass, the image is tagged again and published to a production image repository for deployment to staging or production.
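Step 8 is often just an image re-tag: the image that passed testing in one project is tagged into the next project’s image stream so its deployment can roll out. Here is a minimal sketch of such a promotion step, with hypothetical project and image stream names (in practice this logic usually lives in the Jenkinsfile itself).

    import subprocess

    def promote(image_stream, version, src_project, dst_project):
        """Re-tag a tested image so the destination project can deploy it."""
        source = f"{src_project}/{image_stream}:{version}"
        target = f"{dst_project}/{image_stream}:{version}"
        # `oc tag` copies the image stream tag; the deployment configured in the
        # destination project can then roll out the newly tagged image.
        subprocess.run(["oc", "tag", source, target], check=True)

    # Example: promote the image that passed testing in "dev" to the "test" project.
    promote("account-profile", "1.2.0", "dev", "test")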

SonarQube

In an earlier post, we discussed how SonarQube scans are often included as part of the CI/CD pipeline to ensure quality by failing individual Jenkins jobs that don’t pass the Quality Gates set by the project.

Screen Shot 2019-06-07 at 5.28.57 PM

SonarQube provides built-in integrations for Maven, MSBuild, Gradle, Ant, and Makefiles.  Using these tools, it is quite easy to integrate SonarQube into your CI pipeline. For example, for Maven you can use the Maven Sonar Plugin.
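If you want the Jenkins job itself to fail when the gate is red, one common pattern is to query SonarQube’s web API after the analysis finishes. Here is a minimal sketch using the Python requests library; the server URL, token, and project key are hypothetical placeholders.

    import sys
    import requests

    SONAR_URL = "https://sonar.example.com"       # hypothetical SonarQube server
    PROJECT_KEY = "com.example:account-profile"   # hypothetical project key
    TOKEN = "my-sonar-token"                      # a user token with browse permission

    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(TOKEN, ""),  # SonarQube accepts a token as the username with an empty password
    )
    resp.raise_for_status()

    status = resp.json()["projectStatus"]["status"]  # "OK", "WARN", or "ERROR"
    print(f"Quality gate status: {status}")
    if status != "OK":
        sys.exit(1)  # a non-zero exit fails the Jenkins job, stopping the pipeline here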

After CI completes, you have a new build artifact – a Docker image – which is pushed to the image repo. Note that many of the container images available on Docker Hub won’t run on OpenShift, which has stricter security policies; for example, OpenShift won’t run containers as root by default.

Deployment to OpenShift Cluster

Most agile teams still must promote their application code through a series of SDLC environments (think Kubernetes namespaces) to build and test code, validate code quality and performance, and give users a chance to perform acceptance testing.

In OpenShift these environments are modeled using “projects” (or in Kubernetes terminology “namespaces”).

Jenkins uses the Kubernetes configuration files to decide how to deploy the application to the desired environment (Kubernetes cluster), and may also need to modify other Kubernetes resources, such as config maps, secrets, volumes, and policies.
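Jenkins typically applies these files with oc apply or a Kubernetes plugin. Purely as an illustration of the same idea, here is a sketch using the official Kubernetes Python client to point a deployment at a newly built image; the deployment name, namespace, and registry path are hypothetical.

    from kubernetes import client, config

    # Load credentials the same way kubectl/oc does (from ~/.kube/config).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Point the "account-profile" deployment in the "dev" namespace at a new image tag.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "account-profile",
                         "image": "image-registry.example.com/dev/account-profile:1.2.0"}
                    ]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name="account-profile", namespace="dev", body=patch)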

In this case, we set up the following OpenShift projects

  • Dev – OpenShift Online project supporting build, unit test, deployment to dev, and functional testing
  • Test – OpenShift Online project supporting integration, functional, and performance testing
  • QA / UAT – OpenShift Online project supporting user acceptance testing

When running the application in production, we often need to consider factors such as scalability, security, and privacy. In this case, we decided to set up a dedicated OpenShift Container Platform (OCP) cluster in a public cloud.

Screen Shot 2019-08-01 at 11.59.14 PM.png — Source: OpenShift

Conclusion

In conclusion, while a microservices architecture offers teams independence, separate CI/CD pipelines, and excellent scalability, each new version must still be deployed and tested, both individually and as part of the entire application or a larger business capability.

Care must be taken when designing the CI/CD pipelines to take into consideration the needs of development, test, and operations (essentially DevOps), or you run the risk of missing out on the value of microservices and Kubernetes.

For additional details, check out the related posts below.

 

DevOps in Practice – Code Analysis with SonarQube

Last modified: June 12, 2019.                                                         

by Reedy Feggins,   IBM Cloud Architect, DevOps SME



Introduction

In this article, we’re going to look at how to quickly configure a CI/CD pipeline to use static source code analysis with SonarQube. Using a simple Java project, we will walk through the steps for:

  • Setting up a New SonarQube Project
  • Running Analysis of a Simple Java Project
  • Using Custom Profiles
  • Using Quality Gates to define fitness criteria for the sample project for production release.

Providing a set of at-a-glance dashboards, SonarQube helps teams quickly assess their code coverage and security risks as well as the overall Releasability (i.e., Quality Gates).

This article does not cover the installation and setup of Jenkins or the other CI/CD tools. There are several good resources covering those topics; for additional information on installing and using SonarQube, check the official documentation.

What is SonarQube

To get started, let’s quickly provide an overview of SonarQube and how it fits into a typical CI/CD pipeline, using a Jenkins server in this case.

  • SonarQube is a static analysis and continuous inspection code quality tool that supports 25+ languages.
  • Jenkins is a continuous integration / continuous deployment (CI/CD) automation server that’s used for build pipelines and deployments.
  • Docker is a virtualization solution that makes it easier to package pre-configured applications that can be deployed in other places.

The SonarQube platform can be grouped into the following components:

SonarQubePlatform.png

Content Source: 

The SonarQube Server runs the Web Server, Search Server, and Compute Engine Server. The Web Server is used by developers and managers to browse quality snapshots and configure the SonarQube instance.

The SonarQube Database stores the configuration of the SonarQube instance (security, plugin settings, etc.) as well as the quality snapshots for each project.

SonarQube Plugins extend SonarQube with such things as additional language support, integrations (such as SCM), authentication, and governance.

SonarScanners are separate client executables that perform the analysis (for example, the SonarScanner for Maven). Teams run one or more SonarScanners on their build / continuous integration servers to analyze projects.

 

Where SonarQube fits in a typical CI/CD pipeline

SonarQube scans often are included as part of the CI/CD pipeline as one of the components of the build stage.  Here is a typical CI/CD pipeline

Screen Shot 2019-06-07 at 5.28.57 PM

SonarQube provides built-in integrations for Maven, MSBuild, Gradle, Ant, and Makefiles.  Using these tools, it is quite easy to integrate SonarQube into your CI pipeline. For example, for Maven you can use the Maven Sonar Plugin.
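For instance, a Jenkins stage (or any small wrapper script) can run the scanner as part of the Maven build. Here is a minimal sketch, with a hypothetical server URL and token; in Jenkins these values normally come from credential bindings.

    import subprocess

    # Run the SonarScanner for Maven as part of the build; the analysis results
    # are pushed to the SonarQube server identified by sonar.host.url.
    subprocess.run(
        [
            "mvn", "clean", "verify", "sonar:sonar",
            "-Dsonar.host.url=https://sonar.example.com",
            "-Dsonar.login=my-sonar-token",
        ],
        check=True,
    )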

 

How SonarQube Analysis Works

SonarQube scans are often used to find common code issues as early as possible, since a poorly written codebase is always more expensive to maintain. This approach helps ensure quality, reliability, and maintainability over the life span of the project.

The process for continuously collecting code quality metrics requires two components:

  • the SonarScanner, the client component, which performs the analysis and pushes the results to the SonarQube server, and
  • the SonarQube server, which is responsible for persisting the analysis results and providing access to them, including historical results.

A typical workflow looks like this:

Screen Shot 2019-06-07 at 6.00.39 PM.png

Content Source: Continuous Code Quality Analysis with SonarQube

SonarQube uses the concept of a “project” to group the results of each scan. The results of a scan are reported as Bugs, Vulnerabilities, Code Smells, Coverage, and Duplications.

During analysis, data is requested from the server, the files provided to the analysis are analyzed, and the resulting data is sent back to the server at the end in the form of a report, which is then analyzed asynchronously server-side.

Each category has a corresponding number of issues or a percentage value.  Here is an example of a SonarQube scan.

Screen Shot 2019-06-07 at 5.32.11 PM.png

Moreover, issues can have one of five different severity levels:

  • Blocker
  • Critical
  • Major
  • Minor
  • Info

Just in front of the project name is an icon that displays the Quality Gate status – passed (green) or failed (red).

Also, clicking on the project name will take us to a dedicated dashboard where we can explore issues particular to the project in greater detail

 

Quality Gates

In SonarQube, a “Quality Gate” is a set of conditions the project must meet before it can be identified as ready for a production release. SonarQube offers a central place to view and define the rules used during the analysis of projects. These rulesets are organized into quality profiles, which every team member can see and which are administered by the project administrator.

Quality Gates help teams set up rules for validating the codebase, as well as every new line of code added to it during subsequent analyses. The objective is to prevent new issues from being introduced over time.

Screen Shot 2019-06-07 at 6.21.41 PM.png

SonarQube provides a set of default conditions that can be used as a starting point. For example, the gate fails if:

  • the coverage on new code is less than 80%
  • the percentage of duplicated lines on new code is greater than 3%
  • the maintainability, reliability, or security rating is worse than A

Next Steps

In part 2 of this blog, we will walk through the setup and scanning of a simple Java project using SonarQube, Jenkins, and Maven.

Additional Resources

  1. Nikolaus Huber, Feb 21, 2018, “Continuous Code Quality Analysis with SonarQube”, retrieved May 25, 2019, https://medium.com/@niko.huber/continuous-code-quality-analysis-with-sonarqube-6a9146912b7d
  2. SonarQube Documentation, https://docs.sonarqube.org/latest/analysis/overview/
  3. Maven – use the SonarScanner for Maven
  4. Thibaut Sautereau, Using SonarQube to Analyze a Java Project, https://medium.com/linagora-engineering/using-sonarqube-to-analyze-a-java-project-abeee15e3779

Model in Days, Not Months: Building AI Microservices – Part 3 – The Power of Design Thinking and the IBM Garage Method

Last modified: June 20, 2019.                                                         

by Reedy Feggins,   IBM Cloud Architect, DevOps SME



  • In Part 1 we covered an introduction to AI Microservices and how the guidance would need to be extended for microservices with significant AI/machine learning (ML) capabilities.
  • In Part 2, we provided the reader with a deep dive into ML and how ML algorithms are often organized (e.g., Supervised ML, Unsupervised ML, and Reinforcement Learning algorithms).


In this third installment of a multi-part post, we will discuss using Design Thinking techniques and the IBM Garage Method to identify the right business opportunities for AI/ML capabilities, helping to ensure the initiative can make an impact on ROI, net revenue, cost reduction, or quality.

The core challenge with building AI solutions?

Many organizations have moved quickly to incorporate AI capabilities into their existing products, or to build new AI services, only to find out that they didn’t achieve the business goals they expected. As often as not, the initiative was driven from a technology perspective and not a business one.

While AI/ML services can often solve complex business concerns, in some cases the actual underlying problems, such as issues with data analytics, operations, or logistics, may be better solved with another approach. The essential challenge with AI is that there isn’t currently a universally accepted approach for quickly implementing these solutions, nor a standard approach for handling the data pipeline required to make these applications useful.

To usher in this next wave of digital innovation, C-Suite executives will need to apply design thinking methods to create the cross-functional coordination and mid-manager sponsorship required for enterprise adoption.

This is why it’s important to think about it from a design perspective.

This is where adopting Design Thinking and other related practices found in the IBM Garage Method can help you incrementally deliver business outcomes and innovative user experiences more often and more repeatably than your current approach.

Here is a brief overview along with some key terms:

What is IBM Garage Method

The IBM Garage methodology seamlessly combines industry best practices from Lean Startup, XP, and DevOps into a layered experience used to drive transformational change by incrementally improving your process, resource/team skills, and tools/technology.

IBM Garage Method

Source  IBM Garage Envision Practice

The IBM Garage method promotes an iterative approach where the team focuses on delivering the highest priority items, organized into a minimum viable product (MVP), often in 4 – 6 weeks.

What’s the smallest thing you can build that will provide the greatest impact in the shortest amount of time?

That thing is the minimum viable product (MVP), an outcome to show quick success when your objective is to test a business hypothesis. The Garage helps you figure out the MVP within your business objectives. Whether or not you accomplish success right away, the next step is an adjustment or evolution to achieve — or surpass — that ultimate outcome. It’s all about fail fast, learn fast.

Source What is the IBM Cloud Garage

While we won’t provide in-depth coverage of the ML algorithms, the IBM Garage Method, or Enterprise Design Thinking, here are several good references if you are interested in learning more:

IBM Garage Method

Machine Learning and Data Analytics

Next, let us provide a quick overview of design thinking

Design Thinking

Embarking on AI Microservice projects requires a clear understanding of how the initiative is going to be introduced, maintained, and further scaled.

Design thinking is defined as human-centric design that builds upon the deep understanding of our users (e.g., their tendencies, propensities, inclinations, behaviors) to generate ideas, build prototypes, share what you’ve made, embrace the art of failure (i.e., fail fast but learn faster) and eventually put your innovative solution out into the world.

The foundational elements of IBM’s approach are

  • A focus on user outcomes
  • Restless reinvention
  • Diverse empowered teams
  • A process of rapid iteration (Observe, Reflect and Make)

Enterprise Design Thinking combines three new core practices (hills, playbacks, and sponsor users) with traditional design techniques such as personas, empathy maps, to-be scenarios, and minimum viable products (MVPs). Here is a diagram describing the IBM approach to Design Thinking.

IBM Garage Method

Content source: IBM design thinking model elements

Here are some key definitions:

Hills

A “Hill” is aligned to a set of business, user, or market-driven outcomes. Hills are not features; rather, they express an aspirational goal or end state that the users desire.

Hills help to define the “release” scope and serve to focus the project activities (data analysis, design, and development) on desired, measurable outcomes. For each project, define no more than three major release hill objectives plus a technical foundation objective.

Screen Shot 2019-06-19 at 8.48.04 PM

Source: How Spotify Builds Products.

Playbacks

Everything done in the project uses an iterative approach: from the data analysis, to design, to development, to the deployment of the ML model or AI microservice to the production environment.

Playbacks are frequent scheduled demos/checkpoints used to align your team, the stakeholders (e.g. Product Manager, Business, Data Scientist), and end-users with the value completed so far. 

Early playbacks align the team and ensure that it understands how to achieve a hill’s specific user objectives (e.g., which user AI stories will be modeled, which ML algorithms will be used, and which data science tools will be selected, such as IBM Watson Studio, Jupyter Notebooks, scikit-learn, pandas, TensorFlow, or Keras, to name a few).

In later playbacks, the development team demonstrates its progress on delivering high-value, end-to-end scenarios.

Sponsor users

Sponsor users are people who are selected from your real or intended user group. By working with sponsor users, you can better design experiences for real target users, rather than imagined needs.

If at all possible, engage sponsor users when you create your personas, and continue to include them throughout the entire design and development process. Collaboration between sponsor users and your team ensures that your product is valuable, effortless, and enjoyable.

MVP (Minimum Viable Product)

In Enterprise Design Thinking, MVPs are closely aligned with a set of hills. An MVP is the smallest thing that can be built and delivered quickly to test one of your hypotheses and help you learn and evaluate your effort. Teams often define their MVP statements and their hills in parallel.

Personas

Personas are created to represent your target audience. They are developed from a deep understanding of the people you intend to help with your product, and they are used to drive the solution outcomes.

As you work toward your solution, return to the personas to ensure that what you are building is going to excite them and make them say “Wow.”

Empathy maps

Empathy maps are used to help sponsors and the squad better understand the motivations driving your personas. After you define one or more personas, get to know them at a deeper level: capture what they think, what they feel, what they say, and what they do. By doing so, you’ll begin to develop empathy for each persona. You’ll use an empathy map to identify their major pain points.

Design ideation and prioritization: 

After you create a persona, an empathy map, and possibly an as-is scenario map, you’ll understand your target audience and the problems that it faces.  During design ideation, brainstorm and generate as many ideas as possible. 

This is often where available data sources, ML algorithms, and tools are identified

Generate as many ideas as possible, regardless of whether you know how to implement them. Then, organize those ideas into clusters and decide which clusters have the greatest promise.

For more information check one or more of the following links

 

Integrating Design Thinking with Machine Learning

Integrating design thinking and machine learning disciplines has allowed the IBM Cloud Garage to help our customers to better understand: 

  1. The business areas most impacted through the adoption of deep learning, machine learning or other AI capabilities
  2. How design thinking techniques and tools can help create a more compelling user experience with a “delightful” user engagement
  3. The specific user scenarios to be implemented based on the superior insights into your customers’ usage objectives, operating environment and impediments to success.
  4. Which ML algorithms should be used to uncover new monetization opportunities, optimize key operational processes, reduce security and/or regulatory risk

In a recent article, Tapan Vora stressed the importance of approaching the subject area of AI from the philosophy of design thinking. 

We don’t need to restrict ourselves to thinking about technology as a sole-domain of innovation.  We can go from empathizing to prototyping much faster if we leverage design thinking in
– Tapan Vora, Posted on February 21, 2019

Here is a reference to the process shared by Tapan Vora below.

Content Source:    Design Thinking and AI – “Un” Complexing Automation

While design thinking focuses on the discovery of unmet user needs in the context of a business-driven scenario (such as purchasing a product, interacting with a customer agent, or a supply chain), there is a great deal of synergy with the approach many teams take for a potential ML modeling project (analyzing, synthesizing, ideating, tuning, and validating).

The key is to take a human-centered approach to evaluating the potential value as well as the effort/cost of introducing the AI/ML. In some cases, while the ML may be easy to introduce, it may not make financial or operational sense to implement it (for example, there may not be enough data, or the data pipeline could be hard to build and maintain).

Summary

Embarking on AI Microservice projects requires a clear understanding of how the initiative is going to be introduced, maintained, and further scaled. Here are some key points to remember:

  • It’s important to get these working parts in-sync before thinking about launching your next design-thinking program.
  • Design thinking is all about looking at the big picture by empathizing with the end consumer. For AI teams, Design Thinking can be a crucial tool to help them think about design from the customer’s perspective.
  • It is critical to have a clear understanding of where the maximum value of the AI/ML capabilities lies and to align it with your personas to ensure you consider the customer’s point of view.

To ensure proper design guidance for an AI microservice, it’s best to think about AI from a design-thinking perspective.

Finally, for additional information, see the blog “What tomorrow’s business leaders need to know about Machine Learning?” for more about machine learning.

In Part 4 we will walk through a case study and look at existing code patterns to help accelerate your AI microservice creation.

 

Model in Days, Not Months: Building AI Microservices using Cloud-Native approach – Part 2: Machine Learning

Machine Learning: An Introduction

With the rapid growth of big data and the availability of programming tools like Python and R, machine learning is gaining a mainstream presence among data scientists. In this second post we will explore some of the commonly used Machine Learning (ML) algorithms.

What is Machine Learning?

ML is a powerful AI technique used to help cognitive systems learn and engage with the real world. ML is a branch of artificial intelligence that includes methods, or algorithms, for automatically creating models from data. Unlike a system that performs a task by following explicit rules, a machine learning system learns from experience.

ML uses specialized algorithms that learn from the data they process and analyze, without having to be explicitly programmed where to look. These learning algorithms fail over and over again, learning from those mistakes until they achieve their goal. Whereas a rule-based system will perform a task the same way every time (for better or worse), the performance of a machine learning system can be improved through training, by exposing the algorithm to more data.

Different ML algorithms are used to solve different types of problems. For example, Yelp leverages machine learning to improve the user experience, while Twitter uses ML algorithms that evaluate each tweet in real time and “score” it to determine which tweets are likely to drive the most engagement.

Machine Learning Algorithms

Broadly, ML algorithms can be classified into three categories: Supervised ML, Unsupervised ML, and Reinforcement Learning algorithms (the first two categories are illustrated in the short sketch after the list below):

  1. Supervised Machine Learning Algorithms can make predictions for a given set of samples. These algorithms search for patterns within the value labels assigned to data points.
  2. Unsupervised Machine Learning Algorithms organize the data into groups of clusters to better describe its structure, making complex data look simple and organized for analysis. Unsupervised algorithms don’t use labels associated with data points.
  3. Reinforcement Learning Algorithms rely on the ability of an agent to interact with its environment and discover which actions lead to the best outcome. Using a trial-and-error approach, the agent is rewarded or penalized (with points) for a correct or wrong answer in order to train the model. Once trained, the model is used to make predictions on new data presented to it.
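As a short illustration (not from the original post) of the first two categories, here is a sketch using scikit-learn and its built-in Iris dataset:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = load_iris(return_X_y=True)

    # Supervised: learn from labeled examples, then predict labels for new samples.
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print("Predicted class:", clf.predict(X[:1]))

    # Unsupervised: no labels are used; the algorithm groups the data into clusters.
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    print("Cluster assignments:", km.labels_[:10])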

 

machine-learning-abdul-rahid

Image via Abdul Rahid

Machine Learning Algorithms Every Engineer Should Know

To address the complex nature of various real-world data problems, specialized machine learning algorithms are needed. Here is a list of commonly used machine learning algorithms that can be applied to almost any data problem:

  1. Linear Regression
  2. Logistic Regression
  3. Decision Tree
  4. SVM
  5. Naive Bayes
  6. kNN
  7. K-Means
  8. Random Forest
  9. Dimensionality Reduction Algorithms
  10. Gradient Boosting algorithms such as
    • GBM
    • XGBoost
    • LightGBM
    • CatBoost

ML-Algorthms.PNG

Content Source: What is Machine Learning

While picking the right algorithm is extremely important, here is a short list of other important tasks that skilled ML data scientists must often do:

Selecting a model before building the model

Skilled ML analysts often compare one or more models and set up some sort of formal bake-off. It’s often easy to implement models that they (a) have heard of and (b) have a library for, but care should be taken to fit the model to the problem and/or the available data sources.

Model checking / goodness-of-fit assessment

  • How well does your model describe the variation in the data?  
  • What are the reasons for the failures of fit?
  • Would a different model help? 
  • Are there significant outliers?

On-going model validation.

  • Does your model get the right answers?
  • Does it seem to get the right answer for the right reasons?

This often means checking for over-fitting, but it can include a lot of other things, especially if you care about inference. Also, if your model is performing ongoing prediction, then you need to perform ongoing validation: you can’t assume that a model that was good a year ago is still valid today.
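Here is a minimal sketch of one such check with scikit-learn: compare training accuracy against cross-validated accuracy, since a large gap between the two is a classic sign of over-fitting (the dataset and model below are only illustrative).

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    model = RandomForestClassifier(random_state=0).fit(X, y)
    train_acc = model.score(X, y)
    cv_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()

    print(f"Training accuracy:        {train_acc:.3f}")
    print(f"Cross-validated accuracy: {cv_acc:.3f}")
    # If training accuracy is much higher than cross-validated accuracy,
    # the model is likely over-fitting and needs simplification or more data.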

Performing valid inference.

  • How are you interpreting the meaning of the model, a.k.a. the storytelling part.
  • Usually the problem here is the tendency to use the model and data as a creative storytelling process instead of crafting a well-validated narrative.

Building real-time data pipelines

Building real-time data pipelines requires infrastructure and technologies that can accommodate ultrafast data capture and processing; a minimal ingest sketch follows the list below.

Real-time technologies share the following characteristics:

  1. data storage for high-speed ingest,
  2. distributed architecture for horizontal scalability, and
  3. queryability for real-time, interactive data exploration.
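Purely as an illustration of the high-speed ingest side, here is a few-line sketch using the kafka-python client; the broker address and topic name are hypothetical.

    from kafka import KafkaConsumer  # pip install kafka-python

    # Subscribe to a (hypothetical) sensor topic and process events as they arrive.
    consumer = KafkaConsumer(
        "sensor-readings",
        bootstrap_servers="kafka.example.com:9092",
        auto_offset_reset="latest",
    )
    for message in consumer:
        # In a real pipeline this handler would write into a store built for
        # high-speed ingest and interactive, real-time querying.
        print(message.value)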

Additional Resources

Finally, given all the readily available sources, it’s impractical to dive into great detail on each of the different ML algorithms. Here are some resources that I have found useful for you to review:

  1. What is Machine Learning
  2. A Tour of Machine Learning Algorithms
  3. Essentials of Machine Learning Algorithms (with Python and R Codes)
  4. Top 10 Machine Learning Algorithms

Getting Started with Red Hat OpenShift

Microservices are quickly becoming a mainstay in the cloud-native development space. Using practices such as the Twelve-Factor App methodology, cloud development teams are containerizing their microservices using technologies such as Docker and Kubernetes.

In this blog I will give a brief overview of Kubernetes and Red Hat OpenShift, a container platform that is critical to managing cloud applications.

What is Red Hat OpenShift

Red Hat OpenShift is an enterprise-grade container platform based on the industry standards Docker and Kubernetes. You can build, deploy, and scale on any infrastructure, and OpenShift gives application teams a faster path to production.

Kubernetes has become the de facto standard for managing container-based workloads. In most cases, customers want to use these container-based workloads to leverage resources from different cloud vendors, both to avoid vendor lock-in and to take advantage of any unique features or strengths that these vendors may have.

The Red Hat OpenShift Container Platform (or “OpenShift” for short) builds on top of the Kubernetes orchestrator by providing additional features that make development easier. OpenShift can be run on-premises, in a private cloud, or on a public cloud such as Amazon AWS.

What Is the OpenShift Architecture?

OpenShift has a microservice-based architecture that uses small decoupled components that work together. It is architected to run on top of a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value store.

The services can be broken down by function:

  • REST APIs, which expose each of the core objects.
  • Controllers, which read those APIs, apply changes to other objects, and report status or write back to the object.

openshift_logical_architecture_overview

Figure 1 – OpenShift Architecture Overview (content source)

OpenShift v3 is a layered system designed to expose the underlying Docker and Kubernetes capabilities while providing a simple and easy-to-use environment for application development. Using this layered approach:

  • Docker provides the abstraction for packaging and creating Linux-based, lightweight containers.
  • Kubernetes provides the cluster management and orchestrates Docker containers on multiple hosts.

In addition, OpenShift adds the following capabilities:

  • Source code management, application/service builds, and application/service deployments for developers
  • Managing and promoting images at scale as they flow through your system
  • Application management at scale
  • Team and user tracking for organizing a large developer organization

Logical Kubernetes Architecture

OpenShift enables you to use Docker application containers and the Kubernetes cluster manager to automate the way you create, ship, and run applications. OpenShift’s core technology, including Docker-based containers and Kubernetes, can be run on a variety of platforms, including as a virtual machine with OpenShift installed and configured on your local environment.
Check out the following link for additional details.

kubernetes cluster

Why use microservices

New architectural approaches like microservices are used by development organizations to accelerate the pace of innovation by allowing them to build small, fine-grained business services, like “buy product” or “checkout”.

In addition, these new services are being built to run as cloud-native applications that allow dev teams to apply fine-grained resource optimization and to rapidly scale both the building and the running of their products.

Screen Shot 2019-05-24 at 2.51.10 PM

While there is no “official” standard as to what defines a microservices architecture, here are some common characteristics:

  • A microservices architecture breaks your application from a single process into multiple components that work together to deliver value.
  • Each microservice is designed for a small set of capabilities and focuses on solving a specific problem.
  • Any communication between individual microservices or other components happens via well-defined APIs.
  • Each service in a microservices architecture can be developed, deployed, operated, and scaled without affecting the functioning of other services.

Example Microservice Architecture

Here is an example of an application built using microservices:

Screen Shot 2019-05-16 at 6.11.07 PM.png

Finally one of the biggest misconceptions about Microservices I’ve observed is that many believe microservices are hard to implement because they often require a holistic, self-contained and completely self-sufficient approach.

In reality, microservices are a newer implementation approach for distributed systems, architected and designed according to much broader concepts: Service-Oriented Architecture, Event-Driven Architecture, and Domain-Driven Design.

Looking at microservices in isolation from these architectural styles is a mistake that may lead to costly failures when you try to use these new services in production environments.

In a future blog I will walk you through using OpenShift to develop, test and deploy a sample Microservice application.

Models in Days, Not Months: Building AI Microservices using Cloud-Native approach – Part 1 – Introduction

Developers are incorporating artificial intelligence (AI) – in the form of deep learning, machine learning, and other technologies – along with microservice architectures to meet the growing demand for more intelligent business services.

Microservices architectures are the core development paradigm in the era of cloud-native computing. Going forward, the number of composable cloud-native AI microservices will dramatically increase as better tools are created to compose these features as data-driven microservices. 

Currently, however, to effectively develop AI microservices, developers must factor the underlying application capabilities into modular building blocks that can be deployed into cloud-native environments with minimal binding among resources.

In this series, we look at some approaches to simplify your journey toward implementing a couple of simple cloud-native AI microservices by leveraging technologies such as Red Hat OpenShift, Spring Boot, and Spring Cloud. We will also cover the nuts and bolts of setting up, using, and scaling a fully operational machine learning platform.

For this first blog, we will provide a brief introduction to microservices and will cover such topics as AI, ML, Data Engineering, Kubernetes and Red Hat in future blog posts.



An Introduction

As an IBM Cloud Garage Architect, I have had the opportunity over the last 3 to 4 years to work for a variety of clients looking to quickly implement new innovative services (such as Chatbots, Blockchain, AI) and/or move their existing workloads to the cloud.

More often than not, the initial work is part of a multi-cloud adoption strategy where organizations are looking to gain skills while working with developers, architects, and other SMEs experienced with private clouds (such as IBM Cloud Private or OpenShift) or public clouds (such as IBM Cloud, AWS, Azure, or Google, to name a few).

In most cases, they are also looking for assistance with learning how to successfully

  • split their monolithic applications into smaller chunks (to implement a microservices architecture),
  • implement one or more new services, usually with Microservices,
  • adopt better develop/operations practices (DevOps, CI/CD, Cloud Operations) and/or
  • extend their services using AI or Machine Learning (ML)

What are Microservices?

In general, most applications that have been designed using a microservices architecture are composed of small, autonomous services. These services are generally focused only on a single functionality centered around a business capability (bounded context).

Microservices enable teams to build an application composed of focused services, each with its own code base and state, that can be developed independently by smaller agile teams. Using a microservices approach also provides many benefits for Agile and DevOps teams – as Martin Fowler points out, Netflix, eBay, Amazon, Twitter, PayPal, and other tech stars have all evolved from monolithic to microservices architectures.

Screen Shot 2019-05-16 at 5.49.21 PM.png

What does a typical Microservice Architecture look like?

While there are no real standards that a microservices architecture must conform to, there are several conventions. Each microservice should:

  • have a specific business purpose,
  • be loosely coupled,
  • communicate only through APIs, and
  • be faster to develop and easier to maintain.

The figure below shows a simple application that is built using microservices:

Screen Shot 2019-05-15 at 3.46.58 PM

How Does Microservice Architecture Work?

Before building your own applications using microservices, here is a closer look at the scope and functionality of an application based on microservices.

Guidelines for designing Microservices

  • As a developer, when you decide to build an application, separate the domains and be clear about the functionality each one owns.
  • Each microservice you design should concentrate on only one service of the application.
  • Ensure that you have designed the application in such a way that each service is individually deployable.
  • Make sure that the communication between microservices is done via a stateless server.
  • Each service can be further refactored into smaller services, each with its own microservice.

Building AI Microservices

While AI and ML are being built into cloud-native applications, widely adopted standards and practices for incorporating these algorithms have yet to emerge. Second, microservices architecture is an evolving development philosophy, and it may take a long time before legacy monolithic applications, toolsets, and development habits adopt these practices, such as exposing discretely separated functions through RESTful APIs.

As James Kobielus noted in his article on deep learning (see reference 2),

“for developers working AI projects, the key guidance includes:

  • Factoring AI applications as modular functional primitives. Factoring AI into modular microservices requires decomposition of data-driven algorithmic functions into reusable primitives. For AI, the core primitives consist of algorithms that perform regression, classification, clustering, predictive analysis, feature reduction, pattern recognition, and natural language processing.
  • Using cloud-native approaches to build modular AI microservices. Deploying AI microservices into cloud-native environments requires containerization of the core algorithmic functionality. In addition, this functionality should expose a stateless, event-driven RESTful API so that it can be easily reused, evolved, or replaced without compromising interoperability.”
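As a minimal sketch of that second point, here is what a stateless REST endpoint wrapping a pre-trained model might look like in Python with Flask; the model file name and input format are hypothetical, and a production service would add validation, logging, and health checks.

    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)
    model = joblib.load("model.pkl")  # a previously trained scikit-learn model (hypothetical)

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body such as {"features": [5.1, 3.5, 1.4, 0.2]}
        features = request.get_json()["features"]
        prediction = model.predict([features])[0]
        return jsonify({"prediction": str(prediction)})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Because the endpoint keeps no state between requests, the container can be scaled out or replaced without coordination, which is exactly what makes it a good fit for Kubernetes and OpenShift.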

Additional guidelines for AI Microservice Developers

To effectively develop AI microservices, developers must factor the underlying application capabilities into modular building blocks that can be deployed into cloud-native environments with minimal binding among resources.

Like all development approaches, however, bad application design or poor execution of microservice principles can lead to complex, monolithic, and hard-to-maintain applications.

Here is a summary of the seven (7) AI microservice development guidelines from Jim’s article. 

  1. Break down AI Capabilities into reusable primitives
  2. Build an orchestration graph of AI functional microservice modules
  3. Use high-level AI programming languages to build modular microservice logic
  4. Apply standard AI application patterns to create modular microservices
  5. Reuse AI microservices functionality through a modular subdivision
  6. Link modular AI microservices into multifunctional solutions
  7. Transfer learning from existing AI modules into new microservices of similar domain scope

While we will cover Machine Learning (ML) in more detail in a future blog post, let’s quickly share the two types of ML techniques commonly used.

  • Supervised learning, which trains a model on known input and output data so that it can predict future outputs, and
  • Unsupervised learning, which finds hidden patterns or intrinsic structures in input data.

The goal of supervised machine learning is to build a model that makes predictions based on evidence in the presence of uncertainty, using techniques such as classification or regression, while unsupervised learning finds hidden patterns or intrinsic structures in data, often using clustering techniques; see the diagram below.

Screen Shot 2019-06-01 at 7.52.40 PM

Content Source, Machine Learning in Matlab

Factoring AI into modular microservices requires decomposition of data-driven algorithmic functions into reusable primitives.  For AI, the core primitives typically consist of algorithms that perform

  • regression,
  • classification,
  • clustering,
  • predictive analysis,
  • feature reduction,
  • pattern recognition, and
  • natural language processing.

AI Microservice developers may also need to build an orchestration graph in which these microservices declare other submodules internally and have other modules passed to them at construction time, and share cross-module variables.

For more details, check out Jim’s article.    Jim was formerly IBM’s data science evangelist and is currently the Lead Analyst for Application Development, Deep Learning and Data Science at Wikibon.

Microservice Architecture

A typical Microservice Architecture (MSA) should consist of the following components:

  1. Clients
  2. Identity Providers
  3. API Gateway
  4. Messaging Formats
  5. Databases
  6. Static Content
  7. Management
  8. Service Discovery

Screen Shot 2019-05-16 at 6.11.07 PM

For additional information, check out the following link. In a future blog we will show how this architecture is extended to incorporate the AI microservices and other required components.

1. Clients

The architecture usually must support different types of clients (e.g., desktop, web, Android phone, tablet, iPhone) on different devices trying to perform various operations.

2. Identity Providers

Requests from the clients are passed to identity providers, which authenticate them and forward them to the API Gateway. The requests are then routed to the internal services via the well-defined API Gateway.

3. API Gateway

Since clients don’t call the services directly, API Gateway acts as an entry point for the clients to forward requests to appropriate microservices.

The advantages of using an API gateway include a single entry point for clients, routing of requests to the appropriate microservices, and a natural place to handle cross-cutting concerns such as authentication, rate limiting, and monitoring.

4. Messaging Formats

There are two types of messages through which they communicate:

  • Asynchronous Messages: Where clients do not wait for a response from a service, microservices usually use protocols such as AMQP, STOMP, or MQTT. These protocols are used because the message semantics are well defined and the messages must be interoperable between implementations.
  • Synchronous Messages: Where clients wait for the response from a service, microservices usually use REST (Representational State Transfer), which relies on a stateless, client-server model and the HTTP protocol. REST suits a distributed environment in which each piece of functionality is represented as a resource on which operations are carried out; a minimal example of such a call is sketched after this list.
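Here is a minimal sketch of such a synchronous call: an order service asks a (hypothetical) account-profile service for a customer record and blocks until the response arrives. The service hostname and field names are placeholders.

    import requests

    resp = requests.get(
        "http://account-profile:8080/customers/42",  # service name resolved by Kubernetes DNS
        timeout=2,  # keep the caller from hanging if the downstream service is slow
    )
    resp.raise_for_status()
    customer = resp.json()
    print(customer["name"])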

5. Data Handling

The next question that may come to your mind is how do the applications using Microservices handle their data?

Well, each microservice owns a private database to capture its data and implement the respective business functionality. Also, the databases of microservices are updated through their service APIs only.

The services provided by Microservices are carried forward to any remote service which supports inter-process communication for different technology stacks.

6. Static Content

Static content (such as images and other web assets) is deployed to a cloud-based storage service that can deliver it directly to clients via Content Delivery Networks (CDNs).

7. Management

This component is responsible for balancing the services on nodes and identifying failures.

8. Service Discovery

Service discovery acts as a guide for microservices to find the routes of communication between them, since it maintains a list of services and the nodes on which they are located.

Summary

While microservices solve many of the challenges associated with monolithic systems, they are not a silver bullet. Care must be taken, as by implementing a microservice design your project may be exposed to communication, planning, team coordination, and other problems beyond those seen with the monolithic design approach.

In the next post we will provide a brief introduction to AI and ML, along with a quick overview of the simple project we will use throughout the rest of the series to illustrate how to create a functional AI microservice.

References

  1. Chris Richardson, What are Microservices?, retrieved Jan 21, 2019
  2. James Kobielus, “Building AI Microservices for Cloud Native Deployments”, published May 31, 2017
  3. Deploy AI Models as Microservice,  
  4.  “Microservice Architecture – Learn, Build and Deploy Microservices”, 
  5. Noam Shazeer, et al., “Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer”
  6. Machine Learning in Matlab

Is Blockchain Technology the perfect fit for Supply Chain Management challenges?

Many businesses understand that being able to manage their supply chain is vital to their success. Nevertheless, many of these companies still struggle with their supply chain due to a lack of visibility, the difficulty of obtaining accurate data in real time, and the logistical problems associated with getting the right inventory to where it needs to be when it needs to be there.

To attempt to solve some of these challenges, many innovators are turning to blockchain technology with its smart contracts, immutable ledger and greater transparency. As Chris O’Connor stated in his recent article on “Driving industry advancements with Watson IoT and Blockchain”

“Traditionally, supply chain transactions are completed manually, creating delays and a higher risk for recording error, which can cause differences between what was recorded and what was loaded. By digitizing this process using blockchain and Watson IoT, the relevant information is captured directly from the sensors placed on the trucks, and entered onto the blockchain, creating a single, shared repository that all authorized participants can access and which can only be altered with consensus from all parties.” [1] – Chris Connor

Blockchain technology is all about providing an immutable distributed public general ledger where transactions are recorded and tracked. This makes it much easier to get real-time updates and to see what’s happening every step of the way.

In another article from the Harvard Business Review, Michael J. Casey and Pindar Wong observed that blockchain — an online globally distributed general ledger that keeps track of transactions via online “smart contracts” — will produce “dynamic demand chains in place of rigid supply chains, resulting in more efficient resource use for all.[2]”

As one of my medical device supply chain colleagues observed recently, “Pinpointing issues can be difficult when you have multiple suppliers across multiple states and countries; it can be hard to keep track of everything.”

With blockchain, the members of the network can see what’s going on as it happens. The inherent transparency of blockchain helps keep all those involved accountable for their end of the bargain. It’s a great way to get the whole picture, as well as drill down to individual aspects of the supply chain.

Smart Contracts and Using Blockchain Supply Management

Another reason why blockchain technology is so useful in supply chain management scenarios has to do with smart contracts. With smart contracts, all the interested parties can see the terms of the agreement, and the contracts enforce themselves.

To move forward with accepting changes to the smart contract, certain conditions have to be met. When the world state meets those conditions, the contracts can be fulfilled.

The shared ledger consists of two data structures: the world state, which holds the current values, and the blockchain itself, an immutable log of the transactions that produced that state.

The distributed replication of IBM Blockchain enables the business partners to access and supply IoT data without the need for central control and management. All business partners can verify each transaction, preventing disputes and ensuring each partner is held accountable for their roles in the overall transaction.

Leveraging blockchain for your IoT data opens up new ways of automating business processes among your partners without setting up an expensive centralized IT infrastructure.

Finally as the use of internet of things (IoT) devices and sensors becomes more and more commonplace, tracking the location and status (e.g., fitness, freshness, viability) is becoming easier.   With its new blockchain integration, the IBM Watson IoT platform is enabling IoT devices to send data from these “things” to a private blockchain network where the transaction can be added to the shared ledger with tamper-resistant records.

IoT-WatsonIoT-Blockchain-BusinessNetwork

In an upcoming post, I will dive deeper into how to define, build, and deploy blockchain applications using a combination of blockchain, IBM Watson, and IoT devices to solve some real-world supply chain challenges.

For additional reading check out the following:

  1. Michael J. Casey and Pindar Wong, Global Supply Chains Are About to Get Better, Thanks to Blockchain, https://hbr.org/2017/03/global-supply-chains-are-about-to-get-better-thanks-to-blockchain
  2. Chris O’Connor, “Driving industry advancements with Watson IoT and Blockchain,” written July 19, 2017, https://www.ibm.com/blogs/internet-of-things/iot-blockchain-industry-advancements/
  3. Joe McKendrick, Why Blockchain May Be Your Next Supply Chain, retrieved 10-11-2017, https://www.forbes.com/sites/joemckendrick/2017/04/21/why-blockchain-may-be-your-next-supply-chain/#7162bac713cf

Walmart and 9 Food Giants Team Up on IBM Blockchain Plans

While blockchain technology is fairly new, IBM has already helped several customers achieve success. Many financial organizations have already launched their own blockchain initiatives.

For instance, the diamond insurer Everledger has used blockchain to digitally store the provenance of diamonds to minimize and prevent everything from fraud to conflict stones. There are over 1.2 million diamonds on their blockchain today, which may save insurers up to $50B annually ([1]). However, there are several areas where the use of blockchain is only now being explored.

For example, IBM partnered with Walmart, Nestle, Unilever and other food giants to trace food contamination with blockchain and thereby improve food safety.


 

 

Introducing the IBM Blockchain Network

 

While many financial institutions have explored Bitcoin, the most well-established blockchain implementation, for more than the last couple of years, the use of cryptocurrency has proven limited in scope and scalability.

However, blockchain, the backend technology behind Bitcoin, has shown a lot of promise, and several large companies, including IBM, are throwing their weight behind it. The fundamental units of a blockchain are transactions, in which two parties exchange information. The data is subsequently verified and validated, which includes reviewing whether one party owns the respective rights for the transaction.

Blockchain Peer Network - 1

IBM Blockchain is based on Hyperledger Fabric from the Linux Foundation.  Hyperledger, an open source collaborative effort to advance cross-industry blockchain technologies, is hosted by The Linux Foundation®. IBM provides blockchain solutions and services leveraging Hyperledger technologies, including Hyperledger Fabric and Hyperledger Composer. For more see https://www.ibm.com/blockchain/hyperledger.html

IBM is already applying blockchain in the finance and logistics industries. It is now working to help the food industry improve traceability by giving businesses a shared, trusted store of information about the provenance and destination of ingredients.

To accomplish this, IBM is collaborating with a consortium of food manufacturing and distribution giants, including Nestle, Tyson Foods, Dole, McCormick, Walmart, and Kroger, to identify new uses for blockchain technologies in the supply chain. For more information, check out the article by Peter Sayer, “IBM wants to make blockchain good enough to eat”.

In addition, IBM has recently launched an enterprise blockchain platform as part of its range of cloud services. The IBM Blockchain Platform is currently the only fully integrated enterprise-ready blockchain platform designed to accelerate the development, governance, and operation of a multi-institution business network.

  • Based on the Hyperledger Fabric V1 runtime, optimized for enterprise requirements
  • Specialized compute for security, performance, and resilience
  • Delivered via the IBM Cloud on a global footprint with 24×7 integrated support
  • Full lifecycle tooling to speed activation and management of your network

 IBM Blockchain Network

Businesses that want to roll their own blockchain can access IBM’s array of developer tools, including the Hyperledger Composer framework for mapping business processes to code.

For more details, check out:

  1. The IBM Blockchain Platform, https://www.ibm.com/blockchain/platform/
  2. Peter Sayer, “IBM wants to make blockchain good enough to eat”, Aug 23rd 2017, retrieved 9-23-2017, http://www.cio.in/news/ibm-wants-make-blockchain-good-enough-eat
  3. IBM Blockchain, http://www.techrepublic.com/article/can-ibm-bring-bitcoins-blockchain-technology-to-mainstream-business/
  4. Dr. Thomas Kaltofen, “5 Points how Blockchain will change our lives in a revolutionary way”, retrieved 9-24-2017, https://ict.swisscom.ch/2017/06/5-points-how-blockchain-will-change-our-lives-in-a-revolutionary-way/

Cognitive AI Chatbots are key to winning the future of Customer Service

Reposting a blog I created last year on building chatbots

Last updated March 1, 2019

by Reedy Feggins, IBM Cloud Garage Architect, DevOps SME, SCM


Most brands are being asked to deliver client-facing resolutions as quickly as possible to keep up with demand and fend off the competition. Over the last few years, many brands have started adopting new technologies like AI and chatbots to offer always-on self-service, at scale, more cheaply than ever before.

 


Chatbots and virtual agents are being implemented across multiple industries already, helping customers accomplish a wide range of tasks.

In a recent presentation, Dean Upton, Director of Strategic Consulting at Blueworx, and Jeannette Browning, WW IBM Watson Sales, shared portions of a survey(2) conducted as part of the Global Contact Centre Benchmark that offers several reasons why self-service channels are key to the future of customer service.

Excerpt - Complete Cognitive Contact Center - Why Self-Service Channels are key to future customer service.jpg

While these technologies have helped reduce call center operational costs, the key question remains:

“Is your brand providing fast, effortless, accurate resolutions on the very first contact, regardless of channel?”

In a recent article, Jonathan Young, Program Director, Watson Engagement, referenced a study that found that “more than 62% of customers will consider switching to a competitor after only 1-2 bad experiences with a brand.”(1)

Here are some key points to consider:

  • Given a choice, 70% of customers today prefer messaging over voice for customer support
  • Most customers today expect seamless interactions with brands whenever, wherever and however they want.

So while AI-powered conversational agents can often address 60%–80% of common Tier 1 support questions, the ability to escalate an issue to a human agent is still a necessity.

The goal should be to integrate AI, bots, messaging, and human agents into one intelligent platform, allowing consumers to instantly get answers from AI-powered bots, with human care representatives brought in seamlessly, in real time, whenever a bot cannot resolve an issue satisfactorily.

This approach would also need to incorporate effective DevOps practices so that brands can quickly deploy conversational chatbots and scale them to meet growing customer demand.

Here is an example showing the logical view of an omni-channel cognitive contact center from Jeannette Browning's presentation on AI(2):

Excerpt - Complete Cognitive Contact Center - Architectural view of a cognitive contact center.jpg

In this system, Watson acts as an agent itself, sitting alongside human agents. If Watson can't answer something, the system passes the conversation seamlessly to a human agent.
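
As a rough illustration, here is a minimal Node.js sketch of that hand-off logic, assuming the ibm-watson SDK and a Watson Assistant V2 skill. The environment variables, the 0.5 confidence threshold, and the routeToHumanAgent helper are all illustrative assumptions, not the exact integration described above:

    // escalate.js - pass the conversation to a human agent when Watson's confidence is low (sketch)
    const AssistantV2 = require('ibm-watson/assistant/v2');
    const { IamAuthenticator } = require('ibm-watson/auth');

    const assistant = new AssistantV2({
      version: '2019-02-28',
      authenticator: new IamAuthenticator({ apikey: process.env.ASSISTANT_APIKEY }),
      serviceUrl: process.env.ASSISTANT_URL,
    });

    // Hypothetical hook into your contact-center / agent-desk software
    async function routeToHumanAgent(sessionId, text) {
      console.log(`Escalating session ${sessionId}: "${text}"`);
      return 'One of our representatives will be with you shortly.';
    }

    async function handleUserMessage(sessionId, text) {
      const { result } = await assistant.message({
        assistantId: process.env.ASSISTANT_ID,
        sessionId,
        input: { message_type: 'text', text },
      });

      const topIntent = (result.output.intents || [])[0];
      if (!topIntent || topIntent.confidence < 0.5) {
        // Watson is not confident enough: bring in a human care representative
        return routeToHumanAgent(sessionId, text);
      }
      // Watson is confident: return its answer(s) to the user
      return (result.output.generic || []).map((r) => r.text).join('\n');
    }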

If you are looking at transforming your customer experience, consider investing in AI/cognitive solutions that include IBM Watson API services such as Tone Analyzer, Natural Language Understanding, Conversation (now Watson Assistant), and Speech to Text.

For more information check out IBM Watson API Services. 

Also check out my blog on “Adding cognitive insights to your applications using IBM Watson Tone Analyzer” to improve the effectiveness of cognitive chatbots.
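
For a flavor of what that looks like in code, here is a minimal sketch using the ibm-watson Node SDK's Tone Analyzer client; the environment variables and sample message are placeholders:

    // tone.js - score the emotional tone of a customer message (sketch)
    const ToneAnalyzerV3 = require('ibm-watson/tone-analyzer/v3');
    const { IamAuthenticator } = require('ibm-watson/auth');

    const toneAnalyzer = new ToneAnalyzerV3({
      version: '2017-09-21',
      authenticator: new IamAuthenticator({ apikey: process.env.TONE_APIKEY }),
      serviceUrl: process.env.TONE_URL,
    });

    async function scoreTone(text) {
      const { result } = await toneAnalyzer.tone({
        toneInput: { text },
        contentType: 'application/json',
      });
      // Each detected tone (e.g., anger, sadness, analytical) comes back with a 0-1 score
      return result.document_tone.tones;
    }

    scoreTone('My order still has not arrived and nobody can tell me why.')
      .then((tones) => console.log(tones))
      .catch((err) => console.error(err));

A chatbot could use these scores, for example, to escalate to a human agent sooner when a customer's frustration is rising.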

Resources

1. Jonathan Young, “AI is redefining customer service. Does your call center stack up?”, https://www.ibm.com/blogs/watson/2017/11/ai-is-redefining-customer-service-does-your-call-center-stack-up/

2. Dean Upton and Jeannette Browning, “The Complete Cognitive Contact Center – Creating the ultimate customer experiences”, https://www.slideshare.net/Goblueworx/ibm-watson-and-blueworx-the-complete-cognitive-contact-center

3. Chris Vennard, “The future of call centers and customer service is being shaped by AI”, retrieved 1-13-2018, https://www.ibm.com/blogs/watson/2017/10/the-future-of-call-centers-is-shaped-by-ai/

Multi-part blog on microservices tutorials

This blog focuses on continuous delivery and test automation for the deployment of cloud native and hybrid applications.

Using the IBM Garage approach, we will show you how to implement some of the recommended “good” practices using a variety of cloud environments and deployment topologies.

Outline

  • Prerequisites
  • Run the app locally

Prerequisites

To complete this quickstart tutorial, you need Git and Node.js (which includes npm) installed locally.

Download the sample

In a terminal window, run the following command to clone the sample app repository to your local machine.

   $ git clone https://github.com/

You use this terminal window to run all the commands in this quickstart.

Change to the directory that contains the sample code.

    $ cd nodejs-docs-hello-world

Run the app locally

Run the application locally by opening a terminal window and using the npm start script to launch the built-in Node.js HTTP server.

    $ npm start

Open a web browser, and navigate to the sample app at http://localhost:1337.

You see the Hello World message from the sample app displayed in the page.
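
If you want to see what is behind that message, the sample app is essentially a single-file Node.js HTTP server along these lines (a sketch based on the behavior above; the file name and exact response text are assumptions):

    // index.js - minimal Node.js HTTP server of the kind the sample app uses (sketch)
    const http = require('http');

    // Honor the PORT environment variable when deployed; default to 1337 locally
    const port = process.env.PORT || 1337;

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello World!');
    });

    server.listen(port, () => {
      console.log(`Server running at http://localhost:${port}/`);
    });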

 

DRAFT CONTENT

Has Jenkins become the default Continuous Integration (CI) tool for microservices?

Here are some “good” practices for working with the Jenkins Pipeline (a short Jenkinsfile sketch follows the list):

  1. Do: Use the right set of plugins
  2. Do: Treat your pipeline as code
  3. Do: Divide work into stages
  4. Do: Perform all material work within a node
  5. Do: Perform the work you can within a parallel step
  6. Do: Acquire nodes within parallel steps
  7. Do: Create inputs for automated or manual approvals
  8. Don’t: Use input within a node block
  9. Do: Wrap your inputs in a timeout
  10. Don’t: Set environment variables with the env global variable
  11. Do: Prefer stashing files to archiving
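
To make these concrete, here is a minimal scripted-pipeline Jenkinsfile sketch that applies several of the practices above. The stage names, build commands, stash name, and the oc deployment step are illustrative assumptions, not the exact pipeline from this project:

    // Jenkinsfile (scripted pipeline) - illustrative sketch only
    stage('Build') {
      node {                                        // do all material work within a node
        checkout scm                                // pipeline as code: the Jenkinsfile lives in the repo
        sh 'mvn -B clean package'                   // assumed build command
        stash name: 'app', includes: 'target/**'    // prefer stash to archive for passing files between stages
      }
    }

    stage('Test') {
      parallel(                                     // run independent suites within a parallel step
        unit: {
          node {                                    // acquire nodes inside the parallel branches
            unstash 'app'
            sh 'mvn -B test'
          }
        },
        integration: {
          node {
            unstash 'app'
            sh 'mvn -B verify'
          }
        }
      )
    }

    stage('Promote') {
      timeout(time: 30, unit: 'MINUTES') {                        // wrap inputs in a timeout...
        input 'Promote this build to the staging project?'        // ...and never call input inside a node block
      }
      node {
        unstash 'app'
        sh 'oc start-build my-service --from-dir=target'          // assumed OpenShift deployment step
      }
    }

Keeping the manual approval outside any node block means no executor is held hostage while the pipeline waits for a human, and the timeout ensures an abandoned approval eventually fails the build instead of hanging forever.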

Reference:

  1. Redbook on Creating Applications in IBM Bluemix Using the Microservices Approach
  2. Konrad Ohms, “Best practices for developing and organizing multi-stage app deployments in Bluemix”, written December 10, 2015, https://www.ibm.com/blogs/bluemix/2015/12/best-practices-for-multistage-app-deployments-in-bluemix/
  3. “Top 10 Best Practices for Jenkins Pipeline Plugin”, written 27 Jun 2016, retrieved 9-27-2017, https://www.cloudbees.com/blog/top-10-best-practices-jenkins-pipeline-plugin