Wednesday, November 12, 2014

My weekend getaway with IBM Bluemix – A cloud platform providing PaaS with DevOps

As part of my usual weekend ritual of going through technology tweets, I got interested in exploring IBM Bluemix (which is based on Cloud Foundry) and decided to get first-hand experience with it.
For those who lack context on IBM Bluemix, a very short description: Bluemix is an implementation of IBM's Open Cloud Architecture, built on Cloud Foundry, which enables rapid development, deployment and management of cloud applications.

4 Key Takeaways – I have summarized the key takeaways based on my experience with Bluemix:

Takeaway # 1 – A complete lifecycle for cloud-based software development
  • Bluemix does not provide only PaaS but also integrates seamlessly with IBM DevOps Services (a completely cloud-based continuous delivery toolchain).
  • Bluemix can support web applications, mobile applications, middle-tier services (e.g. a cache service), backend services (e.g. IBM Watson for cognitive applications) and systems of record (e.g. a NoSQL database like MongoDB, or a relational database like MySQL)


Takeaway # 2 – A seamless integration of PaaS & DevOps
  • DevOps in the true sense - it lets you code online, track & plan work, and build & deploy applications entirely on the cloud platform. It also helps automate unit testing and lets you configure any build tool (Maven, Ant, Grunt, Gradle, npm, shell script) in a few steps.
  • A workflow-driven delivery pipeline automatically builds & deploys your application to one or more cloud-based environments.


Takeaway # 3 – An open-source based platform to avoid vendor lock-in
  • Bluemix is an implementation of IBM's Open Cloud Architecture based on Cloud Foundry, an open-source platform as a service (PaaS). Cloud Foundry is not vendor-specific & does not lock you into any proprietary or custom cloud implementation.
  • You can choose to run Cloud Foundry in public, private, VMware- & OpenStack-based clouds.

Takeaway # 4 – A future-ready extensible enterprise-level platform for Mobile, Big Data & IoT
  • Scales up quickly, like any cloud platform, from your tenth to your millionth user by leveraging cloud services
  • Provides ready-made templates (aka boilerplates) that give you a configured runtime environment & predefined services for mobile & web apps. Scripts (aka buildpacks) are also available to support the targeted runtimes (e.g. Java, Node.js)
  • Can be extended to leverage current & future trends like Mobile, Cognitive Apps, Big Data & IoT (Internet-of-Things) based applications.

For people interested in getting their hands dirty, here is a detailed set of instructions to build & test a sample web application using the Data Cache service:

Step 1 – Get registered on IBM Bluemix & IBM DevOps Services
  • I registered with IBM Bluemix (30-day trial account) at https://ace.ng.bluemix.net
  • I registered on IBM DevOps Services (use your existing IBM id, or you can link a different userid) at https://hub.jazz.net/
  • You can explore the Bluemix dashboard, which is very user-friendly; I liked the UX (though at times it tends to respond slowly).
Step 2 – Add the Data Cache Service using the Bluemix Dashboard
  • Click on “Add A Service”
  • Choose the “Web And Application” category from the left-hand pane
  • Select the “Data Cache” service and click on “Create”. Note that it is a free service, subject to terms & conditions (100 MB of usage is free).
  • The Data Cache dashboard is up & running now


Step 3 – Create & Deploy the Web Application in Bluemix

  • Build the WAR (e.g. mvn package, assuming the Maven layout implied by the target\ path below), then deploy it using the following commands:
    • Connect to IBM Cloud: cf api https://api.ng.bluemix.net
    • Login to IBM Cloud: cf login
    • Deploy your app: cf push mycachewebbeta -p target\mycachewebbeta-0.0.1-SNAPSHOT.war
    • Access your app: http://mycachewebbeta.mybluemix.net

  • The Bluemix dashboard gets updated with the new application.


Step 4 – Bind the Web Application to the Data Cache Service
  • Click on the “mycachewebbeta” web application on the dashboard
  • Click on the “Bind A Service” link
  • Choose the previously created Data Cache service

Step 5 – Test the web application for cache put/get operations
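To verify the binding end-to-end, the deployed web application can exercise a simple put/get against the grid. Below is a minimal sketch assuming the WebSphere eXtreme Scale Java client, which backs the Data Cache service; the catalog endpoint, grid name & map name are placeholders that would normally come from the VCAP_SERVICES credentials of the bound service, and security configuration is omitted for brevity:

import com.ibm.websphere.objectgrid.ClientClusterContext;
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridManager;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class DataCacheSmokeTest {
    public static void main(String[] args) throws Exception {
        // Placeholders: in Bluemix, read these from the VCAP_SERVICES
        // environment variable of the bound Data Cache service instance.
        String catalogEndpoint = "catalog-host:2809";
        String gridName = "myGrid";
        String mapName = "sampleMap";

        ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();
        // Security configuration omitted for brevity; a real Bluemix grid
        // requires the username/password supplied in VCAP_SERVICES.
        ClientClusterContext ctx = ogm.connect(catalogEndpoint, null, null);
        ObjectGrid grid = ogm.getObjectGrid(ctx, gridName);

        Session session = grid.getSession();
        ObjectMap map = session.getMap(mapName);

        map.upsert("greeting", "Hello from Bluemix Data Cache"); // cache put
        System.out.println(map.get("greeting"));                 // cache get
    }
}

Inside the Bluemix runtime, the same calls can sit behind a simple servlet so that hitting the application URL performs the put/get and confirms that the service binding works.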



Friday, May 30, 2014

Ensuring Software Quality | A pragmatic approach in current dynamics


Ensuring software quality is a challenging task in the current dynamics of aggressive timelines, changing business requirements, increasing enterprise-level constraints and demanding user-experience expectations.
These challenges make the development team's task more demanding, and there is often a dilemma about how to balance software cost (more time spent ensuring quality) against software quality.

In this blog, I am sharing my thoughts based on my experience & research in this subject area.

How do we define Software Quality?

"the degree to which a set of static attributes of a software product satisfy stated and implied needs for the software product to be used under specified conditions" - (ISO 25010)

As we have both functional aspects (critical to business stakeholders) & non-functional aspects (critical to technical stakeholders) when assessing the quality of a software system, it can be expressed as:

Software Quality = Functional Quality + Structural Quality

In this blog, the focus is on improving the structural quality of software, which eventually impacts the functional quality as well.

How do we define Structural Quality?

"the degree to which non-functional attributes of a software product such as maintainability, modularity, robustness, etc. satisfy stated & implied needs for the software product"

Some structural quality attributes such as performance & security can be measured using static/dynamic code-metrics tools, whereas a few attributes like modularity & maintainability might require a manual review process.

What are the focus areas to ensure Software Quality?

Often, we are reactive as opposed to proactive in addressing quality. Quality starts early, & hence a process needs to be established across all phases of the SDLC:
  • Requirements
  • Architecture
  • Design 
  • UI Development (given the increasing focus on user experience, this deserves to be treated as a distinct SDLC phase)
  • Development 
  • Testing
  • Maintenance
Each SDLC phase needs a clearly defined quality process with transparent entry/exit criteria based on the ETVX (Entry/Task/Verification/Exit) model & supported by the following:
  • Templates – aim to ensure an agreed skeleton for artifacts
  • Standards & Guidelines – aim to ensure actions are predictable, detailed & repeatable
  • Checklists – aim to ensure consistency in completeness
  • Tools – aim to support & bring efficiency to the process, and consistency in usage across teams


What are the metrics & tools to measure Software Quality?



How do we provide governance to keep a check on Software Quality?
  • Governance needs to be very simple & effective. A complicated structure & complicated processes do not go down well with the team, and people often tend to deviate from them.
  • A periodic checkpoint with the team is a must as part of the governance model, and it needs to be informal to assess the actual health of the software system.
  • Finally, a simple viewpoint (using a tool, or even a manual dashboard) can bring transparency & visibility to the system (with clearly defined KPIs to track quality parameters)



Do we have any future vision (as-is & to-be map)?



  • Moving from Subjective Development Quality towards Measurable Development Quality is very important. If we don't know how to measure quality, then it can't be compared against any internal/external benchmark.
  • The Quality Governance Model is often ignored: not well established or not followed. Having an Architecture Review Board, Design Review Board & Quality Review Board (meeting on a periodic basis) is a big plus, and they need to be supported by senior leadership.
  • Automation is the key to meeting the challenge of aggressive timelines, & it is usually less error-prone. Whether it is code generation, build generation or automated quality reports, automation holds the key to success.
  • Having uniform & consistent methods to apply during SDLC phases greatly improves the odds of success.
  • Finally, establish benchmarks in your organization to compare the quality score of each project. Without knowing where you stand, it's impossible to achieve the targeted goal.

Wednesday, January 15, 2014

Zero Downtime of Coherence Infrastructure (24x7 Availability) as part of Planned Deployment Strategy


Coherence is a reliable in-memory data grid product offering OOTB failover & continuous availability with extreme scalability. But at times we face challenges during Coherence deployments and tend to lean towards a clean restart of the entire Coherence cluster. This defeats the purpose of 24x7 availability of the data grid layer, and eventually the availability of the dependent applications as well.
I have come across this discussion with several people, and hence I am sharing my thoughts on an end-to-end Coherence deployment strategy that does not require any downtime, ensuring continuous availability.

In my opinion, there are essentially three high-level scenarios with respect to Coherence deployment:

Scenario 1 - Deployment of an Application using the Coherence Data Grid Layer
  • Problem Statement: Typically, this is the case when there are multiple web or native applications backed by a Coherence data grid layer. Often, the infrastructure team tends to restart the Coherence cluster during the deployment process, causing downtime of the cache layer & eventually the entire application. This downtime can be extended (even hours), as a clean restart of Coherence usually takes time.
  • Solution Approach: 
    • As a best practice, a Coherence cluster shutdown & restart should be avoided wherever possible. Coherence does not need a clean restart unless there are changes in libraries (which is the second scenario below).
    • If there is a requirement to clean up existing cache entries and replace them with new ones, then it is more a matter of application-level version maintenance of cache items than of the cache system. Typically, each cache item can carry version information (via a getter method like getVersion()), and post deployment the previous version's entries can be discarded by the application (see the sketch after this scenario).
    • You can also refer to cache invalidation strategies, which come as an OOTB feature in Coherence.
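As an illustration of the version-based clean-up, here is a minimal sketch against the standard Coherence query API (keySet(Filter) with LessFilter & ReflectionExtractor). The CacheItem class, the cache name app-cache & the integer version scheme are hypothetical; also note that the item class (or a POF extractor) must be available on the storage nodes for the filter to evaluate server-side:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.extractor.ReflectionExtractor;
import com.tangosol.util.filter.LessFilter;

import java.io.Serializable;
import java.util.Set;

public class StaleVersionCleanup {

    // Hypothetical cache item carrying a version stamp.
    public static class CacheItem implements Serializable {
        private final int version;
        private final String payload;

        public CacheItem(int version, String payload) {
            this.version = version;
            this.payload = payload;
        }

        public int getVersion() { return version; }
        public String getPayload() { return payload; }
    }

    public static void main(String[] args) {
        int currentVersion = 2; // version shipped with the new deployment

        NamedCache cache = CacheFactory.getCache("app-cache");

        // Query the keys of all entries whose getVersion() is below the
        // current version (the filter is evaluated on the storage nodes).
        Set staleKeys = cache.keySet(
                new LessFilter(new ReflectionExtractor("getVersion"), currentVersion));

        // Discard stale entries; the application repopulates the cache lazily.
        for (Object key : staleKeys) {
            cache.remove(key);
        }

        CacheFactory.shutdown();
    }
}

Running this once after the new application version goes live removes the old generation of entries without ever taking the cluster down.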
Scenario 2 - Deployment of an Application using the Coherence Data Grid Layer, with updated Coherence Application Libraries
  • Problem Statement: This scenario applies where the Coherence application cache is used, particularly with read-through or write-through patterns. In this case, application-specific JAR files or libraries need to be updated on the Coherence nodes, & hence the infrastructure team tends to shut down the entire Coherence cluster for a clean restart.
  • Solution Approach: 
    • As a best practice, a Coherence cluster shutdown & restart should be avoided wherever possible.
    • A cyclic restart (or rolling restart) can help in this case, along with version-based maintenance & cache invalidation strategies for cache items (as explained in Scenario 1).
    • Note that invalidation or cache-item clean-up plays a critical role: even when Coherence nodes get restarted one at a time, their data is automatically preserved in the data grid layer (backed up by other nodes). In essence, the failover feature works against a clean deployment here, so the clean-up approach needs extra care.
Scenario 3 - Coherence Configuration Change as part of Deployment
  • Problem Statement: This scenario applies where there are changes in the Coherence configuration (cluster configuration or otherwise). Note that if the configuration of any Coherence node differs (even slightly), that node will be rejected by the Coherence cluster. For example, a change in the security configuration (using an override file), a TTL change or a Coherence edition change.
  • Solution Approach: 
    • The easiest approach is to shut down the entire Coherence cluster (JMX monitoring can help verify that all Coherence nodes are down; see the sketch after this scenario) and restart all nodes after the configuration change. But that defeats our purpose of ZERO DOWNTIME.
    • If zero downtime is needed, then we need to:
      • Set up an entirely new Coherence cluster (e.g. by assigning a new multicast IP address or changing the multicast port, such as via the tangosol.coherence.clusteraddress & tangosol.coherence.clusterport system properties)
      • Make the configuration changes & do a fresh deployment on the new cluster
      • Do a cyclic restart of the dependent application servers so they use the new Coherence cluster
      • Discard the old Coherence cluster once the applications have migrated to the new one
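For the JMX check mentioned in the first bullet above, here is a minimal sketch using the standard javax.management client API. It assumes a Coherence node started with remote management enabled (e.g. -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true plus the usual com.sun.management.jmxremote JVM settings); the host & port are placeholders:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ClusterSizeCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; point it at any JMX-enabled node.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://coherence-host:9991/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // The Coherence Cluster MBean exposes the current member count.
            Integer size = (Integer) mbs.getAttribute(
                    new ObjectName("Coherence:type=Cluster"), "ClusterSize");
            System.out.println("Current cluster size: " + size);
        }
    }
}

Watching ClusterSize shrink to zero (or settle at the expected count during a rolling restart) gives a simple, scriptable signal of where the cluster stands during deployment.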
There can be multiple other deployment scenarios, but they would be variations of the scenarios described above (at least in my mind).

Hope this helps all those who are seeking zero-downtime deployment without paying extra for other products like Oracle GoldenGate to achieve the same.

Disclaimer:

All data and information provided on this site is for informational purposes only. This site makes no representations as to accuracy, completeness, correctness, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis. This is a personal weblog. The opinions expressed here represent my own and not those of my employer or any other organization.