IMDG (In-Memory Data Grid) products offer the capability to handle transactions in memory (hence faster) and to build a data grid (hence linearly scalable) for managing extreme transaction processing (XTP) needs.
Both commercial and open-source products are available in the market, but before deciding on a product, or even on IMDG as a technology, I would recommend considering the following design/architectural points first:
1. Your Needs: First and foremost, as with any other solution, this is the most important factor in determining whether such a product is needed at all. Licenses for commercial products can be expensive (roughly $2,500 per processor for an enterprise edition), so the cost-benefit needs to be assessed first. I don't see the need for one if there is no XTP requirement (e.g. it is not needed for workloads below ~200 TPS).
2. Parallel Computing:
A distributed grid can offer processing power comparable to a mainframe by utilizing the cumulative capacity of the nodes in the grid. Work can be distributed seamlessly across the available nodes, facilitating "parallel query" execution for faster responses.
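As an illustration, here is a minimal sketch of fanning a task out to every grid member and combining the partial results, assuming a Hazelcast 3.x-style distributed executor (the executor name and the placeholder computation are illustrative, not taken from any particular product documentation):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.Member;

import java.io.Serializable;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

// Runs on each member and returns a partial result; the caller combines them.
public class ParallelCountTask implements Callable<Long>, Serializable {

    @Override
    public Long call() {
        // In a real task this would scan the data owned locally by this member.
        return 42L; // placeholder result
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService executor = hz.getExecutorService("query-executor");

        // Fan the task out to every node and aggregate the partial results.
        Map<Member, Future<Long>> results = executor.submitToAllMembers(new ParallelCountTask());
        long total = 0;
        for (Future<Long> partial : results.values()) {
            total += partial.get();
        }
        System.out.println("Combined result from all members: " + total);
    }
}
```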
3. Caching Needs:
IMDG products can fulfil virtually all caching needs and support every common cache topology, e.g. distributed cache, replicated cache, partitioned cache, and local (near) cache backed by a distributed cache. If you only have caching needs, however, you are better off with a dedicated caching product (see the list at the end of this article).
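A minimal usage sketch, again assuming a Hazelcast 3.x-style API (the map name is illustrative); the same put/get pattern applies to most IMDG products:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class DistributedCacheExample {
    public static void main(String[] args) {
        // Every JVM that runs this joins the same cluster; the map's entries
        // are partitioned (and optionally backed up) across all members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> cache = hz.getMap("customer-cache");

        cache.put("cust-101", "John Doe");          // stored on whichever node owns this key's partition
        System.out.println(cache.get("cust-101"));  // read locally or remotely, transparently

        hz.shutdown();
    }
}
```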
4. Events based Processing Needs:
IMDG products support Complex Event Processing (CEP) based business architectures and the ability to consume large volumes of events in a scalable way.
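The sketch below shows only the basic building block, event consumption via a listener, not a full CEP engine; it assumes a Hazelcast 3.x-style entry listener (the map name and values are illustrative):

```java
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.listener.EntryAddedListener;

public class OrderEventListener implements EntryAddedListener<String, Double> {

    @Override
    public void entryAdded(EntryEvent<String, Double> event) {
        // Invoked for every new entry, regardless of which grid member it was written to.
        System.out.println("New order " + event.getKey() + " for amount " + event.getValue());
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Double> orders = hz.getMap("orders");

        orders.addEntryListener(new OrderEventListener(), true); // true = include values in events
        orders.put("ORD-1", 99.95); // fires entryAdded on every registered listener
    }
}
```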
5. High Availability (Failover support):
Failure of any node does not impact the rest of the cluster, and as soon as the failed node comes back it starts contributing again seamlessly (without any configuration change or manual effort). Real-time configuration changes (e.g. changing cache high-units) and product upgrades are also possible without downtime.
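The number of backups kept for each partition is typically just configuration; here is a minimal, hedged Hazelcast-style sketch (map name and data are illustrative):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class FailoverConfigExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Keep one synchronous backup of every partition of the "orders" map,
        // so its data survives the loss of any single member.
        config.getMapConfig("orders").setBackupCount(1);

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getMap("orders").put("ORD-1", "pending");
        // If this member dies, another member promotes its backup copy automatically.
    }
}
```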
6. Scalability:
If more nodes need to be added to the grid, this can be done seamlessly without any impact on the existing grid. Most IMDG products aim to be "linearly scalable" so that the added capacity is fully utilized.
7. In-memory Database (IMDB) Support:
An IMDG can also maintain the entire data set in memory for better response times, throughput, and overall performance. All transactions can happen in memory and be persisted asynchronously to the database (for example, via a write-behind store during off-peak hours).
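A minimal write-behind sketch, assuming Hazelcast's 3.x-style MapStore contract (the class name, map, and persistence details are illustrative; it would be attached to a map via a MapStoreConfig with a write-delay greater than zero):

```java
import com.hazelcast.core.MapStore;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// With write-behind enabled, the grid calls store()/delete() asynchronously,
// in batches, after the configured write-delay instead of on every transaction.
public class OrderMapStore implements MapStore<String, String> {

    @Override
    public void store(String key, String value) {
        // e.g. INSERT/UPDATE the corresponding row via JDBC
    }

    @Override
    public void storeAll(Map<String, String> entries) {
        entries.forEach(this::store);
    }

    @Override
    public void delete(String key) {
        // e.g. DELETE the corresponding row via JDBC
    }

    @Override
    public void deleteAll(Collection<String> keys) {
        keys.forEach(this::delete);
    }

    @Override
    public String load(String key) {
        return null; // look the value up in the database on a cache miss
    }

    @Override
    public Map<String, String> loadAll(Collection<String> keys) {
        return new HashMap<>();
    }

    @Override
    public Set<String> loadAllKeys() {
        return null; // returning null disables eager pre-loading at startup
    }
}
```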
8. Monitoring & Management:
Some products offer great real-time monitoring and management capabilities (including JMX support), which is very handy for troubleshooting and for identifying bottlenecks.
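Because these statistics are usually exposed as JMX MBeans, any JMX-capable tool can read them. The snippet below simply lists whatever MBeans are registered in the local JVM; how the grid's own MBeans get registered is product-specific (often a configuration flag or system property):

```java
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;
import java.util.Set;

public class JmxListingExample {
    public static void main(String[] args) {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // List every MBean currently registered in this JVM; a monitoring tool
        // (JConsole, VisualVM, a custom dashboard) reads attributes the same way.
        Set<ObjectName> names = server.queryNames(null, null);
        for (ObjectName name : names) {
            System.out.println(name);
        }
    }
}
```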
9. In-line with Cloud Computing:
With cloud computing shaping the future, this becomes even more important, as an IMDG can offer "data as a service" or "data virtualization".
Commercial Products:
Oracle Coherence (formerly Tangosol Coherence), GigaSpaces XAP, IBM WebSphere eXtreme Scale (WXS), TIBCO ActiveSpaces (recently launched), ScaleOut StateServer
Open-source Products:
Terracotta, JBoss Infinispan, Hazelcast
Other distributed caching solutions are also available. In my opinion they do not offer the full set of IMDG capabilities, but if you only have caching needs they are worth considering (though out of the scope of this discussion):
NCache (distributed caching for .NET only)
Apache JCS, Terracotta EhCache, OpenSymphony OSCache
Disclaimer:
All data and information provided on this site is for informational purposes only. This site makes no representations as to accuracy, completeness, correctness, suitability, or validity of any information on this site and will not be liable for any errors, omissions, or delays in this information or any losses, injuries, or damages arising from its display or use. All information is provided on an as-is basis. This is a personal weblog. The opinions expressed here represent my own and not those of my employer or any other organization.
Friday, July 2, 2010
Enterprise Architecture | Why is it important?
The Enterprise Architecture (EA) practice has been around for decades, but in the last few years it has been regaining popularity as one of the key initiatives driven by CxOs.
So, why is it important for an Enterprise?
Here are some of the key rationales behind it:
1. Holistic Approach
Individual divisions across the enterprise address only the business problems in their vicinity; the EA team, by contrast, has a holistic point of view and works towards addressing problems across the enterprise.
For example, a Credit Card Loan Processing System built for the London division addresses locale-specific business demands, but the EA team can align this solution at the enterprise level and provide input to make it generic and reusable across the enterprise.
2. Consistency in Delivering Solutions to Business Problems
Once EA is in place, business solutions can be delivered in a more structured and consistent way, drawing on established reference models for recurring business demands.
For example, if a Credit Check System needs to be developed, the enterprise reference model (or, if one is not available, an industry reference model) will help establish a consistent architecture for the solution.
3. Building Enterprise-wide Repository
The repositories created in the process of establishing EA in an organization, such as a tools repository or an architectural artifacts repository, encourage reuse and standardization across the enterprise.
4. IT Governance
EA goes hand in hand with IT governance and helps establish governance across the enterprise, which in turn helps build a controlled and well-directed corporation. It acts as a framework for leadership, organizational structure, business processes, standards, practices, etc.
5. Defined Business/Technical/Information System Architecture:
Last but not least, as part of establishing EA in the organization, clearly defined business, technical, and information system architectures get developed along the way. This also creates an opportunity for business and IT people to come together and re-validate them.
Some of the popular EA frameworks are TOGAF, Zachman, FEA, Gartner, and DoDAF; most of the time I have seen a custom EA approach that extracts the best of the practices, standards, and tools from all of them.
Feel free to discuss it further in more detail.
Tuesday, May 4, 2010
Benchmarking – One of the Best Practices for Measuring Your Application Stack's Performance
With the growing number of "change requests" coming into your application, it becomes increasingly difficult to ensure that new releases do not introduce performance issues. One of the best practices here is to "baseline" your performance statistics and use them as a yardstick against which the application is measured after each major release (containing many CRs). Across the industry, this is generally called "benchmarking".
Benchmarking is also one of the best strategies for measuring the performance of hardware (to know its capacity), of an OS or application server, or of a framework, generally for "capacity modeling" or for evaluating and choosing the best hardware/software/framework on the basis of the benchmarking numbers.
In the Java world, benchmarking JVMs, garbage collectors (GC), or JDBC libraries can likewise be used to evaluate and choose the best of the pack.
Coming back to benchmarking your own custom application, it gives you the following benefits:
• Capacity Modeling – Knowing the capacity of your application stack
• Baseline (Yardstick) – Establishing a yardstick against which all future releases are measured; subsequent releases should improve these numbers, not degrade them.
• Measurability
• Choosing the best alternative on the basis of stats
Suggested Benchmarking Parameters for your application:
• Response Time in Seconds
• Throughput – Requests Processed Per Second
• Memory Usage, CPU Usage, Database Connections Usage, Disk I/O Usage
One of the most important steps in establishing benchmarking numbers is running multiple "cycles of performance runs" (at least 3 cycles; more is better, as it gives more reliable data).
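As a rough illustration of the parameters above, here is a minimal, self-contained Java harness that runs several cycles and reports throughput and latency for a placeholder operation (memory, CPU, connection, and disk I/O usage would typically be captured alongside it with OS or JMX tools); the method and numbers are illustrative, not a production benchmark framework:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SimpleBenchmark {

    // The operation under test; replace with a call into your own application.
    static void operationUnderTest() throws InterruptedException {
        Thread.sleep(5); // placeholder work
    }

    public static void main(String[] args) throws Exception {
        final int cycles = 3;            // at least 3 cycles, as recommended above
        final int requestsPerCycle = 200;

        for (int cycle = 1; cycle <= cycles; cycle++) {
            List<Long> latenciesMs = new ArrayList<>();
            long cycleStart = System.nanoTime();

            for (int i = 0; i < requestsPerCycle; i++) {
                long start = System.nanoTime();
                operationUnderTest();
                latenciesMs.add((System.nanoTime() - start) / 1_000_000);
            }

            double elapsedSec = (System.nanoTime() - cycleStart) / 1_000_000_000.0;
            double throughput = requestsPerCycle / elapsedSec; // requests per second
            Collections.sort(latenciesMs);
            long p95 = latenciesMs.get((int) (latenciesMs.size() * 0.95) - 1);

            System.out.printf("Cycle %d: throughput=%.1f req/s, p95 latency=%d ms%n",
                    cycle, throughput, p95);
        }
    }
}
```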
References:
1. Microsoft article - Benchmarking Web Services using Doculabs
2. IBM article - Benchmarking Method for Comparing Open Source App Servers
3. Benchmarking AOP implementations - http://docs.codehaus.org/display/AW/AOP+Benchmark
4. Benchmarking ESB - http://esbperformance.org/wiki/ESB_Performance_Test_Framework
Sunday, January 31, 2010
Many Development Methodologies - Which One to Choose? Hybrid Might Be the Answer
Times have changed at breakneck speed since the era when the Waterfall model was considered the best methodology for software development.
Now, in the new information age, where frequently changing user requirements, challenging timelines, tight budgets, and competitive bids are the driving factors, the IT industry offers many methodologies: Prototyping, Spiral, RAD, Rational RUP, Agile (Scrum, XP, DSDM - Dynamic Systems Development Method, FDD - Feature-Driven Development, Lean Software Development), and the list goes on.
Considering all the development methodologies available today, it is increasingly difficult to choose a single methodology for all the projects inside one organization. Yet having multiple methodologies in a single organization generally creates chaos and obscures the roadmap for future projects.
"A slightly different approach to tackle this challenge is to adapt best practices from short-listed methodologies, which suits best for your organization and formulate a hybrid-development methodology specific to your organization."
To illustrate, let's imagine a development methodology with the following features:
- The sprint approach of Scrum for handling features/requirements with quick turnaround
- A daily Scrum (or stand-up meeting) to check the progress of the project
- The feedback and continuous integration practices of XP
- The monitoring/control of a Waterfall SDLC (especially for larger projects)
- The "eliminate waste" principle of Lean software development
Thursday, January 7, 2010
Comparing Persistence Mechanisms in Java by Factors: Ease of Development, Performance, Scalability, Extensibility & Security
| Mechanism | Ease of Development | Performance | Scalability | Extensibility | Security |
| --- | --- | --- | --- | --- | --- |
| Entity Beans – CMP | High – the bean developer concentrates on business logic; persistence logic is provided by the EJB vendor | High – application programmers delegate the details of persistence to the container, which can optimize data access patterns for optimal performance | High – the container provides scalability (configuration based) | High | High – container provided |
| Entity Beans – BMP | Low – the bean developer is responsible for providing persistence logic | Uncertain – depends on the proficiency of the bean developer | High – the container provides scalability (configuration based) | Low – the bean developer's persistence logic needs to be understood by others | High – container provided |
| JDO | High – the bean developer concentrates on business logic; persistence logic is provided by the JDO vendor | High – application programmers delegate the details of persistence to the JDO implementation, which can optimize data access patterns for optimal performance | High | High | High |
| JPA | High – the bean developer concentrates on business logic; persistence logic is provided by the JPA provider | High – best ideas from ORM (Hibernate, TopLink) and JDO | High – the container provides scalability | High – supports pluggable persistence providers | High – supports the standardized Java security model |
| ORM frameworks | Medium – reduces development time | Depends – some O/R mapping tools do not perform well during bulk deletions of data | High to Medium – depends on the ORM vendor | Low – non-standardized | Medium to High – depends on the vendor |
| DAO with direct JDBC | Low – the developer is responsible for persistence logic | High to Medium – depends on the proficiency of the developer | Medium to High – depends on developer expertise | Medium to High – depends on developer expertise | Medium to Low – needs to be handled by the developer |
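To make the last row concrete, here is a minimal DAO sketch using direct JDBC (the table, column, and class names are illustrative); everything a container or JPA provider would otherwise handle, such as SQL, resource management, and result mapping, is the developer's responsibility here, which is why ease of development is rated Low:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class CustomerDao {

    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Every concern (SQL text, resource handling, mapping) is hand-written by the developer.
    public String findNameById(long id) throws SQLException {
        String sql = "SELECT name FROM customer WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}
```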