Enterprise Architecture and Business Process Management

Enterprise Architecture (EA) is a logical framework that helps forge a relationship between business, strategy and technology. Within those macro concepts lie various organizational structures, processes and informational flows that help businesses meet their end goals.

With respect to business processes, businesses themselves are dynamic and must adapt to the latest market conditions in order to remain a going concern. Thus, proper attention must be paid to processes and to their continuous improvement.

As organizations grow, they need to continuously analyze and refine their processes to ensure they are doing business as effectively and efficiently as possible. Fine-tuning processes gives an organization a competitive advantage in a global marketplace (Project Management Approach For Business Process Improvement, 2010).

EA and business process management (BPM) are not mutually exclusive. Redshaw (2005. Pg. 3) defines BPM as “the management of all the processes supporting a business transaction/event from the beginning to the end while applying the policies/rules needed to support an organization’s stated business model at a specific point in time.” BPM offers advantages to large institutions because it links IT systems to business processes. Jensen (2010) offers this summary:

“When done together, BPM provides the business context, understanding and metrics, and EA the discipline for translating business vision and strategy into architectural change. Both are, in fact, needed for sustainable continuous optimization. It is important to realize the value of direct collaboration across BPM and EA boundaries. Only when supported by appropriate collaboration and governance processes can BPM and EA roles work effectively together towards the common goals of the enterprise.” (Jensen, 2010)

EA can support BPM projects by helping project teams become better acquainted with the very processes they are trying to improve. A project manager assigned to a new project can simply access the EA repository to get up-to-date information on the current processes pertinent to his/her domain. With respect to the EA3 framework, “The enterprise’s key business and support processes are documented at the Business level of the EA framework” (Bernard, 2012. Pg. 127).

As processes are improved and changed and project wins or losses are accumulated, this knowledge is shared back into the EA repository for reuse and can be leveraged across the organization.

Quick process-improvement wins and one-off pinpoint projects may embody a “silo-ed” or parochial approach not in keeping with a broader strategic outlook. Ignoring emerging business strategies can be a costly mistake. For example, a bank could mobilize energy and resources to architect a new customer account management or card/payments processing system within the enterprise, accompanied by revised processes. The bank could simultaneously be moving forward with emerging cloud strategies that render the newly architected solutions meaningless and obsolete. This hypothetical example of creating solutions in isolation from the overall strategy would be a very costly endeavor in terms of time and money and should obviously be avoided.

Business process management projects embedded within an EA framework are, by design, aligned with the overall organizational strategy. EA becomes a key enabler for ensuring that process improvement projects align with the strategy for the existing enterprise, as well as any future-state strategies.

Wells Fargo and its use of Enterprise Architecture and BPM

As with most organizations of comparable size, Wells Fargo wrestled with issues from both the business and IT (Information Technology) ends of the house. The business had to gain a better understanding of what it needed. It also had to become better acquainted with the capabilities and solutions available from IT. On the other side of the coin, IT had to remain agile enough to deliver and react to changes in business conditions. In this manner IT could be better positioned to deliver solutions that met various business needs.

Olding (2008) found that Wells Fargo operated a very decentralized structure but lacked the coordinated ability to understand what was occurring in other groups that were employing business process management initiatives. A disadvantage of not embedding the BPM experiences within an EA framework was the failure to capitalize on successes that were gained across other “silo-ed” groups. Integrating EA into the approach dramatically simplified the process of capturing those wins for organizational reuse.

At Wells Fargo, a BPM Working Group was established with EA as its champion. The business set out to capture the current state of BPM technologies and approaches across a dozen lines of business. The results indicated that there were over 20 different BPM technologies in use, each with its own approach to implementation (Olding, 2008). In order to maximize the value of BPM, coordination had to occur across these lines of business.

A seasoned Enterprise Architect within the company made use of a communications strategy to raise awareness of the duplicative uncoordinated approaches dotting the landscape. Business analysts, project managers, executives, and technology professionals were engaged and best practices from the various approaches were discussed and reworked into an EA framework.

A year later, senior executives were presented with the best practices from the various approaches, which had since been re-developed using a common framework. The commonality gained from the EA framework allowed patterns of success to be easily identified, communicated and ultimately standardized. With senior-level executive backing, the EA framework can persist in the organization, allowing the bank to quickly identify opportunities for standardization.

Burns, Neutens, Newman & Power (2009, pg. 11) state, “Successful EA functions measure, and communicate, results that matter to the business, which in turn only strengthens the message that EA is not simply the preserve of the IT department.” This dovetails into the approach that Wells Fargo’s Enterprise Architect employed; the communication of pertinent information back to various business lines to gain acceptance.

The lessons learned from Wells Fargo’s use of BPM and EA, paraphrased from Olding (2008. Pgs 5-6):

  • Communicate at all levels of the enterprise.
  • Build BPM adoption from the bottom up. Approach business groups with proven examples and internal successes that will help drive the willingness to adopt new approaches.
  • Facilitate, do not own. Allow business groups to manage their own processes aligned within the framework.
  • Build EA from the top down.
  • Use BPM to derive the needed context and then incorporate it into the EA.

As of 2008, Wells Fargo Financial (a business unit of Wells Fargo & Co.) had nine BPM deployments in production and another four projects in the works. Gene Rawls, VP of continuous development, information services, for Wells Fargo Financial, has stated that not having to reinvent the wheel saves months of development work for every deployment (Feig, 2008). Project turnaround time, from the initial go-ahead for a BPM project to its actual deployment, is just three months.

References:

Bernard, Scott A. (2012). Linking Strategy, Business and Technology. EA3 An Introduction to Enterprise Architecture (3rd ed.). Bloomington, IN: Author House.

Burns, P., Neutens, M., Newman, D., & Power, T. (2009). Building Value through Enterprise Architecture: A Global Study. Booz & Co. Retrieved November 14, 2012.

Feig, N. (2008, June 1). The Transparent Bank: The Strategic Benefits of BPM — Banks are taking business process management beyond simple workflow automation to actually measure and optimize processes ranging from online account opening to compliance. Bank Systems + Technology, Vol 31. Retrieved from Factiva database.

Jensen, Claus Torp. (2010, February 10). Continuous improvement with BPM and EA together. Retrieved November 13, 2012.

Olding, Elise. (2008, December 7). BPM and EA Work Together to Deliver Business Value at Wells Fargo Bank. Retrieved from Gartner October 29, 2012.

Project Management Approach For Business Process Improvement. Retrieved November 12, 2012 from http://www.pmhut.com/project-management-approach-for-business-process-improvement

Redshaw, P. (2005, February 24). How Banks Can Benefit From Business Process Management. Retrieved from Gartner October 29, 2012.


 

The Competitive Advantage of Process Innovation

This post summarizes a Harvard Business Review article entitled “The New Logic of High-Tech R&D,” written by Gary P. Pisano and Steven C. Wheelwright. The article focuses on the observation that few companies within the pharmaceutical industry view manufacturing and process improvement as a competitive advantage. The authors assert that manufacturing process innovation is conducive to product innovation. Companies traditionally spend money on product R&D but tend to neglect process R&D.

For example, Sigma Pharmaceuticals refused to invest significant resources in process development until the company was confident that a drug would win FDA approval. As a result, when demand for a drug increased, the company could not meet it without major investments in additional capacity. During this interim ramp-up period the company lost two years of potential sales. Underinvestment in process development on the front end clearly put the company in a sub-optimal position to capitalize on additional revenue.

Process development and process innovation provide a litany of benefits, the first of which is accelerated time to market. According to one drug company, the time required to prepare factories for production generally added a year to the product-development lead time. Senior management was unaware of this fact, while the managers within the process development organization were fully aware.

Rapid ramp-up is also invaluable because it allows companies to more quickly realize revenue, penetrate a market, and recoup their development investments. In addition, the faster the ramp-up occurs, the faster critical resources can be freed to support the next product.

Innovative process technologies that are patent protected can hinder a competitor’s push into the market. Pisano and Wheelwright state that it is easier to stay ahead of a competitor that must constantly struggle to manufacture a product at competitive cost and quality levels.

Process development capabilities can also serve as a hedge against various forces in high-tech industries. Shorter lifecycles elevate the value of fast-to-market processes. Semiconductor fabrication facilities can cost upwards of one billion dollars and depreciate at a rapid pace. For this reason, rapid ramp-up is very important. Companies with strong process development and manufacturing capabilities will have more freedom in choosing the products they wish to develop, rather than being forced to stick with simple-to-manufacture designs.

Pharmaceutical companies traditionally operated in the following manner. They delayed significant process R&D expenditures until they were reasonably sure that the product was going to be approved for launch. To avoid delaying product launch, they kept process R&D off the critical path. Manufacturing and process engineering were on hand to make sure the company could bring on additional capacity and didn’t stock out. Manufacturing was located in a tax haven even if it was far from R&D and process development, while process development was introduced later in the lifecycle in order to thwart the threat of generic competition. Today, however, pharmaceutical companies find themselves squeezed by shorter product life cycles, less pricing flexibility and higher costs.

The article states that the earlier a company makes process improvements, the greater the total financial return. It is costly and time consuming to rectify process design problems on the factory floor; the earlier these problems are found in the development cycle, the shorter the process development lead time.


The Benefits of Service Oriented Architecture for Financial Services Organizations

The banking and financial industry is one in which legacy systems are prevalent. Banking systems tend to skew older and are highly heterogeneous. This heterogeneity is compounded by the fact that replacing and integrating these legacy systems is a difficult undertaking.

Mazursky (as cited in Baskerville, Cavallari, Hjort-Madsen, Pries-Heje, Sorrentino & Virili, 2010) states that older architectures complicate integration of enterprise applications because the underlying elements have created ‘closed’ architectures. Closed architectures restrict access to vital software and hardware configurations and force organizations to rely on single-vendor solutions for parts of their information and communication technology. Thus, closed architectures hinder a bank’s ability to innovate and roll out new integrated financial products.

The flexibility of SOA facilitates easier connectivity to legacy backend systems from outside sources. Because of the size and complexity of most enterprise banking systems, reengineering these systems to flawlessly accommodate interaction with newer systems is not economically feasible. Inflexible systems can be difficult to modernize and maintain on an ongoing basis. Furthermore, banks’ legacy systems can suffer from a lack of interoperability, which can stifle innovation.

“According to a survey commissioned by Infosys and Ovum in May 2012, approximately three quarters of European banks are using outdated core legacy systems. What’s more, 80% of respondents see these outdated systems as barriers to bringing new products to market, whilst 75% say such systems hinder, rather than enable, process change. Integration cost is effectively a barrier to service provision flexibility and subsequent innovation” (Banking Industry Architecture Network, 2012).

The greater the number of banking systems that can easily interact, connect and share data with other systems and services, the more innovative banks can become with respect to their product offerings for consumers. Market demands can be more easily responded to when new product creation is allowed to flourish with appreciably lower system integration costs. New applications can also be developed in much shorter time frames in order to react to customer demand as long as services are maintained and shared across the enterprise.

According to Earley & Free (2002), a bank that identifies a new business opportunity such as wealth management, but has never provided such a service in the past, can quickly ramp up functionality by utilizing services previously designed for other applications. The new wealth management application is designed without duplicating functionality, in a more cost-efficient and rapid manner. Smaller and mid-tier banks can capitalize on bringing new applications and services to market quickly and economically. This responsiveness allows smaller banks to offer products similar to those offered by their larger competitors. The modularity of SOA design precludes the need for substantial re-engineering of smaller/mid-tier banks’ information and communication technology. This is an important benefit because smaller banks do not have financial resources comparable to those of the more sizable industry players.
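As a rough illustration of this reuse idea (every service name and figure below is hypothetical, not taken from Earley & Free), a new wealth management application might be assembled almost entirely from services that already exist for other applications:

```python
# Hypothetical sketch: a new wealth management application composed from
# existing shared services; only the orchestration logic is newly written.

def authenticate_customer(customer_id):
    # Existing service, originally built for online banking.
    return {"customer": customer_id, "authenticated": True}

def get_account_balances(customer_id):
    # Existing service, originally built for statement generation.
    return {"checking": 5000.0, "brokerage": 25000.0}

def assess_risk_profile(customer_id):
    # Existing service, originally built for lending decisions.
    return "moderate"

def wealth_management_summary(customer_id):
    """The 'new' application: it merely orchestrates existing services."""
    session = authenticate_customer(customer_id)
    balances = get_account_balances(customer_id)
    return {
        "customer": session["customer"],
        "total_assets": sum(balances.values()),
        "risk_profile": assess_risk_profile(customer_id),
    }
```

Because the authentication, balance and risk services already exist, the incremental cost of the new product is largely limited to the orchestration function.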

Additionally, SOA has the potential to free up development resources sooner, enabling those resources to engage in other development initiatives. “Few banks will be able to buy or build new capabilities quickly enough to satisfy market requirements without an SOA” (Earley & Free, 2002). In the survey conducted by Infosys Consulting and Ovum (a London-based research organization), 100% of banks responded that SOA would become the “dominant, accepted future banking architecture” (Banking Industry Architecture Network, 2012).

Furthermore, banks can employ multiple services to complete a business process. One service that values a portfolio of financial assets at market rates (a mark-to-market calculation) could be coupled with another service that calculates the Value at Risk (VaR) of the bank’s entire portfolio. As with the wealth management example cited previously, these two components could be made available to any other enterprise-wide financial services and applications that require portfolio valuation and risk information. In this manner, the functionalities are not inefficiently repeated across every application that requests them.
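A minimal sketch of this composition follows (function names and figures are invented for illustration, and the VaR formula is a simplified parametric one, not any bank's actual model):

```python
# Two independently useful services...
def mark_to_market(positions, market_prices):
    """Value a portfolio at current market rates."""
    return sum(qty * market_prices[asset] for asset, qty in positions.items())

def value_at_risk(portfolio_value, daily_volatility, confidence_multiplier=1.65):
    """Simplified parametric VaR: estimated one-day loss at ~95% confidence."""
    return portfolio_value * daily_volatility * confidence_multiplier

# ...coupled to complete a business process. Any other application that
# needs a valuation or risk figure calls the same shared services rather
# than re-implementing the calculations itself.
positions = {"AAPL": 100, "MSFT": 50}
prices = {"AAPL": 10.0, "MSFT": 20.0}

portfolio_value = mark_to_market(positions, prices)           # 2000.0
risk = value_at_risk(portfolio_value, daily_volatility=0.02)  # about 66.0
```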

Additionally, via use of SOA, “New technology provides the ability to mix and match outsourced business processes with on-premise assets and services” (Essvale Corporation Limited, 2011). Software designed or in use by third party vendors can become more easily integrated with bank systems and processes due to the high connectivity of an SOA approach. According to Gartner research, smaller and mid-tier banks are adopting SOA in order to make the most of their limited IT budgets and resources. “Until now, midtier banks had to rely on customized software packages from a single vendor, and assumed all of the maintenance costs and function limitations inherent in a single, proprietary set of solutions” (Earley et al., 2005). Due to a rising interest in SOA, technology vendors that serve the financial services industry are increasingly working with integration providers to offer a standard set of component integration (Earley, et al., 2005).

One of the benefits of SOA standardization is that more functionality can be delivered by much less underlying code. This leads to less complex, more cost-effective system maintenance, thereby reducing operational risk.

“A fully implemented SOA provides a bank with a highly predictable application environment that reduces risk in day-to-day operations, due to the minimization and isolation of change to the production systems. Banks that fail to take this approach must constantly change their interfaces as external and internal requirements change. This introduces significant risk and the need for near-continuous testing to ensure that the customer ‘touchpoints’ and the back-end processes do not fail, while ensuring that one data or service change doesn’t adversely affect other data changes integrated through an interface” (Earley et al., 2005).

Conclusion

SOA has an important role to play in the architectural repertoire of banking and financial organizations. Its loosely coupled design allows services to be shared and reused across the enterprise without disparate systems concerning themselves with the underlying development code. Multiple services can be combined to form reusable chunks of a business process. Outside sources can connect to legacy backend systems via an API, which increases the opportunity to mix and match vendor capabilities with in-house assets. SOA also helps banks and financial firms ramp up new applications and functionality quickly and economically, increasing product responsiveness to market demands. When SOA is combined with Event-Driven Architecture, dynamic event-driven systems can be developed that do not rely solely on the less proactive request/reply paradigm.

Banks and financial companies need to remain innovative and cost effective and to anticipate customer needs in order to remain profitable. SOA allows organizations to become more agile and flexible with their application development. The rise of applications on mobile, cloud-enabled platforms means that customers will need to connect to data wherever it dwells. “Bank delivery is focused on reactively providing products on customer request, and mass-market, one-size-fits-all products (for mainstream retail banking). However, it is no longer feasible to force-fit mass-market bank products when technology devices, context and location are key elements to the type of customized bank service a customer needs” (Moyer, 2012). As SOA continues to mature with cloud-enabled solutions and the rise of mobile computing, it is primed to be the building block for the next generation of banking application functionality.

References:

Baskerville, R., Cavallari, M., Hjort-Madsen, K., Pries-Heje, J., Sorrentino, M., & Virili, F. (2010). The strategic value of SOA: a comparative case study in the banking sector. International Journal of Information Technology and Management, Vol. 9, No. 1.

Banking Industry Architecture Network. (2012). SOA, standards and IT systems: how will SOA impact the future of banking services? Available from https://bian.org/wp-content/uploads/2012/10/BIAN-SOA-report-2012.pdf

Earley, A., & Free, D. (2002, September 4). SOA: A ‘Must Have’ for Core Banking (ID: SPA-17-9683). Retrieved from Gartner database.

Earley, A., Free, D., & Kun, M. (2005, July 1). An SOA Approach Will Boost a Bank’s Competitiveness (ID: G00126447). Retrieved from Gartner database.

Essvale Corporation Limited. (2011). Business Knowledge for IT in Global Retail Banking: A Complete Handbook for IT Professionals.

Overview of Service Oriented Architecture

 

Service Oriented Architecture (SOA) can be described as an architectural style or strategy for “building loosely coupled distributed systems that deliver application functionality in the form of services for end-user applications” (Ho, 2003). Ho (2003) explains that a service can be envisioned as a simple unit of work offered by a service provider; the service produces a desired end result for the service consumer. Another way to envision the concept of a service is to imagine a “reusable chunk of a business process that can be mixed and matched with other services” (Allen, 2006). The services either communicate with each other (i.e. pass data back and forth) or work in unison to enable or coordinate an activity.

When SOA is employed in designing applications and/or IT systems, the component services can be reused across the enterprise, which helps to lower overall development costs, among other benefits. Reuse fosters consistency across the enterprise. For example, SOA enables banks to meet the needs of small but profitable market segments without the need to redevelop new intelligence for a broad set of applications (Earley, Free & Kun, 2005). Furthermore, any number of services can be combined to mimic a business process.

“One of the most important advantages of a SOA is the ability to get away from an isolationist practice in software development, where each department builds its own system without any knowledge of what has already been done by others in the organization. This ‘silo’ approach leads to inefficient and costly situations where the same functionality is developed, deployed and maintained multiple times” (Maréchaux, 2006).

Architectural Model

Services are accessed only through a published application programming interface, better known as the API. The API, which acts as the representative of the service to other applications, services or objects, is “loosely coupled” with its underlying development and execution code. Any outside client invoking the service is not concerned with the service’s development code, which remains hidden from the client. “This abstraction of service implementation details through interfaces insulates clients from having to change their own code whenever any changes occur in the service implementation” (Khanna, 2008). In this manner, the service acts as a “black box” whose inner workings and design are completely independent from requestors. If the underlying code of the service were switched from Java to C++, the change would be completely invisible to would-be requestors of the service.
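To make the "black box" idea concrete, here is a small sketch (the interface and class names are invented for illustration): the client is written against the published interface only, so the implementation behind it can be replaced wholesale without the client changing a line.

```python
from abc import ABC, abstractmethod

class AccountService(ABC):
    """The published interface: all a client ever sees of the service."""
    @abstractmethod
    def balance(self, account_id: str) -> float:
        ...

class LegacyAccountService(AccountService):
    """Imagine this wrapping calls into an old core banking system."""
    def balance(self, account_id: str) -> float:
        return 100.0

class RewrittenAccountService(AccountService):
    """A completely different implementation honoring the same contract."""
    def balance(self, account_id: str) -> float:
        return 100.0

def client_report(service: AccountService, account_id: str) -> str:
    # Client code depends only on the interface, never the implementation.
    return f"Balance for {account_id}: {service.balance(account_id):.2f}"
```

`client_report` produces identical results whichever implementation is bound, which is the insulation Khanna describes.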

Allen (2006) describes the concept of loose coupling as “a feature of software systems that allows those systems to be linked without having knowledge of the technologies used by one another.” Loosely coupled software can be configured and combined with other software at runtime. Tightly coupled software does not offer the same integration flexibility, as its configuration is determined at design time; this design-time configuration significantly hinders reuse. In addition, loosely coupled applications are much more adaptable to unforeseen changes in business environments.
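The runtime aspect of loose coupling can be sketched with a toy service registry (the registry and the service name are invented for this illustration; a real SOA would use a discovery mechanism such as a service registry or enterprise service bus):

```python
# Toy illustration of late binding: which implementation handles a request
# is decided when the program runs, not when the caller was written.
SERVICE_REGISTRY = {}

def register(name, func):
    """Bind (or re-bind) an implementation to a service name at runtime."""
    SERVICE_REGISTRY[name] = func

def invoke(name, *args):
    """Callers know only the service name and its contract."""
    return SERVICE_REGISTRY[name](*args)

# Initial implementation registered at startup.
register("fx.convert", lambda amount, rate: amount * rate)
assert invoke("fx.convert", 100.0, 1.5) == 150.0

# A replacement implementation can be swapped in while the system runs;
# existing callers are unaffected because they bind by name.
register("fx.convert", lambda amount, rate: round(amount * rate, 2))
assert invoke("fx.convert", 100.0, 1.5) == 150.0
```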

In the early 1990s some financial firms adopted an object-oriented approach to their banking architecture. This approach is only superficially similar to a service-oriented architecture approach. In an object-oriented (OO) approach, the emphasis is on reusing objects within the source code. SOA emphasizes a “runtime reuse” philosophy in which the service itself is discoverable and reused across a network (Earley, Free & Kun, 2005). SOA also provides a solution to the lack of interoperability between legacy systems.

References:

Allen, P. (2006). Service orientation: winning strategies and best practices.

Earley, A., Free, D., & Kun, M. (2005, July 1). An SOA Approach Will Boost a Bank’s Competitiveness (ID: G00126447). Retrieved from Gartner database.

Ho, H. (2003). What is Service-Oriented Architecture? O’Reilly XML.com.

Khanna, Ayesha. (2008). Straight through processing for financial services: the complete guide.

Maréchaux, J., (2006, March 28). Combining Service-Oriented Architecture and Event-Driven Architecture using an Enterprise Service Bus. IBM developerWorks. Retrieved from http://www.ibm.com/developerworks/library/ws-soa-eda-esb/

How Timken Manages the Business Cycle

Capital Expenditures

In Peter Navarro’s book “The Well-Timed Strategy: Managing the Business Cycle for Competitive Advantage”, the professor of business at the University of California-Irvine defines the master cyclist as “A business executive who skillfully deploys a set of well-timed strategies and tactics to manage the business cycle for competitive advantage” [1]. With respect to capital expenditures, firms headed by master cyclists will increase capital expenditures during recessions in order to develop new and innovative products and be better positioned to satisfy pent-up demand once a recovery takes place. These firms will also modernize existing facilities during economic slowdowns [10].

The overall financial performance of the Timken Company was disappointing in 1998. Although the company set a new sales record that year, earnings dropped 33% compared to 1997 [2]. A combination of difficult market conditions and unusual occurrences, such as a prolonged strike at General Motors, contributed to the decline. “A nearly global economic slow down — which started last year in Thailand, spread to Japan, then to most of the rest of Asia, South America and Russia — has squashed demand for many U.S. products” [3]. The modernization of existing capacity in many countries, along with volatile currencies and a strong dollar, placed substantial downward pricing pressure on bearings worldwide [1]. Competitors in Asia found the U.S. market appealing since demand was drying up in their home markets. Consequently, imports into the United States of the products Timken manufactured increased, while exports decreased during this time period.

In this global economic slowdown for players in the steel and ball bearings industry, conventional wisdom would dictate that a company decrease its capital expenditures to better position itself. During this period, however, Timken executed strategies contrary to conventional wisdom in an attempt to manage the business cycle. In its 1998 annual report Timken states, “We made record capital investments to prepare for the future and lower costs” [4]. During the third quarter of 1998, Timken dedicated a $55 million (~$69 million in constant dollars) rolling mill and bar processing investment at its Harrison Steel Plant in Canton, Ohio [11]. Modernization expenditures were also announced for Timken’s Asheboro plant, which opened in 1994 and produces bearings used in industrial markets. “The expansion will increase the size range of bearings the Asheboro plant is able to produce, hike plant capacity and add options available to Timken’s industrial customers” [5].

Timken has a record of increasing its capital expenditures in the face of economic slowdowns or recessions, in keeping with the strategies and tactics of a master cyclist. Its last new steel-making plant prior to 1998 was the Faircrest Steel Plant in Perry Township, Ohio [6]. The plant was announced in the middle of the early 1980s recession and opened in 1985 [12]. Timken took a huge gamble and invested $450 million (~$1.1 billion in constant dollars), two-thirds of Timken’s net worth at the time, to build the only completely new alloy steel plant in the U.S. since World War II [7]. At the time, the so-called experts said the U.S. steel industry was dead and companies didn’t need to build any new plants [12]. Similarly, during the recession year of 1991, Timken boosted capital expenditures to $144.7 million, up from $120 million in 1990 [12].

In order to differentiate its products from pure commodities, Timken invested in research and development during this period of economic uncertainty. This strategy is also in keeping with the master cyclist philosophy of increasing capital expenditures to develop innovative products and new capacity in time for a recovery. While Timken’s largest research and development center is in Canton, Ohio, the company added another large facility in Bangalore, India, to focus on new product development. Timken Research has centers located in the US, Europe, Japan, Romania and India. Timken Engineering and Research India Pvt Ltd is part of the company’s “work with the sun” concept, whereby it is daytime in at least one of the company’s centers [8].

Risk Management

Firms that geographically diversify into new countries and regions can reap the benefits this hedging strategy provides against business cycle risk. “The effectiveness of geographical diversification as a hedge is rooted in the fact that the business cycles and political conditions of various countries are not perfectly correlated” [5]. The privatization of Brazil’s steel mill industry in 2001 opened the door for North American companies to do business there [9]. Timken responded by forming a joint venture with Bardella S.A. Industrias Mechanicas (Bardella) to provide industrial services to the steel and aluminum industries in Brazil. In 2001 the company also acquired Bamarec, which operated two component manufacturing facilities in France. The French facilities allowed Timken to expand its precision steel components business. Timken CEO James Griffith believed there was an opportunity to grow this business in Europe, and entering the French market provided a base from which to launch a European strategy [10]. Both of these moves, made during a recession, gave Timken an opportunity to hedge against the business cycle risks it faced in the United States.

 

[1] Navarro, Peter. The Well-Timed Strategy: Managing the Business Cycle for Competitive Advantage. New Jersey: Wharton School Publishing, 2006.

[2] The Timken Company 10K Report. 1999.

[3] Adams, David. “Canton, Ohio Steel Executive Favors Federal Reserve Rate Cut.” Akron Beacon Journal, Ohio. 13 November 1998. KRTBN Knight-Ridder Tribune Business News: Akron (Ohio) Beacon Journal.

[4] The Timken Company Annual Report. 1998.

[5] “Timken plans $20M boost for bearings.” American Metal Market, Vol. 105, No. 136, ISSN: 0002-9998. 16 July 1997.

[6] Adams, David. “Steel Bearings Maker Timken Co. Opens New Canton, Ohio Mill.” 11 August 1998.

[7] Industry Insider. “How Timken Turns Survival into Growth.” http://www.businessweek.com/magazine/content/03_14/b3827027_mz009.htm 7 April 2003.

[8] Business Line (The Hindu). “Timken Company R&D base in India.” 11 February 1999.

[9] Robertson, Scott. “Timken expects to benefit from Brazil steel privatization.” 9 April 2001. AMM.

[10] “The Curse of a Strong Dollar; Timken CEO James Griffith says his outfit could sell a lot more bearings if the greenback wasn’t ‘overvalued…on the order of 30%.’” Business Week Online. 28 November 2001.

[11] “Continuity and Change in the Growth of a Family Controlled U.S. Manufacturing Firm.” Humanities and Social Sciences Online. 16 April 2007. <http://www.h-net.org/reviews/showrev.cgi?path=16914932502209>

[12] Excerpt from Bear Stearns Industrial Internet Special. “The Wall Street Transcript – Questioning Market Leaders for Long Term Investors.” May 2001.

 

Spear Phishing

Regarding this New York Times article: “Hackers in China Attacked The Times for Last 4 Months”

Spear phishing attacks against businesses, diplomatic missions and government agencies are popular with cyber espionage networks. It takes only one person taking the wrong action to compromise an entire system, as The New York Times discovered.

In 2012, China used spear phishing emails carrying a .pdf file that exploited a Windows vulnerability to attack Tibetan activist groups. As in the NYT’s imbroglio, antivirus software did not widely recognize the threats. [1]

In a similar vein to the attacks on the NYT, targeted spear phishing was used in a very recent campaign called Operation Red October (the name a nod to the attacks’ apparent origin in a Russophone country). The malware deployed in this campaign, dubbed ‘Rocra’, is aimed at governments and research institutions in former Soviet republics and Eastern Europe.

The New York Times article states “Once they take a liking to a victim, they tend to come back. It’s not like a digital crime case where the intruders steal stuff and then they’re gone. This requires an internal vigilance model.”

It’s intriguing that the Red October attacks embody the spirit of that quote in the design of their malware:

“Red October also has a “resurrection” module embedded as a plug-in in Adobe Reader and Microsoft Office applications. This module made it possible for attackers to regain control of a system even after the malware itself was discovered and removed from the system.”

This is pretty scary stuff but ingenious nonetheless. Organizations need to take heed and make sure they are doing absolutely everything they can to combat attacks and to train users about the dangers of spear phishing.

[1] http://www.scmagazineuk.com/chinese-spears-attack-tibetan-activists/article/231923/

[2] http://www.wnd.com/2013/01/red-october-cyberattack-implodes/

 

Macroeconomic Assessment: First Quarter 2007

Here is a May 1st, 2007 throwback submission for a macroeconomics graduate class taught by a great professor, Dr. Matthew J. Higgins. The assignment was to assess the current macroeconomic environment and compare the current environment with the previous year. Per the syllabus, one of the aims of the assignment was to teach students how to “cultivate the ability to process the vast amount of macroeconomic data that is put out and shape it into an assessment.”

Looking back on the assignment, I highlighted some ominous trends in housing that obviously affected the course of the economy quite negatively (to use an understatement). I wish I had the foresight of the guys in “The Big Short”. As an aside, Merry Christmas.


Overview

Gross Domestic Product for the first quarter of 2007 increased from the fourth quarter of 2006, rising from $13,458.2 billion to $13,632.6 billion [5]. In nominal terms, the economy expanded 5.3%, but 4% inflation sapped most of that growth [1]. In real terms the increase was more modest at about 1.3%, well below an already low projection of 1.8% [4]. Considering previous growth rates of 5.6%, 2.6%, 2% and 2.5% for the four quarters of 2006, first-quarter growth was lackluster. In fact, first-quarter 2007 results were the weakest growth numbers since before the 2003 tax cuts [3] and well below many economists’ expectations [2]. However, some economists believe that the economy has hit its low point and is ready to rebound.

“The first-quarter GDP report was “probably the low point in the cycle,” said Nariman Behravesh, chief economist at consulting firm Global Insight. “It suggests that in fact we may be setting the stage for a very slight rebound this quarter.” Mr. Behravesh believes the economy will be growing at a rate of close to 3% by the end of the year” [2].
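The growth figures above can be checked with quick arithmetic. The sketch below annualizes the quarter-over-quarter change in nominal GDP and deflates it by the roughly 4% inflation rate; it is a simplification (the BEA uses chained-dollar indexes), but it lands close to the reported 5.3% nominal and 1.3% real figures.

```python
# Quick check of the reported Q1 2007 growth figures (BEA data, in billions).
# Note: simplified arithmetic; the BEA's actual chain-weighting is more involved.
gdp_q4_2006 = 13458.2
gdp_q1_2007 = 13632.6

# Annualized nominal growth: compound the quarterly change over four quarters.
nominal_growth = (gdp_q1_2007 / gdp_q4_2006) ** 4 - 1
print(f"Annualized nominal growth: {nominal_growth:.1%}")  # 5.3%

# Deflate by ~4% annualized inflation to approximate real growth.
inflation = 0.04
real_growth = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Approximate real growth: {real_growth:.1%}")  # 1.2%, near the reported 1.3%
```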

Housing

Residential investment, which is a proxy for the housing market, was the biggest drag on first-quarter results. Residential investment decreased from about $717 billion in the fourth quarter of 2006 to $687 billion, a slide of about 4.2% [5]. This decrease shaved about one percentage point off of overall GDP growth.

In the fourth quarter of 2006, residential investment lowered overall GDP growth by 1.21 percentage points. Residential investment has now fallen for six consecutive quarters. The downside to the destabilization of the housing sector is that it could force consumers and businesses to cut back sharply on spending. Former Federal Reserve Chairman Alan Greenspan estimated that the cash homeowners extract from their homes accounted for almost 3% of all consumer spending as of the third quarter of 2006 [2]. Similarly, studies have shown that a $1 increase in housing wealth increases consumption by seven cents [6]. Personal consumption is the main driver of the U.S. economy, and a significant drop in consumption would most likely have an adverse effect on GDP. Falling home prices and fewer home equity extractions could potentially curb personal consumption as well.

Interestingly, the slowing U.S. housing market is also having an effect on Latin American countries. Home construction is the main gateway industry for immigrants entering the United States [7]. These immigrants contribute a significant portion of the estimated $50 billion in cash sent from the U.S. to family members in their home countries [7]. “In a recent study of 15 Latin American economies tracked by BCP Securities of Greenwich, Conn., all but three showed better than a 90% correlation between the ebb and flow of U.S. housing starts and the swelling and shrinkage of remittances as recorded by the nations’ central banks” [7].

Trade and the Dollar

For the first quarter of 2007, exports fell 1.2%, the biggest decline in nearly four years. Imports, on the other hand, increased 2.3%. Net exports subtracted 0.5 percentage point from growth. The decline in exports was puzzling considering a 10.6% rise in the fourth quarter of 2006.

A weakened dollar and European and Asian growth have allowed exports to be a driver of the U.S. economy in recent years. “The Federal Reserve’s Trade-Weighted Dollar Index, which measures the dollar’s performance against seven currencies, fell to its lowest level this past week since the index’s inception in 1973” [8]. The dollar recently traded at $1.3647 per euro after touching $1.3679 April 29th, near the record low of $1.3681 set April 27 [7]. The dollar dropped 2.22 percent to $1.3651 in April for its biggest monthly drop since November [7]. The weakening dollar will begin to alter the flow of trade by making domestic products cheaper in foreign markets and imported products more expensive. As consumers begin to buy fewer imported products, the economy will be positioned to experience more gains from consumer spending, as more money will stay with domestic businesses.

Another benefit of the weakening dollar is the strong boost it provides domestic companies that do significant business in foreign markets. Companies such as Caterpillar, McDonald’s and General Electric reported first-quarter 2007 earnings that surprised Wall Street analysts [9]. PepsiCo Inc., for example, reported a 16% increase in first-quarter profit, more than half of which it attributed to sales outside North America. U.S.-based businesses with foreign operations will be able to increase their profits when those profits are converted back to dollars. Companies in the S&P 500 derive about 30% of their revenue from abroad, up from 22% five years ago [2]. The current weak dollar will allow companies to gain market share simply by holding their prices steady in dollar terms.

Personal Consumption and CPI

Personal consumption expenditures increased at an annualized 3.8% in the first quarter of 2007. While this was better than the overall 2006 figure of 3.2% growth, it lagged behind the fourth-quarter 2006 rate of 4.2%. The Reuters/University of Michigan Consumer Sentiment index fell from 88.4 in March of 2007 to 87.1 in late April [10]. The slowdown in consumption was attributed to price increases in certain areas. Medical-care services are up at a 5.9% annual rate over the past six months [11]. Fuels and utilities increased 1.2% and household energy increased 1.4% as well [11].

Interestingly, the CPI exhibited a major price increase in fruits and vegetables. For February of 2007, fruits and vegetables increased 4.7% as a result of the government’s efforts to develop a market for ethanol fuel. An increase in the price of corn has ramifications for the prices of many products.

A major freeze in California in the winter of 2006–07 had an impact on the price of fruit and led to drastically lower crop yields. California crops sustained about $1.3 billion in damage, with losses for citrus crops estimated at $800 million [12].

Rising gasoline prices have also contributed to a decrease in consumer confidence. Gasoline prices as of April 30, 2007 were at $3.02 a gallon and up 80 cents, or 36% since bottoming out in late January at $2.21 a gallon [13].

All of the aforementioned price increases have had an adverse effect on consumer confidence and have been a drag on personal consumption.

Government Consumption Expenditures and Gross Investment

Government consumption expenditures and gross investment lagged in the first quarter of 2007, growing at about 1% in annualized terms compared with a 3.4% increase in the fourth quarter of 2006. National defense spending was down 6.6%, as opposed to a 12.3% increase in the fourth quarter [5]. In a role reversal, nondefense spending increased after decreasing the previous quarter. State and local government expenditures stood at $1.672 trillion and grew at a 3.3% annualized rate from the previous quarter [5].

Conclusion

Overall GDP growth was disappointing with respect to the previous quarter and year. Exports, which had been a strong component of GDP in the previous year due to the weakening dollar, fell drastically. The weak dollar and a strong global economy have provided a boon to U.S. companies with foreign operations, a situation that has helped push the Dow Jones Industrial Average to record levels. Although personal consumption is still the engine that powers the U.S. economy, it has weakened somewhat with respect to fourth-quarter 2006 levels. The Reuters/University of Michigan Consumer Sentiment index has experienced a third consecutive monthly decline and reached its lowest level in seven months [2]. Increases in food, utilities, medical-care services and gasoline prices have caused consumer sentiment to wane recently. Residential investment has continued to be a drag on the economy and hobble expansion. This housing downturn has the potential to affect consumer purchases as homeowners experience a drop in home prices. Federal government expenditures were below 2006 levels, although overall government spending was buoyed by strong state and local expenditures. This continued weakness in the economy combined with rising prices could be the recipe for stagflation in the near future.

References:

[1] Nutting, Rex “ECONOMIC REPORT: U.S. Economic Growth Slows To 1.3% Pace In First Quarter” Dow Jones Business News (7 April 2007): Factiva

[2] Dougherty, Conor and Whitehouse “Mark Economy Slows But May Hold Seeds of Growth — Weak First Quarter May Signal a Bottom; Consumers Keep Buying” The Wall Street Journal (28 April 2007): Factiva

[3] “Hot Topic: Economic Ups and Downs” The Wall Street Journal (28 April 2007): Factiva

[4] “ECONOMIC REPORT: Capital Spending Bounces Back In March” Dow Jones Business News (25 April 2007) : Factiva

[5] http://www.bea.gov/national/pdf/dpga.pdf

[6] “Hot Topic: Finding Meaning in Dow 13000” The Wall Street Journal (28 April 2007): Factiva

[7] http://www.bloomberg.com/apps/news?pid=20601085&sid=akQspI2qj7dM&refer=europe

[8] Karmin, Craig “Dollar Touches a New Low Against Euro — U.S. Currency’s Slump Is Likely to Aid Profits, But Add Inflation Risk” The Wall Street Journal (28 April 2007)

[9] Bruno, Joe “U.S. firms benefiting in sales from weak dollar” Associated Press (29 April 2007)

[10] “US: Roundup of Economic Indicators through April 30” Market News International (30 April 2007) :Factiva

[11] Dougherty, Conor “Inflation jumps in U.S. as manufacturing gains” The Wall Street Journal Asia (19 March 2007): Factiva

[12] Raine, George “ECONOMIC DEEP FREEZE / January cold spell inflicts hardship on the state’s citrus workers” The San Francisco Chronicle (7 March 2007): Factiva

[13] “ECONOMIC REPORT: Gas Prices Jump 10 Cents To $3.02 “ Dow Jones Business News (30 April 2007): Factiva

 

 

What Exactly is Hadoop & MapReduce?

In a nutshell, Hadoop is an open source framework that enables the distributed processing of large amounts of data across multiple servers. At its core is a distributed file system tailored to the storage needs of big data analysis. Instead of holding all of the required data on one big, expensive machine, Hadoop offers a scalable solution: incorporate more drives and data sources as the need arises.

Having the storage capacity for big data analyses in place is instrumental, but equally important is having the means to process data from the distributed sources. This is where MapReduce comes into play.

MapReduce is a programming model introduced by Google for processing and generating large data sets on clusters of computers. This video from IBM Analytics does an excellent job of presenting a clear, concise description of what MapReduce accomplishes.
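The canonical MapReduce illustration is word counting: the map phase emits a (word, 1) pair for each word, the framework shuffles the pairs so all counts for a given word land together, and the reduce phase sums them. The minimal in-memory sketch below mimics those phases in plain Python; a real Hadoop job would distribute the map and reduce tasks across the cluster.

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the input."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts emitted for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

In Hadoop, each map task would run against a block of the distributed file system and the shuffle would move data over the network, but the programming model is exactly this simple.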

Enterprise Architecture and Best Practices

Enterprise Architecture (EA) is the overarching framework that relies on a combination of artifacts in each domain area to paint a picture of the organization’s current (and future) capabilities. Best practices are proven methodologies for implementing portions of the architecture.

“Best Practices” and “Artifacts” are two of the six core elements that are necessary for an EA approach to be considered complete. A best practice can be used as an artifact within the EA, leading to complementary effects. For example, best practices such as SWOT Analysis and the Balanced Scorecard can be used as artifacts within EA at the strategic level. The Knowledge Management Plan, which outlines how data and information are shared across the enterprise, is an artifact that resides at the Data & Information functional area of the EA3 framework.

As EA is the integration of strategy, business and technology, organizational strategic dictates can greatly influence the use or adoption of a best practice. If rapid application development, reusability or speed to market are important to the business, then the best practice of Service-Oriented Architecture (SOA) can be used within the EA to meet this end.

This anecdote from T-Mobile highlights SOA within an EA framework enabling quick development and reusability:

“For example, one T-Mobile project was to create a service that would let subscribers customize their ring sounds. The project group assumed that it would have to create most of the software from scratch. But the architecture group found code that had already been written elsewhere at T-Mobile that could be used as the basis of the development effort. The reuse reduced the development cycle time by a factor of two, says Moiin. [vice president of technical strategy for T-Mobile International]” [2].

Another example of EA driving a best practice selection is evident at Verizon. The enterprise architects at Verizon have incorporated an SOA best practice into their EA framework with the aim of building a large repository of web services that will help cut development times.

“‘We had to give incentives to developers to develop the Web services,’ says Kheradpir. ‘When they say they want to develop a certain functionality, we say: ‘You can do that, but you have to build a Web service also.’’

Though the developers grumbled at first, they eventually saw that everybody wins with a rapidly expanding repository of services. ‘This is going to be our strategy for enterprise architecture,’ says Kheradpir. ‘Now developers can go to the Web site and grab a service and start coding the higher-level application they need, rather than spending so much time at the integration and infrastructure level’”[2].

[1] Bernard, Scott A. “Linking Strategy, Business and Technology. EA3 An Introduction to Enterprise Architecture.” Bloomington, IN: Author House, 2012, Third Edition

[2] Koch, Christopher. “A New Blueprint for the Enterprise”, April 8th 2005. Retrieved from web: http://www.cio.com.au/article/30960/new_blueprint_enterprise/?pp=4

Normalization: A Database Best Practice

The practice of normalization is widely regarded as the standard methodology for logically organizing data to reduce anomalies in database management systems. Normalization involves deconstructing information into various sub-parts that are linked together in a logical way. Malaika and Nicola (2011) state, “…data normalization represents business records in computers by deconstructing the record into many parts, sometimes hundreds of parts, and reconstructing them again as necessary. Artificial keys and associated indexes are required to link the parts of a single record together.” Although there are successively more stringent forms of normalization, best practice involves decomposing information into third normal form (3NF). Subsequent higher normal forms provide protection from anomalies that most practitioners will rarely encounter.

Background

The normalization methodology was the brainchild of mathematician and IBM researcher Dr. Edgar Frank Codd, who developed the technique while working at IBM’s San Jose Research Laboratory in 1970 (IBM Archives, 2003). Early databases employed either inflexible hierarchical designs or collections of pointers to data on magnetic tapes. “While such databases could be efficient in handling the specific data and queries they were designed for, they were absolutely inflexible. New types of queries required complex reprogramming, and adding new types of data forced a total redesign of the database itself.” (IBM Archives, 2003). In addition, disk space in the early days of computing was limited and highly expensive. Dr. Codd’s seminal paper “A Relational Model of Data for Large Shared Data Banks” proposed a flexible structure of rows and columns that would help reduce the amount of disk space necessary to store information. Furthermore, this revolutionary new methodology significantly reduced data anomalies. These benefits are achieved by ensuring that each piece of data is stored on disk exactly once.

Normal Forms (1NF to 3NF):

Normalization is widely regarded as the best practice when developing a coherent, flexible database structure. Adams & Beckett (1997) state that designing a normalized database structure should be the first step taken when building a database that is meant to last. There are seven different forms of normalization; each lower form is a subset of the next higher form. Thus a database in second normal form (2NF) is also in first normal form (1NF), with additional conditions satisfied. Normalization best practice holds that databases in third normal form (3NF) should suffice for the widest range of solutions; Adams & Beckett (1997) called 3NF “adequate for most practical needs.” When Dr. Codd initially proposed the concept of normalization, 3NF was the highest form introduced (Oppel, 2011).

A database table has achieved 1NF if it does not contain any repeating groups and its attributes cannot be decomposed into smaller portions (atomicity). Most importantly, all of the data must relate to a primary key that uniquely identifies a respective row. “When you have more than one field storing the same kind of information in a single table, you have a repeating group.” (Adams & Beckett, 1997). A higher level of normalization is often needed for tables in 1NF, which are often subject to “data duplication, update performance degradation, and update integrity problems…” (Teorey, Lightstone, Nadeau & Jagadish, 2011).
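As a concrete sketch, consider a hypothetical customer table that stores phone1, phone2 and phone3 columns, a classic repeating group. Reaching 1NF means moving the phones into their own table, where each row holds one atomic value tied back to the customer’s primary key. (The table and column names below are illustrative, not drawn from the cited sources.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Unnormalized: repeating phone columns force NULLs and cap the number of
# phones a customer can have at three.
cur.execute("CREATE TABLE customer_unf (id INTEGER PRIMARY KEY, name TEXT, "
            "phone1 TEXT, phone2 TEXT, phone3 TEXT)")

# 1NF: the repeating group moves to its own table; each row holds one atomic
# value and relates back to the customer's primary key.
cur.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE customer_phone ("
            "customer_id INTEGER REFERENCES customer(id), "
            "phone TEXT, PRIMARY KEY (customer_id, phone))")

cur.execute("INSERT INTO customer VALUES (1, 'Ada')")
cur.executemany("INSERT INTO customer_phone VALUES (?, ?)",
                [(1, '555-0100'), (1, '555-0101'), (1, '555-0102')])

# Any number of phones per customer, with no empty placeholder columns.
cur.execute("SELECT COUNT(*) FROM customer_phone WHERE customer_id = 1")
print(cur.fetchone()[0])  # 3
```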

A database table has achieved 2NF if it meets the conditions of 1NF and all of the non-key fields depend on ALL of the key fields (Stephens, 2009). It is important to note that tables with only one primary key field that satisfy 1NF conditions are automatically in 2NF. In essence, 2NF helps data modelers determine whether two tables have been combined into one.
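A hypothetical order-line table makes the 2NF rule concrete. With a composite key of (order_id, product_id), a product_name column would depend on only part of the key, so it belongs in its own product table. (The schema below is an illustrative sketch, not taken from the cited sources.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A table keyed on (order_id, product_id) that also stored product_name would
# violate 2NF: product_name depends on product_id alone, part of the key, and
# the name would be repeated on every order line for that product.

# 2NF decomposition: the partially dependent field moves to its own table.
cur.execute("CREATE TABLE product (product_id INTEGER PRIMARY KEY, "
            "product_name TEXT)")
cur.execute("CREATE TABLE order_item (order_id INTEGER, "
            "product_id INTEGER REFERENCES product(product_id), "
            "quantity INTEGER, PRIMARY KEY (order_id, product_id))")

cur.execute("INSERT INTO product VALUES (10, 'Bearing')")
cur.executemany("INSERT INTO order_item VALUES (?, ?, ?)",
                [(1, 10, 5), (2, 10, 2)])

# The product name is stored exactly once, however many orders reference it.
cur.execute("SELECT product_name FROM product JOIN order_item "
            "USING (product_id) WHERE order_id = 2")
print(cur.fetchone()[0])  # Bearing
```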

A database table has achieved 3NF if it meets the conditions of 2NF and contains no transitive dependencies. “A transitive dependency is when one non‐key field’s value depends on another non‐key field’s value” (Stephens, 2009). If any field in a table depends on another non-key field, the dependent field should be placed into another table.

If, for example, field B is functionally dependent on field A (e.g. A->B), then move fields A and B to a new table, with field A designated as the key, which provides linkage to the original table.
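The hypothetical employee/department schema below sketches that decomposition: dept_name depends on dept_id, a non-key field, so the (dept_id, dept_name) pair moves to a new table keyed on dept_id. One payoff is that renaming a department becomes a single-row update, with no update anomaly. (Table and column names are illustrative only.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Transitive dependency: emp_id -> dept_id -> dept_name. Per the rule above,
# the dependent pair (dept_id, dept_name) moves to a new table keyed on
# dept_id, which links back to the employee table.
cur.execute("CREATE TABLE department (dept_id INTEGER PRIMARY KEY, "
            "dept_name TEXT)")
cur.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT, "
            "dept_id INTEGER REFERENCES department(dept_id))")

cur.execute("INSERT INTO department VALUES (7, 'Research')")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 'Codd', 7), (2, 'Date', 7)])

# Renaming the department is a single-row update; every employee row sees the
# new name through the join, so no rows can fall out of sync.
cur.execute("UPDATE department SET dept_name = 'R&D' WHERE dept_id = 7")
cur.execute("SELECT dept_name FROM employee JOIN department USING (dept_id) "
            "WHERE emp_id = 2")
print(cur.fetchone()[0])  # R&D
```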

In short, 2NF and 3NF help determine the relationship between key and non-key attributes. Kent (1983) states, “Under second and third normal forms, a non-key field must provide a fact about the key, the whole key, and nothing but the key.” A variant of this definition is typically supplemented with the remark “so help me Codd”.

Benefits:

Adams & Beckett (1997) assert that the normalization method provides benefits such as database efficiency and flexibility, the avoidance of redundant fields, and an easier-to-maintain database structure. Hoberman (2009) adds that normalization gives a modeler a stronger understanding of the business: the normalization process ensures that many questions are asked about data elements so those elements may be assigned to entities correctly. Hoberman also agrees that data quality is improved as redundancy is reduced.

Assessment

Although engaging in normalization is considered best practice, many sources advocate that normalization to 3NF is sufficient for the majority of data modeling solutions. Third normal form is deemed sufficient because the anomalies covered in higher forms occur with much less frequency. Adams & Beckett (1997) describe 3NF as “adequate for most practical needs.” Stephens (2009) affirms that many designers view 3NF as the form that combines adequate protection from recurrent data anomalies with relative modeling ease. Levels of normalization beyond 3NF can yield data models that are over-engineered, overly complicated and hard to maintain. The risk inherent in higher-form constructs is that performance can degrade to a level worse than that of less normalized designs. Hoberman (2009) asserts that, “Even though there are higher levels of normalization than 3NF, many interpret the term ‘normalized’ to mean 3NF.”

There are examples in data modeling literature where strict adherence to normalization is not advised. Fotache (2006) posited that normalization began as a highly rigorous, theoretical methodology of little practical use to real-world development. Fotache provides the example of an attribute named ADDRESS, which is typically stored by many companies as an atomic string per 1NF requirements. The ADDRESS data could instead be stored in one field (violating 1NF) if the data is only needed for better person identification or mailing purposes. Teorey, Lightstone, Nadeau, & Jagadish (2011) advise that denormalization should be considered when performance is a concern. Denormalization introduces a trade-off of increased update cost versus lower read cost, depending upon the levels of data redundancy. Date (1990) downplays strict adherence to normalization and sets a minimum requirement of 1NF. “Normalization theory is a useful aid in the process, but it is not a panacea; anyone designing a database is certainly advised to be familiar with the basic techniques of normalization…but we do not mean to suggest that the design should necessarily be based on normalization principles alone” (Date, 1990).

Conclusion

Normalization is the best practice when designing a flexible and efficient database structure. The first three normal forms can be remembered by recalling a simple mnemonic. All attributes should depend upon a key (1NF), the whole key (2NF) and nothing but the key (3NF).

The advantages of normalization are many. Normalization ensures that modelers have a strong understanding of the business, it greatly reduces data redundancies and it improves data quality. When there is less data to store on disk, updating and inserting becomes a faster process. In addition, insert, delete and update anomalies disappear when adhering to normalization techniques. “The mantra of the skilled database designer is, for each attribute, capture it once, store it once, and use that one copy everywhere” (Stephens 2009).

It is important to remember that normalization to 3NF is sufficient for the majority of data modeling solutions. Higher levels of normalization can overcomplicate a database design and have the potential to provide worse performance.

In conclusion, begin the database design process by using normalization techniques. For implementation purposes, normalize data to 3NF compliance and then consider whether data-retrieval performance necessitates denormalizing to a lower form, keeping in mind the trade-off of increased update cost versus lower read cost.

From the DAMA Guide to the Data Management Body of Knowledge:

  • First normal form (1NF): Ensures each entity has a valid primary key, that every data element depends on the primary key, that repeating groups are removed, and that each data element is atomic (not multi-valued).
  • Second normal form (2NF): Ensures each entity has the minimal primary key and that every data element depends on the complete primary key.
  • Third normal form (3NF): Ensures each entity has no hidden primary keys and that each data element depends on no data element outside the key (“the key, the whole key and nothing but the key”).

Glossary:

Delete Anomaly: “A delete anomaly is a situation where a deletion of data about one particular entity causes unintended loss of data that characterizes another entity.” (Stephens, 2009)

Denormalization: Denormalization involves reversing the process of normalization to gain faster read performance.

Insert Anomaly: “An insert anomaly is a situation where you cannot insert a new tuple into a relation because of an artificial dependency on another relation.” (Stephens, 2009)

Normalization: “Normalization is the process of organizing data in a database. This includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency.” (Microsoft Knowledge Base)

Primary Key: “Even though an entity may contain more than one candidate key, we can only select one candidate key to be the primary key for an entity. A primary key is a candidate key that has been chosen to be the unique identifier for an entity.” (Hoberman, 2009)

Update Anomaly: “An update anomaly is a situation where an update of a single data value requires multiple tuples (rows) of data to be updated.” (Stephens, 2009)

Bibliography

Adams, D., & Beckett, D. (1997). Normalization Is a Nice Theory. Foresight Technology Inc. Retrieved from http://www.4dcompanion.com/downloads/papers/normalization.pdf

DAMA International (2009). The DAMA Guide to the Data Management Body of Knowledge (1st Edition).

Fotache, M. (2006, May 1) Why Normalization Failed to Become the Ultimate Guide for Database Designers? Available at SSRN: http://ssrn.com/abstract=905060 or http://dx.doi.org/10.2139/ssrn.905060

Hoberman, S. (2009). Data modeling made simple: a practical guide for business and IT professionals, second edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=34408.

IBM Archives (2003): Edgar F. Codd. Retrieved from http://www-03.ibm.com/ibm/history/exhibits/builders/builders_codd.html

Kent, W. (1983) A Simple Guide to Five Normal Forms in Relational Database Theory. Communications of the ACM 26(2). Retrieved from http://www.bkent.net/Doc/simple5.htm

Malaika, S., & Nicola, M. (2011, December 15). Data normalization reconsidered, Part 1: The history of business records. Retrieved from http://www.ibm.com/developerworks/data/library/techarticle/dm-1112normalization/

Microsoft Knowledge Base. Article ID: 283878. Description of the database normalization basics. Retrieved from http://support.microsoft.com/kb/283878

Oppel, A. (2011). Databases demystified, 2nd edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=72521.

Stephens, Rod. (2009). Beginning database design solutions. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=29584.

Teorey, T., Lightstone, S., Nadeau, T., & Jagadish, H.V. (2011). Database modeling and design: logical design, fifth edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=41847.
