Month: December 2015

Macroeconomic Assessment: First Quarter 2007

Here is a May 1st, 2007 throwback submission for a macroeconomics graduate class taught by a great professor, Dr. Matthew J. Higgins. The assignment was to assess the current macroeconomic environment and compare the current environment with the previous year. Per the syllabus, one of the aims of the assignment was to teach students how to “cultivate the ability to process the vast amount of macroeconomic data that is put out and shape it into an assessment.”

Looking back on the assignment, I highlighted some ominous trends in housing that obviously affected the course of the economy quite negatively (to use an understatement). I wish I had the foresight of the guys in “The Big Short”. As an aside, Merry Christmas.


Overview

Gross domestic product for the first quarter of 2007 increased from the fourth quarter of 2006, rising from $13,458.2 billion to $13,632.6 billion [5]. In nominal terms, the economy expanded 5.3%, but 4% inflation sapped most of that growth [1]. In real terms the increase was more modest at about 1.3%, well below an already low projection of 1.8% [4]. Considering previous growth rates of 5.6%, 2.6%, 2% and 2.5% for the four quarters of 2006, first-quarter growth was lackluster. In fact, first-quarter 2007 results were the weakest growth numbers since before the 2003 tax cuts [3] and well below many economists’ expectations [2]. However, some economists believe that the economy has hit its low point and is ready to rebound.

“The first-quarter GDP report was ‘probably the low point in the cycle,’ said Nariman Behravesh, chief economist at consulting firm Global Insight. ‘It suggests that in fact we may be setting the stage for a very slight rebound this quarter.’ Mr. Behravesh believes the economy will be growing at a rate of close to 3% by the end of the year” [2].
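As a quick check on the growth arithmetic cited above, here is a minimal Python sketch. It uses the BEA levels quoted earlier and an assumed 4% price deflator (the approximate inflation figure mentioned in the text) to show how a quarter-over-quarter change is annualized into the headline nominal and real growth rates:

```python
# Illustrative arithmetic only: the two levels are the BEA figures cited above
# (billions of dollars, seasonally adjusted annual rate).
q4_2006 = 13458.2
q1_2007 = 13632.6

quarterly_change = q1_2007 / q4_2006 - 1              # quarter-over-quarter growth
nominal_annualized = (1 + quarterly_change) ** 4 - 1  # compound to an annual rate

deflator = 0.04                                       # assumed ~4% price increase
real_annualized = (1 + nominal_annualized) / (1 + deflator) - 1

print(f"Nominal growth (annualized): {nominal_annualized:.1%}")  # ~5.3%
print(f"Real growth (annualized):    {real_annualized:.1%}")     # ~1.2%, close to the reported 1.3%
```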

Housing

Residential investment, which is a proxy for the housing market, was the biggest drag on first-quarter results. Residential investment decreased from about $717 billion in the fourth quarter of 2006 to $687 billion, a slide of about 4.2% [5]. That decline shaved about one percentage point off of overall GDP growth.

In the fourth quarter of 2006, residential investment lowered overall GDP growth by 1.21 percentage points. Residential investment has now fallen for six consecutive quarters. The downside to the destabilization of the housing sector is that it could force consumers and businesses to cut back sharply on spending. Former Federal Reserve Chairman Alan Greenspan estimated that the cash homeowners extract from their homes accounted for almost 3% of all consumer spending as of the third quarter of 2006 [2]. Similarly, studies have shown that a $1 increase in housing wealth increases consumption by seven cents [6]. Personal consumption is the main driver of the U.S. economy, and a significant drop in consumption would most likely have an adverse effect on GDP. Falling home prices and fewer home equity extractions could curb personal consumption as well.

Interestingly, the slowing U.S. housing market is also having an effect on Latin American countries. Home construction is the main gateway industry for immigrants entering the United States [7]. These immigrants contribute a significant portion of the estimated $50 billion in cash sent from the U.S. to family members in their home countries [7]. “In a recent study of 15 Latin American economies tracked by BCP Securities of Greenwich, Conn., all but three showed better than a 90% correlation between the ebb and flow of U.S. housing starts and the swelling and shrinkage of remittances as recorded by the nations’ central banks” [7].

Trade and the Dollar

For the first quarter of 2007, exports fell 1.2%, the biggest decline in nearly four years. Imports, on the other hand, increased 2.3%. Net exports subtracted half a percentage point from growth. The decline in exports was puzzling considering a rise of 10.6% in the fourth quarter of 2006.

A weakened dollar and European and Asian growth have allowed exports to be a driver of the U.S. economy in recent years. “The Federal Reserve’s Trade-Weighted Dollar Index, which measures the dollar’s performance against seven currencies, fell to its lowest level this past week since the index’s inception in 1973” [8]. The dollar recently traded at $1.3647 per euro after touching $1.3679 on April 29, near the record low of $1.3681 set April 27 [7]. The dollar dropped 2.22 percent to $1.3651 in April for its biggest monthly drop since November [7]. The weakening dollar will begin to alter the flow of trade by making domestic products cheaper in foreign markets and imported products more expensive. As consumers buy fewer imported products, the economy will be positioned to gain more from consumer spending, since more money will stay with domestic businesses.

Another benefit of the weakening dollar is the strong boost it provides to domestic companies that do significant business in foreign markets. Companies such as Caterpillar, McDonald’s and General Electric reported first-quarter 2007 earnings that surprised Wall Street analysts [9]. PepsiCo Inc., for example, reported a 16% increase in first-quarter profit, more than half of which it attributed to sales outside North America. U.S.-based businesses with foreign operations will be able to increase their profits when those profits are converted back to dollars. Companies in the S&P 500 derive about 30% of their revenue from abroad, up from 22% five years ago [2]. The current weak dollar will allow companies to gain market share abroad simply by holding their prices steady in dollar terms.

Personal Consumption and CPI

Personal consumption expenditures increased at an annualized 3.8% rate in the first quarter of 2007. While this was better than the overall 2006 growth rate of 3.2%, it lagged the 4.2% growth of the fourth quarter of 2006. The Reuters/University of Michigan Consumer Sentiment Index was down from 88.4 in March 2007 to 87.1 in late April [10]. The slowdown in consumption was attributed to price increases in certain areas. Medical-care services are up at a 5.9% annual rate over the past six months [11]. Fuels and utilities increased 1.2% and household energy increased 1.4% as well [11].

Interestingly, the CPI exhibited a major price increase in fruits and vegetables. In February 2007, fruit and vegetable prices increased 4.7% as a result of the government’s efforts to develop a market for ethanol fuel. An increase in the price of corn has ramifications for the prices of many other products.

A major freeze in California in the winter of 2006-07 had an impact on the price of fruit and led to drastically lower crop yields. California crops sustained about $1.3 billion in damage overall, and losses for citrus crops were estimated at $800 million [12].

Rising gasoline prices have also contributed to a decrease in consumer confidence. Gasoline prices as of April 30, 2007, were at $3.02 a gallon, up 80 cents, or 36%, since bottoming out in late January at $2.21 a gallon [13].

All of the aforementioned price increases have had an adverse effect on consumer confidence and have been a drag on personal consumption.

Government Consumption Expenditures and Gross Investment

Government consumption expenditures and gross investment lagged in the first quarter of 2007. Overall, this category grew at about a 1% annualized rate, compared with a 3.4% increase in the fourth quarter of 2006. National defense spending was down 6.6%, as opposed to a 12.3% increase in the fourth quarter [5]. In a role reversal, nondefense spending increased after decreasing in the previous quarter. State and local government expenditures stood at $1.672 trillion and grew at a 3.3% annualized rate from the previous quarter [5].

Conclusion

Overall GDP growth was disappointing with respect to the previous quarter and year. Exports, which had been a strong component of GDP in the previous year due to the weakening dollar, fell drastically. The weak dollar and a strong global economy have provided a boon to U.S. companies with foreign operations, a situation that has helped push the Dow Jones Industrial Average to record levels. Although personal consumption is still the engine that powers the U.S. economy, it has weakened somewhat with respect to fourth-quarter 2006 levels. The Reuters/University of Michigan Consumer Sentiment Index has experienced a third consecutive monthly decline and reached its lowest level in seven months [2]. Increases in the prices of food, utilities, medical-care services and gasoline have caused consumer sentiment to wane recently. Residential investment has continued to be a drag on the economy and hobble expansion. This housing downturn has the potential to affect consumer purchases as homeowners experience a drop in home prices. Federal government expenditures were below 2006 levels, although overall government spending was buoyed by strong state and local expenditures. This continued weakness in the economy, combined with rising prices, could be the recipe for stagflation in the near future.

References:

[1] Nutting, Rex “ECONOMIC REPORT: U.S. Economic Growth Slows To 1.3% Pace In First Quarter” Dow Jones Business News (7 April 2007): Factiva

[2] Dougherty, Conor and Whitehouse, Mark “Economy Slows But May Hold Seeds of Growth — Weak First Quarter May Signal a Bottom; Consumers Keep Buying” The Wall Street Journal (28 April 2007): Factiva

[3] “Hot Topic: Economic Ups and Downs” The Wall Street Journal (28 April 2007): Factiva

[4] “ECONOMIC REPORT: Capital Spending Bounces Back In March” Dow Jones Business News (25 April 2007): Factiva

[5] http://www.bea.gov/national/pdf/dpga.pdf

[6] “Hot Topic: Finding Meaning in Dow 13000” The Wall Street Journal (28 April 2007): Factiva

[7] http://www.bloomberg.com/apps/news?pid=20601085&sid=akQspI2qj7dM&refer=europe

[8] Karmin, Craig “Dollar Touches a New Low Against Euro — U.S. Currency’s Slump Is Likely to Aid Profits, But Add Inflation Risk” The Wall Street Journal (28 April 2007)

[9] Bruno, Joe “U.S. firms benefiting in sales from weak dollar” Associated Press (29 April 2007)

[10] “US: Roundup of Economic Indicators through April 30” Market News International (30 April 2007): Factiva

[11] Dougherty, Conor “Inflation jumps in U.S. as manufacturing gains” The Wall Street Journal Asia (19 March 2007): Factiva

[12] Raine, George “ECONOMIC DEEP FREEZE / January cold spell inflicts hardship on the state’s citrus workers” The San Francisco Chronicle (7 March 2007): Factiva

[13] “ECONOMIC REPORT: Gas Prices Jump 10 Cents To $3.02” Dow Jones Business News (30 April 2007): Factiva


What Exactly Are Hadoop & MapReduce?

In a nutshell, Hadoop is an open source framework that enables the distributed processing of large amounts of data across multiple servers. At its core is a distributed file system tailored to the storage needs of big data analysis. Instead of holding all of the required data on one big, expensive machine, Hadoop offers a scalable solution: incorporate more drives and data sources as the need arises.

Having the storage capacity for big data analyses in place is instrumental, but equally important is having the means to process data from the distributed sources. This is where MapReduce comes into play.

MapReduce is a programming model introduced by Google for processing and generating large data sets on clusters of computers. This video from IBM Analytics does an excellent job of presenting a clear, concise description of what MapReduce accomplishes.
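To make the model concrete, here is a minimal, single-process Python sketch of the classic word-count example. It only illustrates the map, shuffle and reduce phases; a real Hadoop cluster distributes these phases across many machines, and production jobs are typically written in Java.

```python
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs: one ("word", 1) pair per word in the document.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group all values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Combine all values for a key into a single result (here, a count).
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```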

Enterprise Architecture and Best Practices

Enterprise Architecture (EA) is the overarching framework that relies on a combination of artifacts in each domain area to paint a picture of the organization’s current (and future) capabilities. Best practices are proven methodologies for implementing portions of the architecture.

“Best practices” and “artifacts” are two of the six core elements that are necessary for an EA approach to be considered complete. A best practice can be used as an artifact within the EA, leading to complementary effects. For example, best practices such as SWOT analysis and the Balanced Scorecard can be used as artifacts within EA at the strategic level. The Knowledge Management Plan, which outlines how data and information are shared across the enterprise, is an artifact that resides in the Data & Information functional area of the EA³ (EA cubed) framework.

As EA is the integration of strategy, business and technology, organizational strategic dictates can greatly influence the use or adoption of a best practice. If rapid application development, reusability or speed to market are important to the business, then the best practice of Service-Oriented Architecture (SOA) can be used within the EA to meet this end.

This case from T-Mobile highlights SOA within an EA framework enabling quick development and reusability:

“For example, one T-Mobile project was to create a service that would let subscribers customize their ring sounds. The project group assumed that it would have to create most of the software from scratch. But the architecture group found code that had already been written elsewhere at T-Mobile that could be used as the basis of the development effort. The reuse reduced the development cycle time by a factor of two, says Moiin. [vice president of technical strategy for T-Mobile International]” [2].

Another example of EA driving a best practice selection is evident at Verizon. The enterprise architects at Verizon have incorporated an SOA best practice into their EA framework with the aim of building a large repository of web services that will help cut development times.

“We had to give incentives to developers to develop the Web services,” says Kheradpir. “When they say they want to develop a certain functionality, we say: ‘You can do that, but you have to build a Web service also.’”

Though the developers grumbled at first, they eventually saw that everybody wins with a rapidly expanding repository of services. “This is going to be our strategy for enterprise architecture,” says Kheradpir. “Now developers can go to the Web site and grab a service and start coding the higher-level application they need, rather than spending so much time at the integration and infrastructure level” [2].

[1] Bernard, Scott A. “Linking Strategy, Business and Technology. EA3 An Introduction to Enterprise Architecture.” Bloomington, IN: Author House, 2012, Third Edition

[2] Koch, Christopher. “A New Blueprint for the Enterprise”, April 8th 2005. Retrieved from web: http://www.cio.com.au/article/30960/new_blueprint_enterprise/?pp=4

Normalization: A Database Best Practice

The practice of normalization is widely regarded as the standard methodology for logically organizing data to reduce anomalies in database management systems. Normalization involves deconstructing information into various sub-parts that are linked together in a logical way. Malaika and Nicola (2011) state, “…data normalization represents business records in computers by deconstructing the record into many parts, sometimes hundreds of parts, and reconstructing them again as necessary. Artificial keys and associated indexes are required to link the parts of a single record together.” Although there are successively stringent forms of normalization, best practice involves decomposing information into third normal form (3NF). Higher normal forms protect against anomalies that most practitioners will rarely encounter.

Background

The normalization methodology was the brainchild of mathematician and IBM researcher Dr. Edgar Frank Codd, who developed the technique while working at IBM’s San Jose Research Laboratory in 1970 (IBM Archives, 2003). Early databases employed either inflexible hierarchical designs or collections of pointers to data on magnetic tapes. “While such databases could be efficient in handling the specific data and queries they were designed for, they were absolutely inflexible. New types of queries required complex reprogramming, and adding new types of data forced a total redesign of the database itself” (IBM Archives, 2003). In addition, disk space in the early days of computing was limited and highly expensive. Dr. Codd’s seminal paper, “A Relational Model of Data for Large Shared Data Banks,” proposed a flexible structure of rows and columns that would help reduce the amount of disk space necessary to store information. This revolutionary new methodology also significantly reduced data anomalies. Both benefits are achieved by ensuring that each piece of data is stored on disk exactly once.

Normal Forms (1NF to 3NF): 

Normalization is widely regarded as the best practice when developing a coherent, flexible database structure. Adams & Beckett (1997) state that designing a normalized database structure should be the first step taken when building a database that is meant to last. There are seven different normal forms; each lower form’s requirements are a subset of the next higher form’s. Thus a database in second normal form (2NF) also satisfies first normal form (1NF), plus additional conditions. Normalization best practice holds that databases in third normal form (3NF) should suffice for the widest range of solutions; Adams & Beckett (1997) called 3NF “adequate for most practical needs.” When Dr. Codd initially proposed the concept of normalization, 3NF was the highest form introduced (Oppel, 2011).

A database table has achieved 1NF if it does not contain any repeating groups and its attributes cannot be decomposed into smaller portions (atomicity). Most importantly, all of the data must relate to a primary key that uniquely identifies a respective row. “When you have more than one field storing the same kind of information in a single table, you have a repeating group” (Adams & Beckett, 1997). A higher level of normalization is often needed for tables in 1NF, which are often subject to “data duplication, update performance degradation, and update integrity problems…” (Teorey, Lightstone, Nadeau & Jagadish, 2011).
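As a concrete illustration (with hypothetical table and column names), the sketch below contrasts a design that stores a repeating group of phone numbers against a 1NF design that moves the repeating group into its own table of atomic values:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Violates 1NF: the same kind of fact (a phone number) is stored
    -- in several columns of a single row.
    CREATE TABLE customer_flat (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT,
        phone1      TEXT,
        phone2      TEXT,
        phone3      TEXT
    );

    -- 1NF: the repeating group moves to its own table, one atomic value
    -- per row, with every row identified by a primary key.
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT
    );
    CREATE TABLE customer_phone (
        customer_id INTEGER REFERENCES customer(customer_id),
        phone       TEXT,
        PRIMARY KEY (customer_id, phone)
    );
""")
```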

A database table has achieved 2NF if it meets the conditions of 1NF and all of the non-key fields depend on ALL of the key fields (Stephens, 2009). It is important to note that a table whose primary key consists of a single column is automatically in 2NF once it satisfies 1NF. In essence, 2NF helps data modelers determine whether two tables have been combined into one.
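A hypothetical illustration of the rule: in the first table below, product_name depends only on product_id, which is just part of the composite (order_id, product_id) key, so the table is not in 2NF. Splitting it leaves only fields that depend on the whole key. The table and column names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Not in 2NF: product_name depends on product_id alone,
    -- not on the full (order_id, product_id) key.
    CREATE TABLE order_item_flat (
        order_id     INTEGER,
        product_id   INTEGER,
        product_name TEXT,
        quantity     INTEGER,
        PRIMARY KEY (order_id, product_id)
    );

    -- 2NF: facts about the product live with the product key.
    CREATE TABLE product (
        product_id   INTEGER PRIMARY KEY,
        product_name TEXT
    );
    CREATE TABLE order_item (
        order_id   INTEGER,
        product_id INTEGER REFERENCES product(product_id),
        quantity   INTEGER,
        PRIMARY KEY (order_id, product_id)
    );
""")
```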

A database table has achieved 3NF if it meets the conditions of 2NF and contains no transitive dependencies. “A transitive dependency is when one non‐key field’s value depends on another non‐key field’s value” (Stephens, 2009). If any field in the table depends on another non-key field, then the dependent field should be placed into another table.

If, for example, field B is functionally dependent on field A (A → B), then move fields A and B to a new table, with field A designated as the key that links back to the original table.
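A hypothetical sketch of that decomposition: dept_name depends on dept_id, a non-key field, so both move to a new department table keyed by dept_id, which the original table references. The names are invented for the example.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Not in 3NF: dept_name depends on dept_id (a non-key field),
    -- not directly on the key emp_id.
    CREATE TABLE employee_flat (
        emp_id    INTEGER PRIMARY KEY,
        emp_name  TEXT,
        dept_id   INTEGER,
        dept_name TEXT
    );

    -- 3NF: the transitive dependency is removed; dept_id in employee
    -- links back to the new department table.
    CREATE TABLE department (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT
    );
    CREATE TABLE employee (
        emp_id   INTEGER PRIMARY KEY,
        emp_name TEXT,
        dept_id  INTEGER REFERENCES department(dept_id)
    );
""")
```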

In short, 2NF and 3NF govern the relationship between key and non-key attributes. Kent (1983) states, “Under second and third normal forms, a non-key field must provide a fact about the key, just the whole key, and nothing but the key.” A variant of this definition is typically supplemented with the remark “so help me Codd.”

Benefits:

Adams & Beckett (1997) assert that the normalization method provides benefits such as database efficiency and flexibility, the avoidance of redundant fields, and an easier-to-maintain database structure. Hoberman (2009) adds that normalization gives a modeler a stronger understanding of the business: the normalization process ensures that many questions are asked about data elements so those elements may be assigned to entities correctly. Hoberman also agrees that data quality improves as redundancy is reduced.

Assessment

Although engaging in normalization is considered best practice, many sources advocate that normalization to 3NF is sufficient for the majority of data modeling solutions. Third normal form is deemed sufficient because the anomalies covered by higher forms occur much less frequently. Adams & Beckett (1997) describe 3NF as “adequate for most practical needs.” Stephens (2009) affirms that many designers view 3NF as the form that combines adequate protection from recurrent data anomalies with relative modeling ease. Levels of normalization beyond 3NF can yield data models that are over-engineered, overly complicated and hard to maintain. The risk inherent in higher-form constructs is that performance can degrade to a level worse than that of less normalized designs. Hoberman (2009) asserts that, “Even though there are higher levels of normalization than 3NF, many interpret the term ‘normalized’ to mean 3NF.”

There are examples in the data modeling literature where strict adherence to normalization is not advised. Fotache (2006) posited that normalization began as a highly rigorous, theoretical methodology that was of little practical use in real-world development. Fotache provides the example of an attribute named ADDRESS, which strict 1NF atomicity would require decomposing into sub-parts, but which many companies store as a single string. The ADDRESS data can reasonably be stored in one field (violating 1NF) if it is only needed for better person identification or for mailing purposes. Teorey, Lightstone, Nadeau & Jagadish (2011) advise that denormalization should be considered when performance considerations are in play; it introduces a trade-off of increased update cost versus lower read cost, depending on the level of data redundancy. Date (1990) downplays strict adherence to normalization and sets a minimum requirement of 1NF: “Normalization theory is a useful aid in the process, but it is not a panacea; anyone designing a database is certainly advised to be familiar with the basic techniques of normalization…but we do not mean to suggest that the design should necessarily be based on normalization principles alone” (Date, 1990).

Conclusion

Normalization is the best practice when designing a flexible and efficient database structure. The first three normal forms can be remembered by recalling a simple mnemonic. All attributes should depend upon a key (1NF), the whole key (2NF) and nothing but the key (3NF).

The advantages of normalization are many. Normalization ensures that modelers have a strong understanding of the business, it greatly reduces data redundancy and it improves data quality. When there is less data to store on disk, updates and inserts become faster. In addition, insert, delete and update anomalies disappear when normalization techniques are followed. “The mantra of the skilled database designer is, for each attribute, capture it once, store it once, and use that one copy everywhere” (Stephens, 2009).

It is important to remember that normalization to 3NF is sufficient for the majority of data modeling solutions. Higher levels of normalization can overcomplicate a database design and have the potential to provide worse performance.

In conclusion, begin the database design process by using normalization techniques. For implementation purposes, normalize data to 3NF compliance and then consider whether data retrieval performance necessitates denormalizing to a lower form. Keep in mind the trade-off: denormalization lowers read cost at the price of increased update cost and greater data redundancy.

Glossary:

Delete Anomaly: “A delete anomaly is a situation where a deletion of data about one particular entity causes unintended loss of data that characterizes another entity.” (Stephens, 2009)

Denormalization: Denormalization involves reversing the process of normalization to gain faster read performance.

Insert Anomaly: “An insert anomaly is a situation where you cannot insert a new tuple into a relation because of an artificial dependency on another relation.” (Stephens, 2009)

Normalization: “Normalization is the process of organizing data in a database. This includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency.” (Microsoft Knowledge Base)

Primary Key: “Even though an entity may contain more than one candidate key, we can only select one candidate key to be the primary key for an entity. A primary key is a candidate key that has been chosen to be the unique identifier for an entity.” (Hoberman, 2009)

Update Anomaly: “An update anomaly is a situation where an update of a single data value requires multiple tuples (rows) of data to be updated.” (Stephens, 2009)

Bibliography

Adams, D., & Beckett, D. (1997). Normalization Is a Nice Theory. Foresight Technology Inc. Retrieved from http://www.4dcompanion.com/downloads/papers/normalization.pdf

Fotache, M. (2006, May 1) Why Normalization Failed to Become the Ultimate Guide for Database Designers? Available at SSRN: http://ssrn.com/abstract=905060 or http://dx.doi.org/10.2139/ssrn.905060

Hoberman, S. (2009). Data modeling made simple: a practical guide for business and it professionals, second edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=34408.

IBM Archives (2003): Edgar F. Codd. Retrieved from http://www-03.ibm.com/ibm/history/exhibits/builders/builders_codd.html

Kent, W. (1983) A Simple Guide to Five Normal Forms in Relational Database Theory. Communications of the ACM 26(2). Retrieved from http://www.bkent.net/Doc/simple5.htm

Malaika, S., & Nicola, M. (2011, December 15). Data normalization reconsidered, Part 1: The history of business records. Retrieved from http://www.ibm.com/developerworks/data/library/techarticle/dm-1112normalization/

Microsoft Knowledge Base. Article ID: 283878. Description of the database normalization basics. Retrieved from http://support.microsoft.com/kb/283878

Oppel, A. (2011). Databases demystified, 2nd edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=72521.

Stephens, Rod. (2009). Beginning database design solutions. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=29584.

Teorey, T., Lightstone, S., Nadeau, T., & Jagadish, H. V. (2011). Database modeling and design: logical design, fifth edition. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=41847.



Interpret Big Data with Caution

One caution with respect to employing big data (or any other data-reliant technique) is the tendency of practitioners to be overconfident in understanding the inputs and interpreting the outputs. It sounds like a fundamental concept, but if one does not have a strong understanding of what the incoming data signifies, then the interpreted output is highly likely to be biased. As with sampling, if the sample is not representative of the larger whole, bias will occur. Example:

“Consider Boston’s Street Bump smartphone app, which uses a phone’s accelerometer to detect potholes without the need for city workers to patrol the streets. As citizens of Boston download the app and drive around, their phones automatically notify City Hall of the need to repair the road surface.” [1]

One would be tempted to conclude that the data feeding into the app reasonably represents all of the potholes in the city. In actuality, the data fed into the app represented those potholes in areas inhabited by young, affluent smartphone owners. The city runs the risk of neglecting areas where older, less affluent residents without smartphones experience potholes, and those areas make up a significant portion of the city.

“As we move into an era in which personal devices are seen as proxies for public needs, we run the risk that already existing inequities will be further entrenched. Thus, with every big data set, we need to ask which people are excluded. Which places are less visible? What happens if you live in the shadow of big data sets?” [2]
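To make the sampling concern concrete, here is a tiny Python simulation with entirely hypothetical numbers (the districts, adoption rates and pothole counts are all invented): even when the true need is identical, the collected data mirrors who owns the devices, not where the potholes are.

```python
import random

random.seed(42)

# Hypothetical inputs: the true pothole count is the same in both districts,
# but smartphone (and therefore app) adoption is not.
true_potholes = {"affluent_district": 100, "lower_income_district": 100}
app_adoption = {"affluent_district": 0.60, "lower_income_district": 0.15}

# Each pothole is "seen" by City Hall only if a passing app user reports it.
reported = {
    district: sum(random.random() < app_adoption[district] for _ in range(count))
    for district, count in true_potholes.items()
}

print(reported)
# e.g. {'affluent_district': 63, 'lower_income_district': 13}: the collected
# data suggests far more need in the affluent district, even though the true
# counts are identical.
```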

[1] http://www.ft.com/intl/cms/s/2/21a6e7d8-b479-11e3-a09a-00144feabdc0.html

[2] https://hbr.org/2013/04/the-hidden-biases-in-big-data/

The Importance of Standards in Large Complex Organizations

Large, complex organizations require standards for developing strategic goals, business processes and technology solutions because agreed-upon guiding principles support organizational efficiency. Without standards in these spaces, there is an increased potential for duplicated functionality, as localized business units implement processes and technologies with disregard for the enterprise as a whole. When the enterprise as a whole considers items such as applications, tools and vendors, standards help ensure seamless integration.

Examples of enterprise standards might be:

  • “The acquisition of purchased applications is preferred to the development of custom applications.” [1]
  • An example of an infrastructure-driven principle might be, “The technology architecture must strive to reduce application integration complexity” [1]
  • “Open industry standards are preferred to proprietary standards.” [1]

These hypothetical top-down standards can help settle the “battle of best practices” [2]. Standards also provide direction and can guide line-of-business staff’s decision-making so that the entire organization is aligned with its strategic goals. Furthermore, minimizing diversity in technology solutions typically lowers complexity, which in turn helps to lower associated costs.

Enterprise Architecture standards also have a place in facilitating “post merger/acquisition streamlining and new product/service rollouts” [2]. Successfully and rapidly integrating new acquisitions onto a common framework can be vital to success.

Here are two banking examples where post-merger system integration problems arose:

  • In 1996, when Wells Fargo & Co. bought First Interstate Bancorp, thousands of customers left because of missing records, long lines at branches, and administrative snarls. In 1997, Wells Fargo announced a $150 million write-off to cover lost deposits due to its faulty IT integration. [3]
  • In 1998, First Union Corp. and CoreStates Financial Corp. merged to form one of the largest U.S. banks. In 1999, First Union saw its stock price tumble on news of lower-than-expected earnings resulting from customer attrition. The problems arose from First Union’s ill-fated attempt at rapidly moving customers to a new branch-banking system. [3]

Having robust Enterprise Architecture standards in place may have helped to reduce the risk of failure when integrating these dissimilar entities.

References:

[1] Fournier, R., “Build for Business Innovation – Flexible, Standardized Enterprise Architectures Will Produce Several IT Benefits.” Information Week, November 1, 1999. Retrieved from Factiva database.

[2] Bernard, Scott A. “Linking Strategy, Business and Technology. EA3 An Introduction to Enterprise Architecture.” Bloomington, IN: Author House, 2012, Third Edition

[3] Popovich, Steven G., “Meeting the Pressures to Accelerate IT Integration”, Mergers & Acquisitions: The Dealmakers Journal, December 1, 2001. Retrieved from Factiva database.

ADP: Mergers & Acquisitions

This small sample is taken from a group paper drafted for a strategic management class (MGT 6125) back in the spring of 2007. The class was taught by the esteemed professor of strategy, Dr. Frank Rothaermel. As part of the class, our project group interviewed an executive from ADP, wrote a strategic analysis of the company and then presented our findings to the company (complete with Q&A). The experience was enriching but contributed to the number of late nights that semester. It should be noted that in 2014 ADP spun off its Dealer Services business into a separate company now called CDK Global.



Mergers & Acquisitions

ADP has become highly successful in its strategy of pursuing growth via horizontal integration. Although current CEO Gary Butler has maintained that ADP has no interest in “large, dilutive, multiyear acquisitions” [1], the company will acquire smaller industry competitors. Acquisitions give ADP the opportunity to grow inorganically, increase its product offerings, acquire technology and reduce the level of rivalry in its industry.

A perfect execution of this strategy can be seen in its January 2003 acquisition of ProBusiness. ProBusiness was a much smaller, California-based provider of payroll and human resources services. Before the acquisition, ProBusiness cited eight large new competitors with an interest in expanding their roles in the payroll business [2]. Among those eight competitors were notable companies such as International Business Machines Corp. (IBM), Microsoft Corp. and Electronic Data Systems Corp. (EDS) [2]. True to form, ADP reacted by acquiring ProBusiness. The acquisition effectively prevented large competitors from acquiring approximately 600 payroll clients in the larger-employer space and reduced future competition.

The ProBusiness acquisition was also a boon to the company in that it gave ADP advanced payroll processing technology. ProBusiness utilized PC-based payroll processing as opposed to ADP’s more mainframe-based technology [2].

A key acquisition for ADP in terms of increasing its global footprint was the December 2005 acquisition of U.K.-based Kerridge Computer. This particular acquisition was significant in that it increased ADP’s Dealer Management Services (DMS) presence from fourteen countries to over forty-one [3].

ADP, along with its main DMS competitors in the European market, Reynolds & Reynolds and SAP, began to realize the significant growth opportunities for the region. The European DMS market, unlike the United States market, is much more fragmented, which means there are more opportunities for a larger player to standardize product offerings [4]. In 2003 the European Union lifted rules that had previously banned franchised car dealers from selling rival brands [4]. Demand for pan-European systems to help multi-brand dealers manage their stores, sometimes in multiple countries and in various languages, increased dramatically [4]. ADP shrewdly realized that many smaller DMS providers would not be able to meet this demand and acquired Kerridge to bolster its position.

Strategically, the Kerridge acquisition has given ADP a first-mover advantage over its main competitors with respect to China. New vehicle sales growth in Asia is expected to reach 25.3% by the year 2011 [5]. By becoming a first mover in the region, ADP will have the opportunity to lock customers into its technology, as it currently has a 96% client retention rate [5]. ADP will also have the opportunity to create high switching costs for its customers, making it difficult for rivals to lure those customers away.

Other recent acquisitions by ADP include “Taxware, which brings tax-content and compliance solutions to the table; VirtualEdge, which offers tools for recruiting; Employease, which develops Web-based HR and benefits applications; and Mintax, which provides tools for corporate tax incentives” [6]. All of these acquisitions represent small, fast-growing companies with complementary products and services. These products and services can be incorporated into ADP’s vast distribution network and provide potential bundling or cross-selling opportunities with ADP’s current offerings.

Endnotes:

[1] Simon, Ellen “ADP chief looks at expansion, not acquisition” ASSOCIATED PRESS (7 March 2007)

[2] Gelfand, Andrew “ADP Seen Holding Off Competition With ProBusiness Buy” Dow Jones News Service (6 January 2003) :Factiva

[3] Kisiel, Ralph “Reynolds, ADP aim for European growth” Automotive News Europe Volume 11; Number 3 (6 February 2006) :Factiva

[4] Jackson, Kathy “Dealer software market is booming; Multibranding boosts demand for dealership management programs” Automotive News Europe, Volume 11; Number 21 (16 October 2006) :Factiva

[5] ADP Annual Financial Analyst Conference Call Presentation. March 22, 2007

[6] Taulli, Tom “ADP Tries Getting Even Better” Motley Fool (November 2, 2006) Accessed 4/14/07 <http://www.fool.com/investing/general/2006/11/02/adp-tries-getting-even-better.aspx>

An Analysis of the Pharmaceutical Industry

Innovation

Innovation in the pharmaceutical industry is driven by the protections provided by intellectual property patents. In theory, these patents temporarily grant innovative pharmaceutical companies a monopoly over their new product or innovation. This monopoly in turn allows pharmaceutical companies to recoup their research and development costs. Companies are able to reap high margins on their products and thus obtain a competitive advantage in the industry. Pharmaceutical companies that participate in the innovation sector are always under intense pressure to find and develop their next “blockbuster drug” quickly, cheaply and effectively. Companies that are not able to innovate and bring a successful new product to market will experience financial difficulties. For example, pharmaceutical giant Pfizer Inc. has not produced a blockbuster drug since its introduction of the erectile dysfunction drug Viagra in 1998. As a consequence of not successfully bringing a new product to market since that time, the company faces a diminishing revenue stream in the future. Generic competitors will begin to steal market share from Pfizer as it loses its exclusive monopoly on highly profitable drugs.

While intellectual property protection can encourage innovation in the pharmaceutical industry, it can also hamper it. Pharmaceutical companies can easily obtain new patents by making minor changes to existing products, regardless of whether the drugs offer significant new therapeutic advantages [1]. Typically, minor changes are made to blockbuster drugs before their patent expiration date in order to extend the lifespan of the monopoly. In essence, it is much faster and cheaper to receive new patent protections on these “me-too” products than to innovate and bring “new molecular entity” drugs to the marketplace.

Alternatives to the current monopolistic patent protection have been proposed to help foster more meaningful innovation. A drug prize system has been bandied about in policy discussions for some time. Under a prize system, the U.S. government would pay out a cash prize for any new drug that successfully passes FDA approval. The drug would then be put into the public domain, thus creating a free-market system for drugs. “Generic drugs (‘generic’ being another way of saying the rights are in the public domain) already do a wonderful job of keeping prices down. While the price of patent-protected drugs has been rising at roughly twice the rate of inflation, the real price of generics has fallen in four of the last five years.” [2] One benefit of the prize system is that marketing costs would be significantly reduced, as companies would already have been paid an upfront prize for their efforts. Large awards could be provided to the pharmaceutical innovator who produces breakthrough results, while smaller rewards would be paid for incremental “me-too” innovations.

A prize system could also help increase the number of innovations targeting critical diseases that affect the third world. Government intervention can help direct R&D resources to areas where the private market has failed to concentrate. Breakthroughs are badly needed in the fight against AIDS and in the search for a malaria vaccine. Advanced industrialized nations could pool their resources and create a large bounty for any drug that provides a real breakthrough in these areas. The desired end result would be enhanced access to better medicines for the third world. Obviously the prize system would have some drawbacks. One major drawback would be the duplication of effort among firms competing for the biggest prizes. It is possible that concentrating R&D resources on the highest prizes would encourage a majority of companies to compete to win those races, even more so than under the current patent system.

Research and Development

Pharmaceutical companies are heavily dependent upon acquiring patents and innovating in order to compete in the industry. As a consequence, massive amounts of funding are needed for research and development in order to keep product pipelines full. The Tufts Center for the Study of Drug Development estimated the average cost of developing a prescription drug at $802 million in 2001, up from $231 million in 1987 [3]. This estimate includes the costs of failures as well as the opportunity costs of incurring R&D expenditures before earning any returns.

A price/R&D tradeoff exists in the pharmaceutical industry. If pharmaceutical companies lower prices for drugs in the marketplace, it is reasonable to expect that their profits will fall. As profits fall, R&D spending is affected, which in turn lowers the number of new drugs coming into the marketplace. Fewer drugs in the marketplace could potentially lead to future generations of less healthy people.

The pharmaceutical industry is one of the most profitable industries in the U.S. economy, so the companies that develop new drugs are quite content with the current structure in which R&D costs are high. Of course, this means that prices for consumers will remain high. Consumers, on the other hand, need access to new drug products for illness prevention or treatment. Consumers would prefer that prices for new drugs fall while innovation increases. Given the current nature of the pharmaceutical industry, this outcome will remain a paradox.

EndNotes:

[1] http://oversight.house.gov/Documents/20061219094529-73424.pdf

[2] http://www.forbes.com/2006/04/15/drug-patents-prizes_cx_sw_06slate_0418drugpatents.html

[3] http://csdd.tufts.edu/NewsEvents/RecentNews.asp?newsid=6
