More Than You Want to Know About Wal-Mart’s Technology Strategy Part 2

This article is a continuation of my earlier analysis (Part 1 here, continued at Part 3) where I waded into Wal-Mart’s strategies for technology infrastructure and IT capability & staffing. Whether you love or hate Wal-Mart, few would dispute that historically the organization has been highly innovative, effective and efficient. In this second part of my three-part series I will broach the company’s strategies for information risk and security, stakeholder requirements and project return on investment.

Wal-Mart: Strategy for Information Risk & Security:

Wal-Mart operates a massive information system infrastructure that has been called the largest private computer system in the country. As such, the company must be strategic in implementing proper information security protocols and vigilant in reacting to attempted compromises of its confidential information. Any compromise of sensitive customer information could lead to significant expense in compensating affected parties and in updating systems, processes and procedures to restore customer confidence. This scenario is especially relevant because Wal-Mart’s extensive point-of-sale system, from a black hat hacker’s perspective, registers a veritable treasure trove of customer debit, credit and gift card information.

In order to mitigate the aforementioned risks, Wal-Mart complies with the Payment Card Industry Data Security Standard (PCI DSS). PCI DSS offers “compliance guidelines and standards with regard to our (Wal-Mart’s) security surrounding the physical and electronic storage, processing and transmission of individual cardholder data” (Wal-Mart Stores Inc., 2016). Operational components of PCI DSS include maintaining a secure network via firewalls to protect sensitive data, encrypting cardholder data transmitted across public networks, regularly updating anti-virus software, and tracking and monitoring all access to network resources and cardholder data (PCI Security Standards Council, 2016). Former CIO Turner has stated, “Necessity is the mother of invention, and we’ve invested a lot of knowledge and capital in intrusion detection and playing as much offense as we can to make sure that we’re protecting our company. Personally, every day I spend time on security” (Lundberg, 2002).
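PCI DSS also constrains how cardholder data may be displayed. As a minimal illustration (a hypothetical helper, not Wal-Mart’s actual code), masking a primary account number so that only the last four digits remain visible might look like:

```python
def mask_pan(pan: str) -> str:
    """Mask a primary account number (PAN), keeping only the last four digits.

    PCI DSS display rules generally permit showing at most the first six and
    last four digits; this conservative sketch keeps only the last four.
    """
    digits = pan.replace(" ", "").replace("-", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # ************1111
```

Real systems would pair display masking like this with encryption of stored data and strict access logging, as the standard requires.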

From a disaster recovery perspective, Wal-Mart maintains redundant primary and secondary information systems, physically separated from one another, to mitigate the risks of operational downtime and significant loss of information. In 2005, the company was lauded for its disaster recovery and business continuity efforts in the wake of Hurricane Katrina: it stood up satellite links for its retail centers, enabling them to communicate with headquarters despite the loss of phone lines and internet connectivity (Worthen, 2005). Wal-Mart also maintains an Emergency Operations Center (EOC), established in the wake of the September 11, 2001 terror attacks. The central EOC at headquarters in Arkansas works in concert with decentralized EOCs at the division level. During Hurricane Sandy, the organization successfully moved generators across state lines to reopen stores and restore systems operability in a timely manner (PricewaterhouseCoopers, 2013).

Wal-Mart: Strategy for Stakeholder Requirements, Testing & Training/Support:

Wal-Mart’s immense size gives it considerable influence over its supplier stakeholders. Typically, suppliers reside in an inferior position (Wal-Mart can end the supplier relationship or demand sub-optimal concessions), which enables the retailing behemoth to dictate industry-wide changes in how suppliers and merchants interact. This unbalanced power relationship allows the company to micromanage its supply chain partners’ business processes and the information technology projects that support them. When the power balance is closer to equal footing, Wal-Mart is willing to work collaboratively with a supplier.

Case in point is the lauded cooperation between Procter & Gamble and Wal-Mart in the late 1980s to implement Retail-Link, a joint business process and technology systems project undertaken for mutually beneficial gains. Wal-Mart’s in-store point-of-sale data acted as a pull to automatically trigger manufacturing orders to P&G when stocks were low (Wailgum, 2007). When the concept proved successful, Wal-Mart dictated to 2,000 supplier stakeholders that they must update their information systems to integrate with Retail-Link. The integration and information sharing were a boon to Wal-Mart’s suppliers, providing predictable volumes and constantly humming factories, but the takeaway is that Wal-Mart mandated the terms to stakeholders based upon its asymmetrically favorable power position.
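The pull-based replenishment idea behind Retail-Link can be sketched in a few lines. The SKU names, thresholds and quantities below are purely illustrative, not Wal-Mart’s actual logic:

```python
def replenishment_orders(on_hand, reorder_point, order_qty):
    """Return supplier order quantities for SKUs at or below their reorder point."""
    return {sku: order_qty[sku]
            for sku, qty in on_hand.items()
            if qty <= reorder_point[sku]}

# Point-of-sale sell-through has drawn detergent stock below its reorder point,
# so an order to the supplier is triggered automatically.
on_hand       = {"detergent": 12, "shampoo": 40}
reorder_point = {"detergent": 20, "shampoo": 15}
order_qty     = {"detergent": 100, "shampoo": 50}

print(replenishment_orders(on_hand, reorder_point, order_qty))  # {'detergent': 100}
```

The essential point is that the supplier sees demand signals directly from store sales rather than waiting for a purchase order, which is what made the vendor-managed inventory concept possible.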

In some cases, Wal-Mart’s technical project mandates to suppliers did not yield mutually beneficial Return on Investment (ROI). An example is the much-publicized initiative in the mid-2000s to have suppliers adopt RFID, through which Wal-Mart sought to increase its inventory visibility in warehouses and stores. In this case, Wal-Mart did not adequately consider stakeholder technology implementation concerns before issuing its mandate. One supplier is on record stating that the consumer packaged goods industry was not the best early adopter for RFID and that the small margins and project complexities didn’t offer compelling ROI (Wailgum, 2007). The only ROI that could be established from a supplier standpoint was continuing to do business with Wal-Mart while investing the bare minimum in the upgrades required to implement RFID. A Gartner analyst estimated that RFID implementation would cost smaller companies between $100,000 and $300,000, while larger manufacturers could face investments of up to $20 million (Network World, 2008).

Once a critical mass of important supplier stakeholders decided that their operating costs were being negatively impacted, Wal-Mart backed down; only when the power dynamic shifted from Wal-Mart toward the supplier network did the company walk back its mandate.

From a development standpoint, Wal-Mart traditionally used the more structured Systems Development Lifecycle (SDLC) methodology, and all systems within the company require testing and validation. According to former CIO Turner, “In any development effort, our [IS] people are expected to get out and do the function before they do the system specification, design or change analysis. The key there is to do the function, not just observe it. So we actually insert them into the business roles. As a result, they come back with a lot more empathy and a whole lot better understanding and vision of where we need to go and how we need to proceed” (Lundberg, 2002). Turner also eschewed testing systems in low-volume stores or with the easiest customers.

Recently, in its more cutting-edge Silicon Valley-based development division (@WalmartLabs), the company has adopted an Agile development methodology. Agile allows the group to react faster to changing market conditions than the much slower SDLC approach. This is necessary in a cut-throat marketplace where competitors such as Amazon have been using Scrum for over a decade (King, 2014).

Wal-Mart: Project ROI and Key Success Measures:

Despite the less-than-successful analysis and grasp of intended project benefits related to its RFID initiative, Wal-Mart relies heavily on ROI as a measure of project success. Cost control is a major driver of IT decisions, so a reliance on ROI is a sensible approach. Former CIO Turner has stated that 33% of development projects are canceled before they are completed and that 56% of completed projects experience budget overruns of 189%. “One of the problems is that a lot of companies don’t require an ROI except for major purchases. ‘At Wal-Mart, everything has to pay its way, even infrastructure [investments]. A lot of people say you can’t cost-justify infrastructure, but you can. There is a way. You have to make ROI the center of what you’re about, to begin to pay your way’” (Power, 1998). At Wal-Mart every technology implementation is assigned a payback analysis, and the projected savings must be incorporated into the business plan. A quarterly report on each project is shared at the executive level to ensure that business unit profit and loss statements reflect the investment value that was initially calculated. The mentality at Wal-Mart is to turn information technology from a traditional cost center into a profit center.
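The payback discipline described above amounts to simple arithmetic. The project figures in this sketch are hypothetical, not actual Wal-Mart numbers:

```python
def payback_period_years(investment: float, annual_savings: float) -> float:
    """Years of savings needed to recover the up-front investment."""
    return investment / annual_savings

def simple_roi(investment: float, total_benefit: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost."""
    return (total_benefit - investment) / investment

# Hypothetical project: $2M up front, $800K per year in savings over 5 years.
investment, annual_savings, years = 2_000_000, 800_000, 5

print(payback_period_years(investment, annual_savings))  # 2.5
print(simple_roi(investment, annual_savings * years))    # 1.0 (i.e., 100% ROI)
```

The point of putting numbers like these into writing, per Turner, is that the affected business unit must then either acknowledge the savings in its plan or dispute them with IT.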

Additionally, the centralized information technology group at Wal-Mart does not saddle its divisions with a chargeback funding method. The company takes a holistic, enterprise-wide view in determining which projects make sense for the company. Wal-Mart can be said to employ the corporate budget funding method, in which IT managers have considerable control over the entire IT budget. When it’s time to implement a project, divisions with the largest budgets are treated the same as divisions where resources are scarcer. As of 2004, the organization lacked an IT steering committee, which sped up the project selection process (Sullivan, 2004). The drawback to this funding approach is that IT competes with all other budgeted items for funds (Pearlson, Galletta & Saunders, 2016).

Project completion dates in the organization’s nomenclature are referred to as “end dates.” All projects are tracked against their end dates, and problem projects are scrutinized when they fall behind schedule. When new systems are deployed, it is not uncommon for high-level management to solicit feedback from the line employees using the system. When necessary, personnel are replaced on project teams in order to increase project effectiveness (Lundberg, 2002).

To be continued in Part 3 where I address these three areas:

  • Strategy for Data Acquisition and Impact on Business Processes
  • Strategy for Social Media/Web Presence
  • Strategy for Organizational Change Management, Project Strategy and Complexity

If you’re interested in Business Intelligence & Tableau check out my videos here: Anthony B. Smoak

References:

King, R. (October 2014). Wal-Mart Becomes Agile But Finds Some Limits. Dow Jones Institutional News. Retrieved from Factiva

Lundberg, A. (July 1, 2002). Wal-Mart: IT Inside the World’s Biggest Company. CIO Magazine. Retrieved from http://www.cio.com/article/2440726/it-organization/wal-mart–it-inside-the-world-s-biggest-company.html?page=2

Network World. (September 2008). Wal-Mart’s RFID revolution a tough sell; Even for the world’s biggest retailer, championing an unproven technology with no clear ROI has been difficult. Retrieved June 13, 2016 from Factiva

PCI Security Standards Council. (2016). Maintaining Payment Security. Retrieved from https://www.pcisecuritystandards.org/pci_security/maintaining_payment_security

PricewaterhouseCoopers. (September, 2013). Interview with Mark Cooper. Walmart takes collaborative approach to disaster recovery. Retrieved from http://www.pwc.com/gx/en/industries/capital-projects-infrastructure/disaster-resilience/walmart-disaster-response-strategy.html

Power, D. (June 1998). Wal-Mart: Technology Payback Is Imperative. Supermarket News. Retrieved from Factiva

Pearlson, K., Galletta, D., & Saunders, C. (2016). Managing and Using Information Systems: A Strategic Approach (6th ed., Binder Ready Version)

Sullivan, L. (September 24, 2004). Wal-Mart’s Way: Heavyweight retailer looks inward to stay innovative in business technology. Retrieved 6/17/16 from http://www.informationweek.com/wal-marts-way/d/d-id/1027448?

Wailgum, T. (October 2007). How Wal-Mart Lost Its Technology Edge. Retrieved from http://www.cio.com/article/2437953/strategy/how-wal-mart-lost-its-technology-edge.html

Wal-Mart Stores, Inc. (January 31, 2016). Form 10-K. Retrieved from https://www.sec.gov/Archives/edgar/data/104169/000010416915000011/wmtform10-kx13115.htm

Worthen, B. (November 1, 2005). How Wal-Mart Beat Feds to New Orleans. CIO Magazine. Retrieved from http://www.cio.com/article/2448237/supply-chain-management/how-wal-mart-beat-feds-to-new-orleans.html

The IT Department Needs To Market Its Value or Suffer the Consequences

This article is also published on LinkedIn.

By now it’s an all too common cliché that the IT department does not garner the respect it deserves from its counterpart areas of the business. This perceived respect deficiency can manifest itself in the lack of upfront involvement in business strategy (we’ll call you when it breaks), unreasonable timelines (do it yesterday), rampant budget cuts and layoffs (do more with less) and/or limited technical promotional tracks (promotions are for business areas only).

IT pros tend to believe that if they’re adding value, delivering difficult solutions within reasonable timeframes and providing it all in a cost-efficient manner, the recognition and gratitude will follow. Typical IT and knowledge worker responsibilities fall under the high-level categories of “keep things running” (you’re doing a great job so we don’t notice) or “attend to our technical emergency” (drop what you’re doing).

It’s fair to say that there is a perception gap between the true value and the perceived value of what IT brings to the table. Anecdotally, there certainly seems to be a disconnect between the perceived lack of difficulty in business asks and the actual difficulty in delivering solutions. This perception gap can occur not only between IT and the “business” but also between the non-technical IT manager and the technical rank and file.

In this era of automation, outsourcing and job instability, there is an element of danger in one’s contributions going unnoticed, underappreciated and/or misunderstood. Within IT, leaders and the rank and file need to overcome their stereotypical introverted nature and do a better job of internally marketing their value to the organization. IT rank and file need to better market their value to their managers, and in turn the IT department collectively needs to better market its value to other areas of the business.

Perception matters, but IT must deliver the goods as well. If the business misperceives the actual work that the IT department provides and equates it to commoditized functions such as “fix the printers” or “print the reports,” then morale dips and the IT department can expect to compete with external third parties (vendors, consulting firms, outsourcing outfits) who do a much better job of finding the ear of influential higher-ups and convincing these decision-makers of their value.

I once worked on an extremely complex report automation initiative that required assistance from ETL developers, architects, report developers and business unit team members. The purpose was to gather information from disparate source systems, perform complex ETL on the data, then store it in a data mart for downstream reporting. Ultimately the project met its objective of automating several reports, which in turn saved the business a week’s worth of manual Excel report creation. After phase 1 completion, one thanks I received was genuine gratitude from the business analyst whose job I made easier. The other was “where’s phase 2, this shouldn’t be that hard” from the business manager whose technology knowledge was limited to cutting and pasting into Excel.

Ideally my team should have better marketed the value of this win and helped the business partner understand the appropriate timeline (given the extreme complexity), instead of just being glad to move forward after solving a difficult problem for the business.

I believe Dan Roberts accurately paraphrases the knowledge worker’s stance in the “Marketing IT’s Value” chapter of his book Unleashing the Power of IT.

“’What does marketing have to do with IT? Why do I need to change my image? I’m already a good developer!’ Because marketing is simply not in IT’s comfort zone, they revert to what is more natural for them, which is doing their technical job and leaving the job of marketing to someone else, which reinforces the image that ‘IT is just a bunch of techies.’”

The IT department needs to promote better awareness of its value before it is shut out of strategic planning meetings, its budget gets cut, project timelines start to shrink and morale starts to dip. IT workers need to promote the value they bring to the table by touting their wins and remaining up to date on education, training and certifications as necessary. At-will employment works both ways: if the technical staff feels stagnant, undervalued and underappreciated, there is always a better situation around the corner, especially considering the technical skills shortage in today’s marketplace.

“It’s not about hype and empty promises; it’s about creating an awareness of IT’s value. It’s about changing client perceptions by presenting a clear, consistent message about the value of IT. After all, if you don’t market yourself, someone else will, and you might not like the image you end up with” [1].

References:

[1] Colisto, N. R. (2012). The CIO Playbook: Strategies and Best Practices for IT Leaders to Deliver Value

[2] Roberts, D. (2014). Unleashing the Power of IT: Bringing People, Business, and Technology Together (2nd ed.)

 

More Than You Want to Know About Wal-Mart’s Technology Strategy Part 1

Wal-Mart has long been associated with innovations in its home-grown information technology systems, which in turn have exerted tremendous influence on its business strategy of everyday low prices. The company was a pioneer in bar code scanning and in analyzing point-of-sale information housed in its massive data warehouses. Wal-Mart launched its own satellite network in the mid-1980s, which had profound impacts on its supply chain management process. Strategic systems such as Retail-Link, spearheaded by industry luminary Kevin Turner, enabled data integration and sharing between Wal-Mart and its suppliers, as well as the concept of vendor-managed inventory. However, not every technology project in which the company invests significant resources turns to gold; Wal-Mart encountered missteps with its RFID initiative. Despite that effort’s less-than-stellar ROI and supplier adoption rate, it demonstrated the willingness of Wal-Mart’s technology organization to push the envelope and exert tremendous changes on business processes, not only within Wal-Mart but throughout the industry.

Storm clouds are on the horizon as consumer preferences shift from “big-box” brick and mortar stores to online retail platforms such as Amazon. To counter Amazon’s online dominance, the company must continue to invest in its digital know-how. Adding new capabilities to its online presence and refreshing its digital properties will be required to keep pace with a shifting industry dynamic.

Wal-Mart: Strategy for Technology Infrastructure:

Wal-Mart’s architectural philosophy can be characterized by twin sentiments: “build rather than buy” (the organization has historically believed its information systems provide a competitive advantage over other industry players) and innovation. Recently, as consumer preferences have shifted away from “big-box” brick and mortar stores to the convenience of online “e-tail,” competitors such as Amazon and Target have begun to erode Wal-Mart’s retail dominance. In response, Wal-Mart has been allocating resources to digital capabilities that will allow the organization to compete effectively and align more closely with consumer shopping preferences.

Wal-Mart’s information technology strategy has long favored an internal “build rather than buy” approach, which has spawned innovative business strategies. Wal-Mart prefers to build strategic systems in house, allowing the company to gain competitive advantages. Retailers are known to prefer home-grown systems, and Wal-Mart’s immense size has traditionally been a hindrance to running off-the-shelf packages (Wailgum, 2007). Globally, the company runs a heavily modified version of IBM’s aging point-of-sale (POS) supermarket application at all of its checkouts, with the exception of Japan (Zetter, 2009). This in-house systems approach has been a source of competitive advantage: “Wal-Mart was a pioneer in applying information and communications technology to support decision making and promote efficiency and customer responsiveness within each of its business functions and between functions” (Ustundag, 2013).

The advantage of an in-house strategic system is tight alignment between the company’s business strategy and the finished solution. Another advantage over off-the-shelf packages is the ability to keep proprietary business process and systems knowledge out of the hands of competitors; a third-party developer would have no qualms about advertising a system in use at Wal-Mart and then selling it to competitors. These advantages must be weighed against the downsides, namely the higher cost of development and the internal staffing required for new development and ongoing maintenance.

Recently, as Wal-Mart tries to use its geographic reach and existing retail infrastructure to compete with Amazon, it is ramping up its cloud-based technology assets. In keeping with its “build rather than buy” approach, the company built its own data centers and developed supporting cloud-based commerce applications using open source tools. “‘We took back control of the technology and largely built it ourselves,’ explained Jeremy King, chief technology officer for global e-commerce at Walmart” (Lohr, 2015). Additionally, as of 2015, the company is in the middle of an IT systems overhaul called Pangaea that “includes a hybrid cloud platform and search technology” (Nash, 2015). King, in keeping with the Wal-Mart approach, has stated, “Most people don’t replace entire systems in one shot, especially with from-scratch development…but given how rapidly this place is changing, we didn’t have time to screw around” (Nash, 2015).

Wal-Mart: Strategy for IT Capability & Staffing:

Wal-Mart is not a technology company, but it is a company in which technology is a key enabler of business strategies. Since technology has been a crucial component of the organization’s competitive advantage, its IT governance archetype can be characterized as an IT duopoly. The IT duopoly arrangement allows technology executives and business unit leaders to collaborate on technology projects and decisions. Kevin Turner, who was a Wal-Mart vice president for application development, a former CIO and the current CEO of Citadel Securities, has stated that technology payback figures for Wal-Mart initiatives are put into writing, “which in turn requires the affected business units to acknowledge savings and work them into their business plan — or dispute the savings and work with the IT department toward a resolution” (Power, 1998). “‘Do the [business units] always agree with us? No. Will they work with us? Yes. If they don’t, we won’t do anything more for them in the future. And I’ll tell you, that works,’ said Turner” (Power, 1998).

Traditionally, Wal-Mart’s IT staff have backgrounds in other, non-technology areas of the business. The company looks to promote staff out of the IT department, which allows a technological “cross-pollination” of knowledge across the organization. When Wal-Mart is developing new systems, it dispatches its top engineers to perform “regular” operations jobs so they can gain firsthand working knowledge of the challenges that line employees face (Boyer, 2003).

As Wal-Mart has looked to withstand new online retail challenges from chief competitors Amazon and Target, its technology staffing mix and organizational structure have had to adapt. Former CEO Mike Duke sought to combine the organization’s stores, information technology assets and logistics expertise into one channel in order to drive growth (Buvat, Khadikar, & KVJ, 2015). Wal-Mart was cognizant that it could not realistically expect the technical staff it needed to compete with Amazon to relocate to Bentonville, Arkansas. Therefore, in 2010 Wal-Mart reorganized and consolidated its worldwide e-commerce staff into a new global division located in Silicon Valley, California. Historically, the company has favored a centralized information systems structure coupled with an in-house development approach. Former Wal-Mart CIO Turner has stated, “What we’ve come up with is a model of decentralized decisions but centralized systems and controls. We will have a common system and a common platform, but we have to allow a great deal of flexibility in our systems so that the people in those local markets can do their job in the best, most effective way” (Lundberg, 2002).

The new Global E-Commerce initiative is in keeping with that philosophy, as the division’s key responsibilities include “running Walmart’s ten websites worldwide, building and testing cutting-edge technology at @WalmartLabs, and building Walmart’s eCommerce capabilities” (Buvat et al., 2015). Additionally, in order to bolster its e-commerce staff, Wal-Mart has purchased 14 companies primarily to gain access to engineering personnel. As a result of this emphasis on e-commerce talent, e-commerce sales grew from $4.9 billion to $12.2 billion between 2011 and 2014, an increase of nearly 150% (Buvat et al., 2015).

To be continued in Part 2 and Part 3 where I address additional areas such as:

  • Strategy for Information Risk & Security
  • Strategy for Stakeholder Requirements, Testing & Training/Support
  • Project ROI and Key Success Measures
  • Strategy for Data Acquisition and Impact on Business Processes
  • Strategy for Social Media/Web Presence
  • Strategy for Organizational Change Management, Project Strategy and Complexity

Also check out The Definitive Walmart E-Commerce and Digital Strategy Post and The Definitive Walmart Technology Review to see how Walmart is ramping up to compete with Amazon.

Finally check out Costco’s Underinvestment in Technology Leaves it Vulnerable to Disruption to learn how Costco currently competes and how the company should compete.

If you’re interested in Business Intelligence & Tableau check out my videos here: Anthony B. Smoak or on Facebook.

References:

Boyer, J. (February, 2003). Technology Helps Stores Order Only As Much As They’ll Sell. Albany Times Union. Retrieved from Factiva

Buvat, J., Khadikar, A., KVJ, S. (2015). Walmart: Where Digital Meets Physical. Capgemini Consulting. Retrieved from https://www.capgemini-consulting.com/walmart-where-digital-meets-physical

Lohr, S. (October 2015). Walmart Takes Aim at ‘Cloud Lock-in’ Retrieved from http://bits.blogs.nytimes.com/2015/10/14/walmart-takes-aim-at-cloud-lock-in/

Lundberg, A. (July 1, 2002). Wal-Mart: IT Inside the World’s Biggest Company. CIO Magazine. Retrieved from http://www.cio.com/article/2440726/it-organization/wal-mart–it-inside-the-world-s-biggest-company.html?page=2

Nash, K. (October 2015). Wal-Mart to Pour $2 Billion into E-Commerce Over Next Two Years. Dow Jones & Company, Inc. Retrieved from Factiva

Power, D. (June, 1998). WAL-MART: TECHNOLOGY PAYBACK IS IMPERATIVE. Supermarket News. Retrieved from Factiva

Ustundag, A. (Ed.). (2013). The Value of RFID: Benefits vs. Costs. [Books24x7 version] Available from http://common.books24x7.com.libezproxy2.syr.edu/toc.aspx?bookid=49466.

Wailgum, T. (October 2007). How Wal-Mart Lost Its Technology Edge. Retrieved from http://www.cio.com/article/2437953/strategy/how-wal-mart-lost-its-technology-edge.html

Zetter, K. (October 13, 2009). Big-Box Breach: The Inside Story of Wal-Mart’s Hacker Attack. Wired. Retrieved from https://www.wired.com/2009/10/walmart-hack/

Consumer Financial Protection Bureau Infographic: Complaints Analysis

Background

As a data and visualization endeavor, I put together an infographic that highlights some product complaints analyses I performed using publicly available Consumer Financial Protection Bureau data.

In case you are unfamiliar with the CFPB, it is an organization created in 2010 as a result of the financial calamity that gripped the nation during the Great Recession. The CFPB’s mission is to write and enforce rules for financial institutions, examine both bank and non-bank financial institutions, monitor and report on markets, and collect and track consumer complaints.

The bureau’s website hosts a consumer complaint database containing complaints that consumers have filed against financial institutions. In the bureau’s words:

Each week we send thousands of consumers’ complaints about financial products and services to companies for response. Those complaints are published here after the company responds or after 15 days, whichever comes first. By adding their voice, consumers help improve the financial marketplace.

Process

I downloaded the complaint database from the CFPB’s website and decided to concentrate on selected bank complaints from the many financial institutions present in the database. I settled on self-defined “National” and “Regional” categories and then analyzed the percentage of complaints across three product spaces (Mortgages, Bank Accounts & Credit Cards).

I felt a percentage approach would be more useful than merely listing a total count of complaints. The national banks category consists of four nationally known firms: JPMorgan Chase, Wells Fargo, Bank of America and Citibank. The regional banks category consists of ten fairly large regional banks with product offerings similar to the national banks.

It’s fairly obvious that the behemoth national banks will have more mortgage complaints than the much smaller regional banks on a total count basis. The more interesting analysis is to compare the rate of mortgage complaints for the national banks to that of the regional banks (i.e., divide a specific product’s complaint total, such as mortgages, by the total complaints for all products, and calculate this percentage for the national and regional categories across all three products).

I carried out this analysis using the ggplot package in R to generate the base graphics for the infographic. Adobe Illustrator was then used to further refine the graphics into what you see below:

[Infographic: IST 719 Final Project]

I have an additional unrefined chart that is straight output from the ggplot package in R; I didn’t have enough space to include it on the infographic. This analysis is the same as that represented in the bottom quadrant of the infographic, except that it applies solely to regional banks.

The analysis consists of totaling all of a specific product’s complaints filed against a particular bank and then dividing that number by the total number of all complaints filed against that bank (e.g., total mortgage complaints filed against a bank divided by total complaints filed against the bank). I call the resulting number the Complaint Ratio.
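The base graphics were produced in R with ggplot, but the Complaint Ratio itself is easy to sketch in Python. The counts below are made-up examples, not actual CFPB figures:

```python
def complaint_ratios(counts_by_product):
    """Map each product to its share of a bank's total complaints."""
    total = sum(counts_by_product.values())
    return {product: count / total
            for product, count in counts_by_product.items()}

# Hypothetical complaint counts for a single bank.
example_bank = {"Mortgage": 150, "Bank account or service": 600, "Credit card": 150}

ratios = complaint_ratios(example_bank)
print(round(ratios["Bank account or service"], 2))  # 0.67
```

Normalizing this way lets a small regional bank be compared fairly against a national giant, since each bank’s products are measured against its own complaint volume.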

In the ggplot graph output below we can see that Regions’ “Bank Account or service” product represents about 67% of all complaints filed against Regions. On a total count basis, Regions’ overall complaint total is relatively small compared to other banks; however, the bulk of its complaints fall in the “Bank Account or service” product area.

[Chart: Regional bank data by product]

May your next bank be your best bank.

Additional Reading:

An Interesting Comparison of Bank of America to JPMorgan Chase

The National Shortage of Cyber Security Professionals

The sophistication of the techniques and tactics employed by cyber criminals has ascended to a point where the U.S. government and private industry must participate in a cyber “arms race” in order to protect their assets from malefactors. This arms race requires the talents of thousands of cyber security professionals to keep national information assets safe. Unfortunately, there is a dearth of talent available in the marketplace to meet this demand. As a result of this shortage, national cyber-defense capabilities are not keeping pace with either the number or the sophistication of attacks on the United States’ strategic information assets. In addition, current security professionals are feeling stressed by staff shortages, which can also lead to a drop in security effectiveness.

The Center for Strategic and International Studies (CSIS) is a bipartisan think tank headquartered in Washington D.C. that focuses on defense and security policies. In their report titled “A Human Capital Crisis in Cyber Security”, they highlight a “desperate shortage” of people with the skills to “design secure systems, write safe computer code, and create the ever more sophisticated tools needed to prevent, detect, mitigate and reconstitute from damage due to system failures and malicious acts” (Evans & Reeder, 2013, pg. 4). Furthermore, according to the CIA’s Clandestine Information Technology Office, there are currently one thousand security specialists in the United States who have the specialized skills to operate effectively in cyberspace; however, the United States needs about ten to thirty thousand such individuals (Evans & Reeder, 2013).

Competent cyber security specialists are needed on two fronts: the first deals with operating and maintaining the defense systems and tools already in place; the second pertains to a need for creators and designers who establish new solutions to prevent, detect and mitigate attacks. With respect to cyber professionals who can contribute on either front, organizations wrestle with the questions of “Where do we recruit these individuals and how do we retain them?” The Executive Branch has formulated a “Comprehensive National Cybersecurity Initiative”, one of whose aims is to expand cyber education. The initiative states, “we must develop a technologically-skilled and cyber-savvy workforce and an effective pipeline of future employees. It will take a national strategy, similar to the effort to upgrade science and mathematics education in the 1950’s, to meet this challenge” (National Security Council, 2013, pg. 4).

CSIS also offers four elements of a strategy that aims to fill the cyber talent pipeline. These elements are paraphrased and listed below as offered by Evans & Reeder (2013, pg. 3):

  • Promote and fund the development of more rigorous curricula in our schools:
    • Several U.S. colleges, funded under the Scholarship for Service program, have been graduating security experts with advanced technical skills. The Scholarship for Service program is offered by the National Science Foundation and provides scholarships to students in cyber security under the condition that they work for the government for a period equal to the duration of the scholarship. Unfortunately, the total number of new graduates with very deep technical skills is around 200 per year.
  • Support the development and adoption of technically rigorous professional certifications that include a tough educational component and a monitored practical component:
    • Emphasize hard technical skills. Do not rely solely on written examinations as an indicator of competence.
  • Use a combination of the hiring process, the acquisition process, and training resources to raise the level of technical competence of those who build, operate, and defend governmental systems:
    • Ensure that those who are hired have the necessary skill sets to be effective. Help those that are currently employed in the security field obtain the necessary knowledge and credentials.
  • Assure there is a career path as with other disciplines, like engineering or medicine, and reward and retain those with high-level technical skills, both in the civilian workforce and in the uniformed services.

Recruiting cyber professionals with highly in-demand skills and certifications also presents special considerations and challenges. Competition for this talent creates a bidding war that may prove costly to companies. The bureaucracy and resistance-to-change mentality of typical corporations must be adjusted to accommodate the higher-than-average compensation that in-demand cyber security professionals expect. The same organizational bureaucracy presents a challenge when trying to on-board candidates quickly. Federal agencies are known for long hiring processes as individuals wait to pass security clearances. Individuals in high demand can oftentimes take a position at another, more efficient organization during a protracted wait.

In addition, true superstars may be self-taught hackers with limited credentials to demonstrate their expertise. Other “reformed” players from cyber security’s “dark side” may be the best prospects (Barr, 2012, pg. 3).

References:

Barr, J. G. (2012, November). Recruiting Cyber Security Professionals. Faulkner Information Services.

Evans, K., & Reeder, F. (2013). “A Human Capital Crisis in Cybersecurity: Technical Proficiency Matters.” A Report of the CSIS Commission on Cybersecurity for the 44th Presidency. Retrieved April 15, 2013 from http://csis.org/files/publication/101111_Evans_HumanCapital_Web.pdf

National Security Council. (2013). The Comprehensive National Cybersecurity Initiative. Retrieved April 15, 2013 from http://www.whitehouse.gov/sites/default/files/cybersecurity.pdf

Image courtesy of cjgphotography / 123RF Stock Photo

The Need For Speed: Improve SQL Query Performance with Indexing

This article is also published on LinkedIn.

How many times have you executed a SQL query against a million-plus-row table and then engaged in a protracted waiting game for your results? Unfortunately, a poor database table indexing strategy can counteract the gains of the best hardware and server architectures. The positive impact that strategically applied indexes can have on query performance should not be ignored just because one isn’t wearing a DBA hat. “You can obtain the greatest improvement in database application performance by looking first at the area of data access, including logical/physical database design, query design, and index design” (Fritchey, 2014). The basics of index design should not be treated as an esoteric art best left to DBAs.

Make use of the Covering Index

It is important that regularly used, resource-intensive queries be supported by “covering indexes”. The aim of a covering index is to “cover” the query by including all of the fields referenced in its WHERE or SELECT clauses. Babbar, Bjeletich, Mackman, Meier and Vasireddy (2004) state, “The index ‘covers’ the query, and can completely service the query without going to the base data. This is in effect a materialized view of the query. The covering index performs well because the data is in one place and in the required order.” The benefit of a properly constructed covering index is clear: the RDBMS can find all the data columns it needs in the index without referring back to the base table, which drastically improves performance. Kriegel (2011) asserts, “Not all indices are created equal — If the column for which you’ve created an index is not part of your search criteria, the index will be useless at best and detrimental at worst.”
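As a rough illustration, SQLite (used here only because it ships with Python; the article’s context is SQL Server, and the table and index names below are hypothetical) will report in its query plan when an index fully covers a query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")

# The index carries every column the query touches (customer_id in the
# WHERE clause, total in the SELECT list), so the base table is never read.
con.execute("CREATE INDEX ix_cover ON orders (customer_id, total)")

plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer_id = ?", (42,)
).fetchone()
print(plan[-1])  # the plan detail notes that a COVERING INDEX is used
```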

Apply a Clustered Index

More often than not, a table should have a clustered index applied so as to avoid expensive table scans by the query optimizer. It is advisable to create one clustered index per table, preferably on the PRIMARY KEY column. In theory, since the primary key is the unique identifier for a row, query writers will employ the primary key in searches, aiding record retrieval performance.

“When no clustered index is present to establish a storage order for the data, the storage engine will simply read through the entire table to find what it needs. A table without a clustered index is called a heap table. A heap is just an unordered stack of data with a row identifier as a pointer to the storage location. This data is not ordered or searchable except by walking through the data, row by row, in a process called a scan” (Fritchey, 2014).

However, the caveat to applying a clustered index on a transactional table is that the index must be reordered after every INSERT or UPDATE to the key, which can add substantial overhead to those processes. Dimensional or static tables that are accessed mainly for join purposes are optimal candidates for this indexing strategy.
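Not every engine exposes clustered indexes by name, but the underlying idea, storing rows physically in key order, can be sketched with SQLite’s WITHOUT ROWID tables as a rough analogue (the table here is hypothetical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A WITHOUT ROWID table keeps its rows physically ordered by the
# PRIMARY KEY, roughly analogous to a clustered index on that key.
con.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT) "
    "WITHOUT ROWID"
)
# A lookup on the key walks the ordered structure directly...
keyed = con.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM dim_product WHERE product_id = ?",
    (7,),
).fetchone()[-1]
# ...while a predicate on an unindexed column forces a full scan
# (a "heap" walk in the article's terms).
scanned = con.execute(
    "EXPLAIN QUERY PLAN SELECT product_id FROM dim_product WHERE name = ?",
    ("x",),
).fetchone()[-1]
print(keyed)    # plan detail mentions the PRIMARY KEY
print(scanned)  # plan detail shows a SCAN of the whole table
```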

Apply a Non-Clustered Index

Another consideration in regard to SQL performance tuning is to apply non-clustered indexes on foreign keys within frequently accessed tables. Babbar et al. (2004) advise, “Be sure to create an index on any foreign key. Because foreign keys are used in joins, foreign keys almost always benefit from having an index.”
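A minimal sketch of the foreign-key advice, again using SQLite with a hypothetical schema: with an index on the foreign key, the inner side of a nested-loop join becomes a seek instead of a scan. (CROSS JOIN is used because in SQLite it pins the nested-loop order.)

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id));
""")
# Index the foreign key so the join's inner lookup can seek.
con.execute("CREATE INDEX ix_orders_customer ON orders (customer_id)")

rows = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT c.name
    FROM customers c CROSS JOIN orders o ON o.customer_id = c.id
""").fetchall()
for row in rows:
    print(row[-1])  # the orders step probes through ix_orders_customer
```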

Indexing is an Art not a Science

Always remember that indexing is considered an art and not a science. Diverse real world scenarios often call for different indexing strategies. In some instances, indexing a table may not be required. If a table is small (on a per data page basis), then a full table scan will be more efficient than processing an index and then subsequently accessing the base table to locate the rest of the row data.

Conclusion

One of the biggest detriments to SQL query performance is an insufficient indexing strategy. On one hand, under-indexing can potentially cause queries to run longer than necessary due to the costly nature of table scans against unordered heaps. This scenario must be counterbalanced by the tendency to over-index, which will negatively impact insert and update performance.

When possible, SQL practitioners and DBAs should collaborate to understand query performance as a whole, especially in a production environment. DBAs left to their own devices have the potential to create indexes without any knowledge of the queries that will utilize those indexes. This uncoordinated approach can render indexes inefficient on arrival. Conversely, it is equally important that SQL practitioners have a basic understanding of indexing as well. Placing “SELECT *” in every SQL query will negate the effectiveness of covering indexes and add processing overhead compared to specifically listing the subset of fields desired.

Even if you do not have administrative access to the tables that constitute your queries, approaching your DBA with a basic understanding of indexing strategies will lead to a more effective conversation.

References

Babbar, A., Bjeletich, S., Mackman, A., Meier, J., & Vasireddy, S. (May, 2004). Improving .NET Application Performance and Scalability. Retrieved from https://msdn.microsoft.com/en-us/library/ff647793.aspx

Fritchey, G. (2014). SQL Server Query Performance Tuning (4th ed.).

Kriegel, A. (2011). Discovering SQL: A Hands-On Guide for Beginners.

Andy Grove and Intel’s Move From Memory to Microprocessors

A titan of the technology industry passed away on March 21, 2016. Andy Grove was instrumental in taking a commodity product, the microchip, and making it a branded must-have hardware feature. “Intel Inside” and “Pentium” were on the minds of the majority of PC consumers during the 1990s. Under Andy Grove’s leadership, Intel was able to sustain high profitability and profit growth. With the help of a Redmond-based operating systems company, the “Wintel” standard won the format wars against Apple and IBM’s OS/2. Regarding Andy Grove’s Intel tenure, the Economist reported, “Under his leadership it increased annual revenues from $1.9 billion to more than $26 billion and made millionaires of hundreds of employees.”

For all of Andy Grove’s successes in the semiconductor market, it was not a foregone conclusion that Intel would ever make the leap into that industry. Most people of my generation who grew up in the 80s and 90s are not familiar with the fact that at the time of its founding, Intel primarily produced replacement computer memories for mainframes. Intel was first and foremost founded as a memory company.

An article by Robert A. Burgelman in the Administrative Science Quarterly highlights the processes and decision calculus of Intel executives which led the company to exit the dynamic random access memory (DRAM) market. Burgelman provides key insights regarding the transformation of Intel from a memory company into a microcomputer company.

DRAM at one point accounted for over 90% of Intel’s sales revenue. The article states that DRAM was essentially the “technology driver” on which Intel’s learning curve depended. Over time the DRAM business matured as Japanese companies involved equipment suppliers in the continuous improvement of the manufacturing process with each successive DRAM generation. Consequently, top Japanese producers were able to reach production yields up to 40% higher than those of top U.S. companies. DRAMs essentially became a commodity product.

Intel tried to maintain a competitive advantage and introduced several innovative technology design efforts with its next generation DRAM offerings. These products did not provide enough competitive advantage, thus the company lost its strategic position in the DRAM market over time. Intel declined from an 82.9% market share in 1974 to a paltry 1.3% share in 1984.

Intel’s fortuitous entry into microprocessors happened when Busicom, a Japanese calculator company, contracted Intel to develop a new chipset. Intel developed the microprocessor, but the design was owned by Busicom. Legendary Intel employee Ted Hoff had the foresight to lobby top management to buy back the design for use in non-calculator devices. The microprocessor became an important source of sales revenue for Intel, eventually displacing DRAMs as the number one business.

There continued to be a disconnect between stated corporate strategy and the activities of middle managers during the transition period. Top executives gave weak justifications for the company’s reluctance to face reality and exit the DRAM space; they were emotionally attached to the DRAM business. A middle manager stated that Intel’s decision to abandon the DRAM market was tantamount to Ford deciding to exit the car business!

The demand for Intel microprocessors led middle managers to begin allocating factory resources to heavily produce microprocessors over DRAM. Intel’s cultural rule that information power should always trump hierarchical position power gave middle managers the decision space to make production allocation decisions that overrode corporate stated goals. These actions further dissolved the strategic context of DRAMs.

“By the middle of 1984 some middle managers made the decision to adopt a new process technology which inherently favored logic [microprocessor] rather than memory advances” (Burgelman, 1994). By the end of 1984, Intel’s top management was finally forced to face business reality with respect to DRAMs. In order to regain leadership in DRAM, management faced a $100 million capital investment decision for a 1 Meg product. Top management decided against the investment, eliminating the possibility of Intel remaining in the DRAM space.

It should not be understated that Andy Grove saw a future where microprocessors would become the dominant driver of Intel’s success. He had the foresight to tell his direct reports to “make data based decisions and not to fear emotional opposition”. This was a gutsy call because the culture of Intel viewed DRAM memory as a “core technology of the company and not just a product”.

Andy Grove himself is quoted as saying, “The fact is that we had become a non-factor in DRAMs, with 2-3% market share. The DRAM business just passed us by! Yet, many people were still holding to the ‘self-evident truth’ that Intel was a memory company. One of the toughest challenges is to make people see that these self-evident truths are no longer true.”

Under Andy Grove’s leadership, Intel embarked upon a high-stakes technological paradigm shift where either complacency or botched execution could have jeopardized the very existence of the company. Rest in peace, Mr. Grove.

References:

Burgelman, Robert A (1994). Fading Memories: A Process Theory of Strategic Business Exit in Dynamic Environments. Administrative Science Quarterly. Vol. 39, No. 1 (Mar., 1994), pp. 24-56.

Protection Against Injection: The SQL Injection Attack

As we are all well aware, data is everywhere. Every organization generates and stores data, and unfortunately too many bad apples are willing to exploit application weaknesses. A very popular technique used by hackers of all hats to compromise data confidentiality and integrity is the SQL injection attack. “In terms of the attack methods used by hackers, SQL injection remains the number one exploit, accounting for nearly one-fifth of all vulnerabilities used by hackers” (Howarth, 2010). Don’t believe the hype? Visit the SQL Injection Hall of Fame.

Not everyone is a DBA or a security expert but if you care about data, you need to have a basic understanding of how this attack can be used to potentially compromise your web exposed data. In 2009 infamous hacker Albert Gonzalez was indicted by grand juries in Massachusetts and New York for stealing data from companies such as Dave & Buster’s Holdings, TJ Maxx, BJ’s Wholesale Club, OfficeMax, Barnes & Noble and The Sports Authority by using SQL injection attacks. All of these attacks were enabled due to poorly coded web application software (Vijayan, 2009). He masterminded “the combined credit card theft and subsequent reselling of more than 170 million card and ATM numbers from 2005 through 2007—the biggest such fraud in history” (Wikipedia, Albert Gonzalez). As an aside, Mr. Gonzalez is serving 20 years in prison for his crimes.

In short, a SQL injection is a malicious hacking method used to compromise the security of a SQL database. Invalid parameters are entered into a user input field on a website, and that input is submitted to a web application database server for execution. A successful exploit may allow the hacker to remotely shell into the server and take control, or simply extract sensitive information through a manipulated SQL SELECT statement. The attacker may then be able to issue further SQL commands and escalate privileges to read, modify or even delete information at will.
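A minimal sketch of the technique, using Python’s sqlite3 and a hypothetical users table, shows how a concatenated input subverts the WHERE clause:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 's3cret')")

# Attacker-controlled input closes the quoted literal and appends a
# tautology, turning the WHERE clause into: name = 'anything' OR '1'='1'
user_input = "anything' OR '1'='1"
query = "SELECT * FROM users WHERE name = '" + user_input + "'"  # never do this
rows = con.execute(query).fetchall()
print(rows)  # → [('alice', 's3cret')] — every row leaks
```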

A popular method to test the vulnerability of a site is to place a single quote character, ‘, into the query string of a URL (Krutz & Vines, 2008). The desired response is an error message that contains an ODBC (Open Database Connectivity) reference. ODBC is a standard database access protocol used to interact with applications regardless of the underlying database management system. Krutz and Vines (2008) offer this example of a typical ODBC error message:

Microsoft OLE DB Provider for ODBC Drivers error ‘80040e14’
[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near the
keyword ‘and’. /wasc.asp, line 68

An error message like this contains a wealth of information that an ill-intentioned hacker can use to exploit an insecure system. It would be in the best interests of a secure application to return a custom generic error response. Furthermore, it is not necessary to be an experienced hacker to take advantage of this exploit; there are automated SQL injection tools available that can make carrying out this attack fairly simple for someone with a script kiddie level of understanding.

There are ways to protect against SQL injection attacks; the most obvious is to apply input validation. Rejecting unreasonably long inputs may prevent exploitation of a buffer overflow scenario. Programmers sometimes skip validation steps due to the extra work involved; however, the extra safety margin may be worth the cost. Encrypting the database contents and limiting privileges on the accounts that execute user input queries is also ideal (Daswani, Kern, & Kesavan, 2007).

From a SQL Server perspective, here are a few best practice tips shared from Microsoft TechNet to consider for input validation:

    • Review all code that calls EXECUTE, EXEC, or sp_executesql.
    • Test the size and data type of input and enforce appropriate limits. This can help prevent deliberate buffer overruns.
    • Test the content of string variables and accept only expected values. Reject entries that contain binary data, escape sequences, and comment characters. This can help prevent script injection and can protect against some buffer overrun exploits.
    • Never build Transact-SQL statements directly from user input.
    • Use stored procedures to validate user input.
    • In multitiered environments, all data should be validated before admission to the trusted zone. Data that does not pass the validation process should be rejected and an error should be returned to the previous tier.
    • Implement multiple layers of validation. Validate input in the user interface and at all subsequent points where it crosses a trust boundary. For example, data validation in a client-side application can prevent simple script injection. However, if the next tier assumes that its input has already been validated, any malicious user who can bypass a client can have unrestricted access to a system.
    • Never concatenate user input that is not validated. String concatenation is the primary point of entry for script injection.
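Those last two bullets in practice: a parameterized query passes user input out-of-band, so the payload is compared as a literal string rather than executed as SQL. A sketch with Python’s sqlite3 and the same hypothetical users table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT)")
con.execute("INSERT INTO users VALUES ('alice')")

# The same injection payload, bound as a parameter, is treated as a
# plain string value and matches nothing.
malicious = "anything' OR '1'='1"
rows = con.execute("SELECT * FROM users WHERE name = ?", (malicious,)).fetchall()
print(rows)  # → []
```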

References
Albert Gonzalez. In Wikipedia. http://en.wikipedia.org/wiki/Albert_Gonzalez

Howarth, F. (2010). Emerging Hacker Attacks. Faulkner Information Services.

Krutz, R. L., & Vines, R. D. (2008). The CEH Prep Guide: The Comprehensive Guide to Certified Ethical Hacking.

Microsoft TechNet. SQL Injection. https://technet.microsoft.com/en-us/library/ms161953%28v=sql.105%29.aspx

Vijayan, J. (2009). “U.S. says SQL injection caused major breaches.” Computerworld, 43(26), 4-4.

Raise the Wage: The Minimum Wage’s Effect on the Restaurant Industry

Overview

The popular myth is that the typical minimum wage worker is a young high school student looking to earn pocket money rather than cover living expenses. This could not be further from the truth. According to the Department of Labor, “89 percent of those who would benefit from a federal minimum wage increase to $12 per hour are age 20 or older, and 56 percent are women” [9]. Within the past year the US has seen a number of fast food workers strike for a $15 an hour minimum wage. Would this increase be too high, too fast? Back in 2007, when the minimum wage was set to increase from $5.15 to $7.25 per hour, there was much wailing and gnashing of teeth from the restaurant industry. In 2007 I also wrote a small post which examined the possible effect of a minimum wage increase on the restaurant industry. Research studies have shown that an increase in the minimum wage would not be such a bad deal for the restaurant industry.

Labor and Prices  

One effect that the minimum wage increase will have on the industry is that labor costs will increase. Understandably, businesses have a strong desire to keep their costs low so they do not impact profit margins. Typically, labor accounts for about 30 percent of a restaurant’s overhead. This number is slightly below what is spent on food, so it represents a significant portion of restaurant operating costs. For Darden Restaurants Inc., labor makes up about 43 percent of its cost of sales [2]. Over the course of two years the federal minimum wage will increase by 40 percent. Margins in casual dining restaurants are very slim, so any increase in costs will cut into the restaurants’ bottom line. As a result of the increase in labor costs, restaurants will be forced to raise prices on their goods. For example, Darden Restaurants Inc. has adjusted prices in the past in response to state minimum wage increases. These increases were enacted so that Darden could maintain its profit margins.

Economists Daniel Aaronson and Eric French examined government-collected price data to determine the impact of minimum wage increases on prices. They found that “a 10 percent hike in the minimum wage increased restaurant prices on the whole by 0.7 percent, and prices at limited service establishments by 1.6 percent [1].”

The fast food sector of the restaurant industry provides a number of low skilled minimum wage earning jobs. Aaronson and French’s research shows that prices in this sector can expect to increase by 1.5 percent for every 10 percent increase in the minimum wage [1]. With the minimum wage expected to increase by 40 percent over the next two years, if Aaronson and French’s model holds, then prices will increase about 6 percent in some states. Most likely this would occur in states that follow the federal minimum rates and do not have their own specific state minimums. Fast food chains that do a significant amount of business in states such as Georgia, Texas, Louisiana and Tennessee, among others, could feel the full impact of price increases. For states already above the federal minimum wage, the impact could be less severe, as these states may choose not to enact further increases to their minimum wages.
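The arithmetic behind that 6 percent figure is a simple proportion:

```python
# Aaronson & French estimate: fast food prices rise about 1.5 percent
# for every 10 percent increase in the minimum wage.
pass_through_per_10_pct = 1.5      # percent
wage_increase = 40                 # percent, phased in over two years
price_increase = (wage_increase / 10) * pass_through_per_10_pct
print(price_increase)  # → 6.0 (percent)
```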

Another consequence of price increases in the fast food sector would be its disproportional effect on the poor since poorer families spend a relatively large fraction of their incomes on such goods [4].

Employment

Aaronson and French in their research have constructed a model that attempts to determine the impact of minimum wage increases on employment. “In a perfectly competitive labor market, the authors find that a 10 percent increase in the minimum wage will result in a 2.5 to 3.5 percent decrease in employment. [1]”

While conventional theory dictates that minimum wage increases lead to higher unemployment levels, a study by David Card and Alan Krueger, two economics professors at Princeton University, challenged this notion. They believe that the U.S. is far below the point where wage increases will begin to cost jobs. Card and Krueger conducted a study on minimum wage hikes, focusing specifically on New Jersey’s 1992 minimum wage hike and its effect on the fast food industry. Unemployment rates and wages were compared in New Jersey and in Pennsylvania. What they found was that “employment actually expanded in New Jersey relative to Pennsylvania, where the minimum wage was constant. [4]”

Card and Krueger repeated this study and focused on the 1996 federal minimum wage increase with respect to New Jersey and Pennsylvania. In this instance the situations were reversed as New Jersey was already above the $4.75 mandated wage and Pennsylvania was increasing its wage from $4.25. They found that greater employment growth occurred in Pennsylvania than in New Jersey. Although they doubted that Pennsylvania’s strong employment growth was caused by the minimum wage increase they believed that the wage increase at worst, did not lead to unemployment as would be expected. “There is certainly no evidence in these data to suggest that the hike in the federal minimum wage caused Pennsylvania restaurants to lower their employment relative to what it otherwise would have been in the absence of the minimum-wage increase. [5]”

Potential Benefits of Increased Minimum Wage

An increase in the minimum wage will produce some detrimental effects for the industry, but there are some benefits to be had as well. I believe that there are employee benefits as well as employer benefits. The employee benefits are those that focus on the pluses experienced by minimum wage workers, while the employer benefits are those that actually benefit the restaurants. An employee benefit would be that corporate restaurant chains would be forced to share some of their vast wealth with the people who helped create that wealth. In an economic outlook report issued by the National Restaurant Association, they stated that “The industry is heading into 2007 as an economic powerhouse. [6]” The same association has also estimated nationwide industry sales of $536.9 billion next year, up 5 percent over 2006 [6]. With sales of this magnitude, it may be feasible to require that more profit be passed on to employees, especially since workers need a minimum amount of income to survive and pay bills. At the current rate of $5.15 an hour, “today’s minimum wage workers have less buying power than minimum wage workers had half a century ago. [7]”

Restaurant employers could benefit from the minimum wage increases as well. Card and Krueger studied restaurants in Texas after the federal minimum wage was raised from $3.80 to $4.25. They concluded that job growth was faster at those restaurants which were affected by the increase [9]. Some restaurants were not affected by the increase because they already paid at or over $4.25.

A higher minimum wage could help businesses like Applebee’s and Darden reduce their worker turnover which could increase the level of worker experience and reduce overall training costs. “High employee turnover is destructive to a company because it means that the company workforce lacks experience and continuity; it means that experienced workers have to spend much of their time training new hires; it means that managers have to spend both time and money finding replacements [8]”.

Conclusion

An increase in the minimum wage will mean that restaurants will most likely be forced to increase prices on their offerings in order to compensate for the higher labor costs. But as the article mentions, if prices increase and sales are not affected then fast food restaurant franchisers could reap the benefits of higher royalties. While conventional economics hint that higher wages will lead to higher unemployment, studies by Card and Krueger show that wage increases (at least as they apply to restaurants that rely on minimum wage workers) will not necessarily lead to higher unemployment. Their studies have shown that unemployment rates have fallen in states when the minimum wage was increased. An increase in the minimum wage could also pave the way for increases in restaurant productivity and a lowering of employee turnover.

References

[1] http://www.epionline.org/studies/aaronson_06-2006.pdf

[2] “Darden sees little impact from a minimum wage hike” Reuters News. (20 Dec. 2006) Factiva.

[3] Becker, Gary and Posner, Richard. “How to Make the Poor Poorer.” The Wall Street Journal (26 Jan. 2007): pg. A11. Factiva.

[4] http://www.uvm.edu/~vlrs/doc/min_wage.htm

[5] http://www.irs.princeton.edu/pubs/pdfs/90051397.pdf

[6] “UPDATE 1-US restaurants see 2007 sales up, oppose wage hike.” Reuters News. (12 Dec. 2006) Factiva.

[7] http://www.businessforafairminimumwage.org/

[8] http://www.huppi.com/kangaroo/41More.htm

[9] http://www.dol.gov/featured/minimum-wage/mythbuster

SQL: Think in Sets not Rows

This article is also posted on LinkedIn.

Structured Query Language, better known as SQL, is regarded as the working language of relational database management systems (RDBMS). As was the case with the relational model and the concepts of normalization, the language developed as a result of IBM research in the 1970s.

“Left to their own devices, the early RDBMSs (sic) implemented a number of languages, including SEQUEL, developed by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s while working at IBM; and QUEL, the original language of Ingres. Eventually these efforts converged into a workable SQL, the Structured Query Language” (Kriegel, 2011).

For information professionals and database practitioners, SQL is regarded as a foundational skill that enables raw data to be manipulated within an RDBMS. “This is a declarative type of language. It instructs the database about what you want to do, and leaves details of implementation (how to do it) to the RDBMS itself” (Kriegel, 2011).

Before the advent of commercially accessible databases, data was typically stored in proprietary file formats. Each vendor implemented its own access mechanisms, which could not easily be configured or customized for access by other applications. As databases adopted the relational model, the arrival and eventual standardization of SQL by ANSI (American National Standards Institute) and ISO (International Organization for Standardization) fostered consistent data access, manipulation and retrieval across many products.

Think in Sets not Rows!

SQL provides users the ability to query and manipulate data within the RDBMS without having to rely solely on a graphical user interface. The many SQL dialects (e.g., T-SQL, PL/SQL, DB2’s SQL PL) offer powerful extensions above and beyond the ISO and ANSI standards. However, SQL practitioners must first and foremost remember that SQL is a SET BASED language. The most efficient SQL code treats table data as a whole and refrains from manipulating individual rows one at a time unless absolutely necessary.

“Thinking in sets, or more precisely, in relational terms, is probably the most important best practice when writing T-SQL code. Many people start coding in T-SQL after having some background in procedural programming. Often, at least at the early stages of coding in this new environment, you don’t really think in relational terms, but rather in procedural terms. That’s because it’s easier to think of a new language as an extension to what you already know as opposed to thinking of it as a different thing, which requires adopting the correct mindset” (Ben-Gan, 2012).

Working with a relational language built on the relational data model demands a set-based mindset. Iterative, cursor-based processing should be used sparingly, if at all.

“By preferring a cursor-based (row-at-a-time) result set—or as Jeff Moden has so aptly termed it, Row By Agonizing Row (RBAR; pronounced ‘ree-bar’)—instead of a regular set-based SQL query, you add a large amount of overhead to SQL Server” (Fritchey, 2014).
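The contrast is easy to see in miniature. Below is a minimal, runnable sketch using Python’s built-in sqlite3 module; the table and column names are hypothetical, and on a server-based RDBMS the RBAR penalty would be far worse, since each per-row statement also costs a network round trip.

```python
import sqlite3

# In-memory database with a small, hypothetical products table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [(i, 100.0) for i in range(1, 6)])

# Row By Agonizing Row (RBAR): fetch every id, then issue one UPDATE per row.
for (pid,) in conn.execute("SELECT id FROM products").fetchall():
    conn.execute("UPDATE products SET price = price * 0.9 WHERE id = ?", (pid,))

# Set-based: a single declarative statement; the engine handles the iteration.
conn.execute("UPDATE products SET price = price * 0.9")

# Both approaches apply the same 10% discount, but the set-based form issues
# one statement instead of N, leaving the "how" to the optimizer.
prices = [p for (p,) in conn.execute("SELECT price FROM products")]
print(prices)  # each price discounted twice: 100 -> 90 -> 81
```

Both loops produce identical data; the difference is that the RBAR version performs N separate statement executions where the set-based version performs one.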

If all other set based options have been exhausted and a row-by-row cursor must be employed, then make sure to use an “efficient” (relatively speaking) cursor type. In a SQL Server environment, the fast forward-only cursor type provides some performance advantages over other cursor types. Fast forward-only cursors are read-only and they only move forward within a data set (i.e. they do not support multi-directional iteration). Furthermore, according to Microsoft TechNet (2015), fast forward-only cursors automatically close when they reach the end of the data. The application driver does not have to send a close request to the server, which saves a round trip across the network.
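In T-SQL this cursor type is requested with `DECLARE name CURSOR FAST_FORWARD FOR SELECT ...`. As a runnable illustration of the same forward-only, read-only semantics, Python’s built-in sqlite3 cursors behave comparably: rows stream past once, in order, and cannot be revisited (the table name below is hypothetical).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(n,) for n in range(3)])

# Like a FAST_FORWARD cursor, this cursor is read-only and forward-only:
# each row is delivered once and there is no way to scroll backwards.
cur = conn.execute("SELECT n FROM t ORDER BY n")
first_pass = cur.fetchall()   # consumes the entire result set
second_pass = cur.fetchall()  # already exhausted; nothing to rewind to
print(first_pass)   # [(0,), (1,), (2,)]
print(second_pass)  # []
```

The exhausted-cursor behavior is what makes forward-only types cheap: the server never has to keep earlier rows around for backward scrolling.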

References:

Ben-Gan, I.  (Apr, 2012). T-SQL Foundations: Thinking in Sets. Why this line of thought is important when addressing querying tasks. Retrieved from http://sqlmag.com/t-sql/t-sql-foundations-thinking-sets

Fritchey, G. (2014). SQL Server Query Performance Tuning (4th ed.).

Kriegel, A. (2011). Discovering SQL: A Hands-On Guide for Beginners.

Microsoft TechNet. Fast Forward-Only Cursors (ODBC). Retrieved April 23, 2015, from https://technet.microsoft.com/en-us/library/aa177106(v=sql.80).aspx