Summary

In December 2015, the “Fixing America’s Surface Transportation Act” or FAST Act became law.  The FAST Act:

• Reaffirmed a continuing federal role in surface transportation, but continued a growing dependence on general funds and generally flat funding through the life of the law;

• Provided, as the first multi-year surface transportation authorization bill enacted in many years, greater certainty and predictability of federal funding for state and local agencies, though Congress failed, again, to develop sustainable revenue streams to support federal surface transportation programs over the long term;

• Did not advance the challenge of accountable and performance-based federal transportation programs that invest scarce federal resources in the most beneficial projects;

• Advanced multi-modalism in surface transportation by including for the first time a title on intercity passenger rail; and

• Improved national policy on freight and goods movement.

While the passage of FAST Act is to be applauded for many reasons, major challenges to the nation’s transportation infrastructure remain unaddressed.

Introduction

In early December 2015 Congress passed, and the President signed, a five-year surface transportation reauthorization bill, “Fixing America’s Surface Transportation Act” or FAST Act.  This was the first time in ten years that Congress had been able to enact such a multi-year bill.  Although the enactment of multi-year surface transportation bills by large bipartisan majorities had been almost automatic for a half-century, this achievement, in an otherwise deeply divided and partisan Congress and political environment, was notable.

In passing a five-year bill, Congress provided a greater degree of certainty to the state transportation departments, transit agencies, and other state and local grantees, which still receive significant assistance from the federal government for their transportation capital programs.  The over $60 billion a year from both the Highway Trust Fund (HTF) and general funds ensures that the surface transportation program remains one of the largest in the federal domestic budget, and for many states and transit agencies these grants still represent very significant portions of their own capital programs.  Thus, the continuity and certainty of federal funding established by FAST Act will be important to these grantees and will allow them to continue with their own investment programs.

Finally, and perhaps most important, leading members of both Houses were determined to stop the pattern of repeated, short-term extensions of surface transportation programs and continual transfers of general funds to HTF (or “patches”) that had occurred over the last seven or eight years.  While a caucus of members in the House of Representatives generally prefers devolution of the federal transportation programs to the States and the elimination of the federal gasoline tax, FAST Act still passed in the House by 359 to 65.  In the Senate, objections were raised about the “pay-fors” on which the transfer of about $70 billion of general funds to HTF was based.  Without those pay-fors, program levels for the highway and transit programs could not have been maintained.

Overview of Impact on Transportation Policy

While FAST Act seemed to break a pattern of Congressional stalemate on major domestic grant programs and provided the certainty in funding welcomed by transportation stakeholders and state and local grantees, in other respects the bill continued many significant trends that have characterized national transportation policy for many years.  The bill was a mixture of good and bad, of continuing policy trends and some new features.

First, FAST Act re-affirmed that the federal government will continue to play a significant role in surface transportation investments, despite the discomfort of many conservatives in both Houses.  Spending on highways and transit programs has strong support not only from Democrats in both Houses, but a substantial majority of the Republican caucuses.  Indeed, many argue that the federal government has a defined role in transportation funding, as prescribed in the Constitution. 

Second, while FAST Act reaffirmed the national interest in surface transportation, these are not the “salad days” of the last half of the 20th century, when federal funding increased dramatically with each new surface transportation reauthorization bill (growing by 30 to 35 percent, for example, with the acts of 1991 and 1997). 

While there is a slight bump in federal funding for highways and transit in the first year of FAST Act, funding under this statute will remain essentially flat or stagnant throughout its five-year life (except for modest inflation adjustments in the later years).  Since investment needs and total spending on transportation at all levels of government keep growing, the federal share or proportion of the total investment pie will keep shrinking.

Thus, the relative federal role, while re-affirmed, continues to shrink proportionately, while those of states and localities continue to grow. Public funds at all levels are increasingly used to leverage private investment in public transportation infrastructure.

Third, the slight increases and the continuity of federal funding that were important features of FAST Act were dependent upon the transfer of about $70 billion of general funds to HTF, thus continuing a pattern that has been established over the past decade.  The reliance on these transfers or “patches,” in order to assure HTF’s viability and to maintain current levels of federal spending for highways and transit, means that Congress has failed, again, to develop sustainable revenue streams to support federal surface transportation programs.

While the enactment of FAST Act may delay the time that another “patch” or general fund transfer to HTF will be necessary, Congress will still have to face in the future the need to identify the “pay-fors” to enable such transfers and to justify their appropriateness.  This is hardly a recipe for assuring the adequacy of federal transportation funding or for sustaining the federal role in surface transportation over the long term.

Fourth, for all the continuing debate about whether funding for, and investment in, transportation facilities should be user-based, Congress has effectively decided that HTF should be a mixed fund, that is, one supported by both user fees and general taxpayer funds.  Already, over 25 percent of HTF will derive from transfers from the general fund; when surface transportation programs that are entirely funded with general funds are included, the proportion of general funds grows to over 30 percent.

As a result of these circumstances, the battle over allocations of federal surface transportation funds among the states was much muted during consideration of more recent surface transportation authorization legislation, including FAST Act.  For many years, those states that received less in federal surface transportation grants than they raised from the federal gasoline taxes collected within their borders sought to have their relative shares of federal funding increased at the expense of those states whose grant allocations exceeded their contributions in federal motor fuels taxes.

With HTF increasingly dependent upon general funds, all states now receive more in federal aid than they contribute in federal transportation-related user charges, and the battle over relative allocations of grants between states seems less central to the consideration of federal surface transportation legislation.

The reality of a mixed fund for surface transportation should also minimize the battle between modes for funds. If general taxpayers are funding such a significant portion of federal transportation grants, then certainly a stronger argument can be made that the funds should be used across modes and for broader intermodal and multi-modal solutions to mobility and accessibility problems.

Fifth, FAST Act made no real progress in introducing performance measures and real accountability in the federal surface transportation programs, which remain at the important initial steps enacted in the Moving Ahead for Progress in the 21st Century Act (MAP-21).  For now, the articulation of performance metrics will have to rely on the regulatory and guidance processes of the Federal Highway Administration (FHWA) and the Federal Transit Administration (FTA).  Nor did FAST Act introduce any significant reforms in the deeply fragmented, often politicized, and frequently ineffective transportation planning and capital programming processes of metropolitan and state agencies.

Until such planning and programming reforms, and requirements for performance that carry consequences, are enacted at the federal level, there are few assurances that investment decisions made at the state and local levels will reflect national purposes, strategic goals, and the greatest benefits (economic, social, and environmental).  At a time of scarce resources, there should be a premium on “wise” investment decisions.  While FAST Act did not reverse the provisions on performance and accountability contained in MAP-21, it did little to advance them.

Sixth, FAST Act did, however, advance the cause of multi-modalism in surface transportation.  For the first time a surface transportation authorization bill contained a title on intercity passenger rail (although Amtrak and other passenger rail programs will continue to be funded strictly from general funds, rather than from HTF, and through regular annual appropriation processes).

Like the highway, transit, and highway safety programs in FAST Act, passenger rail programs are authorized for five years, and FAST Act directed that net income from Amtrak’s Northeast Corridor (NEC) operations is to be used to invest in NEC capital improvements, rather than to subsidize money-losing long-distance trains (although, under the Fiscal Year 2016 Omnibus Appropriations bill, the effect of this provision will be delayed in order to give Amtrak and the Federal Railroad Administration the opportunity to implement the new accounting structure).

Finally, a positive feature of FAST Act was its advancement of national policy on freight and goods movement.  For the last 15 or 20 years there has been a growing awareness at the federal level that there are clear national interests and purposes in addressing bottlenecks in the national freight network and in enhancing intermodal movements of goods.

In 2005 Congress had established a grant program for projects of national and regional significance in the surface transportation re-authorization legislation adopted that year.  The program was intended to be largely directed to freight projects, but was fully earmarked by Congress. 

In the 2009 economic recovery legislation Congress established the Transportation Investment Generating Economic Recovery (TIGER) program, a multi-modal competitive grant program for state and local projects, designed to have significant economic impacts.  Administered by the U.S. Department of Transportation, the TIGER program has been extended repeatedly by Congress in annual appropriations measures.   Over its six-year life, TIGER has been the source of funding for several freight projects.

Building on these earlier freight initiatives, Congress included in FAST Act a discretionary freight projects program of $4.5 billion over the five-year life of the bill for so-called “nationally significant freight and highway projects.”  With its emphasis on highway bottlenecks, this program does not meet all of the needs of a nationally significant freight program, but it is an important step, as is the five-year $6.2 billion formula program for states to carry out freight projects.

These can be important initiatives, if these programs are actually used to improve the national and regional mobility of freight shipments and the accessibility of major markets for freight movements and deliveries.

Conclusion

In its re-affirmation of a continuing federal role in surface transportation, FAST Act is an important statute.  However, this legislation continues a trend toward a growing dependence on general funds for these programs and stagnation in the general level of federal funding for surface transportation.  The inevitable result is a growing burden on states and localities to address the needs of an aging, deteriorating, and often-congested national surface transportation network.

FAST Act also fails, as did its most recent predecessors, to address the issue of establishing sustainable revenue streams to support the federal role in transportation.  Moreover, the challenge remains to assure accountable and performance-based federal transportation programs, in order to increase the probability that scarce federal resources will be invested in the most beneficial projects.

While the passage of FAST Act is to be applauded for many reasons, major challenges to the nation’s transportation infrastructure remain unaddressed.
 ___________________________________________________________ 
EMIL H. FRANKEL served as Assistant Secretary for Transportation Policy, U.S. Department of Transportation, under President George W. Bush, 2002 to 2005, and Commissioner of the Connecticut Department of Transportation, 1991 to 1995.

Earlier this year, the president announced a $60 billion initiative to provide two years of “free” postsecondary training to students at community colleges. Schools receiving funding would be required to adopt evidence-based reforms to improve student outcomes as well as create programs that provide occupational training or fulfill transfer requirements to 4-year colleges and universities. However, these proposals do not fully cover the cost of attendance, just tuition and fees. Additionally, the proposals do nothing to actually lower the high cost of college, but merely subsidize today’s expensive tuition rates. 

In this paper, the American Action Forum (AAF) examines the impact of “free college” proposals on students, finding:

·      Only one out of three college students in the United States would be eligible to participate. 

·      Over a quarter of college students would be prevented from accessing a public federal relief program because the program doesn’t allow students at private institutions or 4-year public universities to participate.

·      Only $32 billion of the $80 billion combined federal and state investment for President Obama’s proposed free college program would result in a degree or credential. The majority, $48 billion, would be a loss.

Not All Students Win

Current estimates place the total college-going population in the United States at around 21 million, with about 15 million, or some 72 percent, of those students enrolled in public institutions.[1] Free college proposals would not extend cost relief to students enrolled in private higher education programs; as a result, more than one in four college-going students would automatically be prevented from accessing any form of tuition relief. Of those enrolled at public institutions, about 6.8 million, or roughly 45 percent, are enrolled in two-year community colleges. This means that the president’s proposal would only benefit approximately one out of every three college students in the United States.
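The eligibility arithmetic above can be sketched as a quick back-of-the-envelope calculation. The figures are the paper's own approximate enrollment numbers; the script itself is purely illustrative:

```python
# Back-of-the-envelope check of who would be eligible under the proposal,
# using the paper's approximate enrollment figures.
total_students = 21e6            # total U.S. college-going population (approx.)
public_share = 0.72              # share enrolled at public institutions
community_college_share = 0.45   # share of public enrollment at 2-year colleges

public_students = total_students * public_share        # roughly 15 million
eligible = public_students * community_college_share   # roughly 6.8 million

print(f"Eligible students: {eligible / 1e6:.1f} million")
print(f"Share of all college students: {eligible / total_students:.0%}")
```

Multiplying the shares through yields roughly a third of all students, which is where the "one out of every three" figure comes from.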

The institutions where students would be ineligible to benefit from the president’s proposal serve a large percentage of the nation’s financially needy students. Just over 30 percent of Pell recipients attend public four-year colleges and universities, while another 33 percent of Pell recipients are enrolled at private non-profit and for-profit institutions.[2] These students demonstrate one example of how the president’s proposal would prevent those in need of financial assistance from accessing it.

Why Push Students Towards Community Colleges?

From an outcomes standpoint, the president’s proposal allocates taxpayer dollars toward one of the worst-performing sectors of American higher education: community college. It also seeks to drive enrollment into what is already American higher education’s largest sub-sector. 

Community college participants not only represent the group with the highest unemployment rates amongst individuals with a college education, but their earnings are the lowest amongst all individuals holding a post-high school credential. They also represent the largest group of students who begin college but never complete a degree. Some estimate that 48.8 percent of all students at two-year public community colleges have not completed a degree and are not enrolled at another institution six years later.[3] Students in this sector also have the highest tendency to default on their federal student loan debt.[4]

Chart 1: Earnings and Unemployment Rates by Educational Attainment

Source: Bureau of Labor Statistics

The Taxpayer Cost of Funding Access Versus Degree Completion

In general, federal investments in higher education are made in an effort to increase the degrees and credentials needed to ensure a productive workforce with lower unemployment rates, higher wages, and economic growth. By definition, free college proposals increase access to higher education. However, more students do not equate to more degrees granted, which are what yield the kinds of wage gains consumers seek and the tax and productivity gains that governments expect to realize.[5] The lack of evidence or support for improving completion has been a common refrain amongst both economists and pundits.[6]

In fact, in November of 2014, just two months before the president unveiled his plan, the National Student Clearinghouse released an in-depth study showing that students at the public 2-year colleges at the heart of the president’s plan are highly unlikely to earn a degree or credential capable of generating economic benefits.

Chart 4. Six-Year Outcomes by Starting Institution Type

Source: National Student Clearinghouse Research Center

As the chart shows, after six years only about 39 percent of public community college students end up completing a degree.[7] This means that the president’s free college proposal would effectively be spending $36 billion of a $60 billion investment on up to 5.4 million students who will likely never receive any type of college credential. Add the share of the $20 billion that states would be required to invest on top of the federal match and the total potential loss on an $80 billion federal and state investment could be close to $48 billion.
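The cost figures above follow from a simple calculation: the roughly 61 percent of community college starters who do not complete within six years, applied to the proposed spending. A hedged sketch, using the paper's own figures and rounding:

```python
# Illustrative sketch of the completion-loss arithmetic, using the
# paper's figures: a $60B federal investment, a $20B state match, and
# the ~39 percent six-year completion rate for 2-year public starters.
federal_investment = 60e9   # proposed federal spending
state_match = 20e9          # required state contribution
completion_rate = 0.39      # six-year completion rate (NSC data)

non_completion = 1 - completion_rate  # ~61 percent never finish

federal_loss = federal_investment * non_completion
total_loss = (federal_investment + state_match) * non_completion

print(f"Federal dollars spent on non-completers: ${federal_loss / 1e9:.1f} billion")
print(f"Combined federal/state loss: ${total_loss / 1e9:.1f} billion")
```

The result, roughly $36–37 billion of the federal share and nearly $49 billion of the combined investment, matches the "$36 billion" and "close to $48 billion" figures cited in the text once rounding is accounted for.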

A full accounting of the economic costs is difficult to produce, especially given the migration of students from 2- to 4-year institutions. Still, research from the American Institutes for Research found that between 2004 and 2008 almost $4 billion in federal and state taxpayer monies, through grants and appropriations, went to community college students who dropped out after just one year of study.[8]

It’s important to note that the federal taxpayer’s $60 billion investment provides grant dollars to students regardless of whether they actually receive a degree. It provides aid whether the student is part of a family of four that lives below the poverty line or part of a family that makes $200,000 per year.

Addressing the Real Challenge

The easiest way to understand the challenges with the president’s free community college proposal is to compare it to additional investment in, for example, existing federal programs designed to foster affordability, such as the Pell grant. 

If the focus of the program is to guarantee that individuals will complete and reap the benefits of advanced education, it is unclear why a policy should specify which entity provides the service.

As many critics have already noted, free community college ends up subsidizing individuals for whom higher education may already be affordable. At the same time, the proposal works against efforts to redress problems associated with under-matching: the tendency of qualified students from less-affluent households to forgo degrees at competitive colleges and instead enroll in less-selective or two-year community colleges. In either of these circumstances, proponents need to be able to articulate why a free community college program run via state-based incentive grants would be more efficient or effective at improving affordability, access, or completion than Pell grants or other programs already in place.

The under-matching issue is particularly salient because it suggests free college is highly inconsistent with existing federal education policy. For years, Congress and the Department of Education have pursued efforts to facilitate and improve consumer choice by stressing the need for cost calculators and ratings systems so that potential students can make decisions that best fit their unique circumstances. They have encouraged shopping for schools and programs, mindfulness of graduation and attrition rates, and making academic major and career decisions based on wages and future employment prospects. They have brought under- and over-matching into the national policy debate and actively sought ways to help students avoid making college-going decisions based on what costs the least and instead focus on where their skills and talents fit best.

The current free community college proposals effectively cast nearly all of this aside, with incentives for students to enroll in institutions with the highest attrition rates, some of the lowest wage curves, and the greatest unemployment variability of any post-high school training institution. They make cost the main driver of the educational investment decision, without placing any downward pressure on the cost of college. They effectively place a financial penalty on millions of poor students for choosing a 4-year public or private option that may be a better fit for their capabilities or increase their chance of graduation.

As policy goes, promoting choice while financially encouraging participation in a single sub-sector sends consumers mixed signals. It also creates economic inefficiencies by giving free college to students at one institution type without regard for what they’re studying.  This risks an overflow of students in programs that labor markets neither need nor want.

Conclusion

The current proposals do not attempt to drive down the cost of providing students with an education; instead, they shift the burden of paying for higher education away from students and onto taxpayers. A balanced policy discussion of these proposals must consider the costs and tradeoffs. Not every student benefits from this proposal, and not every state will be willing or able to participate. Where public resources are scarce, the fact that so many students start college but never finish raises important questions about whether existing funding could be more efficiently allocated to achieve more favorable outcomes.

There is strong public agreement that policymakers need to develop new ways of promoting higher education completion and build new tools that will help students, institutions, and government manage college affordability. Every option should be weighed through a balanced assessment of the extent to which it meets the needs of all parties that it may affect, the costs on taxpayers, and its impact on affordability.


[2] Calculations are based on Table 20a from the U.S. Department of Education’s 2012-2013 Federal Pell Grant Program End-of-Year Report. http://www2.ed.gov/finaid/prof/resources/data/pell-2012-13/pell-eoy-2012-13.html

[3] Similar rates at 4-year public and private institutions are 20.1 percent and 17.1 percent respectively. Source: The Delta Cost Project’s Institutional Costs of Student Attrition. Table #2. September 2012 http://www.deltacostproject.org/sites/default/files/products/Delta-Cost-Attrition-Research-Paper.pdf

[4] Source: http://www2.ed.gov/offices/OSFAP/defaultmanagement/schooltyperates.pdf

[5] According to the National Center for Education Statistics, 6-year graduate rates for undergraduates are less than 60 percent nationally while the equivalent metric for students at 2-year colleges is only about 30 percent.

[6] See, for example, Judith Scott-Clayton and Thomas Bailey’s piece from January 2015: “The Problem with Obama’s ‘Free Community College’ Proposal.” http://time.com/money/3674033/obama-free-college-plan-problems/

[7] Shapiro, D., Dundar, A., Yuan, X., Harrell, A. & Wakhungu, P.K. (2014, November). Completing College: A National View of Student Attainment Rates – Fall 2008 Cohort (Signature Report No. 8). Herndon, VA: National Student Clearinghouse Research Center.

[8] Source: The American Institutes for Research 2011 report, The Hidden Costs of Community Colleges. http://www.air.org/sites/default/files/downloads/report/AIR_Hidden_Costs_of_Community_Colleges_Oct2011_0.pdf.


Summary

  • The Arctic has received very little attention from U.S. policymakers and the administration, despite its growing economic and geostrategic importance as a result of receding sea ice levels.
  • The federal government must uphold its constitutional responsibility to provide for the nation’s common defense by asserting American leadership in the Arctic.
  • The United States should develop regional defenses and promote democratic norms in the wake of increased Russian and Chinese activity in the Arctic.

Introduction

With over 1,000 miles of Arctic coastline in Alaska, the United States stands to benefit from the region’s increasingly accessible natural resource deposits and commercial maritime transit routes. However, many Americans do not understand the significance of this relatively unexplored frontier.

With the exception of the Cold War era, the United States has invested very few resources to secure its position as a global leader in Arctic affairs. By failing to properly develop the region’s defense and commercial infrastructure, America will lose the ability to assert itself in this increasingly important area of geopolitical concern. That failure will cede leadership to other nations, namely Russia, and continue a dangerous trend of reactive, rather than proactive, engagement on the part of the American political and military establishment.


A Changing Arctic Climate

The Arctic is warming at an unprecedented rate. Early climate models indicated that the Arctic would experience ice-free summers starting at the end of the 21st century, but current estimates now project that this phenomenon will begin as early as mid-century, if not sooner.

The eight Arctic nations – the United States, Russia, Canada, Norway, Sweden, Finland, Denmark (via Greenland), and Iceland – as well as China, South Korea, Japan, and other countries stand to benefit from these monumental climatic shifts as new opportunities for resource extraction and trans-oceanic shipping emerge. Many in the international community have therefore called for development of the region’s defense and commercial infrastructure.

The Emerging Arctic Energy Market

According to a 2008 United States Geological Survey report, “The extensive Arctic continental shelves may constitute the geographically largest unexplored prospective area for petroleum remaining on Earth.” This high volume of untapped reserves – 84 percent of which may lie in offshore deposits – could amount to 13 percent of global undiscovered oil and 30 percent of global undiscovered natural gas.

Although Russia holds a large claim over these resources as a result of its vast Arctic coastline, the U.S. Department of the Interior’s Bureau of Ocean Energy Management estimates that Alaska’s outer continental shelf contains 26.21 billion barrels of undiscovered oil and 131.45 trillion cubic feet of undiscovered natural gas.

A recent report issued by the National Petroleum Council – a body of individuals appointed by the Secretary of Energy to advise the federal government on matters concerning energy, the environment, security, and the economy – states that exploration of and eventual output from these untapped reserves “may provide a material impact to U.S. oil production in the future, potentially averting decline, improving U.S. energy security, and benefitting the local and overall U.S. economy.”

Developing Arctic Infrastructure

Potential for near-term escalation of conventional warfare in the Arctic region remains low. A number of unique security challenges have nevertheless emerged as countries respond to the region’s burgeoning economic and geostrategic opportunities with the development of commercial and defense infrastructure.

From an immediate threat outlook, additional infrastructure is required to provide for safe transit through Arctic sea lanes. Unprecedented opportunity for natural resource extraction also warrants additional infrastructure to support potential oil spill and other environmental disaster cleanup initiatives.  Ship casualties in Arctic Circle waters, ranging from structural damage to complete destruction, skyrocketed from just three in 2005 to 55 in 2014. Until countries invest in Arctic infrastructure to deal with increased transit and commercial development, these casualties will only grow more frequent.

United States

America’s political and military leadership appears underprepared to lead in the Arctic. The U.S. Coast Guard, which is responsible for conducting maritime patrols such as search and rescue operations, faces a number of increasingly severe challenges as sequestration and inconsistent congressional budgeting have undermined its efforts to modernize and expand its fleet of polar icebreakers and surveillance aircraft. Together, these assets are the key tools for any nation seeking to project power and operate in the Arctic.

A frequently cited Coast Guard report states that adequate U.S. presence and capability in the polar regions requires at least three heavy and three medium polar icebreakers. Currently, the Coast Guard only operates one of each – the Polar Star heavy icebreaker, which can break through thick Arctic ice year-round, and the Healy medium icebreaker, which is used in support of Arctic scientific research. The Coast Guard also has one inactive heavy icebreaker, the Polar Sea, which is in need of repair. Navy officials indicate that icebreakers “are the only means of providing assured surface access in support of Arctic maritime security and sea control missions.”


Facing a major challenge in maintaining the 39-year-old Polar Star, the Coast Guard is now attempting to acquire a new heavy icebreaker. With a price tag of approximately $1 billion, one polar icebreaker would consume the entire Coast Guard annual acquisition budget. Any new procurement would require significant support from both the executive and legislative branches. However, the Obama Administration has cut five-year funding for icebreaker acquisition by 81 percent since Fiscal Year (FY) 2013. Rather than receiving the previously budgeted $508 million through FY 2015, the program has only been allocated $9.6 million. The president’s FY 2016 budget request did not help the situation, requesting a mere $4 million for acquisition of an additional heavy polar icebreaker. Such delays in funding create an uncertain and dire operating climate for the Coast Guard.

In an attempt to remedy the glaring capability gap, the Obama Administration announced in September 2015 that it would accelerate production of an additional polar icebreaker by two years, to FY 2020. If procured that year, the additional polar icebreaker would not come into service until 2024 or 2025. Given that the Polar Star’s service life is expected to end between 2019 and 2022, current projections of Coast Guard assets and capabilities indicate that “there will be a period of perhaps two to six years during which the United States will have no operational heavy polar icebreakers” if the Polar Star is not further extended or the Polar Sea repaired.

Even if the Coast Guard were to acquire an adequate fleet of icebreakers to escort ships through polar sea ice, the Navy may still be unable to operate in certain Arctic conditions. According to a war game published by the United States Naval War College (USNWC), “Strategic and operational planners will simply need to accept that certain areas in the Arctic remain off-limits to U.S. warships unless the commander is willing to accept risks, the ice recedes away from the area of interest, or ships are produced with additional ice strengthening.”

The USNWC war game further notes that U.S. Arctic capability gaps also extend to the lack of air and ground support in Alaska’s Arctic, “to include military hangars and fuel storage, as well as roads from Fairbanks to [northern] airfields and supply nodes.” No naval installations in Alaska’s Arctic can accommodate icebreakers, and this lack of regional infrastructure – including poor communications and satellite capabilities – significantly delays the response time of emergency responders and defense entities. The state of Alaska and the Army Corps of Engineers do have a plan to make one port just outside the Arctic Circle more accessible to icebreakers.

Outdated maps of America’s Arctic waters also exacerbate these capability gaps, forcing ships in transit to rely on half-century-old estimates of the region’s operating environment. Updated mapping of Alaska’s Arctic shoreline will not be completed until at least 2035.

Canada

The Canadian Rangers provide the bulk of the Canadian Army’s ground presence in the Arctic. This reserve force comprises approximately 5,000 soldiers operating in over 170 patrols throughout Canada’s northern provinces and territories. Because of the Rangers’ small size and lack of resources, Canada’s Arctic Response Company Groups and Army Initial Reaction Unit assist them in providing northern ground support. Collectively, these forces provide a focused yet limited presence in the Arctic.

Like the United States, Canada has inadequate Arctic maritime forces. Although it maintains six icebreakers (two medium and four light), none is equipped to operate year-round in the Arctic environment. Canada’s warships are also not properly hardened for Arctic transit, and its fleet of Arctic Offshore Patrol Ships is still years from reaching operational status.

The Canadian Air Force’s Arctic presence is strengthened by its aerial situational awareness capabilities. It will procure three additional RADARSAT II satellites by 2018. These systems enhance marine surveillance, ice monitoring, disaster management, environmental monitoring, resource management, and mapping around the world. Canada’s fleet of 18 CP-140 Aurora surveillance aircraft also operates regularly in the Arctic and is currently receiving extensive upgrades. The Air Force relies on a small fleet of CF-18 aircraft operating out of four northern forward operating bases to deter Russian incursion into its sovereign airspace.

Russia

Of all the Arctic nations, Russia undoubtedly maintains the largest Arctic military infrastructure. It established an Arctic regional command in 2014 and has begun building 10 airfields and 13 air-defense radar stations throughout the region. Russia currently operates 17 icebreakers – four heavy, six medium (with three more under construction), and seven light.

Russia has installed a variety of anti-aircraft missile regiments, missile system batteries, and coastal defense missile battalions in the Arctic. It is also developing extensive Arctic airlift capabilities, with an expected 2017 completion date for the Tiksi airport in Northeast/Central Russia. This facility will reportedly be outfitted with MiG-31 aircraft, one of the planes Russia has used to violate sovereign airspace of Arctic nations. Russia has also moved forward in constructing runways on numerous Arctic islands to increase its intelligence, surveillance, and reconnaissance capabilities in the region.

This buildup is meant to assist Russia in securing its claims to the Arctic’s vast natural resources and potentially lucrative shipping routes. A stronger Arctic presence will also act as a force multiplier for the Russian Navy, allowing it to reassign resources through what is becoming an increasingly accessible trans-oceanic route.

China

As a rising geopolitical power and the world’s largest energy consumer, China sees the Arctic as a critical source of natural resources and economical shipping routes. Having previously taken a more aggressive tone against notions of Arctic sovereignty, the country now advocates for peaceful cooperation with Arctic nations. China is currently conducting a five-year assessment of polar resources and governance in order to advance its cooperative agenda. Through its Arctic Research Center in Shanghai, China collaborates with Nordic nations on scientific research. It also works with international partners at the Chinese Arctic Yellow River Station in Norway.

China further cooperates with Arctic nations by participating in both bilateral and multilateral scientific missions. It currently operates the Xuelong (Snow Dragon) light research icebreaker, which conducted a trans-Arctic voyage from Shanghai to Iceland in August 2012, and is building a second, more advanced research icebreaker that is expected to enter into service in 2016. China claims these assets will primarily be used for Arctic scientific research.

China is also a member or observer of numerous Arctic-related regional associations and multilateral organizations. Most recently, it was admitted to the Arctic Council (which the United States currently chairs) as an observer nation. The Chinese are expected to continue advocating for their status as a self-described “near-Arctic” nation in these multilateral forums, despite the country’s closest border lying nearly 1,000 miles from the Arctic Circle.

Multilateral Cooperation in the Arctic

The international community has demonstrated an interest in maintaining stability in the Arctic, placing an emphasis on establishing multilateral lines of communication and cooperation. In particular, the Arctic Council and the International Maritime Organization have proven to be most effective in terms of fostering cooperative engagement between the Arctic nations. These organizations have produced a variety of international agreements, namely the Agreement on Cooperation on Aeronautical and Maritime Search and Rescue in the Arctic; the Agreement on Cooperation on Marine Oil Pollution Preparedness and Response in the Arctic; and the Polar Code, which creates standards for ships operating in the region. Although significant in terms of fostering ties between the Arctic nations, these agreements largely outline existing responsibilities that any country ought to follow under international maritime laws and norms.

Recently, the Arctic nations announced the formation of the Arctic Coast Guard Forum, “an operationally-focused, consensus-based organization with the purpose of leveraging collective resources to foster safe, secure and environmentally responsible maritime activity in the Arctic.” The United States currently chairs this forum in tandem with its chairmanship of the Arctic Council.

Some believe current economic and environmental conditions in the Arctic do not provide sufficient opportunity for cooperation. According to a report published by The Arctic Institute, “Sharing responsibility or handing over tasks to other countries is often not even an option, as states are struggling to even provide capabilities in their own Arctic maritime domains.” Diplomatic complications may also emerge as countries must decide how to engage with Russia in Arctic affairs following that country’s annexation of Crimea and repeated incursions into the sovereign airspace of Arctic nations.

Leading from Behind in the Arctic

As it currently stands, the United States lacks both the capabilities and political will to lead in the Arctic operating environment. The American political and military establishment must therefore reevaluate its northern posture so as to foster greater stability and economic prosperity in the region. Failing to do so could ultimately threaten U.S. national security objectives, prompting reactive and potentially hostile engagement in the future.

Both Congress and the administration must uphold their responsibility to provide for the common defense by ensuring that the nation is prepared to lead wherever American interests are at stake. Doing so will require an intensive and immediate development of America’s Arctic defense infrastructure coupled with sustained promotion of democratic norms through both multilateral organizations and bilateral agreements.

Summary

Puerto Rico’s declining economy has driven the commonwealth into $71 billion of debt. As a whole, Puerto Rico has poor growth, an oversized public sector, and a tax system inadequate to the commonwealth’s needs. It is vital for Puerto Rico to prioritize the funding of essential programs while reducing other spending in order to regain economic stability. In this paper AAF outlines key reforms to improve Puerto Rico’s budgetary and economic outlook. Highlights of these reforms follow:

• Tax reform to collect due revenue more efficiently and chip away at the debt without inhibiting the economy.

• Congress should regularize Puerto Rico’s federal reimbursement policies of rum excise taxes to improve predictability and stability.

• Streamlining, consolidation, and, where appropriate, privatization of the large public corporations responsible for $48 billion of debt. Addressing these debts is essential to stabilize the Commonwealth’s finances as a whole and to finance large unfunded pension liabilities.

• Education reforms that address the Commonwealth’s needs while more efficiently stewarding public resources.

• Governance reform that could allow the Commonwealth to save between $20 million and $60 million through consolidation of municipalities.

• Consolidation of Puerto Rico’s 130 governmental agencies, which could save between $60 million and $300 million annually.

• Modernization of Puerto Rico’s labor market to address high employment costs and an unfavorable business environment.

• Exemption from the federal minimum wage, which is very high relative to local incomes, in order to lower employment costs and incentivize employers to hire more workers.

• Improvements to Puerto Rico’s budget process to enhance the transparency of its finances in order to pinpoint and resolve problems in the budget within a reasonable time.

Introduction

The Puerto Rico economy has been on a steady decline for over a decade, and it is now in the process of grappling with a budget crisis that has endangered its capacity to finance governmental obligations, the current outstanding stock of which totals about $71 billion. Any strategy to remedy the dual challenge of weak growth and heavy indebtedness must engage policymakers at the local, Commonwealth-wide, and federal levels. This study outlines the growth and fiscal context in which potential policy reform must take place, and examines potential elements of a pro-growth fiscal consolidation for the Commonwealth of Puerto Rico.

Growth and Fiscal Context

The ultimate goal of economic and budgetary reforms in Puerto Rico should be a growing economy, and the ability of the Commonwealth to regain access to capital markets in a sustainable fashion. The growth challenge is longstanding and real. As shown below, Moody’s anticipates that, at best, Puerto Rico will have very modest growth, and not for over five years.

Figure 1: Historical and Projected Output

 

Similarly, the Commonwealth has a deep structural budget challenge that has led to large and persistent budget deficits. Note that these reflect not only general fund deficits but also commonwealth-wide activities.

Figure 2: Structural Deficits

 

Rectifying these deficiencies will necessarily involve some difficult choices. However, it is important to put in context the Commonwealth’s capacity to absorb consolidation, its ability to restrain spending, its levels of taxation, and the size of its indebtedness. Indeed, it is a common assumption that Puerto Rico has engaged in protracted austerity and, despite this, is over-taxed and over-leveraged.  The data do not support these assumptions.

Consider first the amount of government spending in Puerto Rico. Despite attempts to scale back spending, Puerto Rico has failed to significantly reduce budgetary expenses. Looking at Figure 3, it is evident that Puerto Rico has ample room to reduce government spending when compared to the U.S. states.[1] Puerto Rico is well above the mean in this calculation, which indicates real capacity for reduction. It is essential for the economic stability of Puerto Rico to prioritize the funding of essential programs and debt while reducing other spending. Previous research indicates that it is valuable for Puerto Rico to preserve core functions of government while focusing spending reductions on transfer programs.

Figure 3: Government Spending

 

On the other side of the ledger, tax collections as a percent of GDP for Puerto Rico, at 11 percent, are actually lower than in any U.S. state.[2]


 

Figure 4: Tax Collections

Similarly, the idea that Puerto Rico is dramatically over-leveraged is another misconception surrounding the Commonwealth. Looking at total debt as a percent of GDP (with and without unfunded pension and other post-employment benefits [OPEB]), Puerto Rico is well below the mean and median when compared to the mainland. Including unfunded pension and OPEB liabilities, Puerto Rico has debt equal to 98 percent of its GDP, substantially below the states’ median of 120 percent.[3]


 

Figure 5: Total Debt[4]

 

 Taken as a whole, the data depict a Puerto Rico that has poor growth, or actual decline; an overly large public sector; and a revenue base that simply is too small to finance it. This mismatch will ultimately lead to irreversibly large debt burdens, but that has not yet occurred. There remains time to address the growth and budget challenges.

 

Commonwealth Reforms

Puerto Rico must take the lead in charting a course beyond its current economic and budgetary torpor, which means any fiscal consolidation must first involve self-initiated reforms. Puerto Rico’s tax measures need modernization, which can ultimately generate funds to chip away at the debt without inhibiting the economy. The debt portfolios of Puerto Rico’s large public corporations – with on the order of $48 billion in debt outstanding – must be addressed as part of a growth plan for Puerto Rico, as must its pension liabilities.[5] Additional savings can be achieved in other areas of the Commonwealth’s budget. One example is education spending, where savings can be achieved without harming educational attainment. The Commonwealth must also make structural changes at the municipal level by streamlining local administration. These measures should be paired with reforms that enhance labor markets, an essential element of a pro-growth consolidation. As part of this reform effort, and to gauge budgetary and economic progress, the Commonwealth must undertake meaningful budgetary process reform that enhances the transparency of its finances.

Tax Reforms

Tax reform and administration can play an important role in improving Puerto Rico’s budgetary trajectory. This should include a needed modernization of its current tax compliance system, including the property tax system, to collect due revenue more efficiently.

Puerto Rico has a substantial problem with the collection rate of its Sales and Use Tax (SUT), which is noted in both the Krueger and the governor’s plans. At 56 percent,[6] Puerto Rico’s collection rate is significantly lower than the U.S. states’ total tax collection rate of 83 percent.[7] By improving the Commonwealth’s SUT collections to 65-70 percent (still well below the U.S. states’ total tax collection rate), Puerto Rico could yield approximately $100 to $150 million in additional annual receipts.[8] The collection rate could be improved by using a quasi-independent government agency (similar to the federal IRS) or a public-private partnership to assess and collect taxes.
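The arithmetic behind this estimate can be reproduced with a short sketch. The full-compliance SUT base used below (roughly $1.1 billion per year) is an illustrative figure back-calculated from the report’s numbers, not one stated in the text:

```python
# Back-of-the-envelope check of the SUT collection-rate estimate.
# ASSUMPTION: a full-compliance SUT base of ~$1.1 billion annually,
# inferred from the cited figures; the actual base is not given here.
FULL_COMPLIANCE_BASE = 1.1e9   # hypothetical SUT owed at 100% compliance
CURRENT_RATE = 0.56            # current collection rate cited in the report

def added_receipts(target_rate, base=FULL_COMPLIANCE_BASE):
    """Extra annual receipts from raising the collection rate."""
    return (target_rate - CURRENT_RATE) * base

low = added_receipts(0.65)   # ~$99 million
high = added_receipts(0.70)  # ~$154 million
print(f"${low/1e6:.0f}M to ${high/1e6:.0f}M")  # near the $100-150M range cited
```

Under this assumed base, each percentage point of improved compliance is worth about $11 million a year, which is why even a modest move toward the states’ 83 percent rate matters.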

Another outdated system needing modernization, addressed in both the Krueger and the Governor’s reports, is the property tax. The Krueger Report estimates that revamping the property tax – by reassessing real-estate values (currently based on 1954 assessments), updating the registry, and adjusting tax rates downward – can yield $100 million in 2016, increasing to $350 million by 2018.[9] A transition period using discounts, based on income level and change in tax due, could ease the burden of the increased valuations. Since municipalities normally retain property taxes, the central government could cut its transfers to municipalities by the amount of additional property tax receipts.

 

Public Corporations Reforms

Puerto Rico has the opportunity to benefit from the consolidation and privatization of the public entities responsible for issuing $48 billion in debt. The funds generated and debt removed in the process can be repurposed to finance a growing unfunded pension liability.

Public-private partnership (P3) initiatives can benefit the Commonwealth by alleviating debt, unfunded pension, and OPEB liabilities while reducing government expenditures. A side benefit of P3 initiatives is lower costs and better service for customers, as private corporate structures improve the current inefficient system. Examples of successful projects under the P3 statute include the Luis Munoz Marin International Airport, the PR-22 and PR-5 toll roads, and the 21st Century Schools project. Public-private partnerships also feature in the Governor’s plan.

PREPA can benefit by pursuing P3 projects for its power generation facilities or for developing infrastructure for the transportation of liquefied natural gas. Doing so would lower power generation costs and generate new jobs while eliminating debt financing costs. PRASA can improve profitability by refining water collections, reducing operating costs, and increasing usage rates. The major government-owned insurance corporations could be consolidated and privatized. Consolidating and partially privatizing the EDB, MFA, and GDB could reduce overhead expenses and finance IRR projects for the development of Puerto Rico, including infrastructure projects, microfinance institutions, and other economic development initiatives, as well as provide capital to small and medium-sized enterprises to reduce volatility and programs to encourage foreign direct investment. The estimated effects of consolidating and privatizing specific public corporations can be found in Table 1.

Table 1: Potential Proceeds[10]

Pension costs continue to be a rising problem for Puerto Rico, where the unfunded pension liability nears $36 billion.[11] To resolve this issue, Puerto Rico can move all pensions to defined contribution plans, increase the minimum retirement age, and increase employee contributions.

 

Education Reforms

Education reform is another necessary measure, both to conserve funds that unnecessarily subsidize the University of Puerto Rico (UPR) and to consolidate excess spending on schools and teachers.

The University of Puerto Rico is heavily subsidized by the commonwealth, as all UPR students benefit from a blanket subsidy. The flat rate that UPR students pay is far lower than at many schools on the mainland. Educational support could be shifted to a need-based scholarship approach to reduce the General Fund’s spending without hindering those reliant on the subsidy. Krueger estimates that such a program could reduce spending by $500 million by FY2020.[12]

Turning to the K-12 system, in the past decade the number of students in Puerto Rico dropped 40 percent while the number of teachers increased 10 percent. This produced an average student-teacher ratio of 12, considerably lower than the mainland median of 16. Krueger finds that a gradual cut in the number of teachers would save approximately $400 million per year by FY 2020.[13] Another cost-reducing reform incorporated in both the Krueger and the Governor’s reports is a consolidation of schools. By reducing the number of schools, the remaining schools will be able to invest more in modern facilities and better technology. The Governor’s report estimates that consolidating schools and reducing Puerto Rico Department of Education payroll would save $50 million in 2017, growing to approximately $225 million in 2020.[14]
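The enrollment and staffing figures above imply how far the student-teacher ratio has fallen. A minimal sketch of that arithmetic follows; the implied prior ratio is derived from the numbers in the text, not stated in it:

```python
# Sketch of the student-teacher ratio arithmetic described in the text.
students_remaining = 0.60   # enrollment fell 40%, so 60% remains
teachers_remaining = 1.10   # teacher count rose 10%
current_ratio = 12          # ratio cited for Puerto Rico today

# The ratio scales by (change in students) / (change in teachers).
scale = students_remaining / teachers_remaining   # ~0.545
implied_prior_ratio = current_ratio / scale       # ~22 students per teacher

print(round(implied_prior_ratio, 1))  # 22.0
```

In other words, the ratio has roughly halved over the decade, which is the gap the proposed teacher reductions are meant to close toward the mainland median of 16.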

 

Government Reforms

With 50 of the 78 municipalities operating at a deficit,[15] the Commonwealth could benefit from regionalizing its less financially stable municipalities. According to Bayamón Mayor Ramón Luis Rivera Cruz, smaller municipalities spend 98 percent[16] of their budgets on payroll; consolidating them could reduce operational costs and improve tax revenues while providing more services for the people. The Commonwealth could save between $20 million and $60 million from consolidation of municipalities. It could also save between $60 million and $300 million annually by consolidating Puerto Rico’s 130 governmental agencies.

 

Labor Market Reforms

Another measure needed at the Commonwealth level is modernization of the labor market. Employment costs are too high, and other laws make doing business in Puerto Rico burdensome.

The Commonwealth’s local labor laws harm the employee base by sustaining an unfavorable business environment. To overcome this problem, Puerto Rico needs to alter the laws that magnify employment costs: overtime defined as any time worked in excess of an 8-hour day, 30 paid vacation days compared to the norm of 15, a mandatory end-of-year bonus, and Sunday premium pay. Puerto Rico also needs to change the laws that hinder the ease of doing business: onerous requirements for laying off employees compared to the mainland and a probationary period of only 3 months. Reform of these local labor laws is supported in both the Krueger and the Governor’s reports. Additionally, vocational training could be subsidized to advance the skills of Puerto Rico’s workers while reducing the employment costs of hiring.

Another obstacle to investing in Puerto Rico is a local permitting process that is disorderly and not centralized. One reform backed by the Governor’s report is to issue Executive Orders designating strategic emergency areas for expedited permitting of select essential infrastructure and service projects. This is the quickest and easiest means of attaining improvement without the need for legislation. Another reform that would streamline the process is to modify the language in the Permit Process Reform Act. Current reviews and approvals take too long to conclude and, owing to the involvement of multiple agencies, are redundant. By consolidating all infrastructure- and environment-related permits in the State Office of Permits Management, reviews and approvals would be much faster and more effective. Another suggestion is to integrate all of the municipal permitting processes into the Commonwealth’s existing permit filing and processing platform, centralizing the process.

 

Budget Reforms

The Commonwealth needs to improve the budget process, especially the budget’s transparency, in order to pinpoint and resolve problems in the budget within a reasonable time. The Commonwealth did not pass a balanced budget for the general fund in 5 of the past 6 years. Furthermore, the government has failed to report annual financial statements within 305 days of the fiscal year’s end in 9 of the past 13 years. Meanwhile, there remains considerable skepticism over the accuracy of relevant budget data. The lack of urgency to pass budgets in a timely manner is detrimental to Puerto Rico’s ability to turn around its fiscal situation, a concern evident in both the Krueger and the Governor’s reports.

 

Federal Reforms

There are a number of policy reforms Puerto Rico must undertake to improve its economic and budgetary outlook, but those alone will not adequately address Puerto Rico’s challenges. A number of reforms are also needed at the federal level to increase the labor force and reduce expenses on the island.

 

Tax Reforms

Puerto Rico is in part dependent on the vicissitudes of federal tax writers owing to its reliance on federal reimbursements of rum excise taxes. These reimbursements are currently made at an enhanced, “temporary” rate that is routinely extended in Congress’s annual tax “extenders” package. If this package is not renewed, Puerto Rico and the U.S. Virgin Islands would receive lower reimbursements – losing a combined $168 million in subsidies.[17] As part of broader tax reform measures, Congress should regularize these policies to improve predictability and stability.

 

Labor and Welfare Reforms

Puerto Rico has a significant problem with a declining labor force. This problem stems from federal regulation hindering the island’s ability to set compensation for its workers, a point on which the Krueger and the Governor’s reports agree. The Governor’s report finds that only 40 percent of the adult population is employed or looking for work, compared to 63 percent on the U.S. mainland.[18]

Minimum wage laws contribute to the abysmal labor force participation rate by inflating employment costs. The U.S. federal minimum wage is very high relative to local wages: it is equivalent to 77 percent of per capita income in Puerto Rico, compared to 28 percent in the U.S. states.[19] Puerto Rico needs to be exempt from the federal minimum wage in order to lower employment costs and incentivize employers to hire more workers. The Krueger Report estimates the income taxes raised from higher labor force participation at $50 million in the early years, growing quickly thereafter. Another reason for the low labor force participation rate is a welfare system providing generous benefits that often exceed minimum-wage earnings. One estimate from the Krueger Report shows that a household of three eligible for food stamps, AFDC, Medicaid, and utilities subsidies can receive $1,743 per month, compared to a minimum-wage earner’s take-home earnings of $1,159.
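The size of the work disincentive follows directly from the two Krueger Report figures just cited; the percentage below is a derived illustration rather than a number from the report:

```python
# Comparing the cited monthly welfare package with minimum-wage take-home pay.
monthly_benefits = 1743   # food stamps, AFDC, Medicaid, utility subsidies (Krueger)
monthly_take_home = 1159  # minimum-wage earner's take-home pay (Krueger)

gap = monthly_benefits - monthly_take_home   # $584 per month
premium = gap / monthly_take_home            # benefits exceed wages by ~50%

print(f"${gap}/month, a {premium:.0%} premium over working")
```

A household in this position forgoes roughly half again its potential take-home pay by working, which is the discrepancy the welfare-adjustment proposals in the next paragraph aim to narrow.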

As of now, there is little incentive to join the work force because of this discrepancy between the benefits of workers and non-workers. To combat this problem, the federal government could give the Commonwealth the ability to adjust welfare requirements and benefits accordingly: for example, continuing food stamps for a period even after a person returns to work, providing lower housing benefits to more people rather than higher benefits to a few, and cutting back on Medicaid benefits paid out over and above the federal minimum standard. These suggestions for lowering the welfare system’s generous benefits are supported in the Krueger Report.

 

Regulation Reforms

The Jones Act is another law that needlessly increases costs on the island. It requires the use of American crews and U.S.-manufactured vessels on all shipping routes where the origin and destination are within the United States and its territories. Puerto Rico’s inclusion under the Jones Act results in import costs twice as high as those of neighboring Caribbean islands. The issue of high transport costs appears in both the Krueger Report and the Governor’s report, each of which suggests that Puerto Rico be exempted from the Jones Act. Despite exemptions for other U.S. territories (the U.S. Virgin Islands and Guam), requests for a Puerto Rico exemption continue to be rejected, most recently earlier this year. A more lawmaker-friendly proposal is to exempt Puerto Rico from the Jones Act for specific imports, such as energy/fuel and food, which would drive down the cost of living on the island. The lowered costs from this exemption could feasibly attract more tourists as well as retain fleeing workers.

 

Conclusion

To improve its budgetary and economic outlook, Puerto Rico must embrace a multi-layered and multipronged fiscal consolidation that also enhances labor and investment opportunities in the Commonwealth. This approach should recognize the Commonwealth’s wherewithal to absorb difficult choices while improving long-term economic growth. It should focus on reforming the Commonwealth’s tax system and directing savings toward Puerto Rico’s public sectors. Additional reforms should improve Puerto Rico’s investment climate and spur domestic economic activity.



[1] This table uses data for state and local expenditures as well as Puerto Rico expenditures, from the Census Bureau and the Puerto Rico OMB respectively, to calculate expenditures as a percent of GDP.

[2] This number is calculated by dividing total tax revenues by GDP for each of the states and Puerto Rico. The GDP data are from the Bureau of Economic Analysis for the states and from The World Bank for Puerto Rico. The tax revenue data are from the Annual Survey of State Government Finances provided by the Census Bureau for state and local tax revenue, the IRS for federal tax revenue, and the Government Development Bank for Puerto Rico (GDB) for Puerto Rico’s tax revenue.

 

[3] The debt for the states can be found in the Annual Survey of State Government Finances provided by the Census Bureau, along with federal debt securities held by the public as reported by the U.S. Department of the Treasury. The debt for Puerto Rico is available from the GDB. The states’ unfunded pension and OPEB figures are available from State Budget Solutions, while the numbers for Puerto Rico are from the Basic Financial Statements and Required Supplementary Information from the GDB.

 

[4] The higher percent of total debt/GDP for each state and Puerto Rico includes unfunded pension and OPEB.

[8] Estimated using current sales and use tax receipts from Commonwealth filings, assuming a 56 percent compliance rate.

[10] EDB = Economic Development Bank, MFA = Municipal Finance Authority, GDB = Government Development Bank, PREPA = Puerto Rico Electric Power Authority, PRASA = Puerto Rico Aqueduct and Sewer Authority, PRHTA = Puerto Rico Highways & Transportation Authority, PRIDCO = Puerto Rico Industrial Development Company, PRPA = Puerto Rico Ports Authority, PAA = Port of the Americas Authority, SIF = State Insurance Fund; values reflect most recently available statements


Executive Summary

In his 2015 State of the Union address, President Obama announced a $60 billion initiative to provide students with two years of free community college. The program would require states to opt in and commit 25 percent of the necessary funding. In return, the federal government would pick up the remaining 75 percent of funding for tuition and fees.

In this paper, the American Action Forum (AAF) examines the impact of “free college” proposals on states, finding:

  • State spending would need to increase by approximately 5 to 13 percent in order to meet the contribution to be eligible to receive federal matching funds.
  • All states would spend at least $3.7 billion to $4.1 billion annually.
  • Only $32 billion of the $80 billion combined federal and state investment for President Obama’s proposed free college program would result in a degree or credential. The majority – $48 billion – would be a loss.

Introduction

Recent proposals to provide the first two years of community college to students for free have become popular. These proposals do not fully cover the cost of attendance, just tuition and fees. The proposals also rely on states to provide funding commitments and meet “maintenance of effort” conditions.  In addition, the proposals do nothing to actually lower the cost of college, but merely subsidize today’s expensive tuition rates. Free in this context does not attempt to drive down the cost of providing education, but instead shifts the burden of paying for higher education away from students and onto taxpayers.

The Cost of State (Lack of) Participation[1]

The president’s proposal is structured as a matched investment, including an additional commitment to making and sustaining appropriate funding increases. The annual cost of expected state investment varies substantially, from as low as $7.4 million in Vermont to more than $349 million in New York (see Chart 2). At a minimum, the incremental one-year additional cost for all states would be approximately $3.7 billion.[2] This does not account for tuition increases, nor does it address the potential for students currently outside the higher education market to enroll in community colleges, or for students at 4-year institutions to shift to community colleges. Under very simple assumptions[3] the collective annual cost to states could easily reach $4.1 billion.

Pairing these minimum additional commitments with existing annual state appropriations for higher education[4] suggests that the average state would need to increase its annual appropriations by 5.1 percent to 5.7 percent to cover its share of the proposal’s match requirement. Across individual states those percentages vary from as low as 2.1 percent in Montana to 12.4 percent in New Hampshire.
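The match arithmetic above can be sketched in a few lines. The state figures below (FTE count, average tuition, and existing appropriations) are hypothetical placeholders for illustration, not AAF’s actual state-level data.

```python
# Illustrative sketch of the state-match arithmetic described above.
# The FTE count, average tuition, and appropriations below are hypothetical
# placeholders, not AAF's actual state-level data.

def state_match_cost(ftes: int, avg_tuition_fees: float, state_share: float = 0.25) -> float:
    """Annual state cost: the state covers 25% of tuition and fees for enrolled FTEs."""
    return ftes * avg_tuition_fees * state_share

def budget_increase_pct(match_cost: float, current_appropriations: float) -> float:
    """Required percentage increase over existing higher-ed appropriations."""
    return 100.0 * match_cost / current_appropriations

# Hypothetical state: 100,000 community college FTEs at $3,400 average
# tuition and fees, with $1.5 billion in existing annual appropriations.
cost = state_match_cost(100_000, 3_400)         # $85,000,000
pct = budget_increase_pct(cost, 1_500_000_000)  # about 5.7 percent
print(f"Match cost: ${cost:,.0f}; budget increase: {pct:.1f}%")
```

This hypothetical state lands near the top of the 5.1 to 5.7 percent average range described above.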

Chart 2: Baseline Estimate of State Costs & Percentage of Budget Increase Needed to Cover Costs

 

Source: AAF Analysis (See Appendix 1)

States must not only be fiscally capable of such investments but also willing to participate in the program. The program’s performance standards have yet to be defined, but those standards and the reporting they require will impose further regulatory costs and paperwork burdens. Whatever form they take, the incentive grant structure means that different levels of state participation will affect the eligible college-going population differently. Just four states (California, New York, Texas, and Florida) enroll one-third of all public college students, and half of all public community college students come from just seven states. California alone enrolls approximately one out of every five community college students in the United States.

Given that states, not students, initially determine participation levels, they substantially determine which students are eligible. If one or more of the “Big Four” states above were unwilling or unable to meet the incentive grant requirements, anywhere from 200,000 to 1.6 million students would be ineligible for the proposed public subsidy.

The ability to inject and sustain additional resource commitments in public higher education will depend in part on individual states’ public priorities, notably health care and pension commitments. Although the last recession ended just five years ago and a number of states have experienced budget surpluses in recent years, declines in state higher education investment persist. Between 2008 and 2015, 31 states cut per-student funding by more than 20 percent, and 6 states cut funding by more than one-third.[5]

Proposals for free college are expected to offset such declines by creating matching federal investments; however, it is unclear to what extent capacity for such investments exists. Up to 16 states are expected to run deficits in the next year or two. In states with sizeable public higher education systems, like Illinois, Maryland, Wisconsin, and Pennsylvania, deficits are expected to run between $750 million and $6 billion.[6]

Chart 3 plots state 2- and 4-year enrollments against George Mason University’s Mercatus Center index of states’ fiscal health.[7] The bottom five states in particular are classified by the Center as being in “financial peril” based on high deficits and unfunded debt obligations. If one assumes that the bottom quintile were financially unable to meet the matching fund requirements, more than 1.6 million community college students in these states would find themselves ineligible for relief under the president’s proposal.

Chart 3: Ranking of States’ Fiscal Health Compared to Public Full-Time Equivalent Enrollment in Degree-Granting Institutions, by State, Fall 2012

Note: Based on FTE fall enrollments.

Source: NCES, Digest of Education Statistics 2013, Table 307.20.; Mercatus Center at George Mason University

State willingness to participate may also be affected by college graduates’ migration patterns. Unlike other infrastructure investments, such as highways, human capital does not have to remain within the state where it was created. As Figure 1 shows, hundreds of thousands of students with either some college or associates degrees annually move to different states.

Figure 1:

 

 

Source: U.S. Census Bureau, 2014 American Community Survey 1-Year Estimates

Finally, evidence supporting the success of higher education programs of this type is thin and is more apt to suggest states generally struggle to meet federal eligibility conditions.

The U.S. Government Accountability Office published a report in December 2014 on state and federal programs to improve college affordability and was able to identify only three federal/state incentive-style programs. Of these, the one closest in structure to free college proposals, the College Access Challenge Grant Program, appropriated $142 million in 2013 (the last year of the program) but ended up spending only half of the money because not all states were able to meet the maintenance of effort conditions required for the funding.[8]

Addressing the Real Challenge

A simpler solution would be to invest additional funds in existing federal programs designed to foster affordability, such as Pell Grants. In comparison to Pell, state-based grants or allocations to institution types must by design curtail access, since individuals do not dictate initial eligibility.

At the state level, the obvious obstacle is that the states most capable of meeting the conditions necessary to receive matching federal funds will be those likely to need it the least. As seen in Figure 2, the financially healthiest states almost exclusively serve small community college populations or already provide high public subsidies at these schools. The states that need the dollars the most, the ones where economic investment can, over the long run, foster a larger and more stable revenue base, will find it most difficult to secure the investment dollars and maintain those levels over time.

In this regard, a state-based incentive grant is a policy that actually widens gaps between the haves and the have-nots. If the federal government announced a program allowing families that agreed to make sustained investments in their children’s higher education to have the balance of their tuition and fees paid for, it would immediately be met with derision as a tool that assists the wealthy at the expense of the poor. Enacting a similar policy at the state level carries the same bias.[9]

The idea behind free community college is that states and institutions are supposed to make investments that will eventually lead to structural changes to higher education that students, families and policymakers all seek: lower-cost education programs and higher completion rates. Unfortunately, there is no evidence or logical support for the idea that free community college “bends the cost curve” or provides states making substantial financial commitments with assurance or incentives that completion rates will rise as a result.

Conclusion

While proposals to make community college free continue to attract attention, these proposals are expensive and fail to actually help students in need. Additionally, these proposals require states to make investments they have shown a great deal of reluctance to commit to, while offering no promise or guarantee of a return on the money spent.

A balanced policy discussion on these proposals must consider the costs and tradeoffs. Not every student would benefit from these proposals and not every state will be willing or able to participate. Where public resources are scarce, the fact that so many students start college but never finish raises important questions about whether existing funding could be more efficiently allocated to achieve more favorable outcomes.

Appendix 1: Calculating Costs and Budgets

 


[1] A core challenge to developing reliable cost estimates is that, at any given time, the most recent state-, school- and student-based data typically captures a number of different years. The estimates provided here face the same constraint though the divergence only spans approximately three calendar years. Rather than normalize and report “old” data, we have opted to instead utilize the most recent data available for each data type on the assumption that incremental, year-on-year changes have not been so dramatic as to significantly alter the spirit or purpose of the estimates being provided.

[2] Cost estimates are based on the FTE enrollment levels and average tuition and fee data provided by the College Board’s Trends in College Pricing 2015. http://trends.collegeboard.org/sites/default/files/trends-college-pricing-web-final-508-2.pdf

[3] Assumes a two percent tuition and fee increase and an increase in a state’s community college population that equates to a 5 percent shift of a state’s existing public 4-year population.

[4] Source: Illinois State University College of Education Grapevine 2014 data (Table 1). http://education.illinoisstate.edu/grapevine/tables/

[7] More information about the Mercatus rankings can be found here: http://mercatus.org/statefiscalrankings

[8] The other program that the GAO identified as part of their research was the Leveraging Educational Assistance Partnerships (LEAP) program. The report also mentions the GEAR UP program. However that program is designed more towards assisting a wider array of educational partners than just colleges or towards colleges’ abilities to reduce costs. Source: U.S. Government Accountability Office report, Higher Education: State Funding Trends and Policies on Affordability. http://www.gao.gov/assets/670/667557.pdf

[9] To get a better understanding of the challenges to state-based efforts at promoting university funding equality, look at the National Science Foundation’s Experimental Program to Stimulate Competitive Research (EPSCoR), which was designed to redress the concentration of federal research dollars in a relatively small number of universities at the expense of many flagship institutions in the central part of the United States. More information can be found here: http://www.nsf.gov/od/oia/programs/epscor/2030%20Report.pdf


Introduction:

At one point in Alice's Adventures in Wonderland, the eponymous character is lost in the woods, leading to her famous exchange with the Cheshire Cat. After the Cat asks Alice where she wants to go, she explains, “I don’t much care where,” and finally resigns herself, saying, “So long as I get somewhere.” The Cat retorts, “Oh, you're sure to do that, if only you walk long enough.” The Federal Communications Commission’s (FCC) Lifeline subsidy is likewise wandering without care for goals or a destination. Now that changes are being considered by the FCC, a more robust review of the program is needed.

Internet access is a key component of our rapidly advancing economy. Yet, while the Internet is incredibly valuable, 15 percent of Americans remain disconnected, mostly by choice.[1] To expand broadband adoption, the FCC plans to extend the Lifeline program to subsidize its purchase by low-income households. Given current eligibility and a lack of targeting mechanisms, we project the program to cost as much as $4.6 billion per year.

The agency should look towards broader and much needed reforms for the program before implementing this expansion. In particular, the FCC should:

  • Define the problem that Lifeline aims to solve;
  • Cap the budget;
  • Reform eligibility requirements;
  • Reconsider the current contribution method, which is harmful to the poorest families; and
  • Implement an economically rigorous evaluation.

Overview Of The Lifeline Program

New broadband subsidies will surely expand the cost of the program. Although the Commission has committed to expanding Lifeline to include broadband, this expansion should have faced much broader skepticism. As a recent Government Accountability Office report noted, “the Lifeline program, as currently structured, may be a rather inefficient and costly mechanism to increase telephone subscribership among low-income households.”[2]

Lifeline is very much a program rooted in another competitive landscape. Shortly after the AT&T system was broken up, the Lifeline subsidy program was put into place to help low-income families afford telephone service. In the years before the breakup, universal service became synonymous with service for all, even though no such provision was written into the original 1934 Communications Act. A decades-long back and forth between federal and state regulators, beginning in the 1930s, culminated in local costs being shifted onto long distance service. The FCC pulled apart some of these geographic cross subsidies in the 1980s with the imposition of new charges.[3] Simultaneously, a reconceptualization of government’s role in society and communication policy argued that ubiquitous communications infrastructure would contribute to national unity and opportunity. Both provided reasons for the creation of Lifeline.

Yet the FCC has never tried to evaluate just how successful the program is. Even after repeated GAO reports recommended such an evaluation, the agency has held that the structure of the program “makes it difficult to determine a causal connection between the program and the penetration.”[4] A growing literature since Garbacz & Thompson (1997) contradicts the official agency view.[5] Independent studies of the program have been conducted for nearly its entire existence, and they almost universally find it economically inefficient and ineffectual in achieving its stated goals.[6] Moreover, previous broadband expansion projects have an unfortunate history of waste, fraud, and abuse. The Rural Utilities Service serves as an example: billions of dollars were mishandled and half of the projects it outlined were never finished.[7]

The logic of the subsidy is simple: as the price decreases, consumption goes up. For broadband, the lower prices offered via the Lifeline program will be fighting forces within the market, where competition has already pushed down prices. As the market for Internet access has developed and prices have dropped, consumers naturally find themselves more willing to buy access. One method for understanding this change is the price elasticity of demand study, which estimates how a 10 percent increase in price affects demand. Below is a chart that shows the trend in these projections over the last 15 years. In the 1990s, a 10 percent change in price would alter demand by nearly 3.5 percent. By 2014, demand decreased by only around 0.5 percent for a 10 percent increase in price. In all, consumers are becoming less sensitive to price changes.

Source: Various sources[8]

As the market has developed to become less elastic, the effectiveness of broadband subsidies has greatly diminished. While these price inducements might have been effective a decade ago when the technology was new, Lifeline will increasingly become ineffective as time passes.
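The elasticity arithmetic above reduces to a one-line calculation. The two elasticities below merely restate the 1990s-era and 2014-era estimates cited in the text, not the results of any single study.

```python
# Restating the elasticity arithmetic from the text: elasticity is the ratio
# of the percent change in quantity demanded to the percent change in price.
# The two elasticities below echo the ~-0.35 (1990s) and ~-0.05 (2014)
# estimates cited above, not any single study.

def demand_response(elasticity: float, pct_price_increase: float) -> float:
    """Predicted percent change in demand for a given percent price increase."""
    return elasticity * pct_price_increase

# A 10 percent price increase under each regime:
print(demand_response(-0.35, 10))  # roughly -3.5 (demand falls ~3.5 percent)
print(demand_response(-0.05, 10))  # roughly -0.5 (demand falls ~0.5 percent)
```

The same price cut that once moved demand by 3.5 percent now moves it by half a percentage point, which is the core of the diminishing-effectiveness argument.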

Lifeline Reforms

Now that the FCC has decided to reform the Lifeline Program, five major changes need to be pursued. First, the problem that Lifeline aims to solve needs to be defined. Second, the budget needs to be capped. Third, the eligibility requirements require reforms. Fourth, the current contribution method needs to be debated further and reconsidered, as it is harmful to the poorest families. Fifth and finally, Lifeline needs an economically rigorous evaluation process.

Defining the Purpose of Lifeline

As explained in the recent Notice of Proposed Rulemaking,

“The purpose of the Lifeline program is to provide a hand up, not a hand out, to those low-income consumers who truly need assistance connecting to and remaining connected to  telecommunications and information services. The program’s real success will be evident by the stories of Lifeline beneficiaries who move off of Lifeline because they have used the program as a stepping stone to improve their economic stability.”[9]   

Even though the ostensible aim of the program is to ensure that those who cannot pay for the service are able to do so, the FCC has never formally backed up these aims with measurable targets. Only when aims and targets work together can a program operate effectively. Will the program be aimed at reducing the digital divide? How many people are being targeted by the program? Which demographic characteristics would make the program most effective? By which metrics will we know that Lifeline has been a success? How will we separate program effects from market shifts?

In various proposals from the White House and the FCC, the problem has mainly been portrayed as a divide between the rich and the poor. However, AAF analysis of Pew data suggests that the real digital divide is between the old and young, which demands a rethinking of the proposed programs and of Lifeline.

In a post arguing for the expansion of Lifeline, FCC Chairman Tom Wheeler explained:

“While more than 95 percent of households with incomes over $150,000 have broadband, only 48 percent of those making less than $25,000 have service at home. A world of broadband ‘haves’ and ‘have-nots’ is a world where none of us will have the opportunity to enjoy the full fruits of what broadband has to offer.”[10]

Similarly, President Obama’s proposal, known as ConnectHome, aims to connect more low-income communities to high-speed Internet.[11] While it is undeniable that Internet adoption is higher in households with more income, it is worth investigating what the most significant factors behind non-adoption are. In other words, is the digital divide really a chasm of income?

The Pew Research Center recently released its 2000-2015 study on Internet adoption that provides some fascinating data for answering this particular question. Below is a chart created using Pew data that examines the advances made in Internet adoption by income bracket over the last 15 years. While incredible strides have been made, gaps remain between the rich and poor.

However, this graph alone doesn’t tell the whole story. As has been widely documented, age also correlates strongly with Internet usage and may be a confounding variable in the analysis.[12]

AAF analyzed Pew data to further break down the relationship between age, income, and Internet adoption. As we restricted the age range of the analysis, we found that the income brackets were slowly converging. When looking only at the age range of 18-29, Internet usage is near universal regardless of income bracket.

These findings would seem to indicate that age is a far more significant factor than income when determining whether or not an individual uses the Internet.  Income is, and will continue to be, a contributing factor. But among the demographic that has grown up with and seen the enormous benefits of the Internet, few are being priced out of the broadband market.

This age-based digital divide also puts the Pew “Who’s Not Online and Why” study into perspective.

As the report shows, only 19 percent of offline adults listed price as the primary reason for non-adoption.[13] Meanwhile, relevance and usability make up 66 percent,[14] the two factors we would expect to loom largest given who the offline population consists of.

These findings have major implications for public policy. In a world of scarce resources and limited government funding, it is important that we fund the programs that would have the greatest impact on the problems we intend to solve. If our goal is to bridge the digital divide and help more Americans access the Internet, it’s hard to see how offering a subsidy without a solid basis in outcomes is the best use of funds.

The “problem” of broadband is often played up as an issue of affordability, but when considering the underlying factors of age, relevance, usability, and computer ownership, it seems unlikely that a subsidy would have a strong effect on closing the digital divide. The lessons of studies performed on wired and wireless telephony support this general conclusion, as does the FCC’s recent study of broadband adoption. Indeed, that study exemplified the agency’s endemic problems in closing the broadband gap: the pilot program enrolled just over 7,000 people when 74,000 were expected. Comcast, by contrast, has shown itself effective at getting low-income families online; nearly 500,000 families are connected via its Internet Essentials program.[15] Partnering with private companies could help alleviate these issues and go a long way toward explaining why non-adopters don’t get onto the Internet. Most importantly, the FCC needs to explicitly name the constituencies Lifeline is intended for and outline the metrics that will show when the program has been effective.

Cap the Budget

Capping the budget of the Lifeline program is vital. Regardless of how targeted the program ends up being, it would be fiscally irresponsible to leave Lifeline uncapped. Lifeline remains the only USF program without a budget cap and consequently the one with the most historic abuse.[16] While common objections to a cap usually revolve around funding levels, our analysis estimates the cost of the program at four levels, covering both expected usage and the high-end potential, for targeted and non-targeted programs alike. Given current eligibility and no targeting mechanisms, the program has the potential to cost as much as $4.6 billion. If the program reduced eligibility and was targeted, it could cost as little as $522 million.

Budget Projections of Expected Usage for Lifeline Subsidies

                          Non-Targeted          Targeted
  Current Eligibility     $1,524,600,000.00     $786,693,600.00
  Reduced Eligibility     $1,022,733,482.76     $522,976,148.48

Budget Projections of Potential Usage for Lifeline Subsidies

                          Non-Targeted          Targeted
  Current Eligibility     $4,662,000,000.00     $2,383,920,000.00
  Reduced Eligibility     $3,099,192,372.00     $1,584,776,207.52

By outlining eight different scenarios, we can see a range of potential budget options for the program. The three primary factors in the analysis are current or reduced eligibility, targeted or non-targeted, and full or expected usage. Current eligibility refers to the existing eligibility standards as calculated by the GAO.[17] Reduced eligibility counts only those households under 135% of the poverty line as eligible for subsidies. If no additional methods are used to narrow the population receiving the subsidy, a scenario is marked as non-targeted; targeted scenarios assume only those currently without Internet access among the eligible population receive the subsidy. Full usage was defined as 100 percent of eligible persons using the subsidies. Expected usage places 33 percent of eligible persons as receiving the subsidies, as reports have suggested that 66 percent of non-adopting households would not subscribe to broadband at any price.[18]
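The eight scenarios combine mechanically, as sketched below, assuming the standard $9.25 monthly Lifeline subsidy. The household counts are the rounded figures from the text and the targeted share is an approximation, so the outputs only approximate the tables above.

```python
# Sketch of the eight budget scenarios described above. The $9.25 monthly
# subsidy is the standard Lifeline support amount; household counts are the
# rounded figures from the text, and the targeted share (~51%) is an
# approximation, so results only approximate the tables.

ANNUAL_SUBSIDY = 9.25 * 12  # dollars per household per year

eligibility = {"current": 42_000_000, "reduced": 28_000_000}  # households
targeting = {"non_targeted": 1.00, "targeted": 0.51}          # share kept after targeting
usage = {"full": 1.00, "expected": 0.33}                      # share actually enrolling

for e_name, households in eligibility.items():
    for t_name, t_share in targeting.items():
        for u_name, u_share in usage.items():
            cost = households * t_share * u_share * ANNUAL_SUBSIDY
            print(f"{e_name}/{t_name}/{u_name}: ${cost:,.0f}")
```

The current-eligibility, non-targeted, full-usage cell reproduces the table’s high-end figure exactly: 42,000,000 households × $111 per year = $4,662,000,000.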

If the Commission is concerned about fluctuations in enrollment due to changing economic conditions, it should set a cap that is higher than the expected operating budget. During the 2008 economic collapse, Internet adoption dropped 4 percentage points among households making $30,000 or less. This, however, occurred when the adoption rate was close to 50 percent. Setting a cap 10 percent higher than the expected operating budget would ensure that Lifeline is able to meet the increased demand for broadband subsidies in the event of another recession, but would only work if the agency is diligent in weeding out those who abuse the system.

Reform Eligibility Requirements

Currently, individuals are eligible for Lifeline if they sit below 135% of the poverty line. Alternatively, Lifeline is available if a person qualifies for Medicaid; the Supplemental Nutrition Assistance Program (Food Stamps or SNAP); Supplemental Security Income (SSI); Federal Public Housing Assistance (Section 8); the Low-Income Home Energy Assistance Program (LIHEAP); Temporary Assistance to Needy Families (TANF); the National School Lunch Program's Free Lunch Program; Bureau of Indian Affairs General Assistance; Tribally-Administered Temporary Assistance for Needy Families (TTANF); the Food Distribution Program on Indian Reservations (FDPIR); Head Start; or any number of state assistance programs. Because many of these programs use different criteria for determining eligibility, the combination of all these programs leads to an eligible population of 42 million households.[19]

That is roughly a third of U.S. households. Subsidizing such a large population is not only expensive but also contrary to the basic goal of Lifeline, which is to connect those Americans who currently are not. One possible cost-cutting measure would be to simplify the eligibility system to include only those families who are below 135% of the poverty line. Our analysis indicates this would cut the eligible population from 42 million to 28 million households. This was labeled in our budget projections as “Reduced Eligibility”.
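A hypothetical sketch of the simplified 135-percent-of-poverty test proposed above; the 2015 HHS poverty guidelines for the 48 contiguous states are used purely for illustration.

```python
# Hypothetical sketch of the simplified eligibility rule proposed above:
# qualify only households at or below 135% of the federal poverty line.
# The 2015 HHS poverty guidelines (48 contiguous states) are used purely
# for illustration.

POVERTY_GUIDELINE_2015 = {1: 11_770, 2: 15_930, 3: 20_090, 4: 24_250}  # by household size

def lifeline_eligible(household_income: float, household_size: int) -> bool:
    """Reduced-eligibility test: income at or below 135% of the poverty line."""
    threshold = 1.35 * POVERTY_GUIDELINE_2015[household_size]
    return household_income <= threshold

print(lifeline_eligible(15_000, 1))  # True: threshold is $15,889.50
print(lifeline_eligible(25_000, 2))  # False: threshold is $21,505.50
```

A single income test like this replaces the patchwork of program-participation criteria that currently produces the 42-million-household eligible population.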

One of the most important reforms for Lifeline is ensuring that eligibility is not determined by those providing the communication service, known as eligible telecommunications carriers (ETCs). Fortunately, the commission appears to agree on this point. We applaud the commission for recognizing the perverse incentives the current arrangement creates: ETCs have little reason to limit their Lifeline subscribers, and thus their potential income from the program. However, as detailed above, the overall spending for Lifeline can be significantly reduced if the eligible population can be meaningfully targeted.

Another way to ensure we are targeting the population that most needs this assistance would be to work with existing ETCs to determine who already subscribes to Internet service (or has within the last 3 months, to prevent gaming the subsidy) and disqualify them from receiving subsidies. This would ensure that we are targeting those who need help crossing the digital divide, not subsidizing those who have already made the leap. We labeled this group as “targeted.” As part of good governance, the FCC should align eligibility with the goals as defined previously.

Reform the Current Contribution Method

The Universal Service Fund (USF) is structured such that the program internally subsidizes those who cannot afford telecommunications services. The contribution rates change each month; last year they averaged a 5.82 percent tax on wireless carriers.[20] Paradoxically, like many flat taxes, the burden falls disproportionately on poor families and young people, who are more likely to use wireless as their primary means of Internet access. According to one study, 80 percent of poor households pay into the fund through taxes only to get nothing back in return.[21] In fact, MIT economics professor Jerry Hausman estimates that each dollar raised via wireless taxes costs the economy between $0.72 and $1.12, making the USF program a wash.[22]

As a result of the reclassification of broadband, Internet service might soon be considered a USF-taxable service, meaning that the contribution base could expand dramatically. If this happens, voices calling for a change in the contribution method of all USF programs might subside. Yet the mechanism is out of line with prevailing tax logic. The USF is akin to the Supplemental Nutrition Assistance Program and should be treated as such. Even though the USF is funded via a fee, the money collected does not benefit the group from which the levy is collected, the way a toll booth helps pay for a road. The USF benefits those who are not connected to communication services, and thus should be understood as a general program paid for through the normal tax structure.[23] Although it is not the purview of the FCC to make this kind of change, Lifeline reforms should make the program more explicit and more closely tied to Congressional approval, since the program effectively exercises the power to tax.

Implement Meaningful Evaluations of Lifeline

The FCC has a responsibility to ensure the programs it institutes have a net positive effect. Given the inefficiencies of government bureaucracy and the weak effect subsidies will have in the current market, it is vital that the FCC perform a cost-benefit analysis to ensure the program does more good than harm.

This problem is further complicated by the essentially meaningless goal the FCC has set for itself. The GAO report on Lifeline strongly urged the commission to set measurable goals as a concrete way to evaluate the performance of the program. In the 2012 NPRM on Lifeline reform, the FCC claimed to have fixed the problem by setting the goal of increased broadband penetration among low-income consumers. The problem came when the commission decided how to measure this goal:

“As with our first goal, as an outcome measure of the availability of broadband service to low-income consumers, we adopt the broadband penetration rate of low-income consumers, i.e. the extent to which low-income consumers are subscribing to broadband. Progress towards our goal of ensuring the availability of broadband service to low-income consumers will be indicated by a narrowing of the difference between this outcome measure and the broadband service penetration levels of non-low-income consumers in the “next highest income” bracket.”[24]

Without choosing a specific benchmark or threshold of broadband penetration, there is no way the FCC can fail to meet the goal. Natural market forces will continue to push up adoption rates at all income levels, and seeing as most income brackets are already at saturation levels, the gap between the low-income bracket and the next highest income bracket will naturally shrink over time.

The FCC should conduct a basic analytical projection of the proposed broadband program, based on current Lifeline adoption levels for wired and wireless telephone service, before taking any further steps with the program. In particular, there are a number of regressions that could help to pinpoint these targets.[25] From there, a specific level of broadband penetration above the 3-year average growth of adoption for low-income consumers can be established, creating an actionable goal.
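One minimal version of such a projection: fit a linear trend to recent low-income adoption rates and set the program target above what the market alone would deliver. The adoption figures below are invented for illustration.

```python
# Minimal benchmark-setting sketch: fit a linear trend to recent low-income
# adoption rates (pure-Python least squares) and set the program target above
# the market's own projection. The adoption rates below are invented.

def linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

years = [2013, 2014, 2015]  # hypothetical observation window
rates = [0.46, 0.48, 0.50]  # hypothetical low-income adoption rates

slope, intercept = linear_trend(years, rates)
baseline_2016 = slope * 2016 + intercept  # what market forces alone project
target_2016 = baseline_2016 + 0.03        # actionable goal: beat the trend by 3 points

print(f"Projected baseline: {baseline_2016:.2f}; program target: {target_2016:.2f}")
```

Only adoption gains above the projected baseline would count as program effects, which separates Lifeline’s impact from the market shifts described above.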

Conclusion

The FCC has not done enough to set goals and establish mechanisms for evaluation of the Lifeline program. In particular, the FCC needs to

  • Define the problem that Lifeline aims to solve;
  • Cap the budget;
  • Reform eligibility requirements;
  • Reconsider the current contribution method, which is harmful to the poorest families; and
  • Implement an economically rigorous evaluation.

Only when these changes are implemented will the program truly begin to serve those who should be receiving Lifeline assistance.

 


[1] Julia Greenberg, 15 Percent of Americans Don’t Use the Internet, http://www.wired.com/2015/07/15-percent-americans-dont-use-internet/

[2] United States Government Accountability Office, Telecommunications: FCC Should Evaluate the Efficiency and Effectiveness of the Lifeline Program, http://www.gao.gov/assets/680/670687.pdf

[3] Lynne Holt and Mark Jamison, Re-evaluating FCC policies concerning the Lifeline & Link-up programs, http://jthtl.org/content/articles/V5I2/JTHTLv5i2_HoltJamison.PDF  

[4] See note 1.

[5] Christopher Garbacz and Herbert G Thompson Jr, Assessing the Impact of FCC Lifeline and Link-Up Programs on Telephone Penetration, https://ideas.repec.org/a/kap/regeco/v11y1997i1p67-78.html

[6] Robert Crandall and Leonard Waverman, Who Pays for Universal Service? When Telephone Subsidies Become Transparent, http://www.brookings.edu/research/books/2000/universal-service; James Alleman and Paul N. Rappoport, Universal Service, http://www.colorado.edu/engineering/alleman/print_files/Universal_Service__A_Policy_Survey_Review_and_Critique.PDF   

[8] See Paul Rappoport, Lester D. Taylor, James Alleman, WTP Analysis of Mobile Internet Demand, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.377.5233&rep=rep1&type=pdf; Robert Crandall, Competition and Chaos: U. S. Telecommunications Since the 1996 Telecom Act; Mark Dutz, Jonathan Orszag, and Robert Willig, The Substantial Consumer Benefits of Broadband Connectivity for U.S. Households, http://internetinnovation.org/files/special-reports/CONSUMER_BENEFITS_OF_BROADBAND.pdf; Gregory L Rosston, Scott Savage, and Donald M. Waldman, Household Demand for Broadband Internet Service, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1988327; Octavian Carare, Chris McGovern, Raquel Noriega, Jay A. Schwarz, The Willingness to Pay for Broadband of Non-Adopters in the U.S.: Estimates from a Multi-State Survey, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2375867; Austin Goolsbee, The Value of Broadband and the Deadweight Loss of Taxing New Technology, http://faculty.chicagobooth.edu/austan.goolsbee/research/broadb.pdf.       

[9] Federal Communications Commission, In the Matter of Lifeline and Link Up Reform and Modernization, Telecommunications Carriers Eligible for Universal Service Support, Connect America Fund, https://apps.fcc.gov/edocs_public/attachmatch/FCC-15-71A1.pdf

[10] Tom Wheeler, A Lifeline for Low-Income Americans, https://www.fcc.gov/blog/lifeline-low-income-americans

[11] The White House Office of the Press Secretary, FACT SHEET: ConnectHome: Coming Together to Ensure Digital Opportunity for All Americans, https://www.whitehouse.gov/the-press-office/2015/07/15/fact-sheet-connecthome-coming-together-ensure-digital-opportunity-all

[12] Pew Research Center, Internet User Demographics, http://www.pewInternet.org/data-trend/Internet-use/latest-stats/

[13] Kathryn Zickuhr, Who’s Not Online and Why, http://www.pewInternet.org/2013/09/25/whos-not-online-and-why/

[14] Ibid.

[15] Comcast, About Internet Essentials, https://internetessentials.com/about

[16] Will Rinehart, FCC Proposes Changes to Lifeline, http://americanactionforum.org/insights/fcc-proposes-changes-to-lifeline

[17] See note 1.   

[18] Octavian Carare, Chris McGovern, Raqual Noriega, and Jay A Schwarz, The Willingness to Pay for Broadband of Non-Adopters in the U.S.: Estimates from a Multi-State Survey, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2375867; Additionally, this report dovetailed with the Pew Research finding that 2/3 of non-adopters listed relevance and usability as the primary reasons for non-adoption.  

[19] Ajit Pai, Dissenting Statement of Commissioner Ajit Pai Re: Lifeline and Link Up Reform and Modernization, WC Docket No. 11-42, Telecommunications Carriers Eligible for Universal Service Support, WC Docket No. 09-197, Connect America Fund, WC Docket No. 10-90, https://apps.fcc.gov/edocs_public/attachmatch/FCC-15-71A5.pdf

[20] Scott Mackey and Joseph Henchman, Wireless Taxation in the United States 2014, http://taxfoundation.org/article/wireless-taxation-united-states-2014

[21] Gregory L. Rosston and Bradley S. Wimmer, The State of Universal Service, http://web.stanford.edu/group/siepr/cgi-bin/siepr/?q=system/files/shared/pubs/papers/pdf/99-18.pdf

[22] Jerry Hausman, Efficiency Effects on the U.S. Economy from Wireless Taxation, http://economics.mit.edu/files/1029

[23] Office of Management and Budget, OMB Circular No. A–11 (2015) Section 20 – Terms and Concepts, https://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/s20.pdf  

[24] Federal Communications Commission, In the Matter of Lifeline and Link Up Reform and Modernization Lifeline and Link Up Federal-State Joint Board on Universal Service Advancing Broadband Availability Through Digital Literacy Training Report and Order and Further Notice of Proposed Rulemaking, https://apps.fcc.gov/edocs_public/attachmatch/FCC-12-11A1.pdf

[25] Olga Ukhaneva, Universal Service in a Wireless World, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2430713

Summary

  • The United States and Japan have jointly researched, developed, and produced missile defense systems.
  • These programs help ensure the security of the Japanese homeland and U.S. troops stationed there.
  • Cooperation and collaboration in defense technology can also help the United States make the most of its defense budget by sharing costs with close allies.

Introduction

The United States and Japan share an alliance that is integral to the national security of both countries. Japan and America also share a similar geopolitical advantage: they are both bordered by water. In Japan’s case, however, the water does not separate the country from its unfriendly neighbors by much—and new weapons technologies are bringing threats very close to the Japanese homeland.

The proliferation of increasingly longer-range ballistic missiles in the Asia-Pacific region poses a serious threat to both Japan and the United States. North Korea has a proven arsenal of hundreds of ballistic missiles. Many of these missiles are capable of reaching Japan and threaten American military bases there.

In response to this threat, the United States has invested heavily in ballistic missile defense (BMD) to protect U.S. forces and allies in the region. One strategy the U.S. government has used to pursue BMD programs is close defense technology cooperation with Japan. This includes joint research, co-development, and co-production of weapons systems.

Mutual Security Concerns

In 1998, North Korea tested a long-range ballistic missile that flew over Japan and landed in the Pacific Ocean. Since then, North Korea has tested hundreds of short- to medium-range missiles in the direction of Japan. Estimates are that the country currently has around 100 ballistic missiles capable of hitting Japan.[1] The North Korean nuclear program presents an even more serious threat, as the country has demonstrated increasing technological sophistication in successive nuclear tests. North Korea claims to have miniaturization capability, which is necessary to mount nuclear warheads on missiles, though U.S. intelligence assessments have not confirmed this.[2]

China’s military buildup also raises serious security concerns in Japan, as the two countries have a long history of conflict. China disputes Japanese sovereignty over uninhabited but resource-rich islands in the East China Sea. Additionally, Chinese aggression in the South China Sea increases tension between the two countries and poses a risk of conflict in the region. Unlike North Korea’s, China’s military presents a more diverse threat to Japan, with significant land and sea assets to deploy in a potential conflict. It also has hundreds of ballistic missiles capable of reaching Japan—as well as at least 250 nuclear warheads.[3]

Missile Defense Cooperation

Overview

U.S.-Japan missile defense cooperation dates back to the 1980s, when the countries began bilateral research on BMD technology. The official program for joint research and development of BMD systems began in the late 1990s, shortly after the first North Korean missile test over Japan. Since that time, the United States and Japan have deployed both sea- and ground-based missile defense programs: on the sea, Aegis-equipped destroyer ships with Standard Missile-3 (SM-3) interceptors, and on land, Patriot Advanced Capability-3 (PAC-3) surface-to-air interceptor batteries. SM-3 provides upper-tier interception capability, and PAC-3 provides the capability to intercept missiles in the lower tier. Together, SM-3 and PAC-3 create a multi-tier (or “layered”) BMD system that intercepts missiles at different phases of the flight trajectory. Functionally, the way the BMD system intercepts a missile is like “a bullet hitting a bullet.”[4]

This image shows how the BMD systems work to destroy incoming missiles.

          Source: Japanese Ministry of Defense[5]

Current Capabilities

Japan has invested more in missile defense capability than any other country in Asia. It has four Aegis-equipped destroyers with SM-3 interceptors and plans to add four more. It also has 17 PAC-3 batteries dispersed to protect Tokyo and other key locations. These systems are linked with advanced radars and early warning sensors to detect missile threats. Japan is considering investing in a space-based early warning system in the future.[6]

Historical Timeline

The United States and Japan worked together closely to develop the BMD systems in the country. Just months after the 1998 North Korean missile test, the Japanese government approved a decision to cooperate with the United States on BMD. In 1999, the two countries signed a Memorandum of Understanding for joint research and development of BMD.[7] The first step was establishing a cooperative technical research program for BMD. This program was focused on developing a sea-based, upper-tier BMD system and involved joint design, prototype production, and testing.[8]

In 2003, the Japanese government made the decision to acquire BMD capabilities. At the same time, the government continued the U.S.-Japan joint technological research project for the purpose of developing improved capabilities in the future for the systems it procured.[9] The United States began deploying BMD assets to U.S. military bases in Japan in 2006, following the first North Korean nuclear test. In 2009, prior to a launch North Korea claimed was a “satellite,” Japan deployed the first of its own BMD assets, both the sea-based Aegis SM-3 and the ground-based Patriot PAC-3.[10]

Procurement and Production

Japan purchased some of its BMD assets from the United States. The Japanese procurement of the Aegis BMD system was the first time a missile defense capability produced by the U.S. Missile Defense Agency was sold to a military ally.[11] In addition to Foreign Military Sales, the two countries also negotiated a deal to allow Japan licensed production of BMD systems developed in the United States. Under this agreement, Mitsubishi Heavy Industries was able to produce PAC-3 interceptor missiles, which are produced in the United States by Lockheed Martin.[12] This PAC-3 system is said to be “the backbone of [Japan’s] BMD forces.”[13] Still, land-based BMD provides a relatively small area of coverage, and even though there are PAC-3 batteries deployed to protect Tokyo, much of the Japanese homeland remains vulnerable.

Current Cooperation

U.S.-Japan joint technological research and development on BMD continues. Currently, the two countries are co-developing SM-3 Block IIA interceptors to upgrade Aegis-equipped destroyers. The new interceptors, which will feature an increase in velocity, range, and divert capability, will be able to defeat longer-range ballistic missiles and are scheduled to be deployed in 2018. The two countries agreed to equitably share the financing and work on the project.[14] The U.S. government contracted with Raytheon, and the Japanese government contracted with Mitsubishi Heavy Industries.[15] Both companies were responsible for developing specific parts of the program. Co-production of the SM-3 Block IIA interceptors will proceed under licensed production similar to the PAC-3, where manufacturers in Japan produce equipment with technical assistance from American companies.

A key component of cooperation on BMD is real-time information sharing. The United States and Japan share early warning information and intelligence from forward deployed assets like ships and radar.[16] This improves target identification and tracking for both countries.

Finally, the U.S. military and Japan Self-Defense Forces participate in joint BMD training exercises to improve bilateral response capability and interoperability. Japan is the only country with which the United States has conducted kinetic BMD joint exercises.[17]

Spending/Cost Savings

This table shows both U.S. and Japanese budget allocations for co-development of the SM-3 Block IIA interceptor.

          Source: DOD Comptroller and Japan Ministry of Defense[18]

          *Japanese budget figures have been converted from Yen to U.S. Dollars using the annual average exchange rate for the year the budget was appropriated.

The United States has clearly spent more on the program than Japan, which stands to reason given that the U.S. defense budget is so much larger overall. Assuming that the United States would have developed the advanced interceptors anyway as a means to protect U.S. forces in the region and offer extended deterrence to an important ally, even unequal cost-sharing with Japan represents relief for the U.S. defense budget. Essentially, every dollar Japan spends to help develop SM-3 Block IIA is a dollar that the United States does not have to spend. In total then, cooperation with the Japanese has saved the United States $431 million so far—or 18 percent of the total program cost to date.
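The burden-sharing claim can be sanity-checked from the two figures quoted above. If Japan's $431 million equals 18 percent of program cost to date, the implied total and U.S. share follow directly. This is a back-of-envelope check using only the numbers in the text, not official budget data:

```python
# Back-of-envelope check of the SM-3 Block IIA cost-sharing figures quoted above.
japan_spend_m = 431.0   # Japan's contribution to date, in $ millions (from the text)
japan_share = 0.18      # Japan's share of total program cost to date (from the text)

# Implied total program cost and U.S. share.
total_cost_m = japan_spend_m / japan_share
us_spend_m = total_cost_m - japan_spend_m

# Every dollar Japan contributes is a dollar the United States need not spend,
# so U.S. "savings" equal Japan's contribution: $431M.
print(f"implied total: ${total_cost_m:,.0f}M, implied U.S. share: ${us_spend_m:,.0f}M")
```

The implied total of roughly $2.4 billion is consistent with the text's framing: unequal cost-sharing still represents meaningful relief for the U.S. defense budget.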

Next Generation

In addition to improving sea-based missile defense with the SM-3 Block IIA interceptors, there are next generation upgrades for land-based BMD on the horizon. There are a few options for next steps, all with the goal of expanding the area of coverage for BMD in Japan. The United States is currently upgrading its own Patriot PAC-3 systems with the next generation Missile Segment Enhancement (MSE) interceptors. Once equipped with these advanced interceptor missiles, PAC-3 can be used to protect against cruise missiles in addition to ballistic missiles.[19] There is also some speculation that Japan may be interested in acquiring Aegis Ashore, which would deploy the capabilities from Aegis BMD ships in a ground configuration. Another option would be to purchase the Terminal High Altitude Area Defense (THAAD) system. Whichever system or combination of systems Japan decides to pursue, cooperation with the United States will be important, both in terms of procurement and achieving interoperability with U.S. forces in Japan.

Conclusion

BMD cooperation with Japan has increased U.S. national security by deterring missile threats in a region home to both American troops and allies. In addition to buying BMD systems from the United States, Japan has made substantive technological and financial contributions to the development of next-generation improvements to these systems that will help defeat and deter mutual threats. Joint development and co-production enhance capabilities and stretch the defense budgets of both nations. At a time when the United States and Japan are each facing fiscal strain, making the most of each defense dollar is particularly important. The cost savings that come through defense technology cooperation represent greater alliance burden sharing with Japan.

Defense cooperation with Japan could provide a model for greater cost sharing with other U.S. allies. In recent years, the United States has co-developed missile defense systems with countries in Europe, such as Germany and Italy. The Department of Defense also has an initiative to accelerate defense cooperation with India, with co-development and co-production on the list of goals.[20] All of these partnerships represent steps toward closer cooperation and greater burden sharing with American allies. Given the strategic vulnerabilities inherent in joint development and production of weapons systems, however, this strategy should be reserved for the closest of allies.



[1] Riki Ellison and Ian Williams, “Japan: Priorities for Missile Defense Development and U.S. Partnership,” Missile Defense Advocacy Alliance, April 1, 2015, http://missiledefenseadvocacy.org/wp-content/uploads/2015/04/Japan-BMD-Report.pdf.

[2] Jethro Mullen, “North Korea Says It Can Miniaturize Nuclear Weapons,” CNN, May 20, 2015,  http://www.cnn.com/2015/05/20/asia/north-korea-nuclear-weapons/.

[3] Ellison and Williams, “Japan: Priorities for Missile Defense Development and U.S. Partnership.”

[4] Steven Hildreth, Susan Lawrence, and Ian Rinehart, “Ballistic Missile Defense in the Asia-Pacific Region: Cooperation and Opposition,” Congressional Research Service, April 3, 2015, https://www.fas.org/sgp/crs/nuke/R43116.pdf.

[5] Mai Yaguchi, “Japan's BMD Update - Presentation at the 2014 RUSI Missile Defence Conference,” March 19, 2014, http://www.slideshare.net/RUSIEVENTS/ms-mai-yaguchi.

[6] Hildreth, Lawrence, and Rinehart, “Ballistic Missile Defense in the Asia-Pacific Region: Cooperation and Opposition.”

[7] Richard Cronin, “Japan-U.S. Cooperation on Ballistic Missile Defense: Issues and Prospects,” Congressional Research Service, March 19, 2002, http://fpc.state.gov/documents/organization/9186.pdf.

[8] “Initiatives of Defense of Japan,” Japan Ministry of Defense, 2014, http://www.mod.go.jp/e/publ/w_paper/pdf/2014/DOJ2014_3-1-1_1st_0730.pdf.

[9] “Japan's BMD,” Japan Ministry of Defense, 2010, http://www.mod.go.jp/e/d_act/bmd/bmd.pdf.

[10] “Country Profile: Japan - Missile,” Nuclear Threat Initiative, November, 2014, http://www.nti.org/country-profiles/japan/delivery-systems/.

[11] “Aegis Ballistic Missile Defense,” Missile Defense Agency, http://www.mda.mil/system/aegis_bmd.html.

[12] “US Has Sealed Deal On Japan's Licensed Production Of PAC-3 Missiles,” Space Daily, July 16, 2005, http://www.spacedaily.com/report/US_Has_Sealed_Deal_On_Japans_Licensed_Production_Of_PAC3_Missiles.html

[13] Ellison and Williams, “Japan: Priorities for Missile Defense Development and U.S. Partnership.”

[14] “AEGIS SM-3 Block IIA Co-Development,” Missile Defense Agency, February, 2012, http://www.dtic.mil/descriptivesum/Y2013/MDA/stamped/0604881C_4_PB_2013.pdf.

[15] Daniel Wasserbly, “Next-generation SM-3 missile interceptor takes first flight,” IHS Janes’ 360, June 8, 2015, http://www.janes.com/article/52093/next-generation-sm-3-missile-interceptor-takes-first-flight.

[16] “Initiatives of Defense of Japan.”

[17] Hildreth, Lawrence, and Rinehart, “Ballistic Missile Defense in the Asia-Pacific Region: Cooperation and Opposition.”

[18] “Research Development, Test & Evaluation Programs (R-1),” Office of the Undersecretary of Defense (Comptroller), FY2010-FY2015, http://comptroller.defense.gov/Portals/45/Documents/defbudget/fy2016/fy2016_r1.pdf and “Defense Budget,” Japan Ministry of Defense, FY2010-FY2015, http://www.mod.go.jp/e/d_budget/.

[19] “Initiatives of Defense of Japan.”

[20] “U.S.-India Defense Technology and Trade Initiative (DTTI),” Department of Defense, http://www.acq.osd.mil/ic/DTTI.html.



Introduction

The Centers for Medicare and Medicaid Services (CMS) has finalized a rule regarding the Medicare reimbursement methodology for biosimilar products. Biosimilars are prescription medications that the Food and Drug Administration (FDA) has approved as “highly similar” to a specific biologic medication (known as the reference product).[1] It is easiest for most people to think of biosimilars as the equivalent of generics for brand name small molecule medications, though this is not scientifically accurate. Small molecule generics can be made chemically identical to their respective brand name drugs (save for any inactive ingredients) because they are chemically manufactured; exact copies of biological products, which by their nature are developed from living organisms, cannot be produced,[2] and patients may respond differently to the reference product and its biosimilars.

In recognition that biosimilars are not simply generic versions of biologics, the Biologics Price Competition and Innovation Act was included in the Affordable Care Act (ACA) to establish a different pathway for FDA approval and a different methodology for CMS reimbursement. The law directs that biosimilars be paid 100 percent of their volume-weighted Average Sales Price (ASP) plus 6 percent of their reference product’s ASP. (For comparison, small molecule drugs are paid 106 percent of the volume-weighted ASP of all brand name and generic versions of a particular drug.) Biosimilars were given this more favorable add-on in recognition that they are more expensive to produce than small molecule generics. However, there is now a debate over whether biosimilars should be paid under a single billing code based on the ASP of all biosimilars for a single reference product, as CMS has finalized, or under the ASP of each individual biosimilar, separate from any other biosimilar of the same reference product. Economic arguments and patient safety concerns may support the latter, though the statutory text on this matter is somewhat ambiguous.
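The two Part B payment formulas described above differ only in whose ASP anchors each term. A minimal sketch, where the $800 and $1,000 ASPs are hypothetical figures for illustration, not actual prices:

```python
# Part B reimbursement formulas as described in the text, with hypothetical ASPs.

def small_molecule_payment(volume_weighted_asp: float) -> float:
    """Small molecule drugs: 106% of the volume-weighted ASP of all
    brand name and generic versions of the drug."""
    return 1.06 * volume_weighted_asp

def biosimilar_payment(biosimilar_asp: float, reference_asp: float) -> float:
    """Biosimilars: 100% of the biosimilar's own volume-weighted ASP
    plus 6% of the reference biologic's ASP."""
    return biosimilar_asp + 0.06 * reference_asp

# Hypothetical example: reference biologic ASP $1,000, biosimilar ASP $800.
print(f"biosimilar payment: {biosimilar_payment(800.0, 1000.0):.2f}")   # 100% of $800 + 6% of $1,000
print(f"small-molecule analogue: {small_molecule_payment(800.0):.2f}")  # 106% of a blended $800 ASP
```

The design intent is visible in the second term: because the add-on is pegged to the (typically higher) reference product's ASP rather than the biosimilar's own, the biosimilar add-on does not shrink as biosimilar prices fall.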

Biosimilars are Not Generics

There are several issues to consider when weighing these options. By definition, biosimilars are not exactly the same as their reference biologic. Biosimilars are also not required to treat all the same indications and conditions as the reference product: one biosimilar may treat all or most of the same conditions as the reference product, while another biosimilar of that same reference product may treat only one indication. Further, even if two biosimilars of a single reference product treat the same indications, they are not necessarily biosimilar to each other (this would require a separate determination from the FDA). Thus, biosimilar products are not necessarily equally effective or valuable, and should not be reimbursed as though they were. Reimbursing these drugs equally does not allow fair compensation for higher quality products, or for the fact that one biosimilar may treat more indications than another, which presumably increases its cost of development.

If price is the only factor on which companies compete, a downward spiral may result, pushing prices to an unsustainably low level and eventually forcing high-value manufacturers to exit the market. As competition is reduced, prices could eventually rise for the few companies left standing, even if those products are not the ones most preferred by physicians and patients. The experience with sterile injectable generic drugs indicates that shortages can occur once prices have bottomed out.[3] The net effect of this policy, therefore, could be that the thriving biosimilars market policymakers were expecting in the United States might not materialize, and patients would not get the benefit of more treatment options and lower prices.

Further, because biosimilars will not be exactly the same, they cannot be used interchangeably (a drug must specifically receive “interchangeable” status from the FDA to qualify as an acceptable substitute without a physician’s order). However, some worry that physicians may inappropriately prescribe these drugs interchangeably if the reimbursement amount is the same. This could happen either because equal reimbursement inaccurately signals to the physician that the drugs are interchangeable, or because a change in a drug’s price means a physician could lose money by prescribing one biosimilar, encouraging the physician to change the prescription. Conversely, physicians have the responsibility to understand the differences among the drugs they prescribe, and drug manufacturers should ensure this information is readily available.

Difficulty Tracking Adverse Events

Another concern expressed by some is that using a single billing code will not allow for adequate pharmacovigilance to track and monitor adverse events among patients. If this is true, there is certainly reason to adjust this policy. Patients deserve to know that the biosimilars they are taking can be appropriately tracked to their respective manufacturer should there be a problem with a specific biosimilar.

Some claim that the small molecule drug industry, which uses a single billing code (HCPCS), has no trouble tracking adverse events using the National Drug Code (NDC) assigned to each product along with the manufacturer lot number and company name for each batch of drugs produced. However, research by the Food and Drug Law Institute has found that adverse events for small molecule drugs are likely often misattributed to the innovator drug.[4] This happens because reports may contain only a nonproprietary name, which results in drugs sharing that nonproprietary name being grouped together.[5] Further, the FDA’s Adverse Event Reporting System (FAERS) database has no data field for an NDC number, and the manufacturer lot number was reported only 10 percent of the time, which would arguably make it difficult to rely on these pieces of information for tracking adverse events.[6]

Additionally, because biosimilars are not exactly the same as their reference product, as generics are for their brand-name drug, biosimilars may have a different clinical profile from the reference product and therefore potentially greater differences in the side effects, making the consequence of misattribution for biosimilars potentially much more severe. Finally, while biosimilars in the European Union, Canada, Australia, and Japan are all identified by the International Nonproprietary Name (INN) given by the World Health Organization (WHO), and those countries do not report problems tracking adverse events, WHO has stated that the current approach may not suffice as more biosimilars enter the market, at which point more distinct identifiers may be necessary.[7] In a report published by the National Center for Biotechnology Information, the authors state “the effectiveness of [current] surveillance methods may be compromised when there are multiple manufacturers of products that share common drug nomenclature or coding.”[8]

Some argue that the FDA naming practice, in which a unique suffix is added to the proper name of each manufacturer’s product, will allow for sufficient pharmacovigilance. Others contend the FDA naming policy will not suffice, as a drug’s various naming and identifying components are not always used in billing, and the FDA’s new Sentinel Initiative, intended to track adverse events electronically more effectively and accurately, will rely on multiple sources of information, including claims data.[9] The bottom line is that the more differentiating factors, the better. The Secretary of Health and Human Services does not want to discover that the practice is insufficient only after an adverse event occurs and the affected patients cannot be identified.

The Statutory Text is Unclear

Finally, there is disagreement over the interpretation of the statutory text that prescribes how biosimilars should be reimbursed by CMS. The reimbursement formula contains two parts: 100 percent of the ASP of the biosimilar product (the base payment) plus 6 percent of the ASP of the reference product (an add-on payment, much like a dispensing and administrative fee). There is no disagreement over the part of the statute stating that the 6 percent add-on should be calculated solely from the volume-weighted ASP of the reference biological product, and that the ASP of the reference product should not be included in the base payment for biosimilar products; this is clearly and explicitly provided in §3139(a)(1)(B) of the ACA.[10][11]

Disagreement arises over CMS’ reading of how to calculate the base payment. The text states that payment is the sum of:

          “(A) the average sales price as determined using the methodology described under paragraph (6) applied to a biosimilar biological product for all National Drug Codes assigned to such product in the same manner as such paragraph is applied to drugs described in such paragraph; and

          “(B) 6 percent of the amount determined under paragraph (4) for the reference biological product (as defined in subsection (c)(6)(I)).”

CMS has interpreted this language to mean that all NDCs of all biosimilars with the same reference product should be combined to generate one shared ASP, just as all NDCs of small molecules (generic and brand name) are combined to create a single shared ASP. This position is supported by the explicit exemption of the reference product from the calculation of the base ASP, which is only necessitated if CMS’s interpretation is correct, and by the ACA’s reference to paragraph (6), which sets payment policy for multiple source drugs, as opposed to paragraph (4), which prescribes the payment amount for single source drugs or biologicals.

However, others interpret this same language to mean that all NDCs of a single biosimilar product (each biosimilar may have multiple NDCs based on formulary variations, dosing, or other minor differences) should be combined to generate that product’s unique ASP. They rely on the textual reference to NDCs of a single “product,” and on the financial differences between biosimilar and small molecule manufacturing (discussed supra), to support this position. If the latter interpretation is correct, biosimilar reimbursement would more closely resemble reimbursement for biologics than for small molecule generics.
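The practical stakes of the two readings come down to which NDCs are pooled when the volume-weighted ASP is computed. A sketch with two hypothetical biosimilars, A and B, of the same reference product (all names, prices, and volumes are illustrative):

```python
# Volume-weighted ASP under the two statutory readings, with hypothetical data.
# Each tuple is (ASP per unit in dollars, units sold) for one NDC.
ndc_sales = {
    "biosimilar_A": [(800.0, 1_000), (820.0, 500)],  # two NDCs of product A
    "biosimilar_B": [(700.0, 2_000)],                # one NDC of product B
}

def vw_asp(records):
    """Volume-weighted ASP: total dollars divided by total units."""
    units = sum(u for _, u in records)
    return sum(p * u for p, u in records) / units

# CMS reading: all biosimilars of one reference product share a single blended ASP.
shared = vw_asp([r for recs in ndc_sales.values() for r in recs])

# Alternative reading: each biosimilar product gets its own ASP.
separate = {name: vw_asp(recs) for name, recs in ndc_sales.items()}

print(f"shared ASP (one billing code): {shared:.2f}")  # one payment rate for A and B
print(f"per-product ASPs: {separate}")                 # A and B paid at different rates
```

Under the shared code, the higher-priced product A is paid below its own blended ASP while the lower-priced B is paid above it, which is the mechanism behind the text's concern that higher-value products could be squeezed out.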

While Congress’ intent in using this particular language is unknown, in cases of statutory interpretation such as this, the agency’s interpretation—in this case, CMS’—is generally accepted as correct.

Conclusion

Ultimately, how biosimilars, or any medication, should be reimbursed by CMS should be determined by economic principles based on the value of the medication to patients, without putting patient safety or access to such products at risk. If the statutory text does not clearly provide for that outcome, it should be amended. Because the rule finalized by CMS would provide equal reimbursement for all biosimilars of a single reference product, without regard for the relative value of those products based on factors such as the number of indications they are approved to treat or the side effects they cause, higher-value products may get squeezed out of the market. Even if CMS has interpreted the statute appropriately, and consistently with how reimbursement policies for other drugs have been determined under the same section of the law, the underlying policy appears to be flawed. This reimbursement methodology will likely have undesirable economic impacts, could stifle a fledgling biosimilars marketplace, and could create issues for patient safety and access to appropriate treatment options. Congress should proactively fix this flawed approach.


[11] This is in contrast to the way that small molecule drugs are reimbursed by CMS under Part B: reimbursement is equal to 106 percent of the volume-weighted average sales price of all brand name and generic versions of a particular drug; the 6 percent add-on is not calculated separately or based solely on the ASP of the brand name drugs.

 

The American Action Forum (AAF) studied the surge in Bakken oil production and its impact on the environment, and found air quality in the region essentially unchanged. Indeed, as U.S. and Bakken oil production has increased, air quality has generally improved. Oil production in the region has increased by 1,450 percent, while more than 90 percent of the air quality days in the region remain “good,” a trend unchanged since 2008. In addition, these economic boons have come alongside falling greenhouse gas emissions in the U.S.

Oil Production Boom

For the past several years, new techniques to extract oil and natural gas have led to surging production across the U.S., including the Bakken shale formation in North Dakota and parts of eastern Montana. The graph below charts the incredible growth in the region’s production, measured in millions of barrels per day.

As the data reveal, the Bakken region has increased output from 0.08 million barrels per day in January 2008, to 1.16 million by August 2015, a 14.5-fold uptick. During this time, the nation as a whole has also witnessed incredible growth in output. The graph below details U.S. oil production from 2008 to 2014, measured in thousands of barrels a day.

Although the Bakken gains have been more pronounced, as shown above, the U.S. as a whole has still increased production by 74 percent, from just over five million barrels a day to more than 8.7 million in 2014. This expansion has helped to drive down gas prices for consumers at the pump and contributed to the realistic possibility of U.S. energy independence for the first time in more than a generation. However, as noted, this growth has amplified claims that the fracking and oil boom has led to declining environmental quality. The data reveal these claims as mostly hollow.

Air Quality in Bakken

EPA labels air quality on a given day as one of the following: “good,” “moderate,” “unhealthy for sensitive groups,” “unhealthy,” “very unhealthy,” or “hazardous.” As AAF examined earlier this year, air quality is generally improving across the U.S., and the Bakken has enjoyed cleaner air as well. EPA defines “good” air days as those on which “air pollution poses little or no risk.” In the Bakken region, despite the oil production boom, more than 90 percent of the days in a year are labeled “good” by EPA. See the graph below:

 

Throughout the history of the oil boom in the Bakken, the percentage of the year with “good” air quality days has consistently stayed above 90 percent, with more than 94 percent considered “good” in 2014, a slight improvement over the 2013 figure (91 percent). The number of “unhealthy” days in the Bakken remains low, with fewer than three last year, down from nine in 2012. There is actually a positive correlation (albeit a modest one, with a correlation coefficient of just 0.36) between rising Bakken oil production and the overall number of “good” air quality days in the region.
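The correlation cited above is a standard Pearson calculation between two annual series. The sketch below shows the method with hypothetical placeholder figures, not AAF's underlying EPA and production data.

```python
# Pearson correlation between annual oil production and "good" air days.
# The series below are illustrative placeholders, not AAF's data.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

production = [0.08, 0.2, 0.3, 0.45, 0.7, 0.95, 1.1]  # million bbl/day (hypothetical)
good_days = [330, 335, 331, 336, 333, 338, 345]      # "good" days per year (hypothetical)
r = pearson(production, good_days)  # positive r: the two series tend to rise together
```

A coefficient of 0.36 on such data would indicate a weak but positive relationship, consistent with the report's characterization.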

Compare the Bakken’s results to upstate New York’s Marcellus region, which has banned fracking. In upstate New York, the average jurisdiction has never experienced more than 90 percent “good” air days in a year. These jurisdictions have averaged roughly 83 percent, compared to 93 percent for the Bakken. Increased oil and natural gas operations certainly don’t directly result in cleaner air, but there is little evidence in the Bakken area that production is causing poor air quality days for local residents.

Although “good” air quality days are a general indicator that the levels of environmental pollution are well within acceptable limits, AAF also examined smog in the Bakken region. EPA data tracks days throughout the year when ground-level ozone (smog) is the dominant pollutant. Even by that metric, the Bakken region has made slight improvements. The chart below tracks days in which ozone is the dominant pollutant for the average jurisdiction in the Bakken (with linear trendline).

 

As the data reveal, smog has gradually declined as the dominant pollutant in the area. In 2008, it was the main pollutant for 294 days; that figure dropped to 270 days by 2014, a decline of eight percent. This improvement is hardly earth-shattering, but it helps to dispel any notions that oil production in the Bakken is associated with dangerous air quality.

Greenhouse Gas Emissions

In addition to the positive news of energy independence and strong air quality, the amount of greenhouse gas (GHG) released into the atmosphere continues to decline, despite the renaissance in domestic energy production. The graph below tracks carbon dioxide emissions from 2005 to 2013.

 

As noted above, U.S. GHG emissions have declined from six billion tons in 2005 to roughly 5.5 billion tons by 2013, a drop of nine percent. Even Montana, located partially in the Bakken region, has managed to reduce emissions nine percent during the same period. Likewise, Pennsylvania, which has drastically increased its natural gas production, has decreased carbon dioxide emissions by eleven percent from 2005 to 2013.

Conclusion

Energy booms are not incompatible with clean air and a strong environmental record. The Bakken region increased oil production 1,450 percent, all while maintaining “good” air quality for more than 90 percent of the year. The U.S. economy may not have to choose between energy production and environmental stewardship after all.

  • This year restaurant employment in cities that raised their minimum wage has grown far slower than their surrounding areas

  • Restaurant employment in Seattle has only grown 0.6% this year, while growing 6% in the rest of Washington state

  • Restaurant employment in the San Francisco Bay area has only grown 1.9% this year; in the rest of California it has grown 3.6%

In recent years, policymakers and labor advocates have proposed raising the minimum wage at federal, state, and local levels. On the local level, these efforts have been somewhat successful as a number of major cities have enacted minimum wage increases over the last few years. These cities include San Francisco and Seattle, which both enacted laws to implement a “living wage” of $15 per hour. Beginning this year, several major cities took steps to phase in these large minimum wage increases. Now that we are almost to the end of 2015, these cities provide a new source of evidence to evaluate the labor market implications of raising the minimum wage. American Enterprise Institute’s Mark J. Perry has been tracking employment patterns both in Seattle and in the rest of the state of Washington, finding that restaurant employment in the city has been falling since the beginning of the year while it has been rising in the rest of the state. But how have the other major metropolitan areas fared? The facts show that this year restaurant employment in the major metropolitan areas with minimum wage increases has consistently experienced slower growth than restaurant employment in the rest of their states.

2015 Major Cities with Minimum Wage Increases

This year, the seven major U.S. cities that implemented minimum wage increases are Chicago, Louisville, Seattle, Washington, Oakland, San Francisco, and San Jose. These minimum wage increases are detailed in Table 1.

Table 1: Major City Minimum Wage Increases in 2015

City | Previous Minimum Wage | Current Minimum Wage | Date Effective | Minimum Wage Law

Chicago, IL | $8.25 | $10.00 | 7/1/2015 | $13 by 2019
Louisville, KY | $7.25 | $7.75 | 7/1/2015 | $9 by 2017
Seattle, WA | $9.47 | $11.00 | 4/1/2015 | $15 by 2018/2021
Washington, DC | $9.50 | $10.50 | 7/1/2015 | $11.50 by 2016

San Francisco Bay Area
Oakland, CA | $9.00 | $12.25 | 3/2/2015 | $12.55 by 2016
San Francisco, CA | $11.05 | $12.25 | 5/1/2015 | $15 by 2018
San Jose, CA | $10.15 | $10.30 | 1/1/2015 | Inflation Adjustment

Most of these minimum wage increases are the beginning of much larger minimum wage hikes. For instance, San Francisco’s and Seattle’s minimum wage increases were the first steps to increasing the cities’ minimum wages to $15. In other cities, the minimum wage implementation is much further along. San Jose raised its minimum wage from $8 to $10 in 2013. Since then, the minimum wage in San Jose continues to rise with inflation. This year it increased by only 15 cents from $10.15 to $10.30. Overall, the San Francisco Bay Area has been full of cities implementing substantial minimum wage increases. Besides San Francisco and San Jose, the cities in the Bay Area that are phasing in minimum wage hikes are Berkeley, Emeryville, Mountain View, Oakland, Sunnyvale, and Richmond.

How to Evaluate Employment Trends

As noted in a previous American Action Forum (AAF) paper, when evaluating the impact of the minimum wage it is necessary to ask the right question. For instance, evaluating the effect of the minimum wage on total employment trends would generally yield irrelevant results because only about 2 percent of all wage and salary workers earn at or below the federal minimum wage. It is very unlikely that a minimum wage increase would affect the majority of workers who earn significantly above the minimum wage. As a result, any analysis that examines the entire workforce is likely to understate the labor market consequences of raising the minimum wage because the vast majority of workers do not earn low wages.

Thus, in order to fairly examine the labor market consequences of increasing the minimum wage, it is essential to examine low-wage workers who actually earn at or slightly above the current minimum wage. This includes young, low-skilled workers, the exact population policymakers are trying to help by increasing the minimum wage. One way to zero in on the low-wage population is to examine recent job growth trends in an industry that actually employs low-wage workers. This paper examines 2015 job growth in restaurants, specifically food services and drinking places, which in 2013 employed 48.78 percent of all workers who earned at or below the federal minimum wage. Food services and drinking places continue to employ low-wage workers today, as the average hourly pay rate of production and nonsupervisory workers in that industry was only $11.51 in September 2015. Evaluating job growth in food services and drinking places provides a reasonable indication of how recent minimum wage hikes are impacting job creation for low-wage workers.

Methodology

To analyze trends in city restaurant employment relative to the rest of the state, we use the most localized data for each metropolitan area available in the Federal Reserve Bank of St. Louis’s Federal Reserve Economic Data (FRED). For each city, we use seasonally adjusted food services and drinking places employment data in the metropolitan statistical area (MSA) or metropolitan divisions (MD), illustrated below.

Table 2: Metropolitan Areas Analyzed

City | Metropolitan Area Analyzed

Chicago, IL | Chicago-Naperville-Arlington Heights (MD)
Louisville, KY | Louisville/Jefferson County (MSA)
Seattle, WA | Seattle-Bellevue-Everett (MD)
Washington, DC | District of Columbia

San Francisco Bay Area
Oakland, CA | Oakland-Hayward-Berkeley (MD)
San Francisco, CA | San Francisco-Redwood City-South San Francisco (MD)
San Jose, CA | San Jose-Sunnyvale-Santa Clara (MSA)

Although we do not directly analyze the cities that raised their minimum wages (except in the case of Washington, DC), analyzing employment trends in the entire metropolitan areas may still provide useful information on the impact of the minimum wage on low-wage employment. In addition, by evaluating the metropolitan areas, we are also able to take into account trends in smaller localities that are following in the footsteps of the major cities by raising their own minimum wages. This is particularly true in the San Francisco Bay Area, where a large number of smaller towns surrounding the major cities have enacted their own minimum wage increases. For instance, the Oakland-Hayward-Berkeley MD includes Oakland and Berkeley, both of which began phasing in minimum wage increases. In addition, the San Jose-Sunnyvale-Santa Clara MSA includes San Jose, Sunnyvale, and Mountain View, which all increased their minimum wages to $10.30 this year.

For each metropolitan area, we measure the growth in food services and drinking places employment in 2015. Specifically, we measure the percent change in food services and drinking places employment from December 2014 to September 2015, the most recent month with available data. We do the same for the state in which the metropolitan area is located, excluding the metropolitan area itself. Since Washington, DC is not located in a state, we compare restaurant employment growth in the District to the growth in Virginia and Maryland combined.

We then combine food services and drinking places employment in all of the metropolitan areas and combine employment in all of the metropolitan areas’ surrounding states. This allows us to compare restaurant employment growth in all the metropolitan areas that raised the minimum wage together to restaurant employment growth in all of the rest of the states together.
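The comparison described above reduces to simple percent-change arithmetic. The sketch below uses made-up employment counts, not the FRED series, to show the metro-versus-rest-of-state calculation, including the step of netting the metro area out of its state total.

```python
# Percent change in restaurant employment, Dec 2014 -> Sep 2015, for a
# metro area and the rest of its state (state total minus the metro area).
# Employment counts are illustrative placeholders, not the FRED data.

def pct_change(start, end):
    return (end - start) / start * 100

# (dec_2014, sep_2015) employment in thousands, hypothetical
metro = (120.0, 121.3)
state = (400.0, 411.0)  # statewide total, which includes the metro area

rest_start = state[0] - metro[0]
rest_end = state[1] - metro[1]

metro_growth = pct_change(*metro)               # growth inside the metro area
rest_growth = pct_change(rest_start, rest_end)  # growth in the rest of the state
```

The combined totals reported in the paper follow the same formula after summing employment across all metro areas (and, separately, across all rest-of-state areas).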

Results

The chart below shows that restaurant employment in metropolitan areas with cities that raised the minimum wage in 2015 has lagged behind employment in the rest of the states.

Clearly, restaurant employment has grown far slower this year in cities that raised the minimum wage than in the rest of the states in which those cities are located. This year restaurant employment in the metropolitan areas with major cities that raised the minimum wage only grew 1.1 percent through September. In the surrounding state areas, however, restaurant employment grew 2.8 percent.

Table 3 shows the percent growth in restaurant employment in each metropolitan area and in the rest of the area’s state.

Table 3: 2015 Growth in Restaurant Employment: Metropolitan Area vs. Rest of the State

City | Metropolitan Area | Rest of the State

Chicago, IL | 0.5% | 0.4%
Louisville, KY | 2.6% | 3.7%
Seattle, WA | 0.6% | 6.0%
Washington, DC | -0.6% | 0.9%
San Francisco Bay Area | 1.9% | 3.6%
Oakland, CA | 4.1% | 3.2%
San Francisco, CA | 1.4% | 3.4%
San Jose, CA | 0.0% | 3.4%
Total | 1.1% | 2.8%

Restaurant employment growth in the major cities that raised the minimum wage this year has consistently been slower than in the rest of the states in which the cities are located. Restaurant employment in Louisville grew 2.6 percent, 1.1 percentage points lower than restaurant employment growth in the rest of Kentucky. In Seattle, where the minimum wage is increasing to $15 per hour, restaurant employment has only grown 0.6 percent this year, while restaurant employment in the rest of the state of Washington grew 6 percent.[1] In the other city that is phasing in a $15 minimum wage, San Francisco, restaurant employment grew only 1.4 percent, while in the rest of California it grew 3.4 percent.

There are also a few exceptions in the major cities we examine. In particular, restaurant employment in Chicago grew at about the same rate as in the rest of Illinois (0.5 percent versus 0.4 percent). Also, restaurant employment in Oakland grew faster than the rest of California (4.1 percent versus 3.2 percent). Despite the faster growth in Oakland, however, restaurant employment in the three major cities of the San Francisco Bay Area only grew 1.9 percent. That is 1.7 percentage points slower than the 3.6 percent growth in restaurant employment experienced in the rest of California.

In Washington, DC, meanwhile, restaurant employment fell 0.6 percent, while in Maryland and Virginia combined it increased 0.9 percent. Interestingly, Washington, DC is the only city for which we are able to directly measure the change in restaurant employment in the city itself (rather than the metropolitan area), and it has by far the most negative employment trends. In all other cities we measure the change in restaurant employment in the metropolitan regions where these cities are located, which include towns that did not increase the minimum wage. As a result, employment trends in those smaller towns could be making growth in the metropolitan regions look more positive than what is actually being experienced in the cities that raised their minimum wage this year.

Conclusion

While cities continue to phase in large minimum wage hikes, the jury is still out on how these policies will impact local employment. The early data, however, do indicate a slowing of restaurant employment growth. As these steep minimum wage increases continue to be implemented, these trends may become more worrisome over time.



[1] Our results for Seattle differ from AEI’s Mark J. Perry’s because he used Seattle MSA and we used Seattle MD.

Nearly two years ago, the Affordable Care Act (ACA) implemented a new individual insurance marketplace, along with the promise of stability and affordability. The new individual marketplace was to rival the employer sponsored insurance (ESI) marketplace in stability and predictability, while premiums were to rise at rates much lower than the historical average. This paper seeks to evaluate the degree to which these promises have been fulfilled heading into year three. In this study we find that the cost of both the benchmark Silver plan and the lowest cost Bronze plan will increase by 10 percent in 2016.

Data and Methodology

The primary sources of data for this report are the 2014, 2015, and 2016 individual market medical landscape files that are available through the Centers for Medicare and Medicaid Services (CMS). These landscape files contain data on the health insurance plans that were offered through the federally-facilitated or federal-state partnership exchanges, which include 36 states in 2014, 37 states in 2015, and 38 states in 2016; only 35 states are included in all three datasets.

Breakaway Policy and the Robert Wood Johnson Foundation maintain a dataset on Silver plans offered in all states in 2014, which we use to calculate premiums for 2014 benchmark Silver plans in non-federal exchange states. In order to obtain information on 2015 and 2016 Silver plans that were not included in the data available through CMS, we use plan compare tools on state-based exchanges where they are available. Overall, we analyze data on 2014, 2015, and 2016 Silver plans for 43 states, which account for 461 of the 501 rating areas nationwide. Our estimates on the number of health insurance issuers in a given rating area are also based on Silver plan data.

We use plan compare tools on state-based exchanges to obtain information on 2015 and 2016 Bronze plans not included in the federal data. However, we were unable to include data on 2014 Bronze plans offered outside of federally facilitated or federal-partner exchanges. As a result, our analysis of Bronze plans is restricted to the 36 states that were included in the federal exchange data in 2014, which account for 407 of the 501 rating areas nationwide.

In computing average effects across rating areas, we calculate a weighted average of the effects in each rating area using the potentially eligible population. The potentially eligible population is defined as the number of individuals who are either uninsured or insured through the individual market, ineligible for Medicaid or the Children’s Health Insurance Program, and determined to be a legal resident. We estimate this population using the 2010-2012 American Community Survey.

All of the premium estimates in this report are based on those offered to a 27-year-old non-smoker. Unless otherwise noted, the premium changes do not include any subsidies that are available for middle-to-low income households. In our analysis of subsidized 2014 benchmark plans, we consider the post-subsidy premium for a 27-year-old non-smoker who earns $30,000 per year. Where we consider the implications for families, we calculate the premium for a married couple of 27-year-olds with two children.

We also use both the 2014 income contribution scale and 2014 federal poverty guidelines for calculating the premium subsidy for all three years. This is a simplification that ignores slight increases in both the federal poverty guidelines and the income contribution scale, which roughly offset each other in terms of the overall effect on the premium subsidy for an individual whose income does not change. More importantly, we assume that household income remains the same; an estimate that included some increase in income would show a reduction in premium subsidy.
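The population-weighting step described above can be sketched as follows; the rating-area figures are hypothetical stand-ins for the report's ACS-based eligible-population estimates.

```python
# Population-weighted average premium change across rating areas.
# Each tuple is (percent change, potentially eligible population);
# the numbers are illustrative, not the report's ACS estimates.

def weighted_avg(changes_and_weights):
    total_weight = sum(w for _, w in changes_and_weights)
    return sum(c * w for c, w in changes_and_weights) / total_weight

rating_areas = [
    (12.0, 500_000),  # rating area A: +12% premium change, 500k eligible
    (-5.0, 100_000),  # rating area B: -5%
    (8.0, 400_000),   # rating area C: +8%
]
avg = weighted_avg(rating_areas)  # pulled toward the high-population areas
```

Weighting by the eligible population keeps sparsely populated rating areas from dominating the national average.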

Determining Premium Growth

Each state market operates under different conditions and costs of health insurance vary between rating areas of different states. There are also a variety of plans within each rating area that consumers can purchase, and the enrollment numbers for each of those plans will fluctuate according to several factors. Additionally, risk protections that were guaranteed by the ACA (a guarantee that has proven empty), the growing pool of insured, and changes in the plans offered can all affect premiums in more indirect ways. Averages mask a lot of this variation; therefore, it is necessary to look at the national insurance marketplace from various viewpoints if any legitimate analysis of premiums is to be made. 

In discussions of health insurance costs, the premium for the second-lowest-cost Silver plan, referred to as the benchmark Silver plan, is often the focus because of its high enrollment—roughly 11 percent of the individual market. This year, in the 461 observed rating areas, the average premium for the benchmark Silver plan increased by 10 percent. However, many of those plans have changed: roughly 71 percent of the 2015 benchmark plans will not remain benchmark plans in 2016, with many of those plans leaving the market altogether. Among the 2015 benchmark plans that are still offered in 2016 (though not necessarily as the benchmark plan), the average premium increase has been 10 percent. Since 2014, the average cost of the benchmark plan has grown by 13.5 percent.
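Identifying the benchmark plan in a rating area is mechanical: sort that area's Silver-plan premiums and take the second lowest. A minimal sketch, using hypothetical premiums rather than actual landscape-file data:

```python
# The benchmark Silver plan is the second-lowest-cost Silver plan
# offered in a rating area. Premiums below are hypothetical.

def benchmark_premium(silver_premiums):
    if len(silver_premiums) < 2:
        raise ValueError("need at least two Silver plans in the rating area")
    return sorted(silver_premiums)[1]

premiums_2016 = [187.50, 203.10, 195.75, 224.00]  # monthly, 27-year-old non-smoker
bench = benchmark_premium(premiums_2016)  # second lowest: 195.75
```

Because the benchmark is re-identified each year from whatever Silver plans are on offer, a new low-priced entrant can change which plan is the benchmark even when no existing plan changes its premium.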

While the average cost of benchmark Silver plans serves as a quick way to gauge the cost of health insurance, averages mask variation. Non-benchmark Silver plans enrolled roughly 59 percent of the marketplace in 2015, while Bronze plans—the cheapest metal level on the market—enrolled about 20 percent. Premiums for the lowest cost Bronze plan have increased by 10 percent between 2015 and 2016, and by nearly 14 percent since 2014. Using these lower-cost plans as a measure may give a better understanding of how people are affected throughout the market, because lower cost plans are more likely to enroll those ineligible for subsidies and cost sharing.

Average rate increases disguise substantial variation across the country, which is divided into 501 different rating areas. For example, while we estimate that the lowest cost Bronze plans increased by about 10 percent on average, 20 of the 461 rating areas observed saw an average decrease of 10 percent. On the other hand, 85 of the 461 observed rating areas saw increases of over 25 percent. There is similar variation in the premiums of 2016 benchmark plans: premiums will increase by more than 30 percent in some areas, while other areas will see decreases of 20 percent or more.

Though averages only tell part of the story, it is noteworthy that the increases in both benchmark Silver plans and lowest cost Bronze plans are already approaching the 8 to 12 percent increases that afflicted the individual market before 2014. Increases of this size were promised to stop under the ACA.

Plan Turnover and its Impacts

Last year, 53 percent of consumers enrolled in a 2014 plan shopped around for a different plan in 2015. This year, the administration is encouraging people to do the same. Those who do not shop around for a new plan will be auto-enrolled in the plan in which they are currently enrolled, or in a similar-cost plan if their current plan is not available. For example, many of those currently enrolled in a 2015 benchmark Silver plan will have to enroll in a new plan in 2016. This turnover in benchmark plans—and in all the other plans on the market—is problematic because it affects both premiums and the overall cost of insurance, as subsidies change accordingly. Another unfortunate outcome of this turnover is the possibility of people being forced to switch health care providers as networks change with plans.

In 321 of the 461 rating areas, the benchmark plan has changed, and many of the former benchmark plans are no longer offered. As mentioned earlier, the average increase in premiums of 2015 benchmark plans was 10 percent; when subsidies are considered, premiums for those eligible increased 6.7 percent. In many rating areas, a rise in competition—introduced by new entrants and the process of bidding for the benchmark plan—can lead to a decrease in the benchmark premium even as premiums rise for most of the other offered plans. When this happens, many people in those rating areas will see premiums grow while subsidies decrease. In areas with only two insurers, average benchmark premiums increased by 13 percent, while premiums increased by 8.3 percent on average in areas with more than two insurers.

In the employer sponsored insurance market, consumers can expect average premium growth of roughly 6 percent without needing to worry about switching plans from year to year. The change in benchmark plans from year to year thus indicates the high level of flux in the individual market, and illustrates the amount of uncertainty consumers face in an unstable market.

Conclusion

It is important to remember that the current marketplace structure provided by the ACA is relatively new. In many ways, the health insurance marketplace is just as unpredictable as it was a year ago. With regard to cost, it is marked by disparities between states and rating areas. There is still a significant amount of movement among insurers as they enter and exit markets, introduce new plans, and remove old plans. Because of these factors, it will still take time to pinpoint the underlying cost growth in the individual market.


  • Consumers and governments stand to benefit from these innovative changes and a hands-off approach that incentivizes their creation.

  • Disruptive technologies bring new and expanded services to consumers.

In previous work, AAF focused on the online and traditional gig economy by exploring the changes in this labor force since the economic downturn.[1] But within this broad class, a narrower category of networked technologies has taken the spotlight to become a constant focus for policymakers, critics, and presidential hopefuls. These technologies meld peer-to-peer networks on both the production and consumption sides to sell goods and services.[2] While labor practices have been a constant topic of conversation, many have lost sight of the primary benefit of new network-based service technologies, and the opportunity they afford policymakers. Disruptive technologies bring new and expanded services to consumers. As consumers switch and efforts are made to place the new services into the old regulatory buckets, the costs of the old system are highlighted and policymakers have the opportunity to rebalance asymmetric regulations. The burdens of the regulatory system are extensive but often go unseen. While there has been much concern about how to regulate disruptive technologies, consumers and governments stand to benefit from these innovative changes and a hands-off approach that incentivizes their creation. This paper highlights the state of the modern gig economy, its consumer benefits, and the costs and burdens imposed by regulators.

How Networked Technologies Are Bringing Benefits to Consumers

Disruptive technologies and innovative business models benefit consumers because they internalize the power of markets, bringing that power to bear to transform a service. Markets are characterized by three broad features: they must attract a sufficient proportion of potential participants, both buyers and sellers; they must mitigate the problem of congestion by allowing participants alternative possibilities or making the markets clear quickly; and they must make it as simple as possible to participate safely in the market.[3] Disruptive network technologies employ advanced communication methods to help solve the first and third issues, while also bringing new participants and protection mechanisms into the market.

Broadly speaking, four features unite these new technologies:[4]

1. Enabled by advanced communication networks, new technologies reduce transaction costs and allow new trades to occur.

2. These trades often occur with assets that otherwise would not be used, thus bringing “dead capital” back to work for consumers.

3. These new organizational forms push services from isolated exchange to market exchange.

4. Review systems and other dynamic policing mechanisms instill trust on both sides of the bargain.

The reduction of transaction costs

For some time, the existence of companies posed a problem for economists. When consumers want a good or service, why can they not simply go to the market and contract for it? In other words, why are there so many companies? As Nobel Prize-winning economists Ronald Coase and Oliver Williamson explained, there are substantial costs that one must incur to participate in the market: to find out who has the good or service, to make a deal, and to ensure that the provider follows through on the agreement. Companies exist in large part to economize on what economists have come to call transaction costs. The nature of firms is intimately tied to the transaction costs within the market.

Advanced networked technologies such as Internet purchasing and mobile app commerce have dramatically reduced the cost of interacting, bargaining, and monitoring, allowing for the creation of new kinds of organizations. With this internalization of market processes, trade in goods and services has expanded. In turn, consumers have flocked to these networked services because of the added convenience, lower prices, and higher quality of service. This new organizational form first took hold with the Linux operating system and has become especially well known via the Wikipedia project. Decentralized exchanges and cheap communication over the Internet allowed programmers to connect and collectively edit and advance Linux, while those same tools have allowed Wikipedia editors to collect and publish over 5 million English-language pages.[5] The biggest names in this space, including Uber, Lyft, and Airbnb, operate in industries with large barriers to entry, which might also be understood as high initial setup costs, a kind of transaction cost.

While today these platforms are transforming specific services, there are countless smaller players that have created niche markets that could not have existed before ubiquitous communication. Two of the better-known organizations are Kiva, which allows people to lend money via the Internet to low-income and underserved entrepreneurs and students across 82 countries, and Kickstarter, which crowdfunds creative projects. These patronage platforms show how powerful networks can be when they utilize a vast network of suppliers, each committing relatively small funds, to serve a targeted domain of consumers. The case of financing is, however, just a small part of a much larger change in how assets are being deployed in the marketplace.

Resurrecting dead capital

At any given time, countless assets sit unused. In the US, about 1.5 rooms exist for every person, cars often sit idle in parking lots during business hours, and countless other durable goods go unused for long swaths of time.[6] Because these assets cannot easily be bought, sold, valued, or used as an investment, they are often called dead capital.[7] Reducing the cost to transact means these assets can be brought back into the market to be bought and sold, creating beneficial exchanges for both consumers and producers. While much of the focus has centered on Uber, Lyft, and Airbnb, it is likely that niche markets for specific goods and services like tools, boats, musical instruments, and extra factory space will see the biggest growth in upcoming years.[8]

Consider Airbnb. Renting out spare rooms and couches resembles the once common practice of "boarding out." Boarding was especially popular in the early 1900s among lower-income and recently immigrated families, who took in boarders to supplement income and to help assimilate newcomers, often drawn from the family's own social network.[9] Though once prevalent, boarding fell out of favor as regulation of boarding homes increased. Airbnb has brought these kinds of services back to consumers, putting extra rooms to work in the economy.

There are a number of clear benefits from expanded services. For one, consumers and producers have greater choices. One of the biggest advantages of more choice could come in the form of narrower price dispersion: while prices will not always decrease, the range of prices is likely to narrow around an average.[10]
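
The link between cheaper search and narrower price dispersion can be illustrated with a small simulation (all numbers hypothetical): when each buyer can cheaply compare more sellers before purchasing, the prices actually paid cluster together, even though posted prices are all over the map.

```python
# A minimal sketch of why cheaper search narrows the range of prices paid:
# each buyer samples k sellers and buys from the cheapest one, so as search
# costs fall and k rises, transacted prices cluster. Hypothetical numbers.
import random

random.seed(0)
posted = [random.uniform(50, 100) for _ in range(200)]  # sellers' posted prices

def paid_prices(k, trials=1000):
    """Prices paid when each of `trials` buyers compares k random sellers."""
    return [min(random.sample(posted, k)) for _ in range(trials)]

for k in (1, 5, 20):
    p = paid_prices(k)
    # spread of transacted prices shrinks as buyers compare more sellers
    print(k, round(max(p) - min(p), 1))
```

With k=1 (costly search) buyers pay anything a seller posts; with k=20 almost everyone pays close to the lowest posted prices, which is the dispersion-narrowing effect described above.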

As methodological techniques advance and data are collected, it is likely that the real benefits will be shown to accrue to the owners of the assets. Early data from Airbnb and other online marketplaces suggest that the introduction of these peer-to-peer platforms has the effect of substituting rentals for ownership, while conferring on these new owners the financial benefits that go along with ownership. Moreover, used-good prices tend to see drops that come hand in hand with increases in consumer surplus.[11] As a recent paper on this topic explained,

Consumption shifts are significantly more pronounced for below-median income users, who also provide a majority of rental supply. Our results also suggest that these below-median income consumers will enjoy a disproportionate fraction of eventual welfare gains from this kind of ’sharing economy’ through broader inclusion, higher quality rental-based consumption, and new ownership facilitated by rental supply revenues.[12]

While still early, these results are promising and suggest that both sides of the bargain actually win.

The transition from isolated to market exchange

By reducing transaction costs and reintroducing dead capital into the market, new technologies transform isolated exchanges, one-to-one interactions, into market exchanges that bring both customer and producer knowledge to a wider audience. In the parlance of economists, search costs are reduced and the market is expanded.

The concept of isolated exchange dates back to Aristotle's work on economics.[13] In this kind of exchange, the deal is limited to those in a close network, often among friends; it is not standardized, nor does it take other players in the market into consideration. The knowledge embedded in prices that comes from extensive and liquid markets is simply absent when interactions are one-offs. Transitioning from one-off trades to a more fluid market exchange has serious benefits, especially for consumers. As experiments in this space have shown, in some cases only four buyers and four sellers are needed to bring about competitive outcomes, but only if those exchanges are structured as a market.[14]
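
A stylized call-market clearing rule shows how even four buyers and four sellers can land on a competitive price. The valuations below are made up for illustration; laboratory experiments typically use repeated double auctions, but the clearing logic is similar.

```python
# A minimal call-market sketch (hypothetical values): sort bids high-to-low and
# asks low-to-high, and trade wherever the bid still exceeds the ask. Because
# both lists are sorted, filtering crossed pairs equals truncating at the
# first non-crossing pair.
bids = sorted([40, 35, 30, 25], reverse=True)   # buyers' willingness to pay
asks = sorted([20, 28, 33, 38])                 # sellers' costs

trades = [(b, a) for b, a in zip(bids, asks) if b >= a]

# The clearing price lies between the last crossing bid and ask.
last_bid, last_ask = trades[-1]
print(len(trades), (last_bid + last_ask) / 2)   # 2 trades at a price of 31.5
```

Two units trade at a price near 31.5, squeezing out the high-cost sellers and low-value buyers, which is the competitive outcome an isolated one-off deal between any single pair would be unlikely to reach.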

Limited knowledge about demand has long constrained the expansion of some services. Airbnb has built a massive catalogue of rooms in areas that have extra space but where hotels do not typically cluster. Indeed, this is a driving factor behind the company: instead of a typical hotel room, many flock to the site because it offers a unique and comfortable space to connect with new people and rest.[15]

Taxicabs have long been criticized for not serving some neighborhoods and some economic groups. In part, this is a knowledge problem: the demand for a product or service exists but is not known to exist. According to one analysis of actual Uber rides, the company's expansion has largely benefited low-income New Yorkers living in neighborhoods below the city's median household income.[16] While demand for these services has always existed in underserved neighborhoods, knowledge of that demand can only now be quantified, pushing service to expand into these new areas. Indeed, in a survey of consumers of these products, the third most cited reason for use is that the product or service cannot be found elsewhere.[17]

Personal reputation and trust enabled by reviews

Like other kinds of trade, economic exchange enabled by new technology is built on a system of trust, supported by rating systems and electronic verification techniques. The sophistication of these rating systems underpins the ecosystem and is clearly changing how consumers police bad actors. Paradoxically, while consumers have lost trust in large companies, peer-to-peer networks and social selling may be adopted in part because they are more psychologically appealing, providing an experience far removed from bureaucratic organizations.[18]

It is a mistake to think that online reputational systems are simple affairs. As should be expected, buyers consider reviews and use them for a variety of purposes when purchasing goods. Moreover, “negative reviews do hurt sellers, but that a negative reputation hurts more than a positive one helps on some dimensions but not on others.”[19] Also, online sellers are able to distinguish themselves by replicating unique selling points highlighted by successful brands.
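
One reason these systems are not simple affairs is that a raw average is easily distorted by a handful of early reviews. A common remedy, sketched here with hypothetical parameters (the prior mean and weight are illustrative, not any platform's actual formula), is to blend each seller's average with a platform-wide prior so scores earn credibility gradually.

```python
# A minimal sketch of reputation smoothing: blend observed ratings with a
# platform-wide prior so a few early reviews cannot dominate the score.
# prior_mean and prior_weight are hypothetical tuning values.
def smoothed_rating(ratings, prior_mean=3.5, prior_weight=5):
    """Weighted blend of a seller's observed ratings and the prior."""
    total = sum(ratings) + prior_mean * prior_weight
    return total / (len(ratings) + prior_weight)

print(round(smoothed_rating([5, 5]), 2))    # two perfect reviews stay near the prior: 3.93
print(round(smoothed_rating([5] * 50), 2))  # many reviews dominate the prior: 4.86
```

A new seller with two perfect reviews scores well below an established seller with fifty, which captures how review volume and review content jointly build the reputation that constrains sellers.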

Combined, these forces act as constraints on businesses, as evidenced in research on Yelp, a service-rating website. It has been found that a one-star increase in Yelp's five-star rating system leads to a 5 to 9 percent increase in revenue for independent restaurants. For chain restaurants, however, ratings seem to have no appreciable effect on revenue. Yet as Yelp penetration has expanded, the market share of chain restaurants has diminished.[20] In other words, online consumer reviews are standing in for more traditional forms of reputation.
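
To put the cited estimate in concrete terms, the arithmetic below applies the 5 to 9 percent-per-star range to a hypothetical independent restaurant with $500,000 in annual revenue.

```python
# Back-of-the-envelope use of the cited estimate: a one-star Yelp increase is
# associated with a 5-9 percent revenue gain for independent restaurants.
# The $500,000 baseline is hypothetical.
def revenue_after_rating_gain(revenue, stars_gained, effect=0.05):
    """Apply a per-star revenue effect; use effect=0.09 for the high end."""
    return revenue * (1 + effect) ** stars_gained

print(round(revenue_after_rating_gain(500_000, 1)))        # low end: 525000
print(round(revenue_after_rating_gain(500_000, 1, 0.09)))  # high end: 545000
```

For a restaurant of that size, one star is worth roughly $25,000 to $45,000 a year, which is why review scores function as a real constraint on behavior.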

Yet those traditional forms of reputation are also undergoing changes, mirroring one of the oldest forms of selling: direct selling. Direct selling is built on relationships, distinct from physical and online retail outlets. Because buyer and seller are closely connected, the performance of the service or good is continually monitored. Now those kinds of connections, and the resulting consumer insights, are being extended into online social channels, fostering closer buyer-seller relationships. New platforms create spaces for communication and personal brand management that had previously been the province of direct sellers. Thus, the constant two-way communication embedded in networks has extended the scope of this important constraint into other methods of selling.

This more robust knowledge ecosystem has had direct effects on taxicabs, much as Yelp reviews have had on restaurants. Because Uber provides extensive data for research, some interesting relationships can be found between its rise and the quality of local taxis. After controlling for a number of other factors, Uber's expansion is associated with a decline in consumer complaints per trip about taxis in New York. In Chicago, the story is similar: the networked service's growth is associated with a decline in complaints about broken credit card machines, faulty air conditioning and heating, rudeness, and drivers talking on cell phones.[21]

But for taxicabs, this is a two-way street. As a profession, taxi drivers face some of the highest levels of workplace violence, up to 33 times the average. Rates of nonfatal assault, homicide, robbery, and verbal abuse are among the highest for taxicab drivers, while the perpetrators of the less serious offenses typically face few legal consequences. With Uber, personal reputation is quantified on both sides of the transaction. Moreover, because no cash changes hands, the possibility of robbery is severely diminished, and if problems arise, identifying information on the rider is kept, allowing for ex post enforcement. All combined, the driver's safety is better assured with Uber.

The Unseen Cost of Regulation

While innovation in business models for independent businesses has enhanced economic opportunity for sellers and the quality of service options for consumers, changes in the marketplace have also given rise to new efforts to regulate these firms. The laws some want to apply to these upstarts are meant to ensure safety, quality, and transparency. Yet, as the section above highlights, desirable outcomes for both consumers and producers are already occurring with these networked technologies. As has been detailed for some time, laws meant to ensure consumer protection act as hurdles for new entrants, imposing costs on the economy, producers, and consumers.

However, it is difficult to calculate just how extensive this drag is. Two problems have plagued past studies of regulation. Some studied specific and narrow areas, like environmental or telecommunications regulation, and were thus limited in scope. Others used more generalized methods, like the World Bank's Ease of Doing Business Index, which quantifies the regulatory burden comparatively across regions.[22] While this method has proven useful for making comparisons and connecting the data with economic growth figures, the downside is that the absolute cost to consumers is difficult to ascertain. Efforts to calculate the total burden of government mandates have been taken up in recent years and are taking a number of different forms.

One way to get at the cost of regulation can be found in AAF's Regulation Rodeo, which compiles the cost of final rules as published in the Federal Register. The innovation of this program lies in data collection and ease of access, as it compiles official figures. According to the rules implemented so far this year, consumers will ultimately have to bear $90 billion in costs and 48 million paperwork hours.[23] However, this method does not include all of the proposed rules, which, when summed for the whole of 2014, came to a grand total of $181.5 billion.[24]

Another method of articulating the cost of regulation uses textual analysis to count the instances in the Code of Federal Regulations (CFR) where individuals or companies are constrained in their actions.[25] Using this method, researchers found that the total number of restrictions in the CFR rose from about 835,000 in 1997 to over 1 million in 2012.[26]
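
The counting method can be sketched in a few lines. This is a simplified illustration of the approach, not the researchers' actual code or term list; the sample sentence is invented.

```python
# A simplified sketch of the restriction-counting approach: tally restrictive
# terms such as "shall", "must", "may not", "prohibited", and "required" in
# regulatory text. The term list and sample text are illustrative only.
import re

RESTRICTIVE = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text):
    """Count whole-word occurrences of each restrictive term."""
    text = text.lower()
    return sum(len(re.findall(r"\b" + re.escape(term) + r"\b", text))
               for term in RESTRICTIVE)

sample = ("A licensee shall maintain records and must file annually. "
          "Transfers are prohibited unless approval is required and granted; "
          "a licensee may not operate without a permit.")
print(count_restrictions(sample))  # 5
```

Run over the full CFR year by year, counts like these yield the time series of restrictions described above.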

Both of these projects help illuminate the extensive unseen costs built into markets whenever the federal government requires or restricts some action on the part of companies or individuals, a burden that is ultimately passed on to consumers. Though disruption takes many forms, what unites these technologies and business models is that they upend regulatory regimes and lower barriers to entry.

The decline of taxicab medallion prices across the country provides a visceral example. Before Uber and Lyft, driving a taxicab in Chicago required that you first pay the city $357,000 for a taxicab medallion.[27] In New York City, the cost of entry was much higher, at right around $1 million.[28] In Boston, the price of a medallion reached $625,000, while in San Francisco medallions were selling for $300,000, with the city taking a $100,000 cut.[29] Those prices are now falling dramatically. In Chicago, the price has fallen to about $270,000, and the number of medallions being transferred has also dropped sharply. In New York City, the cost of a medallion was down about 25 percent at the beginning of 2015, and the once relatively liquid market has come to a standstill.[30]

So why did the medallions cost so much, and why are prices now falling? Only a certain number of medallions were available at any one time, so a medallion's cost represents, in part, the going rate of exclusion, also known as an entry barrier. As those entry barriers come down, so does the price of a medallion. The story is the same for nearly every city. At first, there was free entry into and exit from the taxicab market. Then, as a result of efforts to regulate the industry, the medallion system was put into place, and only those with a medallion could drive a taxi. When the first medallions were stamped in the 1930s, those grandfathered into the system got a windfall. As economists say, the asset (the medallion) is capitalized, and the cost of acquiring it is baked in. From then on, anyone who wants to enter the market has to pay for access by purchasing or leasing a medallion. Once competition enters the market, the asset holders are left with real asset losses, leading to what Gordon Tullock called a transitional gains trap.
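
The capitalization logic can be made concrete with a stylized perpetuity formula: a medallion's price approximates the present value of the stream of excess profits that exclusive entry protects. The profit figures and discount rate below are hypothetical, chosen only to show the mechanism.

```python
# A stylized capitalization sketch: a medallion's price approximates the
# present value of a perpetual stream of entry-protected excess profits,
# so the price falls as competition erodes those profits.
# All numbers are hypothetical.
def medallion_value(annual_excess_profit, discount_rate=0.05):
    """Present value of a perpetuity: profit / discount rate."""
    return annual_excess_profit / discount_rate

print(round(medallion_value(50_000)))  # protected entry: roughly $1,000,000
print(round(medallion_value(35_000)))  # entry by competitors erodes profits and the price
```

When new entrants compete away part of the protected profit stream, the capitalized asset value drops immediately, which is why medallion prices fell so fast once Uber and Lyft entered.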

And yet there is not a huge difference between the medallion system and other regulations meant to affect the quality and quantity of service. Occupational licenses, which aim to ensure these consumer benefits, are proving to be another sad story of regulatory overreach, now covering about 1 in 3 jobs, compared to fewer than 5 percent in the early 1950s.[31] Both the medallion system and occupational licensing schemes act as a fixed cost and deter entry, thus increasing prices for consumers and keeping people out of certain professions. Only those who can jump through the regulatory hoops are able to secure the benefits, which has real and immediate impacts. While the goals of these regulations are often laudable, their implementation imposes inefficiencies on the market, including limiting consumer choice, raising consumer costs, increasing practitioner income, limiting practitioner mobility, and depriving the poor of adequate services.[32] The White House has even chastised the broad nature of these regulations, noting that:

Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing. In some cases, alternative forms of occupational regulation, such as State certification, may offer a better balance between consumer protections and flexibility for workers.[33]

It is important to note that the claimed benefit of these regulations, improved quality of service, is often hard to find in the data.[34] So not only do these laws impose costs, they do so in a way that does not improve the underlying services. By bringing together consumers and producers of goods and services, networked technologies and independent business models not only place pressure on this clunky regulatory system but also benefit consumers through the introduction of new services.

Conclusion

New technologies are upending the typical role of regulation. In a previous generation, it was regulation that protected consumers. Now reciprocal rating systems and transparency provide consumers with critical buying information, giving all of us a chance to rethink why we regulate. Importantly, review and rating systems come as a result of expanding networks, which have brought a bevy of benefits to consumers and producers. Legislators need to be acutely aware of the benefits of change as they craft policy and work to ratchet down laws already on the books. Moreover, many services are growing and being funded because there are opportunities separate from the traditionally regulated economy. In their zeal to protect and tax, governments all too often get into the business of picking winners and losers. New waves of products are forcing cities to justify their policies. Here's hoping the innovators win.


[1] Will Rinehart, Ben Gitis, Independent Contractors and the Emerging Gig Economy, http://americanactionforum.org/research/independent-contractors-and-the-emerging-gig-economy.

[2] Companies in this category include both on-demand services like Uber, Lyft, and Instacart and online marketplaces like Airbnb, Kiva, and Etsy.

[3] Alvin Roth, What Have We Learned from Market Design?, https://dash.harvard.edu/bitstream/handle/1/2579650/Roth_market%20design.pdf.

[4] All of these four features have been highlighted elsewhere, but this paper is especially indebted to the work conducted by Christopher Koopman, Matthew Mitchell, and Adam Thierer in “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.”  

[5] Wikipedia, Wikipedia:Size in Volumes, https://en.wikipedia.org/wiki/Wikipedia:Size_in_volumes

[7] Free to Choose Network, Dead Capital, http://www.thepowerofthepoor.com/concepts/c6.php.

[8] See note 6.

[10] Michael R. Baye, John Morgan, and Patrick Scholten, Information, Search, and Price Dispersion, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.482.7461&rep=rep1&type=pdf.

[11] Samuel P. Fraiberger and Arun Sundararajan, Peer-to-Peer Rental Markets in the Sharing Economy, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2574337.

[12] Ibid.

[13] Robert B. Ekelund, Jr., Robert F. Hébert, A History of Economic Theory and Method.

[16] Jared Meyer, How Uber is serving low-income New Yorkers: The mayor and City Council should study that, not only Manhattan congestion, http://www.nydailynews.com/opinion/jared-meyer-uber-serving-low-income-new-yorkers-article-1.2346353.

[17] Jeremiah Owyang, People Are Sharing in the Collaborative Economy for Convenience and Price, http://www.web-strategist.com/blog/2014/03/24/people-are-sharing-in-the-collaborative-economy-for-convenience-and-price/.

[19] Anindya Ghose, Panagiotis G. Ipeirotis, Arun Sundararajan, The Dimensions of Reputation in Electronic Markets, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885568.

[20] Michael Luca, Reviews, Reputation, and Revenue: The Case of Yelp.com, http://www.hbs.edu/faculty/Pages/item.aspx?num=41233.

[21] Scott Wallsten, The Competitive Effects of the Sharing Economy: How is Uber Changing Taxis?, http://www.techpolicyinstitute.org/files/wallsten_the%20competitive%20effects%20of%20uber.pdf.

[23] American Action Forum, Regulation Rodeo, https://regrodeo.com/?year%5B0%5D=2015  

[24] Sam Batkins, 2014: Year of Action, Year of Regulation, http://americanactionforum.org/research/2014-year-of-action-year-of-regulation.

[25] Omar Al-Ubaydli and Patrick A. McLaughlin, RegData A Numerical Database on Industry-Specific Regulations for All US Industries and Federal Regulations, 1997–2012, http://mercatus.org/sites/default/files/McLaughlin-RegData.pdf.

[26] Ibid.

[27] Aamer Madhani, Once a sure bet, taxi medallions becoming unsellable, http://www.usatoday.com/story/news/2015/05/17/taxi-medallion-values-decline-uber-rideshare/27314735/

[28] Matt Flegenheimer, $1 Million Medallions Stifling the Dreams of Cabdrivers, http://www.nytimes.com/2013/11/15/nyregion/1-million-medallions-stifling-the-dreams-of-cabdrivers.html?_r=0.

[30] Josh Barro, New York City Taxi Medallion Prices Keep Falling, Now Down About 25 Percent, http://www.nytimes.com/2015/01/08/upshot/new-york-city-taxi-medallion-prices-keep-falling-now-down-about-25-percent.html?_r=0.

[33] White House, Occupational Licensing: A Framework for Policymakers, https://www.whitehouse.gov/sites/default/files/docs/licensing_report_final_nonembargo.pdf.

[34] Morris M. Kleiner, Allison Marier, Kyoung Won Park, and Coady Wing, Relaxing Occupational Licensing Requirements: Analyzing Wages and Prices for a Medical Service, http://www.nber.org/papers/w19906


For some time, the existence of companies posed a problem for economists. When consumers want a good or service, why is it that they cannot just go to the market and contract for it? In other words, why are there so many companies? As Nobel Prize winning economists Ronald Coase and Oliver Williamson explained, there are huge costs that one has to incur to participate in the market, to find out who has the good or service, to make a deal, and to ensure that the provider follows through on their agreement. Thus, companies capitalize on what economists have come to call transactions costs. The nature of firms is intimately tied with the transactions costs within the market.   

Advanced networked technologies such as Internet purchasing, mobile app commerce, etc., have dramatically reduced the cost of interacting, bargaining, and monitoring, allowing for the creation of new kinds of organizations. Thus, with the internalization of market processes, trade in services and goods is expanded. In turn, consumers have flocked to these networked services because of the added convenience, lower prices, and higher quality services. This new organizational form first took hold with the Linux operating system and has become especially well known via the Wikipedia project. Decentralized exchanges and cheap communication over the Internet allowed for programmers to connect to collectively edit and advance Linux, while those same tools have allowed for Wikipedia editors to collect and publish over 5 million English language pages.[5] The biggest names in this space, including Uber, Lyft, and Airbnb operate in industries with large barriers to entry, which also might be understood as having high initial setup costs, a kind of transaction cost.

While today these platforms are transforming specific services, there are countless smaller players that have created their own niche market that could not have been possible before ubiquitous communication. Two of the more well-known organizations include Kiva, which allows people to lend money via the Internet to low-income and underserved entrepreneurs and students across 82 countries, and Kickstater, which crowdfunds creative projects. These patronage platforms evince how powerful networks can be if they utilize a vast network of suppliers, who have committed relatively small funds, to affect a targeted domain of consumers. The case of financing is, however, just a small part of a much larger change in how assets are being deployed in the marketplace. 

Resurrecting dead capital

At any given time, there are countless assets not being used. In the US, about 1.5 rooms exist for every person, cars will often sit idle in parking lots during business hours, and countless other durable goods will go unused for long swaths of time.[6] Because these assets cannot easily be bought, sold, valued or used as an investment, they are often called dead capital.[7] Reducing the cost to transact means that these assets can be brought back into the market to be bought and sold, creating beneficial exchanges for both consumers and producers. While much of the focus has centered around Uber, Lyft, and Airbnb, it is likely that niche markets for specific goods and services like tools, boats, musical instruments, and extra factory space will see the biggest growth in upcoming years.[8]

Consider Airbnb. Renting out spare rooms and couches resembles the once common practice of "boarding out." Boarding was especially popular in the early 1900s with lower-income and recently immigrated families to supplement income and help assimilate newcomers, who were often within that family’s social network.[9] Boarding out was prevalent, but fell out of favor as regulation of boarding homes increased. Airbnb has brought back these kinds of services to consumers, reemploying extra rooms into the economy.

There are a number of clear benefits from expanded services. For one, consumers and producers have greater choices. One of the biggest advantages of more choice could come in the form of narrowing of prices. Thus, while prices will not always decrease, it could be that the absolute range of prices will be narrowed around an average.[10]

As methodological techniques advance and data is collected, it is likely that the real benefits will be conferred to the owners of the asset. Early data from Airbnb and other online marketplaces suggests that the introduction of these peer-to-peer production firms has the effect of substituting rentals for ownership, thus conferring onto these new owners the financial benefits that go along with ownership. Moreover, used-good prices do tend to see drops that come hand in hand with consumer surplus increases.[11] As a recent paper on this topic explained,

Consumption shifts are significantly more pronounced for below-median income users, who also provide a majority of rental supply. Our results also suggest that these below-median income consumers will enjoy a disproportionate fraction of eventual welfare gains from this kind of ’sharing economy’ through broader inclusion, higher quality rental-based consumption, and new ownership facilitated by rental supply revenues.[12]

While still early, these results are promising and suggest that both sides of the bargain actually win.

The transition from isolated to market exchange

By reducing transaction costs and reintroducing dead capital back into the market, new technologies are able to transform isolated exchanges, a one-to-one interaction, into market exchanges, that brings both customer and producer knowledge to a wider audience. In the parlance of economists, search costs are reduced and the market is expanded.

The concept of isolated exchange is one that dates back to Aristotle’s work on economics.[13] In this kind of exchange, the deal is limited to those in a close network, often among friends, and is not standardized, nor does it take into consideration other players in the market. The knowledge imbued in prices that comes from extensive and liquid markets is simply not the same when those interactions are just one offs. Transitioning from these one off trades to a more fluid market exchange has serious benefits, especially for consumers. As experiments in this space have shown, in some cases only four buyers and four sellers are needed to bring about competitive outcomes, but only if the structure of those exchanges is a market.[14]

Limited knowledge about demand for a service has long been a limiting factor in the expansion of some services. Airbnb has built a massive back catalogue of rooms in areas that have extra spaces but aren’t typically in the same places that hotels aggregate. Indeed, this is a driving factor behind the company. Instead of a typical hotel room, many flock to the site because it offers a unique and comfortable space to connect with new people and rest.[15]

Taxicabs have long been criticized for not providing service in some neighborhoods and to some economic groups. In part, this is due to knowledge problems in that the demand for a product or service exists, but isn’t known to exist. According to one analysis of actual Uber rides, the expansion of the company has largely benefited low-income New Yorkers, who live in the bottom half of New York City median household income.[16] While there has always been a demand for these services in underserved neighborhoods, knowledge of that demand can only now be quantified, pushing service to expand into these new areas. Indeed, in a survey of consumers of these products, the third most cited reason for use is that the product or service cannot be elsewhere.[17]

Personal reputation and trust enabled by reviews

Like other kinds of trade, economic exchange enabled by new tech is built on a system of trust, which is supported via rating systems and electronic verification techniques. The sophistication of these rating systems enables the ecosystem and is clearly changing the nature of consumer regulation of bad actors. Paradoxically, while consumers have lost trust in large companies, peer-to-peer networks and social selling might be adopted in part because they are more psychologically appealing as they provide an experience that is distant from bureaucratic organizations.[18]

It is a mistake to think that online reputational systems are simple affairs. As should be expected, buyers consider reviews and use them for a variety of purposes when purchasing goods. Moreover, “negative reviews do hurt sellers, but that a negative reputation hurts more than a positive one helps on some dimensions but not on others.”[19] Also, online sellers are able to distinguish themselves by replicating unique selling points highlighted by successful brands.

Combined, these forces act as constraints on businesses as evidenced in research on Yelp, a service rating web site. For one, it has been found that a one star increase in Yelp’s five star rating system leads to between a 5 and 9 percent increase in revenue for independent restaurants. For chain restaurants, however, ratings seem to not have an appreciable effect on revenue. However, as Yelp penetration has expanded, the market share of chain restaurants has diminished.[20] In other words, online consumer reviews are standing in for more traditional forms of reputation.

Yet those traditional forms of reputation are also undergoing changes, mirroring one of the oldest forms of selling: direct selling. Direct selling is built on relationships, distinct from physical and online retail outlets. Because buyer and seller are closely connected, the performance of the good or service is continually monitored. Now, those kinds of connections, and the resulting consumer insights, are being extended into online social channels via technology, fostering closer relationships between buyers and sellers. New platforms create spaces for communication and personal brand management that had largely gone unnoticed outside of direct selling. Thus, the constant two-way communication embedded in networks has expanded the scope of this important constraint to other methods of selling.

This more robust knowledge ecosystem has had direct effects on taxicabs, much as Yelp reviews have had on restaurants. Because Uber provides extensive data for research, some interesting relationships can be found between its rise and the quality of local taxi service. After controlling for a number of other factors, Uber’s expansion is associated with a decline in consumer complaints per trip about taxis in New York. In Chicago, the story is similar: the networked service’s growth is associated with a decline in complaints about broken credit card machines, broken air conditioning and heating, rudeness, and drivers talking on cell phones.[21]

But for taxicabs, this is a two-way street. Taxi drivers are subject to some of the highest levels of workplace violence of any profession, up to 33 times the average. Nonfatal assaults, homicides, robberies, and verbal abuse are among the highest for taxicab drivers, while the perpetrators of the less serious offenses typically face few legal consequences. With Uber, personal reputation is quantified. Moreover, because no cash changes hands, the possibility of robbery is severely diminished, and if problems do arise, identifying information on the rider is retained, allowing for ex post enforcement. All combined, the driver’s safety is better assured with Uber.

The Unseen Cost of Regulation

While innovation in business models for independent businesses has enhanced economic opportunity for sellers and quality of service options for consumers, changes in the marketplace have also given rise to new efforts to regulate these firms. The laws some want to apply to these upstarts are meant to ensure safety, quality, and transparency. Yet, as the above section highlights, desirable outcomes for both consumers and producers are occurring with these networked technologies. As has been detailed for some time, the laws meant to ensure consumer protections act as hurdles for new entrants, imposing costs on the economy, producers and consumers.

However, it is difficult to calculate just how extensive this drag is. Two problems have plagued past studies of regulation. On the one hand, studies often examined specific, narrow areas of regulation, like environmental or telecommunications rules, and were thus limited in scope. On the other hand, more generalized methods, like the World Bank’s Ease of Doing Business Index, quantified the regulatory burden comparatively across regions.[22] While this method has proven useful to researchers in making comparisons and connecting the data with economic growth figures, the downside is that the absolute cost to consumers is difficult to ascertain. Efforts to calculate the total burden of government mandates have been taken up in recent years and are taking a number of different forms.

One way to get at the cost of regulation can be found in AAF’s Regulation Rodeo, which compiles the cost of final rules as published in the Federal Register. The innovation of this program lies in data collection and ease of access, as it compiles official figures. According to the rules implemented so far this year, consumers will ultimately have to bear $90 billion in costs and 48 million paperwork hours.[23] However, this method doesn’t include all of the proposed rules, which, when summed for the whole of 2014, came to a grand total of $181.5 billion.[24]

Another method of articulating the cost of regulation uses textual analysis to count all those instances in the Code of Federal Regulations (CFR) where individuals or companies are constrained in their actions.[25] Using this method, researchers found that the total number of restrictions in the entire CFR rose from about 835,000 in 1997 to over 1 million in 2012.[26]

Both of these projects help to illuminate the extensive unseen costs built into markets when the federal government either requires or restricts some action on the part of companies or individuals, a burden that in turn falls on consumers. Though they take many forms, what unites disruptive technologies and business models is that they are disrupting regulatory regimes and lowering the barriers to entry.

The reduction of taxicab medallion prices across the country provides a visceral example. Before Uber and Lyft, driving a taxicab in Chicago required first paying the city $357,000 for a taxicab medallion.[27] In New York City, the cost of entry was much higher, at right around $1 million.[28] In Boston, the price of a medallion was $625,000, while in San Francisco, medallions were selling for $300,000, with the city taking a $100,000 cut.[29] Those prices are falling dramatically. In Chicago, the price fell to about $270,000, and the number of medallions being transferred also dropped sharply. In New York City, the cost of these medallions was down about 25 percent at the beginning of 2015, and the once relatively liquid market has come to a standstill.[30]

So why did the medallions cost so much, and why are prices now falling? Only a certain number of medallions were ever available at one time, so a medallion’s cost represents, in part, the going rate of exclusion, also known as an entry barrier. As those entry barriers come down, so does the cost of the medallion. The story is the same for nearly every city. At first, there was free entry into and exit from the taxicab market. Then, as a result of efforts to regulate the industry, the medallion system was put into place, and only those with a medallion could drive a taxi. When the first set of medallions was stamped in the 1930s, those grandfathered into the system got a windfall. As economists say, the asset (the medallion) is capitalized, and the costs to acquire it are baked in. From then on, anyone who wants to enter the market has to pay for access by purchasing or leasing a medallion. Once competition enters the market, the asset holders are left with real losses in asset value, leading to what Buchanan called a transitional gains trap.
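The capitalization logic can be sketched with a simple present-value calculation. The profit and discount figures below are purely illustrative assumptions, not data from the cited reports:

```python
# Back-of-the-envelope sketch of medallion capitalization.
# Assumption: restricted entry lets a medallion holder earn a stream of
# excess profits, and the medallion's market price roughly capitalizes
# that stream as a perpetuity.

def capitalized_value(annual_excess_profit, discount_rate):
    """Present value of a perpetual stream of excess profits."""
    return annual_excess_profit / discount_rate

# Hypothetical numbers: $50,000/year in excess profit, discounted at 5%.
print(capitalized_value(50_000, 0.05))  # 1000000.0
```

On these assumed figures, the medallion capitalizes to $1 million; when competition erodes the excess profit, the same formula implies the asset’s value falls with it.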

And yet, there isn’t a huge difference between the medallion system and other regulations meant to affect the quality and quantity of service. Occupational licenses, which aim to ensure these consumer benefits, are proving to be another sad story of regulatory overreach, affecting about 1 out of every 3 jobs today, compared to less than 5 percent of jobs in the early 1950s.[31] Both the medallion system and occupational licensing schemes act as a fixed cost and deter entry, thus raising prices for consumers and keeping people out of certain professions. Only those who can jump through the regulatory hoops are able to secure the benefits, which has real and immediate impact. While the goals of these regulations are often laudable, their implementation imposes inefficiencies on the market, including limiting consumer choice, raising consumer costs, increasing practitioner income, limiting practitioner mobility, and depriving the poor of adequate services.[32] The White House has even chastised the broad nature of these regulations, noting that:

Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing. In some cases, alternative forms of occupational regulation, such as State certification, may offer a better balance between consumer protections and flexibility for workers.[33]

It is important to note that the real benefit of these regulations – improved quality of service – is often hard to find in the data.[34] So not only do these laws impart costs, they do so in a way that does not improve the underlying services. By bringing together consumers and producers of services and goods, networked technologies and independent business models not only place pressure on this clunky regulatory system, but also benefit consumers via the introduction of new services.

Conclusion

New technologies are upending the typical role of regulation. In a previous generation, it was regulation that protected consumers. Now reciprocal rating systems and transparency provide consumers critical buying information, giving all of us a chance to rethink why we regulate. Importantly, review and rating systems come as a result of expanding networks, which have brought a bevy of benefits to consumers and producers. Legislators need to be acutely aware of the benefits of change as they craft policy and work to ratchet down laws already on the books. Moreover, many services are growing and are being funded because there are opportunities separate from the traditionally regulated economy. In their zeal to protect and tax, governments all too often get in the business of picking winners and losers. New waves of products are forcing cities to justify their policies. Here’s hoping the innovators win.     



[1] Will Rinehart, Ben Gitis, Independent Contractors and the Emerging Gig Economy, http://americanactionforum.org/research/independent-contractors-and-the-emerging-gig-economy.

[2] Companies in this category include both on-demand services like Uber, Lyft, and Instacart, and online marketplaces like Airbnb, Kiva, and Etsy.

[3] Alvin Roth, What Have We Learned from Market Design?, https://dash.harvard.edu/bitstream/handle/1/2579650/Roth_market%20design.pdf.

[4] All of these four features have been highlighted elsewhere, but this paper is especially indebted to the work conducted by Christopher Koopman, Matthew Mitchell, and Adam Thierer in “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.”  

[5] Wikipedia, Wikipedia:Size in Volumes, https://en.wikipedia.org/wiki/Wikipedia:Size_in_volumes

[7] Free to Choose Network, Dead Capital, http://www.thepowerofthepoor.com/concepts/c6.php.

[8] See note 6.

[10] Michael R. Baye, John Morgan, and Patrick Scholten, Information, Search, and Price Dispersion, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.482.7461&rep=rep1&type=pdf.

[11] Samuel P. Fraiberger and Arun Sundararajan, Peer-to-Peer Rental Markets in the Sharing Economy, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2574337.

[12] Ibid.

[13] Robert B. Ekelund, Jr., Robert F. Hébert, A History of Economic Theory and Method.

[16] Jared Meyer, How Uber is serving low-income New Yorkers: The mayor and City Council should study that, not only Manhattan congestion, http://www.nydailynews.com/opinion/jared-meyer-uber-serving-low-income-new-yorkers-article-1.2346353.

[17] Jeremiah Owyang, People Are Sharing in the Collaborative Economy for Convenience and Price, http://www.web-strategist.com/blog/2014/03/24/people-are-sharing-in-the-collaborative-economy-for-convenience-and-price/.

[19] Anindya Ghose, Panagiotis G. Ipeirotis, Arun Sundararajan, The Dimensions of Reputation in Electronic Markets, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=885568.

[20] Monica Luca, Reviews, Reputation, and Revenue: The Case of Yelp.com, http://www.hbs.edu/faculty/Pages/item.aspx?num=41233.

[21] Scott Wallsten, The Competitive Effects of the Sharing Economy: How is Uber Changing Taxis?, http://www.techpolicyinstitute.org/files/wallsten_the%20competitive%20effects%20of%20uber.pdf.

[23] American Action Forum, Regulation Rodeo, https://regrodeo.com/?year%5B0%5D=2015.  

[24] Sam Batkins, 2014: Year of Action, Year of Regulation, http://americanactionforum.org/research/2014-year-of-action-year-of-regulation.

[25] Omar Al-Ubaydli and Patrick A. McLaughlin, RegData A Numerical Database on Industry-Specific Regulations for All US Industries and Federal Regulations, 1997–2012, http://mercatus.org/sites/default/files/McLaughlin-RegData.pdf.

[26] Ibid.

[27] Aamer Madhani, Once a sure bet, taxi medallions becoming unsellable, http://www.usatoday.com/story/news/2015/05/17/taxi-medallion-values-decline-uber-rideshare/27314735/

[28] Matt Flegenheimer, $1 Million Medallions Stifling the Dreams of Cabdrivers, http://www.nytimes.com/2013/11/15/nyregion/1-million-medallions-stifling-the-dreams-of-cabdrivers.html?_r=0.

[30] Josh Barro, New York City Taxi Medallion Prices Keep Falling, Now Down About 25 Percent, http://www.nytimes.com/2015/01/08/upshot/new-york-city-taxi-medallion-prices-keep-falling-now-down-about-25-percent.html?_r=0.

[32] S. David Young, Occupational Licensing, http://www.econlib.org/library/Enc1/OccupationalLicensing.html.

[33] White House, Occupational Licensing: A Framework for Policymakers, https://www.whitehouse.gov/sites/default/files/docs/licensing_report_final_nonembargo.pdf.

[34] Morris M. Kleiner, Allison Marier, Kyoung Won Park, and Coady Wing, Relaxing Occupational Licensing Requirements: Analyzing Wages and Prices for a Medical Service, http://www.nber.org/papers/w19906.

Introduction

In 2011, two million Americans were infected with—and 23,000 killed by—drug-resistant bacteria.[1] Over the past 20 years, the incidence of drug-resistant infections acquired in hospitals has increased from a low of 10 percent of all hospital-acquired infections to 60 percent today.[2] Despite the evident need for new, novel antibiotics, almost no major pharmaceutical companies are investing in antibiotic development. Only 11 new antibiotics were approved between 1998 and 2014, and no new classes of antibiotics have been approved since 1987.[3]

What’s Going Wrong?

Antibiotics developers, like all pharmaceutical manufacturers, are market actors sensitive to incentives. Currently there are very few incentives to invest in antibiotic research and development (R&D) and many reasons to direct those funds elsewhere.

Antibiotics are Expensive to Produce

Because we rarely face entirely new bacteria, antibiotic development is often intended to better combat a well-known bacterium that, through repeated exposures, has adapted defenses against currently available treatments. Bacteria’s ability to evolve quickly renders once-therapeutic drugs useless against later generations.[4] Scientists must meet this challenge by developing either a more powerful generation of an existing drug or a new drug with a unique method of attack for which the bacteria have no adaptation.[5] Unfortunately, both options—especially the latter—are expensive and scientifically complex, and therefore create little incentive for investment in R&D relative to other drugs that can more readily be improved upon.

Antibiotics Provide Low Return on Investment

Antibiotics offer drug manufacturers a particularly low return on their investment. They are intended to be single-use drugs that cure the condition they treat.[6] Even an expensive antibiotic is unlikely to earn profits equal to a cheap treatment for a chronic disease, which must be purchased regularly throughout the patient’s lifetime.

Unlike other classes of drugs, antibiotics are often their own worst enemy financially. The more often an antibiotic is prescribed and used, the more opportunities the bacterium has to evolve and develop a resistance to the drug.

FDA Approval Process is Long and Expensive

Pathways to Food and Drug Administration (FDA) approval are slow and difficult. By the time a new antibiotic comes to market, the strain of bacteria it treats may have had over a decade to develop resistance, particularly if the new drug is an update of a similar drug rather than a novel form of attack.

Another important market influence acting against antibiotic R&D is the seeming irrationality of consumers. Bacterial threats to individual and population health are not as immediate as they once were, and deaths caused by uncomplicated bacterial infections are still relatively rare.[7] Public consciousness is currently focused on “scarier” threats like cancer and heart disease, and most people do not take the threat of untreatable, drug-resistant bacteria seriously. This lack of public understanding leads consumers, in situations that are not immediately life-threatening, to undervalue antibiotics – they are unwilling to pay more for new treatments for old bugs, despite the substantial R&D costs of producing drugs to which the bacteria are not resistant. In fact, at the time of discovery, the average net value of a new antibiotic to its developer today is a loss of $50 million.[8]

What’s Being Done About It?

The slowdown in production of new and effective antibiotics is a clear public health threat. The FDA and Congress have both taken steps to address the problem, but most promising responses have unsurprisingly come from the free market.

Government Response

The FDA has released guidance directing farmers raising animals for consumption to alter their use of antibiotics that are approved for human use. The FDA is asking these farmers to voluntarily limit their use of antibiotics to therapeutic purposes under the oversight of a licensed veterinarian.[9] Though this guidance is voluntary, it appears that most livestock operations are willing to comply.[10] This is an important step in minimizing the opportunities bacteria have to develop resistances to drugs that are important to human health.

In 2012, Congress passed the Food and Drug Administration Safety and Innovation Act (S. 3187), which, among other provisions, included the Generating Antibiotic Incentives Now (GAIN) Act.[11] The GAIN Act allows more antibiotics that treat serious and life-threatening infections to receive an extended five-year exclusivity period beyond that of other drugs.[12] This law attempts to counterbalance the disincentives of a long FDA approval process and low returns on R&D investment, but it may also drive up the price of these drugs by preventing competition from generic manufacturers. Nonetheless, the scientific reality is that antibiotics are still liable to lose their effectiveness (and therefore value) over time.

Congress is also currently considering the Promise for Antibiotics and Therapeutics for Health (PATH) Act to minimize the negative effects of the long FDA approval process.[13] If enacted, this bill would create an expedited approval pathway for antibiotics intended for narrowly defined patient populations with serious, specific illnesses and an unmet need for treatment. Developers would later be able to apply for an expanded approval, and the drug could be used off-label if appropriate.

Private Sector Response

The response to the vacuum in antibiotic production has been most impressive in the private sector. The rise of new diagnostic tests and specialized firms can both reduce the need for new drugs and help meet the need that remains.

The development of rapid diagnostics and changes in the technology of Laboratory Developed Tests can provide health care providers with accurate diagnoses more quickly.[14] By providing a diagnosis in hours rather than days, better diagnostic tests enable the use of antibiotics before an infection has time to spread. They will also help reduce misuse of antibiotics.[15] These benefits will help slow the development of drug-resistant strains of infectious bacteria.

Specialized drug development firms and academic institutions with outside grant funding that focus exclusively on the development of antibiotics have largely replaced major pharmaceutical companies in the field.[16] With no other drugs on the market, small firms can focus exclusively on R&D. Once a drug has been developed, firms may apply for FDA fast-track approval for small, targeted populations of the sickest patients.[17] If successful, this business model encourages the sale of the patent to large drug companies with the resources to manufacture the drug on a large scale, and follow through on the full FDA approval process. By eliminating manufacturing and trial costs, small developers can generate a modest profit developing otherwise profitless antibiotics.[18]


[1] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry

[2] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry

[3] http://cddep.org/blog/posts/recent_fda_antibiotic_approvals_good_news_and_bad_news

[4] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[5] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[6] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[7] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[8] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[9] http://www.fda.gov/AnimalVeterinary/GuidanceComplianceEnforcement/GuidanceforIndustry/ucm216939.htm

[10] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry

[11] https://www.congress.gov/bill/112th-congress/senate-bill/3187

[12] https://www.congress.gov/bill/112th-congress/house-bill/2182

[13] https://www.congress.gov/bill/114th-congress/senate-bill/185/text

[14] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[15] http://www.tufts.edu/med/apua/news/news-newsletter-vol-30-no-1-2.shtml

[16] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry

[17] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry

[18] http://www.healthline.com/health/antibiotics/why-pipeline-running-dry


Two new Federal Reserve rulemakings in the last month have pushed Dodd-Frank’s aggregate financial costs past $35 billion, with $29.3 billion in final rules. With more than five years of implementation, it’s clear that regulators still have dozens of new measures left to propose and finalize. The most expensive portions of the law could ultimately lie ahead for consumers.

Federal Reserve’s Actions

Broadly, the Federal Reserve occupies a somewhat distinct place in the regulatory world. It doesn’t consume the oxygen of the Environmental Protection Agency or even the Securities and Exchange Commission. On its own authority, it has imposed roughly $300 million in costs pursuant to Dodd-Frank, in addition to 4.8 million paperwork burden hours. These might seem like pedestrian figures, but it’s important to recognize that it would take more than 2,400 employees working full-time (2,000 hours a year) to complete the Federal Reserve’s new paperwork. Now, two rulemakings, one jointly authored by the Federal Reserve and one issued independently, have only added to the agency’s Dodd-Frank portfolio.
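The full-time-equivalent claim is straightforward arithmetic on the figures just cited, sketched here for transparency:

```python
# Converting the Federal Reserve's Dodd-Frank paperwork burden into
# full-time employees, using the figures from the text.
paperwork_hours = 4_800_000   # annual paperwork burden hours
hours_per_fte = 2_000         # full-time working hours per employee per year

full_time_equivalents = paperwork_hours / hours_per_fte
print(full_time_equivalents)  # 2400.0
```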

Recently, a quintet of financial regulators issued a final rule implementing a Dodd-Frank measure on “Margin and Capital Requirements for Covered Swap Entities.” Despite initial projections of costs ranging up to $46 billion, both the proposed “Regulatory Impact Analysis” (RIA) and the final RIA centered on a cost figure of $5.2 billion; the rule now represents the most expensive rule in the landmark financial reform law’s five-year history.

Given that five agencies (Comptroller of the Currency, Federal Reserve, FDIC, Farm Credit Administration, and Federal Housing Finance Agency) were involved in the formation of this rulemaking, the proposed rule stretches to 281 pages, yet the analysis is almost identical from the proposed to the final version. Both proposed and final rules estimated that $644 billion of initial margin would be required. As with many of the estimates in the analysis, the range of figures is questionable. For example, the margin required could be as low as $280 billion or as high as $3.6 trillion, but the agencies chose $644 billion because it “falls roughly in the middle.” This is only true if one has a creative definition of “roughly” and “middle”: the mathematical midpoint of the above range is $1.94 trillion, or three times the central figure the agencies selected.
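The midpoint arithmetic is easy to verify (all figures in billions of dollars, as reported in the rule):

```python
# Checking the agencies' "roughly in the middle" claim.
low, high = 280, 3_600  # low and high initial margin estimates, $ billions
chosen = 644            # figure the agencies selected, $ billions

midpoint = (low + high) / 2
print(midpoint)                     # 1940.0, i.e., $1.94 trillion
print(round(midpoint / chosen, 1))  # 3.0 -- three times the chosen figure
```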

The midpoint figure of margin dictates a baseline for calculating compliance costs. As the rule notes, “Using our initial margin estimate of $644 billion, every basis point of lower return equals $64 million. If we assume a 100 basis point difference in return, the opportunity cost of the fully phased-in initial margin requirement would be $6.4 billion.” The 100 basis point assumption represents the high end of the opportunity-cost range; at 25 basis points, the costs total only $2.9 billion. A discounted figure of $5.2 billion is the primary benchmark for the rule’s cost, in addition to more than 52,000 paperwork burden hours.
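The per-basis-point arithmetic quoted from the rule can be reproduced directly (undiscounted, in billions of dollars; the rule’s own figures at other basis-point levels evidently incorporate further adjustments not modeled in this sketch):

```python
# Undiscounted opportunity-cost arithmetic from the swaps margin rule.
INITIAL_MARGIN = 644  # initial margin estimate, $ billions

def annual_opportunity_cost(basis_points):
    """Annual forgone return on the margin, in $ billions."""
    return INITIAL_MARGIN * basis_points / 10_000

print(annual_opportunity_cost(1))    # ~0.064, i.e., ~$64 million per year
print(annual_opportunity_cost(100))  # ~6.4, i.e., ~$6.4 billion per year
```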

The baseline for cost arises from three different global studies conducted in 2013. In order to approximate the U.S. share, the regulators merely reduced the amounts in the study by 65 percent, since the U.S. share of the swaps market is roughly 35 percent. In other words, the main benefit-cost estimates rest on assumptions piled on assumptions to yield an incredibly uncertain final calculation.

Strangely, there is no direct discussion of high-end costs in the RIA, just the preamble to the final rule. This could be due to the OCC’s sole drafting of the RIA. In the actual text of the rule, the cost scenarios are more explicit: “The estimated annual costs of the initial margin requirements range from $672 million to roughly $46 billion depending on the specific initial margin estimate and incremental funding cost that is used to compute the estimate.” A 68-fold difference in cost is hardly the level of precision desired from the nation’s premier financial regulators implementing one of the most important provisions of Dodd-Frank. It is even more peculiar that the $46 billion figure was omitted from both the proposed and final RIA.

What does this mean for the nation? The map below attempts to approximate the regional impact of the margin rule based on the affected establishments and the assumptions contained in the cost estimate.

Given the rule’s wide disparities in costs, it’s safe to assume the benefit figures will be equally lacking. The $644 billion margin is considered the prime quantified benefit, just as it contributes to the costs. In the words of regulators, it “creates a new buffer against losses for covered swap entities and other swap market participants.” In addition to this quantified benefit, regulators included four non-quantified benefits, including 1) reduced uncertainty, 2) limiting participants from taking excessive risks, 3) added protection against insolvency of counterparties, and 4) decreased risk associated with swaps.

Finally, on October 30, the Federal Reserve issued additional requirements for “Systemically Important U.S. Bank Holding Companies,” or “Too Big to Fail” institutions. Banks with this systemically important tag will be required to maintain a minimum amount of unsecured long-term debt. Under section 165 of Dodd-Frank, this measure generally applies to all institutions with more than $50 billion in assets.

The cost of this “Too Big to Fail” oversight measure, if finalized, would make it the seventh costliest Dodd-Frank measure to date, at $1.5 billion. To derive this figure, regulators found the covered banks had a capital shortfall of roughly $120 billion, approximately “1.7 percent of aggregate risk-weighted assets.” To replace this shortfall, the Federal Reserve estimated it would cost between 20 and 200 basis points, depending on the type of debt. This translates into a cost range of $680 million to $1.5 billion annually. For perspective, if finalized in FY 2014, it would have been the most expensive rule, according to the Office of Information and Regulatory Affairs.

In terms of benefits, the “Too Big to Fail” proposal projects that it would, “materially reduce the risk that the failure of a covered [bank] would pose to the financial stability of the United States.” The text does not quantify a benefit, but notes that prior to Dodd-Frank, the odds of major financial crisis were between 3.5 percent and 5.2 percent. Presumably, this rule alone won’t significantly lower those odds, only in concert with other rules addressing risk will Dodd-Frank reduce the chances of another crisis.

Dodd-Frank’s Mounting Burdens

Thanks in part to the two new rules mentioned above, Dodd-Frank’s aggregate costs have climbed substantially since AAF’s last update on the law’s five-year anniversary. Since July, the law’s final-rule costs have increased from $24 billion to $29.3 billion. Paperwork burdens have also accelerated to 72.8 million hours, up from 61 million in July. Those are only the final rule figures; combined with proposals that will likely become final, Dodd-Frank’s burdens climb to $35 billion in costs and 75 million paperwork burden hours.

What do these huge figures mean for the economy? Someone must comply with this paperwork and pay these costs. Whether it’s employees at affected entities, shareholders, or eventually consumers, everyone generally pays for the cost of federal regulation. On the compliance side, there is strong evidence these costs now translate into more compliance officers, solely responsible for filling out forms and ensuring companies follow the new law. For example, in the “Finance and Insurance” industry, the number of compliance officers has increased 13 percent since 2010 and 20 percent since 2005. Meanwhile, wages paid to those compliance officers have risen 24 percent since 2005 (unadjusted for inflation). The graph below tracks the growth of compliance officers in the “Finance and Insurance” industry.

The effects of Dodd-Frank extend beyond just finance, however. In addition to anecdotal evidence that the law has shifted many industries from profit-making activities toward compliance, there is strong quantitative evidence that federal regulation has contributed to a surge in compliance officers nationwide. Since 2010, the growth in compliance officers nationwide has topped 21 percent. For comparison, from July 2010 to July 2015, total nonfarm payroll in the U.S. increased by just 8.9 percent. In other words, the nation is generating new compliance officer jobs more than 2.3 times faster than the average nonfarm job.
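The “times faster” comparison is a simple ratio of the two growth rates cited above:

```python
# Ratio behind the compliance-officer growth claim, using the
# growth rates cited in the text.
compliance_growth = 21.0  # percent growth in compliance officers, 2010-2015
nonfarm_growth = 8.9      # percent growth in total nonfarm payroll, 2010-2015

print(round(compliance_growth / nonfarm_growth, 2))  # 2.36
```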

This is hardly the end of the law’s impact. Even five years after passage, the administration’s latest Unified Agenda of federal regulation logs at least 80 scheduled Dodd-Frank regulations still pending. According to the data, the Securities and Exchange Commission has the largest slate, with 22 rulemakings. The Federal Deposit Insurance Corporation and the Consumer Financial Protection Bureau are second and third, with 11 and 10 pending regulations, respectively. It’s clear that, after litigation, implementation of Dodd-Frank could take another five years, with economic impacts for decades into the future.

Consumer Impact

Far beyond the walls of compliance offices and law firms, Dodd-Frank is reaching into family bank accounts and small businesses with its burdens and costs. The purpose of Dodd-Frank was partly to stem abuses and fix systemic weaknesses in the financial sector that came about when the housing bubble burst and helped cause the financial crisis. But, in its efforts to make the financial system safer, Dodd-Frank has restricted the availability of many financial products and credit, particularly for low-income borrowers, young people, and minorities.

Lending, whether commercial, real estate, or consumer, has not recovered as quickly in this economic recovery as in the average post-WWII recovery, due to the severity of the financial crisis and the environment of tightened credit encouraged by Dodd-Frank. Real estate lending has been the slowest to recover, taking six years to reach its pre-recession levels.

Specifically, the swaps rule will duplicate the burdens of another Dodd-Frank product: the Volcker Rule. The Volcker Rule was adopted by only four agencies (CFTC, SEC, OCC, and the Federal Reserve), unlike the five for swaps, but it took 1,000 pages to tell banks to be less risky. In its aftermath, banks have been forced to shed their market-making operations, diminishing liquidity and preventing consumers from being able to trade quickly and at a steady price. Combine that with higher capital requirements, and banks are deciding not to take up excess space on their balance sheets holding inventories of assets awaiting a buyer.

For example, JPMorgan carried $2.7 trillion in corporate bonds in 2007; this year that number is down to $1.7 trillion and falling. That lack of liquidity hurts consumers directly as their options for financial products and services are limited, and indirectly, as less liquid U.S. banks lose their competitiveness – Europe and much of the rest of the world are not restricted by bans on certain activities. It also should come as no surprise to regulators that the banning of certain bank activities, like proprietary trading, will only drive those activities into less regulated or even unregulated areas of the financial system, thereby undermining the entire purpose for these rules.

Further, it is well documented that as liquidity decreases, the cost of capital for businesses increases. There is a positive relationship between stocks’ excess monthly returns and bid-ask spreads for any given level of systematic risk. Thus, average returns are higher for stocks with higher bid-ask spreads, and the increase in the bid-ask spread as a result of rules like Volcker and the swaps rule, will result in higher capital costs for those businesses. The resulting increased capital costs mean slower economic growth and reduced job creation.

More generally, Dodd-Frank’s new rules, oversight agencies, and capital requirements (specifically, the requirements arising from the Too Big To Fail rule) are, at the current rate, slated to reduce our GDP over the 2016-2025 period by $895 billion, or $4,346 per working-age person. That is money out of the pockets of consumers as a result of a law that was supposed to be helping consumers.

Conclusion

These two actions by the Federal Reserve have added $6.7 billion in costs, hardly a trivial figure for the nation’s economy, and more specifically, the financial system. This has contributed to Dodd-Frank’s $29.3 billion financial imposition, including more than 72.8 million paperwork burden hours. To put the paperwork in perspective, it would take 36,437 employees working full-time to complete a year of the law’s figurative “red tape.” The margin rule is evidence that, even after five years, the slow crawl of Dodd-Frank regulation will continue to pile up significant economic burdens for the financial system and consumers.
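As a rough check on the paperwork figure, dividing the burden hours by a conventional 2,000-hour work year (an assumption on our part) lands within a few dozen of the 36,437 full-time-employee estimate:

```python
# Rough check of the paperwork-burden figure, assuming about 2,000 working
# hours per full-time employee per year (our assumption, not a figure from
# the underlying cost data).
burden_hours = 72_800_000        # Dodd-Frank paperwork burden hours
hours_per_fte_year = 2_000       # assumed full-time hours per year

fte_equivalent = burden_hours / hours_per_fte_year
print(round(fte_equivalent))  # → 36400
```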

  • Raising New York's minimum wage to $12 per hour would cost the state 216,200 jobs.

  • Raising New York's minimum wage to $15 per hour would cost the state 432,500 jobs.

  • Less than 7 percent of the increase in wage earnings from a $15 minimum wage would go to people in poverty.

Research published by the American Action Forum and the Empire Center

 

FOREWORD

“Unfortunately, the real minimum wage is always zero, regardless of the laws, and that is the wage that many workers receive in the wake of the creation or escalation of a government-mandated minimum wage, because they lose their jobs or fail to find jobs when they enter the labor force.”

— Thomas Sowell

Under a proposal championed by Governor Andrew Cuomo, New York would become the first state in the nation to mandate a minimum wage of $15 an hour. That’s more than double the federal minimum wage, and 67 percent higher than the $9 statewide minimum already scheduled to take effect in New York at the end of 2015.

Advocates of such a policy believe that low-income workers will be its primary beneficiaries. This paper, however, suggests that the poorest New Yorkers would have the most to lose from a sharp rise in the government-mandated wage floor.

The authors, economists Douglas Holtz-Eakin and Ben Gitis of the American Action Forum, draw on three credible research models to estimate low, medium and high impacts from raising the statewide minimum wage to $12 or $15.

The key finding: a $15 minimum wage ultimately would cost the state at least 200,000 jobs, with proportionately larger employment decreases in upstate regions. That’s the authors’ “low-impact” scenario, based on a model developed by the Congressional Budget Office, of which Holtz-Eakin is a former director.

The other two models point to even bigger losses, indicating that a $15 an hour minimum wage would lead to 432,500 and 588,800 fewer jobs under the “medium impact” and “high impact” scenarios, respectively.

Job losses would be smaller, but still more than New Yorkers should be willing to tolerate, if the state were to set the minimum at $12 an hour, according to Holtz-Eakin and Gitis.

Based on national labor force data, the authors of this paper estimate less than 7 percent of the wages generated by a $15 wage, and less than 6 percent of the wages generated by a $12 wage, would actually go to households in poverty. 

As Holtz-Eakin and Gitis note, these findings are consistent with the preponderance of economic research, which has long indicated that higher minimum wages are associated with a decline in employment. To be sure, economists differ on the strength of the effect, which is why this paper draws from different approaches—with the CBO model at the lower bound—to illustrate a range of possible impacts.

The “Fight for $15” is rooted in well-founded and understandable concern about the challenges faced by low-income households, especially those struggling to get by in hyper-expensive New York City. But a $15 minimum wage is only likely to make those challenges worse.

As shown in this paper and as strongly suggested by other research on the general issue, enacting the biggest increase ever in New York’s minimum wage would benefit some low-income workers at the expense of others. The losers would be stuck with the ultimate minimum wage: zero.

In a prolonged period of slow economic growth, the potential loss of at least 200,000 jobs would be an extreme and unacceptable tradeoff.

—E.J. McMahon

President of the Empire Center for Public Policy

 

EXECUTIVE SUMMARY 

We applied the methodology of three different independent research models to examine the employment and earnings effects of raising New York’s statewide minimum wage to $12 per hour and to $15 per hour, respectively, by 2018 in New York City and 2021 in the rest of the state.

We focus on how raising the state minimum wage would affect the very workers whom the policy is intended to help. Overall, we find significant tradeoffs to the proposed increases in New York’s minimum wage.

While a minimum wage hike would benefit some workers by increasing their earnings, it would also hurt hundreds of thousands of others whose earnings would sink because they could no longer find or keep a job.

Our medium-impact estimates (below) show that raising the state’s minimum wage to $12 per hour would affect 2.3 million low-wage workers while costing the state 216,200 jobs. On net, total wage earnings among low-wage workers would rise by $1.1 billion.

Labor Market Effects of $12 per hour Minimum Wage in New York State

Workers Affected            2,265,000
Jobs Lost                   216,200
Net Wage Earnings Change    $1.1 billion

Similarly, we find that increasing the state’s minimum wage to $15 per hour would affect 3.1 million workers and cost 432,500 jobs. Total wage earnings among low-wage workers would rise by $4.6 billion, after accounting for earnings declines from job losses.

Labor Market Effects of $15 per hour Minimum Wage in New York State

Workers Affected            3,074,000
Jobs Lost                   432,500
Net Wage Earnings Change    $4.6 billion

Taken at face value, the net total wage earnings gains for both the $12 and $15 per hour minimum wage may seem large. But it is important to point out that only a very small share of the total wage earnings gains would benefit workers in poverty.

In previous American Action Forum (AAF) research, we found that only 5.8 percent of net total wage earnings gained from a $12 federal minimum wage and only 6.7 percent of the net total wage earnings gained from a $15 federal minimum wage would actually go to households in poverty. In New York, this means that of the $1.1 billion gained from the $12 minimum wage, only $61.9 million would go to workers in poverty. Likewise, only $308.0 million of the $4.6 billion gained from the $15 minimum wage would help workers in poverty.

Because the exact effect of the minimum wage on employment remains unsettled, we check the robustness of our results by employing a range of estimates from literature that imply low, medium and high employment impacts.

INTRODUCTION

The federal minimum wage has been set at $7.25 an hour since July 2009. In recent years, some American policymakers and labor advocates have argued for further increases in the wage at the federal, state, and local levels. On the federal level, the Obama administration and top congressional Democrats have rallied behind a proposal to raise the federal minimum to $12 per hour by 2020. Under another proposal championed by, among others, Sen. Bernie Sanders, the federal minimum would rise to $15 per hour, a level now in the process of being implemented in Seattle and a handful of other localities.

AAF, along with the Manhattan Institute, previously analyzed the employment and earnings effects of these two policies on a national level, and found that they would induce substantial job losses with little benefit for those in poverty.

New York’s minimum wage currently stands at $8.75 and is scheduled to reach $9 an hour on December 31, 2015.  However, based on recommendations of a Wage Board empaneled to study the issue at the request of Governor Andrew Cuomo, the state labor commissioner has issued an order raising the minimum wage to $15 an hour for employees of fast-food chain restaurants in New York.

Soon after the Wage Board recommendation for fast-food workers, the governor announced he would propose legislation raising New York’s statewide minimum wage to $15 per hour for all workers. The governor’s proposal would parallel the fast-food schedule, raising the state’s minimum wage to $15 per hour by 2018 in New York City and by 2021 for the rest of the state.

To test the impact of such policies, this paper estimates the employment and earnings effects of increasing New York’s minimum wage to two alternative higher levels— $12 and to $15 per hour—in each region of the state, focusing on the low-wage workers whom such raises would be intended to assist. In doing so, we project a range of job losses if lawmakers were to raise the state’s minimum wage to $12 or to $15 per hour and the net change in total wage earnings for all low-wage workers in the state.

PREVIOUS RESEARCH

We utilize research by the CBO (2014),[1] Meer & West (2015),[2] and Clemens & Wither (2014)[3] to provide a range of estimates for the impact of a $12 and a $15 minimum wage on New York state employment and total wage earnings. These studies examined different labor-market aspects of the minimum wage, resulting in different conclusions regarding the policy’s impact on employment and wage earnings. Using these three studies, we consider the effects of the minimum wage under three scenarios — low, medium and high employment impacts.

In our previous paper, we used these same studies to examine the impact of raising the federal minimum wage on national employment and total wage earnings. Our medium-impact scenario was based on the model developed by Meer & West (2015). With that model, we found that raising the federal minimum wage to $12 per hour by 2020 would cost 3.8 million jobs and, on net, increase total wage earnings by $14.2 billion. Increasing the federal minimum wage to $15 per hour by 2020 would cost 6.6 million jobs and, on net, increase total wage earnings by $105.4 billion.[4]

CBO

In 2014, the Congressional Budget Office (CBO) examined the impact of raising the federal minimum wage to $9.00 or $10.10 per hour, two of the most popular proposals at that time. For the $10.10 proposal, the CBO found that the policy would result in employment falling by 500,000 jobs relative to its projected 2016 baseline. The CBO assumed that, in addition to those earning between $7.25 and $10.10 getting a raise, those earning just above $10.10 would also see their wages increase. Specifically, workers earning up to 50 percent of the size of the increase above the new minimum would see their hourly earnings rise. As a result, people earning below roughly $11.50 (who stay employed) would benefit from a wage increase of some sort.
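In concrete terms, the ripple assumption works out as follows (our arithmetic, not a figure taken from the CBO report):

```python
# The CBO's "ripple" assumption in numbers: the proposed increase was
# $10.10 - $7.25 = $2.85, and workers earning up to 50 percent of that
# increase above the new minimum were also assumed to get raises.
old_minimum = 7.25
new_minimum = 10.10

increase = new_minimum - old_minimum           # $2.85
ripple_ceiling = new_minimum + 0.5 * increase  # $11.525, i.e. roughly $11.50
```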

The CBO concluded that net total wage earnings for low-wage workers would increase by $31 billion: 19 percent of those additional earnings would go to families below the poverty threshold; 52 percent to families with incomes one to three times the poverty threshold; and 29 percent to families with incomes more than three times the poverty threshold. We employ these findings when developing our projection of a low-impact employment scenario resulting from an increase in New York’s statewide minimum wage.

Meer & West

While there is an ongoing debate regarding the impact of the minimum wage on the level of employment, Texas A&M economists Jonathan Meer and Jeremy West suggested in their recent research that the negative impact of the minimum wage is best isolated by focusing on employment dynamics—that is, the change in job growth once the higher wage is implemented. Specifically, they found that a 10 percent increase in the real minimum wage is associated with a 0.30 to 0.53 percentage-point decrease in the net job-growth rate. They found that, as a result, the 10 percent minimum wage increase reduces future employment by 0.7 percent.

Previously, the AAF applied Meer & West’s work to California’s recent law that raises the state’s minimum wage to $10 per hour (effective 2016). Using Meer & West’s result, the AAF found that this wage increase in California would mean a loss of 191,000 jobs that would never be created.[5] In addition, the AAF found that if every state followed suit, more than 2.3 million new jobs would be lost across the United States. We employ the estimates found in Meer & West’s study to characterize medium-impact employment consequences of raising the minimum wage in New York.
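To illustrate how this finding scales, a back-of-the-envelope extrapolation, assuming (our assumption) that the employment effect grows linearly with the size of the wage increase from New York's $9 baseline:

```python
# Linear extrapolation (an assumption on our part) of Meer & West's finding
# that a 10 percent minimum wage increase reduces future employment by
# 0.7 percent.
BASELINE_WAGE = 9.00  # New York's statewide minimum effective Dec. 31, 2015

def employment_reduction_pct(new_wage, baseline=BASELINE_WAGE):
    """Percent reduction in projected employment under linear scaling."""
    pct_increase = (new_wage - baseline) / baseline * 100
    return pct_increase / 10 * 0.7

print(round(employment_reduction_pct(12.00), 1))  # → 2.3
print(round(employment_reduction_pct(15.00), 1))  # → 4.7
```

These statewide figures, 2.3 percent and 4.7 percent, match the medium-impact employment declines reported in Tables 2 and 3.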

Clemens & Wither

In late 2014, Jeffrey Clemens and Michael Wither of the University of California at San Diego released research examining what happened to low-wage workers the last time that the federal government raised its minimum wage—rising in three steps, during 2007–09, from $5.15 to $7.25 per hour. Using data from the Survey of Income and Program Participation (SIPP), they focused on how the minimum-wage hike affected employment and earnings among those whom the minimum-wage hike affected most: low-wage workers earning below $7.50 per hour.

Clemens and Wither found significant, negative consequences for low-wage workers. From 2006 to 2012, employment in this group fell by 8 percent, translating to about 1.7 million jobs.[6] The job loss in this low-wage group accounted for 14 percent of the national decline in employment during this period.[7] The minimum-wage hike also increased the probability of working without pay (e.g., unpaid internships) by 2 percentage points. Workers with at least some college education were 20 percent more likely to work without pay than before the minimum wage rose.

As a result of the reduction in employment and paid work, net average monthly earnings for low-wage workers fell by $100 during the first year after the minimum wage increased and fell by an additional $50 in the following two years. We use the Clemens & Wither estimates to estimate our high-impact employment scenario.

METHODOLOGY

In identifying the number of workers whom the minimum wage hike would affect in New York, we assume that those most directly affected by the minimum wage increase are the workers who, we project, would earn between $9 per hour (New York’s statewide minimum wage effective December 31, 2015) and the new minimum wage level in 2021 (2018 for the New York City region) under current law.[8] For the $12 minimum wage, this includes all workers who would earn between $9 and $12 per hour; for the $15 minimum wage, it includes everyone who would earn between $9 and $15 per hour. This is the group of workers that we assume would both suffer from all the job losses and benefit from any wage earnings gain.

To estimate the actual number of workers who would be impacted by the minimum wage and thus either lose their jobs or see their wages rise, we first project total regional employment levels by 2021 (2018 for New York City). We accomplish this by using the state Labor Department’s Employment Projections[9] to calculate the projected compounded annual growth rate for total employment in each region. Starting with 2014 total employment reported in each region by the New York Quarterly Census of Employment and Wages,[10] we project employment levels to 2021 (2018 for New York City).

After projecting total future employment levels in each region, we estimate how many of those workers would earn between $9 and $12 per hour and between $9 and $15 per hour under current law. With regional American Community Survey (ACS) data, we obtain the percent of workers who would earn at least $9 per hour but less than $12 and $15 per hour by 2018 in New York City and 2021 in all other regions.[11], [12] We multiply that percentage by the projected total employment levels in each region to estimate the number of workers who would earn between $9 per hour and the new minimum wage levels in 2021 (2018 for New York City) under current law.
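The two-step projection described above can be sketched as follows; the regional inputs shown here are illustrative placeholders, not the paper's actual figures:

```python
# Sketch of the projection: compound 2014 regional employment forward at
# its projected annual growth rate, then apply the ACS share of workers
# who would earn between $9 and the new minimum under current law.
def project_employment(base_employment, annual_growth, base_year, target_year):
    """Compound base-year employment forward at the projected growth rate."""
    return base_employment * (1 + annual_growth) ** (target_year - base_year)

def workers_affected(base_employment, annual_growth, base_year, target_year,
                     low_wage_share):
    """Apply the share of workers earning between $9 and the new minimum."""
    projected = project_employment(base_employment, annual_growth,
                                   base_year, target_year)
    return projected * low_wage_share

# Hypothetical region: 4.0 million jobs in 2014, 1.0 percent annual growth,
# 26 percent of workers projected to earn $9-$15 per hour by 2018.
affected = workers_affected(4_000_000, 0.010, 2014, 2018, 0.26)
```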

WORKERS IMPACTED[13]

Table 1 shows the projected number of workers statewide and in each region who would be impacted if lawmakers raised New York’s minimum wage to $12 or to $15 per hour.

Table 1: Workers Impacted by Minimum Wage Hike (in thousands)

                      $12 per hour        $15 per hour
Region                Workers  Percent    Workers  Percent
Total                   2,265     24.4      3,074     33.2
Capital                   122     22.0        173     31.2
Central New York           92     25.7        124     34.8
Finger Lakes              149     26.0        208     36.3
Hudson Valley             203     21.1        277     28.7
Long Island               244     18.1        338     25.1
Mohawk Valley              54     27.9         75     38.2
New York City           1,104     26.4      1,470     35.1
North Country              48     30.5         65     41.6
Southern Tier              80     29.3        108     39.6
Western New York          169     25.7        237     35.9

 

We estimate that raising New York’s minimum wage to $12 per hour would impact about 2.3 million workers statewide, or about 24.4 percent of the employed population. Raising it to $15 per hour would impact about 3.1 million workers, or 33.2 percent of the workforce.

The percent of the population affected in each region depends entirely on the prevalence of low-wage employment. Long Island, for instance, appears to have the smallest share of low-wage workers: a $12 minimum wage would impact 18.1 percent of its workers, and a $15 minimum wage would impact 25.1 percent. In the North Country, by contrast, those minimum wage hikes would affect 30.5 percent and 41.6 percent of workers, respectively.

EMPLOYMENT

Supporters of a minimum wage generally echo Governor Cuomo’s view that such a policy is needed to lift workers out of poverty, “improve the standard of living for workers, encourage fair and more efficient business practices, and ensure that the most vulnerable members of the workforce can contribute to the economy.”[14]  

However, the potential employment consequences of mandating a minimum wage hike call the merits of this policy into question. When a state government increases the minimum hourly pay for private-sector workers, it effectively increases the per-hour cost of low-wage labor. Employers have three main ways to pay for the additional labor cost: lower profits, higher prices, and fewer workers.

While many labor advocates assume that employers will absorb a state-mandated increase in labor costs through a reduction in profits, the evidence suggests that the vast majority of low-wage workers are in industries with thin profit margins, such as retailers and restaurants.[15] In these industries, businesses tend to pay for minimum wage hikes by increasing prices, reducing current and future employment, or both. While the exact impact of a minimum wage hike on employment is debated, extensive economic literature, from the 1950s to today,[16] concludes that raising the minimum wage has a negative impact on employment levels and job creation.[17] Moreover, the literature shows that the workers who tend to become jobless are the low-skilled, low-wage workers whom the policy intends to help.

The CBO, Meer & West, and Clemens & Wither research on the issue demonstrate negative labor market consequences with varying degrees of severity. In this section, we apply their findings to proposals to raise New York’s statewide minimum wage. In particular, for each region we will estimate the labor market consequences of raising the state’s minimum wage to $12 and to $15 by 2021 (2018 for New York City). We follow our previous study’s methodology by assuming that all job losses occur within the group of workers who would earn between $9 and the new minimum wage level under current law. As a result, we assume no job losses among workers earning more than the new minimum-wage level.

To preview our estimates, we find that increasing New York’s minimum wage to $12 per hour would reduce employment by 76,600 to 290,400 jobs. Raising the minimum wage to $15 per hour would cost the state 200,000 to 588,800 jobs.

$12 Minimum Wage

Overall, we estimate that if the minimum wage rose to $12 per hour, low-wage employment in New York would be 76,600 to 290,400 lower than under current law.

 
Table 2: Job Losses from $12 per hour Minimum Wage
Study Models - Low, Medium and High Impacts

                      CBO (low)           Meer & West (medium)   Clemens & Wither (high)
Region                Job Loss  Percent   Job Loss  Percent      Job Loss  Percent
Total                   76,600      0.8    216,200      2.3       290,400      3.1
Capital                  4,100      0.7     12,900      2.3        18,500      3.3
Central New York         3,100      0.9      8,300      2.3        10,900      3.1
Finger Lakes             5,100      0.9     13,400      2.3        18,000      3.1
Hudson Valley            6,900      0.7     22,500      2.3        30,700      3.2
Long Island              8,200      0.6     31,400      2.3        35,700      2.6
Mohawk Valley            1,800      0.9      4,600      2.3         6,100      3.1
New York City           37,400      0.9     97,800      2.3       135,500      3.2
North Country            1,600      1.0      3,600      2.3         5,300      3.4
Southern Tier            2,700      1.0      6,300      2.3         8,800      3.2
Western New York         5,700      0.9     15,400      2.3        20,900      3.2

 

Using the CBO report, our low-impact employment scenario, we find that raising the minimum wage to $12 per hour by 2018 for New York City and 2021 for the rest of the state would cost 76,600 jobs statewide. This means that total employment would decline by 0.8 percent. The job losses would range from about 1,600 in the North Country to 37,400 in New York City. Although the rural areas of New York would lose fewer workers than the urban areas, the job losses would be proportionally more significant.

In the Hudson Valley and on Long Island, for instance, employment would only decline by 0.7 percent and 0.6 percent, respectively. But in the North Country and Southern Tier, employment would decline by 1 percent.

In our medium-impact employment scenario, derived from Meer & West, we find that this minimum wage increase would reduce employment significantly, costing 216,200 low-wage jobs. Meer & West’s research models the impact of the minimum wage on total employment levels, not just low-wage workers. So while the other models are sensitive to the prevalence of low-wage work in each region, our medium-impact employment model is not. As a result, the estimated percent reduction in employment is the same across the state.

Finally, with the Clemens & Wither model, our high-impact employment scenario, we estimate that raising the minimum wage to $12 per hour would cost 290,400 jobs. This model does depend on the prevalence of low-wage employment, which allows us to examine how raising the minimum wage in New York would affect each region differently. We find that the decline in employment in this scenario ranges from 2.6 percent on Long Island to 3.4 percent in the North Country.

$15 Minimum Wage

We estimate that 200,000 to 588,800 fewer low-wage jobs would exist in 2021 (2018 in New York City) if New York policymakers increased the state minimum wage to $15 per hour.

Table 3: Job Losses from $15 per hour Minimum Wage
Study Models - Low, Medium and High Impacts

                      CBO (low)           Meer & West (medium)   Clemens & Wither (high)
Region                Job Loss  Percent   Job Loss  Percent      Job Loss  Percent
Total                  200,000      2.2    432,500      4.7       588,800      6.4
Capital                 11,300      2.0     25,900      4.7        36,900      6.6
Central New York         8,100      2.3     16,700      4.7        22,800      6.4
Finger Lakes            13,600      2.4     26,800      4.7        38,400      6.7
Hudson Valley           18,000      1.9     45,000      4.7        58,800      6.1
Long Island             22,000      1.6     62,800      4.7        70,400      5.2
Mohawk Valley            4,900      2.5      9,100      4.7        13,200      6.8
New York City           95,600      2.3    195,600      4.7       273,800      6.5
North Country            4,200      2.7      7,300      4.7        11,400      7.3
Southern Tier            7,000      2.6     12,700      4.7        18,800      6.9
Western New York        15,400      2.3     30,700      4.7        44,300      6.7

 

Using the CBO estimate, we find that increasing the state’s minimum wage to $15 per hour would cost 200,000 jobs. Among the state’s regions, the reduction in employment would range from 1.6 percent on Long Island to 2.7 percent in the North Country. The reduction in job creation captured by the Meer & West estimate reveals that New York would have 432,500 fewer jobs than under current law, reducing projected employment by 4.7 percent.

Using the Clemens & Wither estimate points to a loss of 588,800 jobs, a 6.4 percent decline in employment. Under this scenario, we find that the job losses would range from 5.2 percent on Long Island to 7.3 percent in the North Country.

TOTAL WAGE EARNINGS

In this section, we project how increasing New York’s statewide minimum wage to $12 and to $15 per hour by 2021 (2018 in New York City) would affect total wage earnings of low-wage workers. This involves calculating the total wage earnings increase for those employed and the total wage earnings loss for those who can no longer find jobs. After subtracting total wage earnings lost from total wage earnings gained, we derive the net change in total wage earnings for all low-wage workers.

Assumptions and Methods

For all workers who would keep their jobs and would earn between $9 and the new minimum wage level under current law, we assume that increasing the minimum wage would raise their hourly pay rate to the new minimum wage level.

In the $12 minimum wage scenario, for all workers who earn between $9 and $12 per hour in 2021 (2018 in New York City), we assume that wages would rise to $12 per hour—if they stay employed.

Likewise, in the $15 minimum wage scenario, we assume that all workers earning between $9 and $15 per hour would see their wages rise to $15 per hour—if they stay employed.

Under both the $12 and $15 minimum wage scenarios, we assume that the minimum wage increase itself would have no impact on hours worked per week and weeks worked per year—for those who keep their jobs. In addition, we assume that all who would be jobless as a result of the minimum wage increase would see their annual wage earnings fall to $0.

To estimate the change in total wage earnings, we utilize ACS data to project average hourly pay, annual pay, and hours per year in each region for workers who would earn between $9 and $12 per hour and between $9 and $15 per hour under current law.[18] With that information, we estimate the total wage earnings lost among the low-wage workers who lose their jobs and the total wage earnings gained among those who remain employed. Overall, we calculate the net change in total wage earnings for low-wage workers by subtracting total wage earnings lost from total wage earnings gained.
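The accounting described above reduces to a simple calculation; the function below is a sketch of it, with inputs standing in for the ACS regional averages:

```python
# Sketch of the net-earnings accounting: gains for workers who keep their
# jobs at the new minimum, minus all earnings of workers who lose their
# jobs. Inputs are illustrative; the paper uses ACS regional averages.
def net_wage_earnings_change(workers_affected, jobs_lost, avg_hourly_wage,
                             new_minimum, hours_per_year, avg_annual_pay):
    """Total wage earnings gained minus total wage earnings lost."""
    workers_kept = workers_affected - jobs_lost
    earnings_gained = workers_kept * (new_minimum - avg_hourly_wage) * hours_per_year
    earnings_lost = jobs_lost * avg_annual_pay  # jobless workers' earnings fall to $0
    return earnings_gained - earnings_lost
```

The more job losses a scenario projects, the smaller (and potentially negative) the net change, which is the pattern across the three models.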

$12 Minimum Wage

As shown in Table 4, the impact of raising New York’s minimum wage to $12 per hour on the total wage earnings of low-wage workers depends largely on how many become jobless.

 

Table 4: Net Wage Earnings Change from $12 per hour Minimum Wage (dollars in millions)
Study Models - Low, Medium and High Impacts

                      CBO (low)             Meer & West (medium)   Clemens & Wither (high)
Region                Net Change  Percent   Net Change  Percent    Net Change  Percent
Total                     $3,754      0.7       $1,068      0.2         -$370     -0.1
Capital                     $162      0.6           $9      0.0          -$88     -0.3
Central New York            $133      0.7          $38      0.2          -$10     -0.1
Finger Lakes                $215      0.8          $68      0.2          -$13      0.0
Hudson Valley               $294      0.4           $4      0.0         -$149     -0.2
Long Island                 $371      0.4         -$55     -0.1         -$133     -0.1
Mohawk Valley                $73      0.8          $24      0.3           -$4      0.0
New York City             $2,072      0.9         $816      0.4           $33      0.0
North Country                $79      1.0          $40      0.5            $8      0.1
Southern Tier               $109      0.8          $48      0.3            $6      0.0
Western New York            $245      0.8          $75      0.2          -$22     -0.1

 

In the low-impact CBO and medium-impact Meer & West employment scenarios, raising New York’s minimum wage to $12 per hour would, on net, increase total wage earnings among low-wage workers. In the CBO model, a $12 minimum wage would, on net, raise low-wage workers’ total wage earnings by $3.8 billion, or 0.7 percent. In the Meer & West model, it would raise low-wage workers’ total wage earnings by $1.1 billion, or 0.2 percent. However, in the high-impact Clemens & Wither employment model, the earnings lost through the reduction in jobs actually outweigh the earnings gained by those who remain employed. In particular, low-wage workers’ total wage earnings would, on net, decline by $370.2 million, or 0.1 percent.

In this case, the regions with the most low-wage workers could potentially see the largest total wage earnings gains. In the Meer & West scenario, for instance, net total wage earnings would rise by 0.5 percent in the North Country and decline 0.1 percent on Long Island.

$15 Minimum Wage

Table 5 illustrates how raising the New York minimum wage to $15 per hour would affect total wage earnings for low-wage workers.

Table 5: Net Wage Earnings Change from $15 per hour Minimum Wage (dollars in millions)
Study Models - Low, Medium and High Impacts

                      CBO (low)             Meer & West (medium)   Clemens & Wither (high)
Region                Net Change  Percent   Net Change  Percent    Net Change  Percent
Total                    $10,577      1.9       $4,598      0.8        $1,288      0.2
Capital                     $429      1.7         $139      0.5         -$199     -0.4
Central New York            $383      2.1         $179      1.0           $34      0.2
Finger Lakes                $612      2.2         $300      1.1           $24      0.1
Hudson Valley               $845      1.1         $200      0.3         -$131     -0.2
Long Island               $1,032      1.0          $58      0.1         -$121     -0.1
Mohawk Valley               $289      2.4         $125      1.3           $23      0.2
New York City             $5,787      2.6       $2,960      1.3        $1,507      0.7
North Country               $210      2.5         $133      1.6           $28      0.3
Southern Tier               $315      2.2         $184      1.3           $42      0.3
Western New York            $683      2.2         $321      1.0            $1      0.0

 

In total, we estimate that raising New York’s minimum wage to $15 per hour would, on net, increase total wage earnings for low-wage workers by between $1.3 billion and $10.6 billion. The net total wage earnings gains in the $15 minimum wage scenario would be much larger than in the $12 minimum wage scenario because the $15 minimum wage would both impact more workers and raise their average wages by a larger margin. 

In the medium-impact Meer & West employment scenario, total wage earnings would, on net, rise by $4.6 billion. The CBO model again results in the largest net total wage earnings gain ($10.6 billion) because it projects the fewest job losses. In this case the Clemens & Wither model projects the smallest net total wage earnings gain ($1.3 billion) because it projects the most job losses.

Once again, the regions with the most low-wage workers tend to have the largest total wage earnings gains. In the medium-impact Meer & West employment scenario, total wage earnings in the high-wage Long Island region would, on net, only increase by 0.1 percent. But in the low-wage North Country region, total wage earnings would, on net, increase by 1.6 percent.
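The net figures in these scenarios combine two opposing flows: raises received by workers who remain employed and earnings forgone by workers who become jobless. A minimal sketch of that netting, with entirely hypothetical inputs rather than the report's microdata:

```python
# Minimal sketch, with hypothetical inputs, of how a net total wage
# earnings change nets raises against lost earnings; this is not the
# report's actual microdata-based calculation.

def net_wage_change(workers_raised, avg_raise, jobs_lost, avg_lost_earnings):
    """Raises received by retained workers minus earnings forgone by
    workers who lose their jobs."""
    return workers_raised * avg_raise - jobs_lost * avg_lost_earnings

# Hypothetical region: 500,000 workers gain an average $3,000/year raise
# while 40,000 workers lose jobs paying $18,000/year on average.
net = net_wage_change(500_000, 3_000, 40_000, 18_000)
print(f"Net change: ${net / 1e6:,.0f} million")  # gains outweigh losses here
```

A model projecting more job losses (as Clemens & Wither does) shrinks the first term's advantage over the second, which is why its net gains are the smallest of the three.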

POVERTY AND PRICE IMPACTS

It is important to understand that very little of the wage earnings gained from raising the minimum wage actually benefits workers who live in households below the poverty line. Due to poverty data limitations, we are unable to estimate what portion of the projected total wage earnings gained in New York from each minimum wage hike would go to workers in poverty.

In our previous, nationally focused report, however, we found that under the Meer & West model only 5.8 percent of net total wage earnings gained from raising the federal minimum wage to $12 per hour would actually go to workers in poverty. Applying this estimate to New York would mean that only $61.9 million of the $1.1 billion net total wage earnings gained would go to workers in poverty. Likewise, we found that only 6.7 percent of the increase in net total wage earnings from a $15 per hour federal minimum wage would benefit workers in poverty. In New York, that means that of the $4.6 billion gained from a $15 minimum wage, only $308.0 million would benefit workers who are in poverty.
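The share-in-poverty arithmetic above is a simple proportion; the sketch below checks it using only figures quoted in the text (small discrepancies are rounding):

```python
# Hedged arithmetic check of the poverty-share figures quoted above; all
# inputs come from the text, and rounding explains small discrepancies.

def share_to_poor(net_gain_millions, poverty_share):
    """Portion of a net wage earnings gain flowing to workers in poverty."""
    return net_gain_millions * poverty_share

# $15 scenario: 6.7 percent of the $4,598M Meer & West net gain
to_poor_15 = share_to_poor(4_598, 0.067)
print(f"${to_poor_15:.1f} million")  # approximately the $308.0M cited above

# $12 scenario: the cited $61.9M implies a net gain of about $1,067M,
# which the text rounds to $1.1 billion.
implied_gain = 61.9 / 0.058
print(f"${implied_gain:,.0f} million")
```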

It should also be noted that these projections of employment impacts and net changes in total wage earnings do not take into account the economic impact of any price increases resulting from higher minimum wages. There is a substantial body of research concluding that raising the minimum wage would also raise prices, and that the higher prices would disproportionately burden low-income households and families. MaCurdy (2014) analyzed how the 90 cent federal minimum wage increase, from $4.25 in 1996 to $5.15 in 1997, impacted prices. The restaurant industry, which has the highest concentration of low-wage workers, increased average prices by 1.85 percent. Families spent $136 (in 2010 dollars) more per year on average due to this modest minimum wage hike. Moreover, the price increases facing families in extreme poverty imposed a slightly larger percentage increase in their annual expenditures than they did for families who were not in poverty. In particular, the minimum wage hike increased annual expenditures for families with incomes less than half the poverty threshold by 0.63 percent. For families with incomes more than three times the poverty threshold, it increased their expenditures by 0.56 percent.[19]

A $12 and $15 per hour minimum wage in New York may indeed increase total wage earnings after accounting for job losses. But only a very small portion of the wage earnings gained from the policy would actually benefit those it intends to help: households in poverty. Meanwhile, evidence suggests it would disproportionately burden those in poverty with higher prices.

CONCLUSION

Across the nation, lawmakers continue to debate the merits of raising the minimum wage to $12 per hour and to $15 per hour. In this paper, we estimate the employment and wage earnings effects of raising the state minimum wage in New York to $12 and to $15 per hour. We find that any potential benefits from raising the minimum wage would be greatly offset by the negative labor market consequences of the policy.

In particular, the wage earnings gains from raising the minimum wage would come at a significant cost to the large number of workers who would become jobless. In effect, raising the minimum wage transfers wage earnings from the low-wage workers who are unfortunate enough to become jobless to the low-wage workers who remain employed.



[1] Congressional Budget Office, “The Effects of a Minimum-Wage Increase on Employment and Family Income,” February 2014, https://www.cbo.gov/publication/44995.

[2] Jonathan Meer and Jeremy West, “Effects of the Minimum Wage on Employment Dynamics,” August 2015,  http://econweb.tamu.edu/jmeer/Meer_West_MinimumWage_JHR-final.pdf.

[3] Jeffrey Clemens and Michael Wither, “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers,” National Bureau of Economic Research, December 2014, http://www.nber.org/papers/w20724.

[4] Douglas Holtz-Eakin & Ben Gitis, “Counterproductive: The Employment and Income Effects of Raising America’s Minimum Wage to $12 and to $15 per Hour,” American Action Forum & the Manhattan Institute, July 2015, http://americanactionforum.org/research/counterproductive-the-employment-and-income-effects-of-raising-americas-min.

[5] Ben Gitis, “The Steep Cost of a $10 Minimum Wage,” American Action Forum, October 2013, http://americanactionforum.org/research/the-steep-cost-of-a-10-minimum-wage.

[6] The 1.7 million jobs figure is based on authors’ analysis of Clemens & Wither (2014) estimates.

[7] Clemens & Wither (2014) accounted for the effects of the recession by using state, time, and individual effects and controlling for the Federal Housing Finance Agency (FHFA) House Price Index. For more information on their methodology, see http://www.nber.org/papers/w20724.

[8] New York’s current statewide minimum wage is $8.75 per hour. Under current law, however, the minimum wage is scheduled to increase to $9 per hour at the end of 2015. Therefore, to analyze the impact of raising the minimum wage relative to current law, we estimate the employment and income effects of increasing it from $9 to $12 and $15 per hour.

[9] Employment Projections, Labor Statistics, New York State Department of Labor, https://labor.ny.gov/stats/lsproj.shtm.

[10] Quarterly Census of Employment and Wages, Labor Statistics, New York State Department of Labor, https://labor.ny.gov/stats/lsqcew.shtm.

[11] We thank Bob Scardamalia of RLS Demographics, Inc. for performing this portion of the analysis for us.

[12] As in our previous paper, we follow CBO methodology and assume baseline wages increase 2.9 percent each year to project future wage levels. In addition, the latest regional ACS data are from 2013, which is before New York’s current state minimum wage law was enacted. As a result, even after assuming wages grow 2.9 percent each year, some workers still end up earning below the current law $9 per hour minimum wage in 2018 and 2021. We address this issue by assuming any workers projected to earn below $9 per hour by 2018 and 2021 will earn between $9 and $12 or $15 per hour.

[13] Regional estimates in Tables 1 through 5 may not add to state totals due to rounding.

[18] Again, we thank Bob Scardamalia of RLS Demographics, Inc. for performing this portion of the analysis for us.

[19] Thomas MaCurdy, “How Effective Is the Minimum Wage at Supporting the Poor?” Stanford University, February 2014, http://www.jstor.org/stable/10.1086/679626.  


Introduction

The Environmental Protection Agency (EPA) recently released a new round of air quality standards for ground-level ozone. The measure lowers the threshold for “nonattainment” status from 75 parts per billion (ppb) to 70 ppb and imposes $1.4 billion in nationwide annual costs. Generally, “nonattainment” counties exceed EPA’s threshold for ozone (“smog”) pollution. Counties deemed nonattainment must work with their states to devise a State Implementation Plan (SIP) to meet EPA goals. According to EPA, the plan must “show how the nonattainment area will attain the primary [ozone] standard ‘as expeditiously as practicable,’ but no later than within the relevant time frame.”

But what are the deeper economic effects? Looking back at the previous standard of 75 ppb, the American Action Forum (AAF) found that even this less stringent threshold reduced total wage earnings, average annual worker pay, and employment. Observed nonattainment counties experienced losses of $56.5 billion in total wage earnings, $690 in pay per worker, and 242,000 jobs between 2008 and 2013.

How EPA Regulates Ozone

Under the Clean Air Act, EPA must set air quality standards for six types of criteria pollutants, including ozone. Ozone regulation can be difficult for states and localities because unlike other pollutants, there is no factory or machine that directly emits ozone; it generally forms as a product of a number of chemical reactions between other pollutants and sunlight. When a particular county has ozone concentrations above EPA’s threshold, 75 ppb as of 2008, it is considered a nonattainment area. Affected states and localities must develop a plan to both reduce the amount of ozone in their nonattainment counties as well as manage activities that could contribute to higher ozone levels. There is no one plan or pollution strategy that will allow a state or locality to comply with ozone regulation. In fact, EPA acknowledges there are some “unknown” technologies or strategies that will be required to comply with the most recent standards. Typically, compliance involves a combination of proposals from local governments, review by EPA, and constant monitoring by all jurisdictions to ensure a compliance pathway. Ozone regulation affects virtually every activity that burns fossil fuels.

Since there are so many factors involved in the formation of ozone, a nonattainment designation means that any economic activity involving emissions – including such industries as manufacturing and construction – could face certain operational restrictions. Since nonattainment is generally consistent throughout a county, it is possible that some of these industries could choose to slow hiring and limit raises or move their operations to other nearby counties that do not face such restrictions in order to be more productive and profitable. This study assesses these consequences by examining wage and employment patterns in nonattainment counties, relative to their neighboring attainment counties.

Methodology

In this paper, we study the 2008 Ozone rule, and specifically, its impact on county level labor markets — total wage earnings, average annual worker pay, and total employment. To accomplish this, we compare the 208[1] counties that the EPA deemed in nonattainment (and therefore subject to larger restrictions) to the 234 attainment counties that border nonattainment areas before and after the regulation took effect. As a result, we estimate the impact of the regulation on nonattainment county labor markets relative to labor markets in neighboring attainment counties, which are presumably legally, economically, and socially similar, but were not directly impacted by the rule. In total, we examine 442 counties from 28 states.

Empirical Model

In order to find the relationship between the ozone regulation and total wage earnings, average annual worker pay, and total employment growth in nonattainment counties, AAF performs regression analysis to test the change in the annual growth rates relative to neighboring attainment counties after the regulation was published and went into effect in 2008. For the pre-regulation period, we calculate the compound annual total wage earnings, average annual worker pay, and total employment growth rates from 2004 to 2007. The same is done for the regulation period, which is 2008 to 2013. For each variable, we pool the data from both time periods and use a binary variable to indicate if the regulation is in effect.
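The compound annual growth rates described above follow the standard formula; a minimal sketch (the dollar figures below are made up for illustration, not taken from the study's data):

```python
# Minimal sketch of the compound annual growth rate (CAGR) used to build
# the pre- and post-regulation growth variables. Input values are
# illustrative, not actual county data.

def cagr(start, end, years):
    """Compound annual growth rate between two levels over `years` years."""
    return (end / start) ** (1 / years) - 1

# e.g. county total wage earnings rising from $2.00B (2004) to $2.17B (2007)
growth = cagr(2.00, 2.17, 3)
print(f"{growth:.3%}")
```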

Our empirical model is captured in equation 1:

(1)   Labor Market Growth Rate_i = β0 + β1(Nonattainment_i) + β2(Regulation_i) + β3(Nonattainment_i × Regulation_i) + γ(Controls_i) + ε_i

The dependent variable, Labor Market Growth Ratei, represents the county level average annual growth rate of total wage earnings, average annual worker pay, or total employment; these data are based on county level wage and employment data found in the Bureau of Labor Statistics’ Occupational Employment Statistics.[2] We use three binary variables to indicate whether a county is ever affected by the regulation, whether the regulation is in effect, and the interaction between them. The first binary variable, Nonattainmenti, equals 0 in both periods if the county is attainment or 1 if it is nonattainment. This variable by itself measures labor market growth in nonattainment counties relative to bordering attainment counties before the regulation took effect. The second binary, Regulationi, equals 1 for every county in the period after the regulation took effect, 2008 to 2013, or 0 in the period before the regulation took effect, 2004 to 2007. This variable by itself captures the change in labor market growth in the attainment counties after the regulation took effect. Finally, the interaction term only equals 1 if both conditions, being a nonattainment county and being in the post-regulation period, are satisfied. If it is an attainment county, the pre-regulation period, or both, then the variable equals 0 and it drops from the model.

In this paper, we are interested in the combination of Nonattainmenti and the interaction term. The sum of those two variables’ estimated coefficients captures the total change in labor market growth rates in nonattainment counties when the regulation took effect relative to the change in bordering attainment counties, which were not impacted by the regulation.

In testing the impact of the regulation on total wage earnings, average annual worker pay, and total employment growth rates, there are many other factors that influence the labor market and need to be held constant. We include the state Freddie Mac House Price Index[3] as a variable to control for the negative macroeconomic labor market trends during the Great Recession. Additionally, we control for the percent of county employment in the services industry,[4] the population of each county,[5] and the percent of the population with at least a Bachelor’s degree[6] to reflect the quality and size of the labor force. Lastly, we include state-level tax rates[7] to control for the effect of state fiscal policy on the labor market. As with the employment measurements, we take the compound annual growth rates of each of these variables before and after the regulation took effect. Finally, we cluster our standard errors to account for errors that may be correlated within counties over time, as well as potential heteroscedasticity in our data.
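As a rough illustration of this difference-in-differences design, the sketch below fits the same three-term specification on synthetic data with plain least squares; the study's control variables, clustered standard errors, and actual county data are deliberately not reproduced here.

```python
# Hedged sketch of the difference-in-differences specification described
# above, fit on synthetic data with ordinary least squares. Controls and
# clustered standard errors from the actual study are omitted.
import numpy as np

rng = np.random.default_rng(0)
n = 400
nonattain = rng.integers(0, 2, n)    # 1 = nonattainment county
regulation = rng.integers(0, 2, n)   # 1 = post-2008 period
interact = nonattain * regulation    # the difference-in-differences term

# Simulate growth rates with a built-in DiD effect of -0.4 percentage point
growth = (2.1 + 0.1 * nonattain - 0.2 * regulation
          - 0.4 * interact + rng.normal(0, 0.05, n))

X = np.column_stack([np.ones(n), nonattain, regulation, interact])
beta, *_ = np.linalg.lstsq(X, growth, rcond=None)

# As the text notes, the total change in nonattainment counties relative to
# attainment counties is the Nonattainment coefficient plus the interaction.
total = beta[1] + beta[3]
print(f"Interaction estimate: {beta[3]:.2f}; total relative change: {total:.2f}")
```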

Results

We find that nonattainment status had significant negative effects on total wage earnings, average annual worker pay, and total employment for a county. Table 1 illustrates the results for the impact of the ozone regulation on the labor market in nonattainment counties.

Table 1: Results

| Employment Measurements   | Average Marginal Effect† |
|---------------------------|-------------------------:|
| Total Wage Earnings       | -0.4***                  |
| Average Annual Worker Pay | -0.3***                  |
| Total Employment          | -0.1**                   |

**Jointly Significant at the 5% Level
***Jointly Significant at the 1% Level
†Average marginal effect of ozone regulation on compound annual growth rate in nonattainment counties

 

The results consistently show that the ozone regulation had statistically significant adverse effects on nonattainment county labor markets. In particular, we find that in nonattainment counties, the ozone regulation reduced the annual average growth rates of total wage earnings, average annual worker pay, and total employment by 0.4 percentage point, 0.3 percentage point, and 0.1 percentage point respectively. To put this in perspective, the average annual growth rate of total wage earnings in nonattainment counties was 1.7 percent when the regulation took effect. Our results indicate that the average annual growth rate would have been 2.1 percent had the regulation not taken effect. Moreover, the annual growth rate of average annual worker pay in nonattainment counties would have increased on average from 1.6 percent to 1.9 percent. The average annual total employment growth rate would have increased from 0.1 percent to 0.2 percent.

Implications

We find consistent and statistically significant evidence that the ozone regulation reduced the growth rate of total wage earnings, average annual worker pay, and total employment in nonattainment counties. Table 2 illustrates the implications of our results on pay and employment.

Table 2: Implications[8]

| Labor Market Variables    | Average Annual Decrease in Each County | Total Decrease in Each County, 2008-2013 |
|---------------------------|---------------------------------------:|-----------------------------------------:|
| Total Wage Earnings       | $54,000,000                            | $272,000,000                              |
| Average Annual Worker Pay | $140                                   | $690                                      |
| Total Employment          | 233                                    | 1,200                                     |


We find that in nonattainment counties, the ozone regulation was associated with a 0.4 percentage point decline in the annual growth rate of total wage earnings. This translates to a loss of $54 million per year in total wage earnings on average in each county. By 2013, nonattainment counties on average lost $272 million in total wage earnings due to the ozone regulation. This means that the ozone regulation reduced total wage earnings by $56.5 billion across all the observed nonattainment counties.
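The conversion from a growth-rate gap into a dollar loss compounds over time; the sketch below shows the mechanics with a hypothetical base wage level, and is not intended to reproduce the county averages above.

```python
# Sketch of how a growth-rate gap compounds into a dollar loss. The base
# wage level is hypothetical, chosen only to illustrate the mechanics.

def cumulative_loss(base, g_actual, g_counterfactual, years):
    """Total shortfall versus the counterfactual growth path."""
    return sum(base * ((1 + g_counterfactual) ** t - (1 + g_actual) ** t)
               for t in range(1, years + 1))

# Hypothetical county with $13.5B in annual wage earnings, growing 1.7%
# instead of 2.1% per year for six years (2008-2013)
loss = cumulative_loss(13.5e9, 0.017, 0.021, 6)
print(f"Cumulative loss: ${loss / 1e6:,.0f} million")
```

Because the shortfall grows each year the gap persists, the cumulative loss exceeds the first-year loss times the number of years.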

In addition, we find that between 2008 and 2013 the ozone regulation reduced the annual growth rate of average annual worker pay by 0.3 percentage point. This means the regulation cost each worker on average $140 per year. So by 2013, each worker in nonattainment counties lost a total of $690.

Finally, we find evidence that the ozone regulation reduced job growth in nonattainment counties by 0.1 percentage point. This indicates that the regulation cost each nonattainment county an average of 233 workers each year. By 2013, each nonattainment county lost 1,200 jobs on average. Across all the observed ozone-regulated counties, this adds up to a loss of 242,000 jobs total. To put the employment figure in perspective, the U.S. economy created 223,000 jobs in July 2015.

Discussion

These findings should not come as a huge surprise given the existing research. The results of AAF’s study resemble those of past studies of ozone regulation’s effects on wages and employment. For instance, in 2002 Professor Michael Greenstone found that earlier iterations of ozone regulation under the Clean Air Act resulted in losses of “590,000 jobs, $37 billion in capital stock, and $75 billion (1987$) of output” in nonattainment areas.

Our study’s findings are not as dramatic, but that is likely due to differences in the scope and magnitude of regulation. Professor Greenstone’s study primarily examined “pollution intensive industries,” whereas this study looks at all industries and all employees. Many industries are only tangentially related to the standards, whereas higher-polluting industries will likely experience the most acute effects. Finding that this rule has negative effects on an economy-wide scale, in spite of dilution from industries that do not directly pollute, further underscores the reach of ozone regulation’s effects. In addition, Professor Greenstone examined earlier standards that may have been more difficult to comply with at that time due to unfamiliarity with mitigation approaches and new technology.

The other curious aspect of our findings is the outsized imbalance between lost jobs and lost wages. The basic phenomenon this study sought to examine was the change in employment and pay in nonattainment counties relative to neighboring attainment counties. We find that the regulation reduced total wage earnings in each county by $54 million each year and cost 233 jobs annually. But clearly, the jobs lost in the average nonattainment county did not pay roughly $232,000 annually. Rather, it seems that total wage earnings fell mostly because average annual pay per worker who remained employed fell substantially as well.

Conclusion

EPA’s ozone standards affect a broad array of industries, and due to the nature of how ozone actually forms, nonattainment areas can have difficulty meeting standards quickly. California, for example, will have nearly a generation to reach the new standards. This could result in certain businesses making economic decisions to reduce hiring, limit raises, and move or expand operations elsewhere. The result? $56.5 billion in lost wages and 242,000 fewer jobs in nonattainment counties struggling to meet standards.



[1] There were 232 counties affected by the 2008 Ozone rule; however, we only include 208 counties as a result of unavailable data.

[2] Occupational Employment Statistics, Bureau of Labor Statistics, http://www.bls.gov/oes/tables.htm

[4] Quarterly Census of Employment and Wages, Bureau of Labor Statistics, http://www.bls.gov/cew/

[5] Population Estimates, Census Bureau, https://www.census.gov/popest/index.html

[6] Current Population Survey, Census Bureau, http://www.census.gov/cps/data/

[7] State Individual Income Tax Rates, Tax Foundation, http://taxfoundation.org/article/state-individual-income-tax-rates

[8] Numbers may not add to total due to rounding

  • The FCC's Lifeline program costs $2,078 to add an additional wireless subscriber to the already existing telephone system for one year.

Since the 1980s, the Lifeline program has provided a subsidy for telephone service for low-income households. However, changes in the broader communications ecosystem have prompted the Federal Communications Commission (FCC) to implement reforms to the program. Among the changes that the FCC is currently considering for the wired and wireless Lifeline subsidy, the most important should be the implementation of an evaluation program. According to our model, which incorporates the most recent data, the program costs $2,078 to add an additional wireless subscriber to the already existing telephone system for one year. In total, just under 6 percent of all the recipients of Lifeline added wireless service because of the program.

There are other ways to induce the adoption of new technologies. In 2012, U.S. consumers paid on average over 17 percent in wireless taxes and fees, which ranks these taxes among the highest imposed on any good.[1] Beyond being good government and tax policy, lowering the wireless tax rate could go a long way toward the goal of a majority of Americans having access to telephone service.

Time and again, the FCC has resisted implementing a program evaluation of Lifeline, even after reports from the Government Accountability Office (GAO) suggested these changes, because, as the agency proclaims, the structure of the program “makes it difficult to determine a causal connection between the program and the penetration.”[2] Well-structured, independent studies have been conducted of the program for nearly as long as it has existed, and they almost universally find it to be economically inefficient and ineffectual in achieving its stated goals.[3]

By updating a much-cited model with variables for wireless, we were able to test Lifeline’s usefulness for expanding wireless use.[4] While there is a statistically significant correlation between a state’s wireless penetration rate and its Lifeline program, the overall effect is small. An increase in the size of the program by 10 percent would increase wireless penetration by only 0.08 percent to 0.09 percent across our three models.[5] Assuming the higher number, the cost of adding a marginal user was $2,078. In 2012, 1.05 million marginal households got wireless services because of the program, even though 18.1 million households received the subsidy for both wired and wireless services. In total, just under 6 percent of the Lifeline households used their subsidy to get onto wireless services. The appendix contains the methodology for these findings.
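The cost-per-marginal-subscriber arithmetic can be reconstructed from the figures quoted above; note that the implied spending total below is back-derived from those figures, not a separately sourced number.

```python
# Hedged reconstruction of the cost-per-marginal-subscriber arithmetic,
# using only figures quoted in the text. The spending total is implied,
# not independently sourced.

marginal_households = 1.05e6   # households adding wireless due to Lifeline
cost_per_marginal = 2_078      # dollars per added subscriber per year
implied_spending = marginal_households * cost_per_marginal

subsidized_households = 18.1e6  # all Lifeline recipients (wired + wireless)
marginal_share = marginal_households / subsidized_households

print(f"Implied annual spending: ${implied_spending / 1e9:.2f} billion")
print(f"Share of recipients who added wireless: {marginal_share:.1%}")
# The share works out to just under 6 percent, as stated in the text.
```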

 Our outcomes are in line with other studies, and suggest that the program is rather ineffective in getting people into wireless services.

Appendix

The table below displays the results of three models. The first model includes just a poverty variable, the second includes just an income variable, and the third includes both. Data from 2012 were used because that is the most recent year available for the dependent variable, total wireless penetration by state, which comes from the Centers for Disease Control.[6] Because Montana, South Dakota, and Wyoming have such small populations, the CDC could not reliably estimate their telephone use, leading to 48 total observations. The tax variable is the total combined state and local tax percentage.[7] The poverty variable is an estimate from the March Supplement of households living in poverty as a percentage of total households by state.[8] The income variable is a Census estimate of median household income from 2012.[9] The Lifeline variable represents the expenditures on Lifeline divided by the number of poor households.[10] The density variable is the percentage of the population living in urban areas.[11] Because the previous model tries to track the effect of previous household consumption of telephone service, the Wireless 2010 variable is a two-year lag variable, capturing those households that consumed some kind of wireless telephone service in 2010.[12]

 

 

Dependent variable: Total Wireless Penetration (standard errors in parentheses)

|                     | (1)                   | (2)                   | (3)                   |
|---------------------|-----------------------|-----------------------|-----------------------|
| Tax                 | -0.016 (0.015)        | -0.017 (0.015)        | -0.016 (0.015)        |
| Poverty             | -0.021 (0.016)        |                       | -0.014 (0.024)        |
| Income              |                       | 0.031 (0.025)         | 0.016 (0.037)         |
| Lifeline            | 0.009* (0.005)        | 0.008* (0.005)        | 0.009* (0.005)        |
| Density             | 0.039** (0.018)       | 0.031 (0.019)         | 0.035* (0.020)        |
| Wireless 2010       | 0.332*** (0.070)      | 0.338*** (0.069)      | 0.332*** (0.070)      |
| Constant            | -0.155*** (0.055)     | -0.444 (0.284)        | -0.309 (0.371)        |
| Observations        | 48                    | 48                    | 48                    |
| R2                  | 0.448                 | 0.446                 | 0.450                 |
| Adjusted R2         | 0.382                 | 0.380                 | 0.369                 |
| Residual Std. Error | 0.026 (df = 42)       | 0.026 (df = 42)       | 0.026 (df = 41)       |
| F Statistic         | 6.804*** (df = 5; 42) | 6.749*** (df = 5; 42) | 5.589*** (df = 6; 41) |

Note: *p<0.1; **p<0.05; ***p<0.01

 



[1] Joseph Henchman and Scott Drenkard, State and Local Governments Impose Hefty Taxes on Cell Phone Consumers, http://taxfoundation.org/article/state-and-local-governments-impose-hefty-taxes-cell-phone-consumers

[2] United States Government Accountability Office, FCC Should Evaluate the Efficiency and Effectiveness of the Lifeline Program, http://www.gao.gov/assets/670/669209.pdf.

[3] Christopher Garbacz and Herbert G Thompson Jr, Assessing the Impact of FCC Lifeline and Link-Up Programs on Telephone Penetration, https://www.researchgate.net/publication/5155900_Assessing_the_Impact_of_FCC_Lifeline_and_Link-Up_Programs_on_Telephone_Penetration

[4] The basic model used by Garbacz and Thompson cited in the previous note was penetration = f (tax rate, poverty rate, lifeline, population density, median income, lagged wireless penetration rate). See the appendix for more information and caveats.

[5] The total number of households in 2012 was 114,991,725.

[6] Stephen J. Blumberg, Nadarajasundaram Ganesh, Julian V. Luke, and Gilbert Gonzales, Wireless Substitution: State-level Estimates From the National Health Interview Survey, 2012, http://www.cdc.gov/nchs/data/nhsr/nhsr070.pdf.   

[7] See note 8.

[8] Since this number is not readily available, AAF estimated this for each of the states. Data is available upon request.

[9] Amanda Noss, Household Income: 2012, https://www.census.gov/prod/2013pubs/acsbr12-02.pdf

[10] Universal Service Administration Company, USAC 2012 Annual Report, https://www.usac.org/_res/documents/about/pdf/annual-reports/usac-annual-report-2012.pdf

[11] Census, Population Density for States and Puerto Rico: July 1, 2012, https://www.census.gov/popest/data/maps/2012/popdens-2012.html

[12] Stephen J. Blumberg, Nadarajasundaram Ganesh, and Michel H. Boudreaux, Wireless Substitution: State-level Estimates From the National Health Interview Survey, 2010–2011,  http://www.cdc.gov/nchs/data/nhsr/nhsr061.pdf

Introduction

As the world meets in Paris at the United Nations (UN) Conference on Climate Change during the first two weeks of December, it is important to take note of how the U.S. has already regulated greenhouse gases (GHGs). According to American Action Forum (AAF) research, regulators have already imposed $26 billion in annual costs to limit GHGs and have proposed an additional $1.7 billion. However, to meet President Obama’s climate goals the nation will have to spend up to $45 billion more each year by 2025.

What are the benefits of these investments? According to the Environmental Protection Agency (EPA) estimates, previous actions will avert a combined 0.0573 degrees Celsius of warming. Meeting the president’s 2025 goals could add reductions up to 0.125 degree Celsius. In other words, full achievement of the president’s climate goals will cost more than $73 billion in annual burdens to alleviate less than two-tenths of one degree of warming.

Past GHG Regulation

President Obama and EPA have aggressively regulated greenhouse gases from a variety of sources. AAF research counted at least 15 final rules designed to ensure proper reporting of GHGs, increase fuel economy for cars and heavy-duty trucks, reduce methane emissions from fracking, and limit GHG emissions from new and existing power plants. These rules have consumed more than 3,000 pages in the Federal Register. The total cost from these 15 measures is $230 billion in net present value costs, more than $26 billion in annual burdens, and 2.9 million paperwork burden hours. These totals omit Department of Energy rules that also limit GHG emissions (and impose substantial burdens) and EPA’s Mercury Air Toxics and Cross-State Air Pollution measures. These recent regulatory burdens act essentially like taxes, raising the cost of energy for consumers and manufacturers.

Below is a list of the five most significant GHG regulations since President Obama took office:

| Regulation                        | Total Cost (in billions) | Annual Cost (in billions) |
|-----------------------------------|-------------------------:|--------------------------:|
| CAFE Standards: 2017-2025 Cars    | $156                     | $10.8                     |
| CAFE Standards: 2012-2016 Cars    | $51                      | $4.9                      |
| GHG Standards for Power Plants    | $11.9                    | $8.4                      |
| CAFE Standards: Heavy-Duty Trucks | $8.1                     | $0.6                      |
| CAFE Standards: 2011 Cars         | $1.4                     | $1.4                      |
| Totals                            | $229.2                   | $26.1                     |


Combined, all 15 major rules aim to eliminate more than six billion tons of carbon dioxide (CO2) during the life of the rulemakings or roughly 860 million annually. For comparison, the U.S. emitted 6.1 billion tons in 2005. Yet, despite the heavy costs of regulation now and the questionable future benefits, regulators will continue to issue new rules to address climate change.

President’s Climate Goals

At the beginning of his administration, President Obama set a goal to reduce U.S. GHG emissions by 17 percent below 2005 levels. Based on the latest Energy Information Administration (EIA) projections, the U.S. will emit 5.49 billion tons of GHG in 2020. (This baseline includes all regulatory and legislative actions as of October 2014.) A 17 percent reduction from 2005 levels means the U.S. needs to emit just slightly more than five billion tons in 2020, so the U.S. is still 303 million tons away from the goal.

That might not appear to be difficult, given the nation has already cut more than 900 million tons annually, but consider that time is hardly on the administration’s side. What’s more, relaxing the initial deadline for power plant compliance means the “Clean Power Plan” will only save 81 million tons of CO2 by 2020. Any similar regulation affecting stationary sources of GHG will likely reap only marginal cuts in the remaining four to five years.

The 2025 goals are even more daunting. According to EIA, the nation will emit 5.5 billion tons, barely an increase above 2020, but President Obama has called for a reduction of up to 28 percent below 2005 levels. This puts the president’s goal at 3.9 billion tons, a reduction of more than 1.2 billion tons by 2025. This includes EIA’s baseline, in addition to the Clean Power Plan and pending proposals for a second round of heavy-duty truck efficiency standards and a new fracking measure. With all of these reductions assumed, the nation must still double its GHG cuts to date and then find additional reductions of more than 400 million tons of CO2. Even though the Clean Power Plan is scheduled to cut 265 million tons of GHGs by 2025, it is just a fraction of the remaining goal.

The graph below displays the heavy lifting required to meet the president’s targets:

 

Cost of Achieving Goals

What will it cost to cut an additional 1.2 billion tons by 2025? If history is any guide, up to $45 billion in annual regulatory costs. To arrive at this figure, AAF examined the cost of the Obama Administration’s past GHG reductions. For example, the full slate of the administration’s regulatory actions on climate carries $261 billion in total costs to reduce more than seven billion tons of GHG. This yields a cost of $37.04 to prevent one ton of CO2 from entering the atmosphere. Comparing annual costs ($28 billion) to annual GHG reductions (902 million tons) results in a slightly lower figure: $31.11 to prevent the release of an additional ton of CO2.

With these figures in mind, we know in 2020 the nation will need to eliminate 303 million tons beyond EIA’s baseline and the rules currently in proposed form. By 2025, that figure grows to 1.2 billion tons. Assuming the cost of eliminating CO2 remains constant for the next decade nets the following figures: $11 billion in additional annual costs by 2020 and $45.5 billion by 2025. Using the lower figure of $31.11 per ton could impose costs of $9.4 billion (greater than the burden of the Clean Power Plan) and $38.2 billion, respectively. The 2025 goals are the equivalent of two to three years’ worth of government-wide regulation.
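The cost arithmetic above can be reproduced directly from the figures in the text; a minimal sketch in Python (the per-ton rates and gap sizes are the document's own estimates, and small rounding differences from the published figures are expected):

```python
# Back-of-the-envelope reproduction of the cost figures above.
# All inputs are the document's own numbers; "1.2 billion tons" is
# rounded in the text, so results differ slightly from its totals.
TOTAL_COST = 261e9   # total regulatory cost of climate rules ($)
TOTAL_CUT = 7.05e9   # total GHG reduction over the rules' life (tons)
ANNUAL_COST = 28e9   # annual regulatory cost ($)
ANNUAL_CUT = 902e6   # annual GHG reduction (tons)

cost_per_ton_total = TOTAL_COST / TOTAL_CUT     # ~ $37 per ton
cost_per_ton_annual = ANNUAL_COST / ANNUAL_CUT  # ~ $31 per ton

# Extrapolate to the remaining emission gaps:
gap_2020 = 303e6  # tons still needed by 2020
gap_2025 = 1.2e9  # tons still needed by 2025 (rounded in the text)

cost_2020_high = gap_2020 * cost_per_ton_total   # ~ $11 billion per year
cost_2025_high = gap_2025 * cost_per_ton_total   # ~ $44-45 billion per year
cost_2025_low = gap_2025 * cost_per_ton_annual   # ~ $37-38 billion per year
```

The two per-ton rates bracket the likely cost, which is why the text reports both a high and a low estimate for each target year.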

With those costs pending, what’s left to regulate?

The administration has already targeted oil and natural gas wells twice, existing and new power plants, and the transportation sector. Yet, there are obviously more sources of GHG left for regulators.

EPA has already put out a proposed “Endangerment Finding” on aircraft emissions, although imminent regulation is unlikely until international regulators conclude their work. The fracking boom has generated plenty of sources of methane, beyond the well sites, that could be attractive targets for the Department of the Interior or EPA.

In addition, the regulation of refineries or large manufacturing plants could be an attractive target in the future, although controlling GHG emissions from those sources will hardly net significant cuts. For example, the largest refinery (by emissions) in the U.S. emits 10.5 million tons of equivalent CO2 and there are only 145 refineries that report to EPA. Compare that to the power plant sector, where the largest facility emits 22.2 million tons and there are 1,572 facilities that report to EPA. In total, power plants emitted 2.1 billion metric tons of equivalent CO2 in 2013, compared to just 177 million tons for refineries. Suppose the nation eliminated 100 percent of refinery emissions by 2025; that would reduce the amount needed to reach the president’s 2025 goal by just 14 percent. 
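To see how little the refinery sector contributes to the remaining goal, the share can be computed from the figures above (a back-of-the-envelope sketch using the text's own numbers):

```python
# The refinery comparison above, in numbers (all figures from the text).
REFINERY_TOTAL = 177e6     # tons CO2e from refineries, 2013
POWER_PLANT_TOTAL = 2.1e9  # tons CO2e from power plants, 2013
GAP_2025 = 1.2e9           # additional tons needed for the 2025 goal

# Even eliminating 100 percent of refinery emissions closes only a
# small slice of the remaining 2025 gap (roughly 14-15 percent):
refinery_share_of_gap = REFINERY_TOTAL / GAP_2025

# Power plants emit more than ten times what refineries do, which is
# why regulators targeted them first:
sector_ratio = POWER_PLANT_TOTAL / REFINERY_TOTAL
```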

Finally, according to EPA data, electricity generation and transportation generated 58.5 percent of U.S. GHG emissions in 2013. Industrial emissions, which regulators will doubtless address during the next decade, accounted for 21 percent. The next closest contributor of GHG emissions is the agricultural sector, at 8.8 percent. Agriculture is hardly a popular villain, and regulating farms, large or small, will be politically difficult for any administration, expensive, and likely to yield few returns. Regardless of which industry is next for regulators, the price of reducing an additional 1.2 billion tons will remain high.

Benefits

There will be benefits from continuing to reduce GHGs. Some industries will flourish and others will shrink. There will be indeterminate health gains and some climate benefits. EPA admits climate benefits will remain small because climate change is a global problem and there is only so much a nation with four percent of the world’s population can do to avert the problem.

EPA has produced estimates from its past efforts to alleviate climate change. In five previous rules, EPA projected temperature declines ranging from 0.0042 Celsius to 0.0176 Celsius by the year 2100. Combined, previous EPA action to limit climate change could avert just 0.0573 degrees by 2100 (including the second round of heavy-duty trucks in this baseline). Those small reductions come at the total cost of more than $208 billion.

There are also GHG reductions associated with these five measures. Combined, EPA projects they will eliminate 546 million annual tons of CO2. Using these metrics as a baseline, AAF assumed the same ratio of GHG reduction to temperature reduction for the president’s 2025 goal. Assuming the rate remains constant for the next decade, reducing emissions by an additional 1.2 billion tons could avert 0.125 degrees of warming. In other words, unless there is a drastic change in EPA’s modeling, meeting President Obama’s 2025 climate goal will reduce global temperature by 0.1823 degrees Celsius, a number too small to measure reliably, much less one that would halt drastic climate change. This is also a high-end figure, as agencies typically give a range of temperature declines and AAF always assumed the most beneficial climate scenario.
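The temperature extrapolation above is a simple proportional scaling; a sketch using the figures in the text (small differences from the published 0.1823 figure come from rounding the 0.125-degree intermediate result):

```python
# Scaling EPA's past temperature estimates to the 2025 goal, as
# described above. All inputs are the document's own figures.
BASELINE_CUT = 546e6    # annual tons of CO2 from five prior EPA rules
BASELINE_TEMP = 0.0573  # degrees C averted by those rules by 2100
REMAINING_CUT = 1.2e9   # additional tons needed for the 2025 goal

# Assume the same degrees-averted-per-ton ratio holds going forward:
degrees_per_ton = BASELINE_TEMP / BASELINE_CUT
additional_averted = degrees_per_ton * REMAINING_CUT  # ~ 0.126 degrees
total_averted = BASELINE_TEMP + additional_averted    # ~ 0.18 degrees
```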

What about monetizing the benefits? According to the White House, the “Social Cost of Carbon” (SCC) represents “an estimate of the economic damages associated with a small increase in carbon dioxide (CO2) emissions, conventionally one metric ton, in a given year.” It also represents a per ton monetary equivalent of the value of avoided climate change. For example, if a regulation reduced emissions by 265 million tons in 2025, it could yield $13.5 billion in climate benefits, assuming a discount rate of 3 percent.

The latter is important when comparing costs and benefits because costs are likely incurred initially while benefits are typically years, decades, or, in the case of climate change, generations away. White House guidance dictates discount rates of three and seven percent: “As a default position … a real discount rate of 7 percent should be used as a base-case for regulatory analysis.” However, for climate change, the administration has ignored its own guidance. Its SCC analysis uses five percent, three percent, 2.5 percent, and an extreme rate that represents the most catastrophic effects of climate change. EPA completely omits a seven percent rate and instead concocts a 2.5 percent rate.

Assuming a discount rate of five percent (the average of three and seven), as EPA does in one analysis, yields a $16 benefit per ton of CO2 eliminated in 2025. Given AAF’s analysis that the cost of eliminating a ton of CO2 ranges between $31 and $37, the costs far exceed the benefits, at least at the five percent rate.
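The per-ton cost-benefit comparison can be made explicit. A sketch using the text's figures (the roughly $51/ton SCC at a 3 percent discount rate is implied by the $13.5 billion example above, not stated directly):

```python
# Per-ton comparison of costs and climate benefits, from the text.
SCC_3PCT = 13.5e9 / 265e6  # implied SCC at 3% discount rate, ~ $51/ton
SCC_5PCT = 16.0            # $/ton benefit at 5% (average of 3% and 7%)
COST_LOW, COST_HIGH = 31.11, 37.04  # AAF's estimated cost per ton

# At the five percent rate, every ton reduced is a net cost:
net_per_ton_low = COST_LOW - SCC_5PCT    # ~ $15/ton net cost
net_per_ton_high = COST_HIGH - SCC_5PCT  # ~ $21/ton net cost
```

The choice of discount rate drives the conclusion: at 3 percent the implied benefit exceeds the cost range, while at 5 percent it falls well below it.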

Conclusion

After $28 billion in annual regulatory costs to address climate change, the U.S. is hardly finished if it wants to meet the president’s goals. As much as $45 billion per year in regulatory burdens will likely be required to meet the 2025 benchmark, with just a 0.125-degree temperature reduction as a reward. As the U.S. approaches another round of climate negotiations, these figures demonstrate the American people have already shouldered a heavy burden to reduce emissions. 

Introduction 

On October 6, 2015, the Centers for Medicare and Medicaid Services (CMS) published its final rule on Stage 3 of the Meaningful Use Requirements for the Electronic Health Record Incentive Program.[1] This program adjusts payments to Medicare and Medicaid providers for implementing and “meaningfully using” (or not) interoperable electronic health record (EHR) systems. These standards are being implemented in three stages to gradually move providers toward the desired end point: to “provide efficiencies in administrative processes which support clinical effectiveness, leveraging automated patient safety checks, supporting clinical decision making, enabling wider access to health information for patients, and allowing for dynamic communication between providers.”

At each stage, CMS has chosen various metrics—of both quality and health information technology (HIT) functionality—to evaluate a provider’s performance in caring for their patients. In order to attest to each stage of the incentive program, providers must track and report their performance on such metrics. The first meaningful use requirements (Stage 1) were outlined in 2010 and focused on capturing patient data, such as demographic information and family medical history. Stage 2 began in 2014 and focused on the exchange of information between providers and patients as well as providers within a given practice in order to improve treatment adherence and care coordination. Beginning in 2016, penalties will be issued to providers that fail to meet the program’s requirements in the preceding year.[2] Stage 3 objectives are focused on improving the interoperability of EHR systems in different practices.

Modified Stage 2 and the 2015-2017 Transition Period

The new rule combines stages 1 and 2 into a “Modified Stage 2,” which will allow providers to meet a single set of objectives for up to three years (2015-2017), rather than having to meet the objectives of both Stage 1 and 2 separately. This is intended to reduce the reporting burdens and allow providers to move more quickly through the stages. Providers will be required to at least attest to the Modified Stage 2 beginning in 2015, with limited exceptions. This should help ensure that all providers will be ready and able to transition to Stage 3, and thus possess the same capabilities, in 2018—a necessary achievement in order for interoperability to be successful.

Calendar year reporting will begin in 2015, as opposed to the fiscal year reporting that has been the case. Providers still only need to attest to meaningful use for any continuous 90-day period through 2015, but in 2016 and 2017 they must attest for the entire year.[3] Full year reporting will be required of all providers beyond that. This is critical to meeting the goals of the program: the more data collected, the more useful it is for comprehensive analysis and care improvement.

Beginning in 2015, measures that are redundant, duplicative, or considered to be “topped out” will be removed; this will be an ongoing process to reduce the burdensome reporting requirements and to allow for the most appropriate use of funds.[4] Topped out measures do not provide an opportunity to differentiate performance among providers and presumably no longer need to be incentivized. Certain other measures will be selected as “high-priority,” particularly for use in the Clinical Decision Support objective.

Objectives and Corresponding Measures for Modified Stage 2: 2015-2017

The following objectives and corresponding measures will be used for evaluating whether or not a provider has met the necessary requirements for attestation to Modified Stage 2 in 2015, 2016, and 2017:

1.     Protect Patient Health Information

 

a.    Conduct or review a security risk analysis

b.    Establish a risk management process and correct any problems or deficiencies identified

2.    Use Clinical Decision Support to Improve Performance on High-Priority Health Conditions

 

a.    Implement 5 clinical decision support interventions related to 4 or more clinical quality measures at a relevant point in patient care

b.    Enable functionality for drug-drug and drug-allergy interaction checks

3.    Use Computerized Provider Order Entry (CPOE) for Medication, Laboratory, and Radiology Orders

CPOE must be used for:

a.    More than 60 percent of medication orders

b.    More than 30 percent of the laboratory orders

c.     More than 30 percent of radiology orders

4.    Generate and Transmit Permissible Prescriptions Electronically (eRx)

 

a.    For Providers: more than 50 percent of prescriptions

b.    For Hospitals/CAHs: More than 10 percent of hospital discharge medication orders

5.    Provide Patients with a Summary of Care Record for Each Transition of Care or Referral and Electronically Transmit Such Summary

a.    The referring provider must provide and electronically transmit such summaries for more than 10 percent of transitions and referrals

 

6.    Use Clinically Relevant Information from Certified Electronic Health Record Technology (CEHRT) to Identify Patient-specific Education Resources and Provide those Resources to the Patient

a.    For providers: patient specific education resources must be provided to more than 10 percent of all unique patients

b.    For hospitals/CAHs: patient specific education resources must be provided to more than 10 percent of all unique patients admitted to the hospital’s inpatient or emergency department

7.    Perform Medication Reconciliation for any Patient Received from Another Setting of Care

a.    Medication reconciliation must be performed for more than 50 percent of patients transitioned into the care of the provider or hospital’s inpatient or emergency department

8.    Provide Patients the Ability to View Online, Download, and Transmit their Health Information within 4 Business Days of the Information Being Available to the Provider

 

a.    More than 50 percent of all unique patients must be provided timely access to view online, download, and transmit to a third party their health information

                 i.    In 2015 and 2016, at least one patient must use such functionality

                ii.    In 2017, more than 5 percent of patients must use such functionality

9.    Use Secure Electronic Messaging to Communicate with Patients on Relevant Health Information

 

a.    In 2015, patients must have the capability to send and receive a secure electronic message with their provider

b.    In 2016, at least one patient must send or receive a message

c.     In 2017, more than 5 percent of patients must send or receive a message

10.  Provider or Hospital Actively Engages with a Public Health Agency to Submit Electronic Public Health Data from CEHRT

Provider must submit the following data to public health agencies:

a.    Immunization data

b.    Syndromic surveillance data

c.     Specialized registry reporting

d.    Electronic reportable laboratory results
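Behind the percentage-based measures above sits a set of numerator/denominator checks. The following is a hypothetical sketch of that logic; the data structure, function names, and simplified zero-denominator handling are illustrative and do not reflect CMS's actual attestation system:

```python
# Hypothetical sketch of the percentage checks behind Modified Stage 2
# attestation. Thresholds mirror the measures listed above; everything
# else (names, structure) is illustrative, not CMS's real system.
MODIFIED_STAGE2_THRESHOLDS = {
    "cpoe_medication": 0.60,            # >60% of medication orders via CPOE
    "cpoe_laboratory": 0.30,            # >30% of laboratory orders
    "cpoe_radiology": 0.30,             # >30% of radiology orders
    "erx": 0.50,                        # >50% of prescriptions sent electronically
    "summary_of_care": 0.10,            # >10% of transitions and referrals
    "medication_reconciliation": 0.50,  # >50% of transitioned patients
}

def meets_measure(numerator: int, denominator: int, threshold: float) -> bool:
    """A percentage measure is met when the rate strictly exceeds its threshold."""
    if denominator == 0:
        return False  # the real rule has exclusions; simplified here
    return numerator / denominator > threshold

def attestation_report(counts: dict) -> dict:
    """Map each reported measure name to whether its threshold was exceeded.

    counts maps measure name -> (numerator, denominator).
    """
    return {
        name: meets_measure(*counts[name], threshold)
        for name, threshold in MODIFIED_STAGE2_THRESHOLDS.items()
        if name in counts
    }
```

Note that the measures are "more than" tests: a provider with 130 of 200 medication orders entered through CPOE (65 percent) passes, while one at exactly 60 percent does not.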

 

Stage 3

Stage 3 is intended to bring about advancements in care delivery by requiring more advanced EHR functionality and standards for structuring data, increasing thresholds compared to Stage 1 and 2 measures, and requiring more coordinated care and patient engagement. All providers will be required to meet the Stage 3 objectives in 2018 for the entire calendar year, but providers will be encouraged and able to begin attesting to Stage 3 in 2017.

Objectives for Stage 3: 2017 and beyond

The following objectives and corresponding measures will be used for evaluating whether or not a provider has met the necessary requirements for attestation to Stage 3 in 2017 and subsequent years. All CEHRT must be able to perform the necessary functions to meet these objectives, including recording and reporting all necessary data electronically.

1.     Protect Electronic Patient Health Information (ePHI) Created or Maintained by the CEHRT through the Implementation of Appropriate Technical, Administrative, and Physical Safeguards

 

a.    A security risk analysis must be conducted, including addressing the security (including encryption) of data created or maintained by the CEHRT

b.    Security updates must be implemented as necessary

c.     Identified security deficiencies must be corrected as part of the provider’s risk management process

2.    Electronic Prescribing: Generate and Transmit Permissible Prescriptions Electronically (eRx)

 

a.    For Providers: more than 60 percent of prescriptions must be transmitted electronically using CEHRT

b.    For Hospitals/CAHs: More than 25 percent of hospital discharge medication orders must be transmitted electronically

3.    Implement Clinical Decision Support (CDS) Interventions Focused on Improving Performance on High-Priority Health Conditions

a.    5 CDS interventions related to 4 or more CQMs must be used at a relevant point in care

b.    Drug-drug and drug-allergy interaction checks must be enabled and implemented

4.    Use Computerized Provider Order Entry (CPOE) for Medication, Laboratory, and Diagnostic Imaging Orders

CPOE must be used for:

a.    More than 60 percent of medication orders

b.    More than 60 percent of laboratory orders

c.     More than 60 percent of diagnostic imaging orders

 

5.    Provide Patient with Timely Electronic Access to Health Information and Patient Specific Education Materials

 

a.    More than 80 percent of all unique patients seen or discharged:

                 i.    Must be provided timely access to view online, download, and transmit his or her health information; and

                ii.    The provider must ensure the patient’s health information is available for the patient to access using any application of their choice that is configured to interact with the provider’s CEHRT

b.    Provider must use clinically relevant information from CEHRT to identify patient-specific educational resources and provide electronic access to those materials to more than 35 percent of unique patients

6.    Patient Engagement and Coordination of Care: Use CEHRT to Engage with Patients or their Authorized Representatives for Improved Coordination of Care

 

a.    More than 10 percent of all unique patients (or their authorized representative) must actively engage with the EHR and either:

                 i.    View, download, or transmit to a third party their health information; or

                ii.    Access their health information through the use of an application in the provider’s CEHRT; or

              iii.    A combination of (i) and (ii)

b.    More than 25 percent of all unique patients must receive an electronic message using the CEHRT

c.     Patient generated health data or data from a nonclinical setting must be incorporated into the CEHRT for more than 5 percent of all unique patients

7.    Health Information Exchange (HIE): A Summary of Care Record is Provided when Transitioning or Referring a Patient to Another Setting of Care and Incorporates Summary of Care Information from Other Providers into their EHR Using the Functions of CEHRT

 

a.    For more than 50 percent of transitions and referrals, the provider that transitions or refers their patient must create a summary of record using CEHRT and electronically transmit the record

b.    For more than 40 percent of transitions received and new patients, the provider must incorporate into the patient’s EHR an electronic summary of care document

c.     For more than 80 percent of transitions or referrals received and new patients, the provider must perform a clinical information reconciliation for medication, medication allergies, and a current problem list

8.    Public Health and Clinical Data Registry Reporting: The Provider Actively Engages with a Public Health Agency or Clinical Data Registry to Submit Electronic Public Health Data in a Meaningful Way Using CEHRT

Providers must report the following information to the appropriate setting:

a.    Immunization data

b.    Syndromic surveillance data

c.     Electronic case reporting

d.    Public health registry reports

e.    Clinical data registry reports

f.     Electronic reportable laboratory result reports

 

 

Medicaid Providers

The Medicaid EHR Incentive Program is voluntary for the states, and each state may choose whether or not to administer such a program. Providers treating both Medicare and Medicaid patients can avoid Medicare penalties by successfully demonstrating meaningful use to their state Medicaid agency, even if it occurs after the Medicare attestation period closes, if the state chooses to participate. States must determine whether and how electronic reporting of CQMs would occur, or whether they wish to allow reporting through attestation. The federal government will fund 90 percent of the state’s administrative costs for implementing the technology and systems necessary for compliance.

Medicare Advantage Incentive Payments

Medicare Advantage Organizations (MAOs) are unique in how they are treated under this incentive program because providers treating MA patients are not paid directly by CMS, as providers treating traditional fee-for-service (FFS) patients are. Instead, CMS pays MAOs a flat monthly fee for each enrollee at the beginning of each month. Through these incentive programs, MAOs may receive payment adjustments according to how well their patients are treated, similar to how payments are adjusted for providers. In Stage 3, there will not be any changes to the existing policies and regulations for MAOs.

Conclusion

CMS is “statutorily required to require more stringent measures of meaningful use over time.” Since 2009, requirements have gradually increased and required more advanced technology. Each progression is designed to move all providers in the same direction, create standards of practice and for capturing data, and allow for improved patient outcomes. Progress is being made: in 2014, 57 percent of office-based physicians electronically shared health information with their patients and 42 percent electronically shared patient information with other providers, including 26 percent who electronically shared patient information with providers outside their office or group.[5] CMS is also seeking comments on adopting meaningful use attestation into the new MIPS payment scheme, which will be implemented beginning in 2019.


[1] A comment period was issued with this final rule to allow for consideration for adjustments in the future as CMS works to establish regulations relating to implementation of the Merit-Based Incentive Payment System (MIPS) called for in the Medicare Access and CHIP Reauthorization Act of 2015 (MACRA). Current EHR Incentive Program payment adjustments will end in 2018 and will be incorporated through MIPS beginning in 2019.

[3] Unless it is their first year attesting or they are attesting to Stage 3 in 2017, in which case they will be allowed to attest through any continuous 90-day period.

[4] “Topped out” measures are those that have a very high level of achievement among most providers.


  • In 2020, the U.S. will be short 7.5 million workers

  • If the immigrant workforce does not grow, the U.S. will be short 11.2 million workers in 2020

  • If immigrant worker growth increases 25%, the U.S. labor shortage in 2020 will fall from 7.5 million to 6.4 million

Executive Summary

By 2020 – a mere five years away – we project the United States and Silicon Valley economies will face crucial labor force skill shortages. Our analysis and policy recommendations make the case for a multi-pronged approach to meeting the projected skills gaps: increasing the growth rate in immigrant workers across the economy; adjusting visas to better reflect skills mismatches; improving training and education opportunities for the existing immigrant workforce; and allowing training and education earned abroad to be applied to degree attainment and U.S. licensure processes here. Regardless of the policy mix chosen, failing to address these challenges will undermine the United States’ global competitiveness.

United States

United States Private Sector Projected Worker Shortages

Education                                      Current Projection       Immigrant Worker Growth Increases 25%
                                               Workers      Percent     Workers      Percent
Total                                          7,506,256    4.9%        6,361,152    4.2%
Less than High School Degree                   5,271,950    29.3%       5,171,993    28.8%
High School Degree                             1,234,352    3.0%        959,900      2.4%
Some College, Associates                       123,040      0.3%        -103,048     -0.2%
Bachelor's, Master's, Professional, Doctorate  876,915      1.8%        332,306      0.7%

 

In 2020 the United States will be short roughly 7.5 million private sector workers, with substantial shortfalls occurring across all skill levels. Improving training for the existing workforce would help, but increasing the growth rate of the immigrant workforce at all skill levels would substantially reduce overall shortages. We find that increasing the growth rate of immigrant workers by 25 percent across all skill levels would decrease the United States’ labor shortage to 6.4 million, mostly by eliminating middle- and high-skill labor shortages.

Silicon Valley

Silicon Valley Private Sector Projected Worker Shortages

Education                                      Current Projection       Immigrant Worker Growth Increases 25%
                                               Workers      Percent     Workers      Percent
Total                                          72,536       5.0%        46,415       3.2%
Less than High School Degree                   52,911       30.9%       49,192       28.7%
High School Degree                             66,851       17.3%       63,761       16.5%
Some College, Associates                       31,190       8.1%        25,066       5.9%
Bachelor's, Master's, Professional, Doctorate  -78,416      -17.6%      -91,603      -19.1%

 

Silicon Valley, meanwhile, will be short 72.5 thousand private sector workers, with the shortfalls occurring in lower- and middle-skill jobs. Increasing the growth rate of immigrant workers by 25 percent across all skill levels would decrease the region’s shortage from 72.5 thousand to 46.4 thousand.

Policy Options

Changes in immigration policies and patterns can help fill the middle-skill gaps in the Silicon Valley and meet our nation’s economic needs. Policy options that could increase the immigrant workforce to fill these jobs include:

1. Broad Immigration Reform that Ensures Employers’ Labor Needs Are Met

Congress could enact broad immigration reform policies that ensure that future labor needs are met through the legal immigration system. The employment-based visa program needs to be reformed to reflect the different workforce needs of employers and our country and to include flexibility for the visa quotas to adjust to economic realities. 

2. Adjust the Mix of Visas to Better Match Skill Needs of Employers

The proportion of working age immigrants entering the United States with at least a Bachelor’s degree has increased substantially. In particular, in 2013 over 50 percent of working age immigrants entering the United States and over 60 percent entering Silicon Valley had at least a Bachelor’s degree. One option would be to expand visas that target immigrants with an Associate’s level education or to create new temporary and permanent work visas that are specifically for workers at this skill level.

3. Increase the Education and Skills of the Existing Immigrant Workforce

State educational policies could encourage and enable immigrant workers already living in the United States and Silicon Valley to obtain an Associate’s degree or comparable industry-recognized credential to meet the middle-skills gaps. Federal, state, and local governments could promote effective integration of education (adult education, community college, four-year institutions) and workforce support services. Integrating these systems can assist immigrant workers in accelerating language acquisition while concurrently developing knowledge and job skills needed to obtain occupational credentials. Government offices could share successful preparation models. State and local models, such as career pathways, can be applied to preparing immigrant workers for middle-skill jobs. 

Employers could also provide development opportunities for native and immigrant staff, such as in-house training, partnering with community colleges and postsecondary institutions, and joining with community organizations to provide contextualized English language classes and supportive services. 

4. Obtain Credit for Work, Experience, and Credentials from Abroad

Employers and the nation’s economy benefit from having skilled immigrant professionals that are readily able to contribute their expertise and credentials gained from abroad. State educational policies could enable immigrant workers to receive credit for work, experience, and credentials attained from abroad. This could increase the value of education and training for immigrant workers because it would limit redundancy in their personal education, accelerate their degree completion, and save them time and money. Additionally, states could consider how immigrants can gain credit for training and work experience gained abroad when applying for licensure in the United States, while maintaining public health and safety standards. 

 

Introduction

Silicon Valley’s regional economy is among the fastest growing in the United States. From 2003 to 2013 real Gross Domestic Product (GDP) in the San Jose-Sunnyvale-Santa Clara metropolitan area grew at an average annual rate of 4.5 percent, significantly higher than the 1.8 percent average growth rate nationwide.[1] During that time period, employment in San Mateo and Santa Clara counties grew 9.8 percent, more than twice as quickly as the 4.8 percent job growth in the entire United States. Silicon Valley is certainly growing fast and employers will continue to demand a multitude of workers at all skill levels. But, with a national labor force participation rate at a historic low, will there be enough workers to fill all those job openings?

In this paper, we project the difference between the skills employers will demand in 2020 and the skills workers will supply to estimate potential skill gaps in the economy. In turn, we project that by 2020 the United States and Silicon Valley economies could be confronted with significant worker shortages at multiple skill levels. There are different options to developing a pipeline of skilled workers, and this paper focuses on how adjusting the growth of immigrant workers could substantially close the impending skill gaps in the United States and Silicon Valley.  

Although we find that it would be helpful to use immigration to address future skill gaps, doing so would require substantial shifts from recent immigrant trends. In particular, from 2003 to 2013 the portion of immigrants coming to the United States and Silicon Valley with Bachelor’s degrees increased substantially while the portions at all other skill levels decreased. As a result, absent a change in policy or immigration patterns, increasing the immigrant workforce would bring in additional high-skilled workers, but not a large number of low- or middle-skilled workers. Our analysis leads to policy recommendations to help fill middle-skill gaps by ensuring that immigrant workers can gain skills in the United States and apply their work experiences and credentials attained abroad.

Education and Immigration Characteristics in the United States and Silicon Valley

When examining the impact of immigration on the supply of skilled labor, it is important to analyze regions that actually rely on foreign born workers and a highly skilled labor force. This is why we compare labor market trends in the entire United States to trends in Silicon Valley, where there is a highly educated workforce and a large concentration of immigrants.

In this paper, we analyze growth of workers at four distinct education levels. These four categories of workers include those who (1) have not completed high school, (2) obtained a high school degree but did not complete any higher level of education, (3) completed some college or obtained an Associate’s degree but did not obtain a Bachelor’s degree, and (4) obtained a Bachelor’s, Master’s, Professional, or Doctorate degree.

Table 1 compares educational attainment of workers ages 25 to 64 nationwide to Silicon Valley.[2]

Table 1: Worker Education Characteristics

Education Level                                        Nation    Silicon Valley[3]
Less than High School                                  9.6%      10.2%
High School Degree                                     25.3%     14.3%
Some College, Associate’s Degree                       31.4%     25.4%
Bachelor's, Master’s, Professional, Doctorate Degree   33.7%     50.0%

 

Overall, Silicon Valley has a much higher prevalence of highly skilled workers and a lower concentration of middle-skilled workers than the entire United States. In particular, 33.7 percent of workers in the United States have Bachelor’s degrees. In Silicon Valley, those workers represent 50 percent of the workforce. However, the United States has a greater percentage of workers with high school and Associate’s degrees than Silicon Valley.

Silicon Valley also has a much higher concentration of foreign residents than the entire United States. Table 2 compares the immigrant population in the United States to Silicon Valley.[4]

Table 2: Immigrant Residents

| Immigrant Status | Nation | Silicon Valley[5] |
| --- | --- | --- |
| Foreign | 13.0% | 36.2% |
| Native | 87.0% | 63.8% |

In the entire United States, immigrants represented 13 percent of the population in 2013, but in Silicon Valley they were 36.2 percent of residents. In other words, a person in Silicon Valley is almost three times as likely to be an immigrant as a person in the United States as a whole.

Going forward, Silicon Valley will continue to demand skilled workers at a higher rate than the entire United States. The California Employment Development Department projects that between 2012 and 2022, 8 of the 30 occupations with the most job openings in San Mateo County and 13 of the top 30 in Santa Clara County will require a Bachelor’s degree or more.[6] The Bureau of Labor Statistics meanwhile projects that only 3 of the 30 occupations with the most job openings in the entire United States will require a Bachelor’s degree or more.[7]

In the sections that follow, we use recent employment trends to estimate whether there will be enough workers (both native and foreign) to meet employment needs in 2020 at all skill levels in the United States and Silicon Valley.

Methodology for Estimating Workforce Skill Shortages

United States

There are three steps to projecting education shortfalls in the labor market by 2020. First, we project the number of workers the labor market will have at each of four education levels. Second, we project the number of workers at each of those four education levels that employers will demand. And finally, to yield the shortage (or surplus) of workers by skill level, we calculate the difference between the projected number of workers employers will demand at each education level and the projected number of workers who will actually exist at that level.

To project the number of workers who will exist across all skills, we use Current Population Survey (CPS) March Supplements to estimate the compounded annual growth rate of workers at each of the four education levels in each industry from 2003 to 2013.[8] We perform this exercise for all workers, immigrant workers, and non-immigrant workers with the national population weights available in the CPS data files. Assuming those long-term average growth rates will remain constant going forward, we project the growth in workforce levels in each industry from 2013 to 2020 for all, native, and foreign workers at each education level.

When estimating the number of jobs at each skill level that employers will seek, we use data reported by the Georgetown Center on Education and the Workforce (CEW) to calculate the compounded annual growth rates of workers demanded at each skill level in every industry.[9] Then, starting with actual employment levels in 2013, we use these compounded annual growth rates to project the number of jobs available in 2020 that will require less than a high school education, a high school degree, some college or an Associate’s degree, and a Bachelor’s, Master’s, Professional, or Doctorate degree.

Finally, by comparing the projected number of workers who will be in the labor force at every education level in 2020 to those employers will require, we estimate the future shortages or surpluses in workers by skill level in each industry.
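The three-step projection can be sketched in code. This is an illustrative sketch only: the worker counts below are hypothetical, not the actual CPS or CEW figures, and the function names are ours.

```python
# Illustrative sketch of the three-step projection described above.
# The worker counts below are hypothetical, not the CPS/CEW figures.

def cagr(start, end, years):
    """Compounded annual growth rate implied by two observed levels."""
    return (end / start) ** (1 / years) - 1

def project(level_2013, annual_rate, years=7):
    """Carry a 2013 level forward to 2020 at a constant annual rate."""
    return level_2013 * (1 + annual_rate) ** years

# Step 1: project worker supply for one education level in one industry.
supply_2003, supply_2013 = 900_000, 1_000_000     # observed workers (CPS)
supply_2020 = project(supply_2013, cagr(supply_2003, supply_2013, 10))

# Step 2: project employer demand for the same education level.
demand_2003, demand_2013 = 950_000, 1_080_000     # workers demanded (CEW)
demand_2020 = project(demand_2013, cagr(demand_2003, demand_2013, 10))

# Step 3: the difference is the projected shortage (positive) or surplus (negative).
shortage_2020 = demand_2020 - supply_2020
print(f"Projected 2020 shortage: {shortage_2020:,.0f} workers "
      f"({shortage_2020 / demand_2020:.1%} of demand)")
```

In the paper this calculation is repeated for every industry and education level, and separately for all, native, and foreign workers.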

Silicon Valley

To our knowledge, a detailed data set similar to the CPS is not available for Silicon Valley. To project the potential worker shortages or surpluses specifically in Silicon Valley, we use the exact same data and methods as we did for the United States, with one key difference. Instead of using the national population weights in the CPS, we use regional Silicon Valley demographic characteristics to weight the national data set so it more closely resembles Silicon Valley’s population. The demographic information we use to weight the national CPS data comes from the Census Bureau and includes four main categories: race, Hispanic ethnicity, immigrant status, and age.[10]

By combining the four main demographic categories we derive 112 unique demographic subcategories. We then calculate relative probabilities and apply them to the entire national sample as weights for the 112 different subcategories of people. In essence the relative probabilities indicate the number of people each observation in the national sample would represent in Silicon Valley. For instance, one relative probability indicates that for every person in the national sample who in 2013 was foreign, Asian, not Hispanic, and between the ages of 25 to 34, there were 2.3 people who met those characteristics in Silicon Valley.

Limitations to the Silicon Valley Analysis

There are three central limitations to our Silicon Valley analysis that are important to understand.

First, to derive the relative probabilities, we use the four main categories to estimate the percent of people who fall into each of the 112 subcategories in the national sample and in Silicon Valley. We then divide the percentage of people in each subcategory in Silicon Valley by the percentage in the national sample. With the CPS, we could directly find the percent of the sample that falls under each of the 112 demographic subcategories in the national sample. For Silicon Valley, we only have direct information for the distribution of people in the region for each of the four major demographic categories. Thus, to find the percent of people in each of the 112 unique categories we multiply together the percentages of the four major categories. For instance, we find the percent of people in Silicon Valley who are foreign, Asian, not Hispanic, and between the ages of 25 to 34 by multiplying the percent of people in Silicon Valley who are foreign by the percent who are Asian, the percent who are not Hispanic, and the percent who are ages 25 to 34. The major limitation with this part of our analysis is that we assume that each subgroup has the same demographic characteristics as the entire region. For instance, 32.2 percent of the entire region of Silicon Valley is Asian, and our methods result in us assuming that 32.2 percent of people who are foreign, not Hispanic, and ages 25 to 34 are also Asian.
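The weight construction just described can be sketched as follows. The foreign-born and Asian shares are the figures quoted in the text; the not-Hispanic share, the age share, and the national joint share are hypothetical illustration values, chosen so the resulting weight lands near the 2.3 example given earlier.

```python
# Sketch of one relative-probability weight. The not-Hispanic and age shares
# and the national joint share are hypothetical illustration values.

# Silicon Valley marginal shares for the subgroup's four traits.
sv_foreign = 0.362       # foreign-born share of residents (from Table 2)
sv_asian = 0.322         # Asian share (from the text)
sv_not_hispanic = 0.73   # hypothetical not-Hispanic share
sv_age_25_34 = 0.20      # hypothetical share aged 25 to 34

# Independence assumption: approximate the Silicon Valley joint share of the
# subgroup by multiplying its four marginal shares together.
sv_joint = sv_foreign * sv_asian * sv_not_hispanic * sv_age_25_34

# The national joint share is observed directly in the CPS (hypothetical here).
national_joint = 0.0074

# The weight says how many Silicon Valley residents each national observation
# in this subgroup represents; applied to the CPS, it reweights the national
# sample toward Silicon Valley's demographics.
weight = sv_joint / national_joint
print(f"weight for foreign, Asian, not Hispanic, ages 25-34: {weight:.1f}")
```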

Second, the Silicon Valley population weights help the national data resemble that specific region, which allows us to track the growth in workers at all education levels within the region. This in turn allows us to project the growth in skills supplied by Silicon Valley workers to 2020. But, it does not allow us to make Silicon Valley specific projections for the growth in skills required by employers. As a result, for our Silicon Valley analysis, we assume that the projected growth rates in workers demanded in each industry match the national growth rates and that the distributions of workers demanded by skill level in 2020 will match the national distributions.

Third and finally, although we weight the national sample from the CPS to reflect the Silicon Valley population, we do not actually analyze a sample of people from that region of the country. As a result, if there are trends unique to Silicon Valley that are not captured in our weights, we are not able to account for them.

Despite these limitations, we are confident that our analysis provides informative projections of skills needs across industries within Silicon Valley by 2020.

Projected Labor Shortages

Our projected skills shortfalls and surpluses in 2020 for the private sector of the entire United States and just Silicon Valley are shown in Table 3.[11] Estimated shortfalls and surpluses for each industry are available in the Appendix.

Table 3: Private Sector Projected Worker Shortages: Nation vs. Silicon Valley

| Education | Nation: Workers | Nation: Percent | Silicon Valley: Workers | Silicon Valley: Percent |
| --- | --- | --- | --- | --- |
| Total | 7,506,256 | 4.9% | 72,536 | 5.0% |
| Less than High School Degree | 5,271,950 | 29.3% | 52,911 | 30.9% |
| High School Degree | 1,234,352 | 3.0% | 66,851 | 17.3% |
| Some College, Associate’s Degree | 123,040 | 0.3% | 31,190 | 7.4% |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 876,915 | 1.8% | -78,416 | -16.3% |

Overall, we find that both the entire nation and Silicon Valley will face substantial and similar worker shortages in 2020. In the United States, there will be about 7.5 million or 4.9 percent fewer workers than demanded by employers. Silicon Valley employers meanwhile will be short 72.5 thousand workers, which translates to a 5.0 percent shortfall.

In the United States, a worker shortage will occur at every skill level. In 2020, the nation will be short 5.3 million workers (29.3 percent) who did not complete high school, 1.2 million (3 percent) with a high school degree, 123 thousand (0.3 percent) with some college or an Associate’s degree, and 876.9 thousand (1.8 percent) with a Bachelor’s degree or higher.

Silicon Valley, however, will face a greater shortage for middle-skilled jobs. In particular, the region will be short 66.9 thousand workers with a high school degree. This means that the gap between employer demands and worker supply for high school degree holders will be 17.3 percent, over five times larger than the 3.0 percent gap nationwide. Silicon Valley will also have a 7.4 percent shortage of workers with some college or an Associate’s degree, which is much larger than the national shortfall of 0.3 percent.

Finally, the table shows that while the entire nation will be short highly skilled workers, Silicon Valley will have a surplus of workers with at least a Bachelor’s degree. In particular, we project that the region will have 78.4 thousand (16.3 percent) more workers with a Bachelor’s degree or above than employers will demand.

It is important to note that the projected surplus of workers with at least a Bachelor’s degree in Silicon Valley could be due to the limitations of this analysis previously highlighted. In particular, due to a lack of data, we assume that the projected growth rates in workers demanded in each industry in Silicon Valley match the national growth rates and that the distributions of workers demanded by skill level in 2020 will match the national distributions. In reality, however, it is likely that employer demand for workers with at least a Bachelor’s degree will grow faster in Silicon Valley than in the rest of the nation. As a result, we suspect that the surplus of workers with at least a Bachelor’s degree shown in Table 3 may be overstated or may not actually exist.

Immigrant Workers’ Impact on the Shortfall in Skill Levels

Adjusting the growth rate of immigrant workers at all skill levels reveals how important immigrants can be to the future of the labor force. In particular, keeping the number of immigrant workers from growing would cause the skill gaps to widen substantially. Increasing the growth of immigrant workers, however, would reduce the projected skill shortfalls. In general, we find that the higher the education level, the more responsive the worker shortfalls and surpluses are to increasing or decreasing the growth of immigrant workers.

Stopping Growth of Immigrant Workers                         

If, instead of growing at the same rate as in the previous ten years, the number of immigrant workers at each skill level stayed at its 2013 level, then the private sector labor shortages in 2020 would increase substantially, both nationally and in Silicon Valley. This is illustrated in Table 4.

Table 4: Private Sector Worker Shortage if Immigrants Increase 0%: Nation vs. Silicon Valley

| Education | Nation: Workers | Nation: Percent | Silicon Valley: Workers | Silicon Valley: Percent |
| --- | --- | --- | --- | --- |
| Total | 11,155,656 | 7.3% | 138,787 | 9.5% |
| Less than High School Degree | 5,286,009 | 29.4% | 45,922 | 26.8% |
| High School Degree | 2,173,156 | 5.4% | 71,218 | 18.5% |
| Some College, Associate’s Degree | 931,342 | 2.1% | 50,372 | 11.9% |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 2,765,149 | 5.6% | -28,725 | -5.9% |

If the number of immigrants at each education level did not change after 2013, then the overall worker shortage in 2020 would grow from 7.5 million to 11.2 million nationally (4.9 percent to 7.3 percent) and from 72.5 thousand to 138.8 thousand in Silicon Valley (5.0 percent to 9.5 percent). In general, the higher the education level, the quicker the worker shortage would rise.

Nationally, shortages of workers with high school degrees and above would grow substantially. The shortfall would rise from 1.2 million to 2.2 million (3.0 percent to 5.4 percent) for workers with a high school degree, 123 thousand to 931.3 thousand (0.3 percent to 2.1 percent) for workers with some college or an Associate’s degree, and 876.9 thousand to 2.8 million (1.8 percent to 5.6 percent) for workers with a Bachelor’s degree or above.

In Silicon Valley, keeping immigrant workers at 2013 levels would cause the shortages of workers with high school degrees and some college or Associate’s degrees in 2020 to increase and the surplus of workers with Bachelor’s degrees or above to decrease. In particular, the 2020 shortage of workers with high school degrees would increase from 66.9 thousand to 71.2 thousand (17.3 percent to 18.5 percent). The shortage of workers with some college or Associate’s degrees would rise from 31.2 thousand to 50.4 thousand (7.4 percent to 11.9 percent). Meanwhile, the surplus of workers with Bachelor’s degrees or above would fall from 78.4 thousand to 28.7 thousand (16.3 percent to 5.9 percent).

Increasing Growth of Immigrant Workers

Increasing the annual growth rate of immigrant workers at each skill level would reduce the projected 2020 private sector skill shortfalls. Table 5 shows the shortages and surpluses of workers if the annual growth rate of immigrant workers at each skill level increased by 25 percent.

Table 5: Private Sector Worker Shortage if Immigrant Growth Rate Increases 25%: Nation vs. Silicon Valley

| Education | Nation: Workers | Nation: Percent | Silicon Valley: Workers | Silicon Valley: Percent |
| --- | --- | --- | --- | --- |
| Total | 6,361,152 | 4.2% | 46,415 | 3.2% |
| Less than High School Degree | 5,171,993 | 28.8% | 49,192 | 28.7% |
| High School Degree | 959,900 | 2.4% | 63,761 | 16.5% |
| Some College, Associate’s Degree | -103,048 | -0.2% | 25,066 | 5.9% |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 332,306 | 0.7% | -91,603 | -19.1% |

The nationwide private sector total labor shortage would fall from 7.5 million to 6.4 million (4.9 percent to 4.2 percent) and the Silicon Valley private sector labor shortage would decrease from 72.5 thousand to 46.4 thousand (5.0 percent to 3.2 percent). As above, worker shortages and surpluses are more responsive to increased immigration growth at the higher end of the skill distribution. Specifically, increasing the growth rate of immigrant workers would reduce the middle- and high-skilled worker shortages the most.

Nationally, the shortages of workers with some college or Associate’s degrees would disappear entirely and the shortages of workers with Bachelor’s degrees or greater would decline substantially. The shortage of workers with some college or an Associate’s degree would become a 0.2 percent surplus and the shortage of workers with a Bachelor’s degree or greater would fall to 0.7 percent.

In Silicon Valley, the shortage of workers with some college or Associate’s degrees in the private sector would fall from 31.2 thousand to 25.1 thousand (7.4 percent to 5.9 percent). Meanwhile, increasing the growth rate of immigrant workers with a Bachelor’s degree or greater would increase the surplus of workers in that category from 78.4 thousand to 91.6 thousand (16.3 percent to 19.1 percent).
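Both counterfactuals amount to re-projecting only the immigrant component of supply at an altered growth rate while holding native growth and demand fixed. A minimal sketch, with hypothetical levels, rates, and demand for a single skill level:

```python
# Sketch of the two immigrant-growth counterfactuals (hypothetical numbers).
# 2020 supply = projected native workers + projected immigrant workers.

def project(level_2013, annual_rate, years=7):
    return level_2013 * (1 + annual_rate) ** years

# Hypothetical 2013 levels, historical growth rates, and projected 2020
# employer demand for one skill level.
native_2013, native_rate = 800_000, 0.005
immigrant_2013, immigrant_rate = 200_000, 0.020
demand_2020 = 1_100_000

def shortage(immigrant_growth):
    supply = project(native_2013, native_rate) + project(immigrant_2013, immigrant_growth)
    return demand_2020 - supply

baseline = shortage(immigrant_rate)        # ten-year trend continues (Table 3)
frozen = shortage(0.0)                     # immigrants held at 2013 levels (Table 4)
boosted = shortage(immigrant_rate * 1.25)  # growth rate raised 25 percent (Table 5)

# Freezing immigrant growth widens the gap; boosting the rate narrows it.
assert frozen > baseline > boosted
```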

How Immigrants Influence Labor Force Skill Levels

Immigrant workers expand the talent pool that employers can utilize to meet their workforce needs. Overall, we find that in recent history a greater proportion of working-age immigrants coming to the United States and Silicon Valley have been entering with at least a Bachelor’s degree, while the proportion with all other skill levels has been falling. Meanwhile, once in the United States and Silicon Valley very few immigrants upgrade their skills by attaining higher education.

Immigrants Are More Frequently Coming to the United States and Silicon Valley with High Skill Levels

The evidence suggests that the proportion of new immigrants coming to the United States and Silicon Valley with at least a Bachelor’s degree has increased substantially over time. Table 6 compares the skills of prime working age immigrants (25 to 54) who came to the United States from 2002 to 2004 to the skills of those who entered from 2012 to 2014.[12]

Table 6: Prime Working Age Immigrant Entrant Skill Levels in United States

| Educational Attainment | 2002-2004 | 2012-2014 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 26.8% | 21.2% | -5.7 |
| High School Degree | 19.6% | 17.2% | -2.4 |
| Some College, Associate’s Degree | 15.3% | 9.3% | -6.0 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 38.3% | 52.4% | 14.1 |

Over the ten-year period, the proportion of prime working age immigrants who came to the United States with at least a Bachelor’s degree increased by a substantial 14.1 percentage points from 38.3 percent in the 2002 to 2004 period to 52.4 percent in the 2012 to 2014 period. Meanwhile, the percentage of entrants at all other skill levels fell over the ten-year time frame.

Table 7 shows how skills of immigrant entrants who moved to Silicon Valley changed over the same time period.

Table 7: Prime Working Age Immigrant Entrant Skill Levels in Silicon Valley

| Educational Attainment | 2002-2004 | 2012-2014 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 19.7% | 13.6% | -6.0 |
| High School Degree | 15.8% | 12.9% | -2.9 |
| Some College, Associate’s Degree | 18.0% | 12.0% | -6.0 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 46.6% | 61.4% | 14.9 |

The proportion of immigrants entering Silicon Valley with at least a Bachelor’s degree increased at an even faster rate than it did nationwide. Over the ten-year time period, the proportion of immigrants entering with at least a Bachelor’s degree increased by 14.9 percentage points from 46.6 percent to 61.4 percent. Just like for the entire United States, the proportion of prime working age immigrants entering with low- and middle-skill levels decreased.

Tables 6 and 7 indicate that immigrant workers entering the country are already well situated to increase the high skilled labor force. However, they are not well positioned to fill low- and middle-skilled jobs. This is particularly important in Silicon Valley, where we found that in 2020 there would be a surplus of workers with Bachelor’s degrees and a shortage of workers with some college or an Associate’s degree.

Once in the Country, Few Immigrants Upgrade their Skills

Recent data suggest that once immigrants arrive in the United States and Silicon Valley, they do not upgrade their skills as frequently as native residents. This is particularly true for middle-skilled education, as immigrants obtain some college or an Associate’s degree at a slower rate than any other education level. Using data from the Survey of Income and Program Participation (SIPP), we analyze how immigrants and non-immigrants enhanced their education levels from 2008 to 2013.[13]

Table 8 reveals what happened among all immigrants in the United States.[14]

Table 8: Immigrant Learning in United States

| Educational Attainment | 2008 | 2013 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 30.2% | 23.2% | -7.0 |
| High School Degree | 20.7% | 26.5% | 5.8 |
| Some College, Associate’s Degree | 23.9% | 24.1% | 0.2 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 25.2% | 26.2% | 1.0 |

From 2008 to 2013, the proportion of immigrants who completed some college or an Associate’s degree was virtually unchanged, only increasing by 0.2 percentage point. The proportion with at least a Bachelor’s degree only increased 1 percentage point.

Table 9 reveals that non-immigrants, meanwhile, upgraded their skills more rapidly.

Table 9: Non-Immigrant Learning in United States

| Educational Attainment | 2008 | 2013 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 13.6% | 7.0% | -6.6 |
| High School Degree | 24.0% | 26.6% | 2.5 |
| Some College, Associate’s Degree | 34.6% | 36.1% | 1.4 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 27.7% | 30.3% | 2.6 |

From 2008 to 2013, the proportion of non-immigrants in the United States with some college or an Associate’s degree increased by 1.4 percentage points and the proportion with at least a Bachelor’s degree increased by 2.6 percentage points.

Table 10 reveals that immigrants in Silicon Valley are not upgrading their skills much more rapidly than immigrants in the entire United States.

Table 10: Immigrant Learning in Silicon Valley

| Educational Attainment | 2008 | 2013 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 27.7% | 17.9% | -9.8 |
| High School Degree | 19.6% | 27.2% | 7.5 |
| Some College, Associate’s Degree | 26.9% | 27.1% | 0.2 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 25.8% | 27.8% | 2.0 |

Over the five-year time period, the proportion of immigrants in Silicon Valley with some college or an Associate’s degree only increased by 0.2 percentage point. The proportion of immigrants with at least a Bachelor’s degree grew by 2 percentage points, which was quicker than the increase that occurred nationally.

Table 11 indicates that non-immigrants in Silicon Valley upgraded their skills at rates similar to the entire nation.

Table 11: Non-Immigrant Learning in Silicon Valley

| Educational Attainment | 2008 | 2013 | Percentage Point Change |
| --- | --- | --- | --- |
| Less than High School Degree | 15.9% | 9.5% | -6.4 |
| High School Degree | 21.8% | 24.9% | 3.1 |
| Some College, Associate’s Degree | 33.8% | 35.1% | 1.3 |
| Bachelor’s, Master’s, Professional, Doctorate Degree | 28.4% | 30.4% | 2.0 |

From 2008 to 2013, the proportion of non-immigrants in Silicon Valley with some college or an Associate’s degree increased by 1.3 percentage points, and the proportion with at least a Bachelor’s degree increased by 2 percentage points. This means that in Silicon Valley, while immigrants obtain Bachelor’s degrees at roughly the same rate as non-immigrants, they still lag behind in the some college or Associate’s degree category.

Policy Implications

While trends over the last ten years show that immigrant workers are well situated to fill jobs that require a Bachelor’s degree, our projections suggest that the United States, and particularly Silicon Valley, will face substantial shortfalls in workers with low to middle education levels. As a result, in order to use immigration as a tool to reduce worker shortages, particularly at the middle-skill level, changes in immigration policies and patterns would be useful. Four ways policy could increase middle-skilled workers in the United States and Silicon Valley are: (1) enacting broad immigration reform that ensures employers’ labor needs are met; (2) adjusting the U.S. visa mix so that a larger portion of immigrant workers entering the United States have middle education levels; (3) increasing the educational and skills attainment of immigrant workers already in the United States; and (4) enabling immigrants in the United States to obtain credit for work, experience, and credentials gained abroad.

Broad Immigration Reform

Employers in the United States and Silicon Valley need a reliable supply of workers to maximize their productivity. Congress should enact broad immigration reform policies that ensure that future labor needs are met through the legal immigration system. This includes solutions that allow immigrant workers to upgrade their skills through efficient and effective education and workforce programs and for employers in the United States and Silicon Valley to access these workers. Further, the employment-based visa program needs to be reformed to reflect the different workforce needs of employers and our country and to include flexibility for the visa quotas to adjust to economic realities. 

Adjust the U.S. Visa Mix

Over a ten-year period, the proportion of working age immigrants entering the United States with at least a Bachelor’s degree increased substantially. In particular, in 2013 over 50 percent of working age immigrants entering the United States and over 60 percent entering Silicon Valley had at least a Bachelor’s degree. These trends are likely largely explained by the fact that the U.S. visa system is designed to primarily grant legal residence to immigrant workers with very high skills and has given more and more preference to those with at least a Bachelor’s degree over time.

Examining worker admissions under permanent worker visas (green cards) in Table 12, it is easy to see how the U.S. visa system has given increasing preference to high-skilled workers over time.

Table 12: Permanent Admissions for Workers

| Permanent Worker Visa | 2004[15] | 2013[16] | Percentage Point Change |
| --- | --- | --- | --- |
| Total Admissions | 72,407 | 72,203 | - |
| Priority Workers (EB-1) | 18.4% | 22.5% | 4.1 |
| Second Preference (EB-2) | 21.9% | 43.1% | 21.2 |
| Third Preference (EB-3) | 57.0% | 27.7% | -29.3 |
| EB-3: Skilled | 27.1% | 11.9% | -15.2 |
| EB-3: Professional | 2.1% | 1.9% | -0.2 |
| EB-3: Unskilled | 27.8% | 14.0% | -13.8 |
| Fourth Preference (EB-4) | 2.6% | 2.4% | -0.3 |
| Fifth Preference (EB-5) | 0.1% | 4.3% | 4.2 |

Between 2004 and 2013, the total number of workers admitted to the United States on a permanent worker visa remained virtually unchanged at about 72 thousand. However, during that period, the percentage of workers admitted with visas that require at least a Bachelor’s degree, EB-1 and EB-2, increased substantially. In particular, the percent of permanent workers admitted with an EB-1 visa rose 4.1 percentage points, from 18.4 percent to 22.5 percent, and the proportion admitted with an EB-2 visa rose a dramatic 21.2 percentage points, from 21.9 percent to 43.1 percent. As a result, visas that require a high level of educational attainment now account for about two-thirds of all permanent worker admissions in the United States.

The consequence of this, however, is that fewer visas are being made available for workers with middle skills. In fact, the proportion of permanent workers admitted to the United States with the visa that allows for middle skills, EB-3, has declined significantly. The proportion of permanent workers admitted with an EB-3 visa dropped 29.3 percentage points from 57 percent to 27.7 percent. Within EB-3, the visa under the “skilled” category requires at least two years of training and most closely aligns with an Associate’s level education.[17] However, the proportion of permanent workers admitted with a “skilled” EB-3 visa declined 15.2 percentage points from 27.1 percent to 11.9 percent.

Table 13 shows the distribution of workers admitted under temporary employment visas, such as the H-1B.

Table 13: Temporary Admissions for Workers[18]

| Temporary Worker Visa | 2004 | 2013 |
| --- | --- | --- |
| Total | 675,647 | 1,613,868 |
| Temporary workers in specialty occupations (H-1B) (Bachelor’s required) | 57.3% | 29.4% |
| Agricultural workers (H-2A) | 3.3% | 12.7% |
| Nonagricultural workers (H-2B) | 12.9% | 6.5% |
| Workers with extraordinary ability or achievement (O-1) | 4.0% | 4.1% |
| Workers accompanying and assisting in performance of O-1 workers (O-2) | 0.9% | 1.3% |
| Chile and Singapore Free Trade Agreement aliens (H-1B1) | 0.0% | 0.0% |
| Registered nurses participating in the Nursing Relief for Disadvantaged Areas (H-1C) | 0.0% | 0.0% |
| Trainees (H-3) | 0.3% | 0.3% |
| Internationally recognized athletes or entertainers (P-1) | 6.0% | 5.3% |
| Artists or entertainers in reciprocal exchange programs (P-2) | 0.6% | 0.8% |
| Artists or entertainers in culturally unique programs (P-3) | 1.5% | 0.6% |
| Workers in international cultural exchange programs (Q-1) | 0.3% | 0.2% |
| Workers in religious occupations (R-1) | 3.2% | 0.9% |
| North American Free Trade Agreement (NAFTA) professional workers (TN) | 9.8% | 38.0% |
| CNMI-only transitional workers (CW-1) | 0.0% | 0.1% |

Overall, the number of workers admitted with temporary worker visas spiked over the last ten years, from 675,647 in 2004 to 1,613,868 in 2013. Despite the number of temporary worker admissions increasing under most visa categories (particularly for NAFTA workers), employment admissions with a nonagricultural visa (H-2B), which is intended for low- and middle-skill workers, remained flat at about 100,000 per year. As the table shows, this resulted in nonagricultural worker visa admissions falling from 12.9 percent of temporary worker admissions in 2004 to only 6.5 percent in 2013.

Since we project that Silicon Valley will have a substantial shortage of workers with some college or an Associate’s degree in 2020, the region could benefit from expanding visas that target immigrants with an Associate’s level education or creating new temporary and permanent work visas that are specifically for workers at this skill level.

Increase Educational and Skills Attainment of Immigrants Already in the United States

Federal and state educational policies could encourage and enable immigrant workers already living in the United States and Silicon Valley to obtain an Associate’s degree or comparable industry-recognized credential to fill middle-skills gaps. As indicated above, once in the United States, immigrants do not upgrade their education as frequently as non-immigrants. This is particularly true when it comes to obtaining some college or an Associate’s degree, as the proportion of immigrants at those education levels remained virtually unchanged in both the United States and Silicon Valley from 2008 to 2013.

Encouraging immigrants to obtain an Associate’s degree or comparable industry-recognized credential in the United States not only helps meet employers’ demands for middle-skill workers, but also can benefit immigrants themselves. A study of 4,000 immigrant professionals by the World Education Services, IMPRINT, and George Mason University’s Institute of Immigration Research found that immigrants “who had invested in additional U.S. education were more likely to be employed and successful than those who had only received education abroad.”[19]

The Workforce Innovation and Opportunity Act, enacted on July 22, 2014, presents an opportunity to improve education and workforce services for all job seekers, including immigrants. With the help of this law, federal, state, and local officials can promote effective integration of education (adult education, community college, four-year institutions) and workforce support services. Integrating these services can help immigrant workers accelerate language acquisition and concurrently attain educational or other industry-recognized credentials needed for the growing industries and occupations in their local or regional economies.

Coordinating education and workforce support services is particularly important for immigrants because they frequently face cultural, linguistic, and socioeconomic barriers to successfully advancing their education and achieving their employment goals. In particular, immigrants may face unique barriers such as lack of professional networks and limited English language abilities.[20] In Silicon Valley, more than half of working adults speak a language other than English in the home, of whom 58 percent are fluent in English and 42 percent are English Language Learners (ELL).[21] Of those ELLs in Silicon Valley, 57 percent have a high school diploma or less.[22] Coordination and alignment of education and workforce support programs may help ensure that immigrants who seek to improve their employability can do so efficiently.

One potentially successful strategy to prepare immigrant and non-immigrant job seekers for employment is career pathways.[23] The federal government, states, philanthropic organizations, and local services are investing in career pathways initiatives, which align adult basic education, occupational training, postsecondary education, and supportive services to provide comprehensive and flexible education and training programs.[24] Career pathways programs aim to meet the needs of working students, including immigrants and English language learners.[25] Further research is needed on 1) how career pathways can help immigrant workers, including ELLs, enroll in and successfully complete educational and training programs, including those leading to an Associate’s degree or other industry-recognized credentials, and 2) how career pathways can build an immigrant workforce for growing, non-traditional industries and occupations to meet employer demand.

Employers can also play an important role in improving immigrant worker skills to fill middle-skill job openings. In a survey of 340 organizations of different sizes, industries, and locations, the Association for Talent Development noted that in 2013, organizations, on average, spent $1,208 per employee on training and development.[26] For example, many employers offer several types of development opportunities to staff such as a combination of mentoring, rotations, and on-the-job training.[27] Employers also partner with community colleges and postsecondary institutions to develop and offer customized virtual or classroom training for staff.[28] Immigrant workers may benefit from contextualized English literacy classes or supportive services, and benefit from employers that partner with organizations such as adult education and community-based organizations to provide these programs.[29]

Further, employers can incentivize staff to upgrade their skills by offering monetary bonuses, additional time off, and other rewards upon successful completion of training. Many workers participate in tuition assistance programs, where the employer contributes to part or all of the employees’ cost to earn postsecondary degrees and credentials.[30]

Credit for Work, Experience, and Credentials from Abroad

Immigrant workers bring to the United States diverse expertise and skills developed through employment and other experiences from abroad. To reduce educational redundancies and improve education and training efficiency for immigrant workers, state educational policies could enable immigrants to more frequently receive credit for work, experience, and credentials attained from abroad.

Prior learning assessment (PLA) is one model that postsecondary institutions are using to allow students to gain academic credit for knowledge developed outside of the classroom, such as through employment, professional training, volunteer activities, or military training. PLAs have helped adult learners accelerate their degree completion. In a study of 62,475 students across 48 postsecondary institutions, the Council for Adult and Experiential Learning (CAEL) found that PLA students earned degrees at a much higher rate, earned them faster, and were more persistent in accumulating credit towards a degree than non-PLA students. In particular, 13 percent of PLA students earned an Associate's degree compared to 6 percent of non-PLA students, and PLA students saved an average of 1.5 to 4.5 months of time towards the Associate's degree.[31] Also, Hispanic students who earned credit through PLAs had higher graduation rates and required less time to complete degree programs than non-PLA students.[32] Further, since PLAs can save students time and money, they can incentivize immigrant workers to upgrade their skills through educational attainment.

Additionally, states could consider how immigrant professionals can gain credit for training and work experience gained abroad when applying for licensure in the United States, while maintaining public health and safety standards. This can ensure that skilled professionals are more readily able to contribute their expertise to the benefit of employers in the United States and economy at large. 

Conclusion

We find that by 2020 both the entire United States and Silicon Valley specifically will face substantial worker shortages. Silicon Valley in particular will have a significant shortage of workers for jobs that will require middle-level skills, such as an Associate’s degree. One way to address these potential skill shortages is through immigration. If policymakers were to increase the growth rate of immigrant workers at all skill levels by 25 percent, the overall worker shortages would decline considerably. Given recent immigration trends, however, this effort would require substantial policy changes, such as reforming our immigration visa program to account for the nation’s need for workers with middle skills or enabling more immigrants in the United States to advance their education.
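The scale of these scenario effects can be read directly off the totals in the appendix tables (Tables A1, A3, and A5 for the United States; A2, A4, and A6 for Silicon Valley). A minimal sketch of the comparison, with the shortfall totals copied from those tables:

```python
# Projected total 2020 worker shortfalls under each immigration scenario,
# copied from the "Total" rows of appendix Tables A1-A6.
us_shortfall = {
    "0% immigrant growth":   12_152_572,  # Table A3
    "baseline trend":         7_993_496,  # Table A1
    "+25% immigrant growth":  6_696_207,  # Table A5
}
sv_shortfall = {
    "0% immigrant growth":   150_788,     # Table A4
    "baseline trend":         74_783,     # Table A2
    "+25% immigrant growth":  52_930,     # Table A6
}

def pct_decline(baseline: int, scenario: int) -> float:
    """Percent decline in the projected shortfall relative to baseline."""
    return 100 * (baseline - scenario) / baseline

# Raising immigrant worker growth by 25 percent shrinks the projected
# shortfall by roughly 16 percent nationally and 29 percent in Silicon Valley.
print(round(pct_decline(us_shortfall["baseline trend"],
                        us_shortfall["+25% immigrant growth"]), 1))  # 16.2
print(round(pct_decline(sv_shortfall["baseline trend"],
                        sv_shortfall["+25% immigrant growth"]), 1))  # 29.2
```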

 

Appendix

Table A1: Projected Shortfall in the United States in 2020 by Industry

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 1,160,748 | 253,654 | 160,712 | -101,221 | 1,473,894
Financial Activities | 188,544 | 484,103 | 626,174 | 70,719 | 1,369,539
Healthcare Services | 233,132 | -377,690 | -819,317 | 1,539,612 | 575,736
Information Services | -21,715 | 247,788 | 307,475 | -301,303 | 232,244
Leisure and Hospitality | 718,223 | -414,696 | -298,635 | 238,684 | 243,576
Manufacturing | 769,906 | 368,923 | 477,177 | -220,408 | 1,395,597
Natural Resources | 371,465 | 17,430 | -444,331 | -421,458 | -476,894
Personal Services | 449,146 | 168,936 | 46,775 | -33,579 | 631,278
Professional and Business Services | 534,863 | -18,104 | 262,587 | -311,362 | 467,984
Transportation and Utilities | 165,624 | -179,771 | 284,597 | -281,186 | -10,736
Wholesale & Retail Trade | 702,014 | 683,779 | -480,175 | 698,417 | 1,604,036
Government and Education | 23,406 | 1,536,399 | 3,159,652 | -4,232,218 | 487,239
Total | 5,295,356 | 2,770,750 | 3,282,692 | -3,355,303 | 7,993,496
Total Private (Excluding Gov and Education) | 5,271,950 | 1,234,352 | 123,040 | 876,915 | 7,506,256

 

Table A2: Projected Shortfall in Silicon Valley in 2020 by Industry

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 9,316 | 9,280 | 2,344 | -67 | 20,873
Financial Activities | 890 | 5,019 | 2,091 | 5,559 | 13,560
Healthcare Services | 2,294 | 6,872 | 6,097 | -9,745 | 5,518
Information Services | -190 | 1,768 | 2,878 | -4,719 | -264
Leisure and Hospitality | 11,790 | 3,775 | 9,554 | -17,368 | 7,751
Manufacturing | 5,982 | 9,143 | 247 | -8,482 | 6,889
Natural Resources | 3,705 | 1,571 | -1,289 | -2,330 | 1,656
Personal Services | 5,134 | 3,841 | 3,784 | -6,824 | 5,935
Professional and Business Services | 5,009 | 3,809 | 12,824 | -30,172 | -8,530
Transportation and Utilities | 462 | 8,377 | 4,608 | -9,523 | 3,924
Wholesale & Retail Trade | 8,520 | 13,397 | -11,949 | 5,256 | 15,224
Government and Education | 1,186 | 22,528 | 40,969 | -62,435 | 2,248
Total | 54,097 | 89,378 | 72,159 | -140,851 | 74,783
Total Private (Excluding Gov and Education) | 52,911 | 66,851 | 31,190 | -78,416 | 72,536

 

Table A3: U.S. Shortfalls in 2020 by Industry if Immigrant Workers Grow 0%

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 1,144,174 | 312,330 | 215,142 | -49,596 | 1,622,050
Financial Activities | 164,987 | 466,693 | 634,263 | 321,258 | 1,587,202
Healthcare Services | 280,838 | -225,571 | -638,229 | 2,043,000 | 1,460,038
Information Services | -24,010 | 239,564 | 310,756 | -251,446 | 274,865
Leisure and Hospitality | 716,468 | -293,955 | -211,826 | 280,343 | 491,030
Manufacturing | 691,859 | 372,945 | 491,623 | -113,609 | 1,442,817
Natural Resources | 362,098 | 83,279 | -429,567 | -418,764 | -402,954
Personal Services | 458,923 | 263,902 | 175,202 | 114,238 | 1,012,266
Professional and Business Services | 600,906 | 74,552 | 403,699 | 205,770 | 1,284,928
Transportation and Utilities | 243,959 | -36,871 | 336,110 | -155,515 | 387,684
Wholesale & Retail Trade | 645,806 | 916,288 | -355,833 | 789,470 | 1,995,731
Government and Education | 117,546 | 1,526,770 | 3,199,163 | -3,846,564 | 996,916
Total | 5,403,555 | 3,699,927 | 4,130,505 | -1,081,415 | 12,152,572
Total Private (Excluding Gov and Education) | 5,286,009 | 2,173,156 | 931,342 | 2,765,149 | 11,155,656

 

Table A4: Silicon Valley Shortfalls in 2020 by Industry if Immigrant Workers Grow 0%

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 8,989 | 7,042 | 1,856 | 168 | 18,056
Financial Activities | 478 | 4,052 | 1,710 | 8,110 | 14,350
Healthcare Services | 2,112 | 6,558 | 9,269 | 5,387 | 23,326
Information Services | -175 | 1,481 | 4,160 | 353 | 5,819
Leisure and Hospitality | 8,649 | 4,118 | 10,088 | -15,400 | 7,455
Manufacturing | 4,848 | 10,952 | 9,718 | -6,953 | 18,565
Natural Resources | 2,239 | 1,894 | -1,462 | -1,962 | 708
Personal Services | 2,929 | 4,081 | 5,574 | -3,779 | 8,806
Professional and Business Services | 8,372 | 7,434 | 14,954 | -13,080 | 17,681
Transportation and Utilities | 1,268 | 9,036 | 5,463 | -5,999 | 9,768
Wholesale & Retail Trade | 6,213 | 14,570 | -10,960 | 4,430 | 14,254
Government and Education | 519 | 22,068 | 40,094 | -50,680 | 12,000
Total | 46,441 | 93,286 | 90,466 | -79,405 | 150,788
Total Private (Excluding Gov and Education) | 45,922 | 71,218 | 50,372 | -28,725 | 138,787

 

Table A5: U.S. Shortfalls in 2020 by Industry if Growth Rate of Immigrants Increases by 25%

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 1,156,623 | 238,337 | 145,886 | -115,619 | 1,425,226
Financial Activities | 183,280 | 479,876 | 624,122 | -1,626 | 1,285,653
Healthcare Services | 220,321 | -420,348 | -870,365 | 1,392,535 | 322,144
Information Services | -22,271 | 245,861 | 306,632 | -314,975 | 215,246
Leisure and Hospitality | 717,785 | -446,801 | -322,305 | 227,693 | 176,372
Manufacturing | 750,909 | 367,915 | 473,500 | -248,784 | 1,343,540
Natural Resources | 369,142 | -2,961 | -448,721 | -422,156 | -504,696
Personal Services | 446,677 | 143,140 | 9,013 | -78,452 | 520,378
Professional and Business Services | 517,667 | -43,033 | 222,325 | -462,187 | 234,771
Transportation and Utilities | 143,498 | -221,319 | 270,549 | -318,419 | -125,691
Wholesale & Retail Trade | 688,362 | 619,234 | -513,684 | 674,296 | 1,468,209
Government and Education | -5,860 | 1,534,032 | 3,149,234 | -4,342,351 | 335,055
Total | 5,166,133 | 2,493,933 | 3,046,187 | -4,010,045 | 6,696,207
Total Private (Excluding Gov and Education) | 5,171,993 | 959,900 | -103,048 | 332,306 | 6,361,152

 

Table A6: Silicon Valley Shortfalls in 2020 by Industry if Growth Rate of Immigrants Increases by 25%

Industry | Less than High School Degree | High School Degree | Some College, Associate's | Bachelor's, Master's, Professional, Doctorate | Total
Construction | 9,235 | 8,764 | 2,225 | -98 | 20,126
Financial Activities | 809 | 4,796 | 1,997 | 5,156 | 12,758
Healthcare Services | 2,249 | 6,794 | 5,233 | -13,518 | 758
Information Services | -194 | 1,712 | 2,484 | -6,259 | -2,258
Leisure and Hospitality | 11,061 | 3,688 | 9,418 | -17,773 | 6,394
Manufacturing | 5,706 | 8,666 | -2,667 | -8,846 | 2,859
Natural Resources | 3,386 | 1,481 | -1,330 | -2,421 | 1,116
Personal Services | 4,657 | 3,780 | 3,285 | -7,662 | 4,059
Professional and Business Services | 4,053 | 2,778 | 12,245 | -34,613 | -15,536
Transportation and Utilities | 242 | 8,205 | 4,378 | -10,518 | 2,306
Wholesale & Retail Trade | 7,987 | 13,097 | -12,202 | 4,950 | 13,833
Government and Education | 1,040 | 22,417 | 40,757 | -65,480 | -1,266
Total | 50,232 | 86,178 | 65,823 | -149,302 | 52,930
Total Private (Excluding Gov and Education) | 49,192 | 63,761 | 25,066 | -91,603 | 46,415

 



[1] Bureau of Economic Analysis, U.S. Department of Commerce, Interactive Data, http://bea.gov/itable/index.cfm

[2] American Community Survey 5 Year Estimates, Census Bureau, http://www.census.gov/

[3] Defined by San Mateo and Santa Clara Counties

[4] American Community Survey 5 Year Estimates, Census Bureau, http://www.census.gov/

[5] Defined by San Mateo and Santa Clara Counties

[6] Employment Projections, Employment and Development Department, State of California, http://www.labormarketinfo.edd.ca.gov/data/employment-projections.html

[7] Economic and Employment Projections, Bureau of Labor Statistics, http://www.bls.gov/news.release/ecopro.toc.htm

[8] Current Population Survey, 2014 Annual Social and Economic Supplement and 2004 Annual Social and Economic Supplement, U.S. Census Bureau, obtained online at the National Bureau of Economic Research, http://www.nber.org/data/current-population-survey-data.html

[9] Recovery: Job Growth and Education Requirements Through 2020, Center on Education and the Workforce, Georgetown University, June 2013, https://cew.georgetown.edu/report/recovery-job-growth-and-education-requirements-through-2020/

[10] Since no Census Bureau data set for Silicon Valley’s demographics in 2003 was available, we used the Census demographic data from the closest year, which was 2000. Likewise, we used 2012 demographic data for 2013, since Census has yet to publish the 2013 Silicon Valley demographic data.

[11] In measuring private sector skill gaps, we exclude all shortages or surpluses in government and education.

[12] Author’s analysis of 2014 and 2004 CPS March Supplements.

[13]Survey of Income and Program Participation 2008 Panel, U.S. Census Bureau, retrieved from the National Bureau of Economic Research, http://www.nber.org/data/survey-of-income-and-program-participation-sipp-data.html

[14] Author’s analysis of SIPP data.

[15] Author’s Analysis of Table 5 in “Yearbook of Immigration Statistics: 2004 Immigrants,” Office of Immigration Statistics, U.S. Department of Homeland Security, http://www.dhs.gov/publication/yearbook-immigration-statistics-2004-immigrants

[16] Author’s analysis of Table 7 in “Yearbook of Immigration Statistics: 2013 Lawful Permanent Residents,” Office of Immigration Statistics, U.S. Department of Homeland Security, http://www.dhs.gov/publication/yearbook-immigration-statistics-2013-lawful-permanent-residents

[17] “Employment-Based Immigration: Third Preference EB-3,” United States Citizenship and Immigration Services, Department of Homeland Security, http://www.uscis.gov/working-united-states/permanent-workers/employment-based-immigration-third-preference-eb-3

[18] Author’s analysis of Table 25, “Yearbook of Immigration Statistics: 2013 Temporary Admissions (Nonimmigrants),” Office of Immigration Statistics, U.S. Department of Homeland Security, http://www.dhs.gov/publication/yearbook-immigration-statistics-2013-temporary-admissions-nonimmigrants

[19] Amanda Bergson-Shilcock and James Witte, “Steps to Success: Integrating Immigrant Professionals in the United States,” World Education Services, September 2015, pg. 2, http://www.imprintproject.org/stepstosuccess/

[20] Ibid.

[21] “English Language Learner Adults in Silicon Valley: Community Assets, Gaps, and Career Pathways,” Silicon Valley ALLIES Innovation Initiative, April 2015, pg. 14.

[22] Ibid, pg. 16.

[23] The definition of career pathways is codified in the Workforce Innovation and Opportunity Act, Pub. L. No. 113-128, §3(7), 128 Stat. 1430. 

[24] “Career Pathways Toolkit: A Guide for System Development,” Employment and Training Administration, U.S. Department of Labor, September 2015, https://blog.dol.gov/2015/09/03/building-better-career-pathways/

[25] Ibid.

[26] Laurie Miller, “2014 State of the Industry Report: Spending on Employee Training Remains a Priority,” Association for Talent Development, November 2014, https://www.td.org/Publications/Magazines/TD/TD-Archive/2014/11/2014-State-of-the-Industry-Report-Spending-on-Employee-Training-Remains-a-Priority

[27] “First Findings from the EQW National Employer Survey,” National Center on the Educational Quality of the Workforce, Department of Education, 1995, http://files.eric.ed.gov/fulltext/ED398385.pdf

[29] Allene Gus Grognet, “Planning, Implementing, and Evaluating Workplace ESL Programs,” Center for Adult English Language Acquisition, June 1996, http://www.cal.org/caela/esl_resources/digests/planningQA.html

[30] Gigi Jones, “Who Benefits from Section 127? A Study of Employee Education Assistance Provided Under Section 127 of the Internal Revenue Code,” Society for Human Resource Management, 2010, http://i.bnet.com/blogs/tax-break-college.pdf

[31] Rebecca Klein-Collins, “Fueling the Race to Postsecondary Success: A 48 Institution Study of Prior Learning Assessment and Adult Student Outcomes,” Executive Summary, The Council for Adult and Experiential Learning, February 2010, http://www.cael.org/pdfs/pla_executive-summary

[32] Rebecca Klein-Collins, “Underserved Students Who Earn Credit Through Prior Learning Assessment (PLA) Have Higher Degree Completion Rates and Shorter Time-to-Degree,” Research Brief, The Council for Adult and Experiential Learning, April 2011, http://www.cael.org/pdfs/126_pla_research_brief_1_underserved04-2011


Introduction

The United States is home to the majority of the world’s new drug developers. With this rapidly expanding industry constantly offering Americans new health care treatments, Congress passed legislation at the turn of the 20th century to protect citizens from “quackery” and ensure the safety and effectiveness of all new pharmacological treatments. Since then the Food and Drug Administration (FDA) has expanded rapidly and become a major influence on the pharmaceutical industry.

The FDA Drug Approval Process

Since the passage of the Federal Food, Drug, and Cosmetic Act (FDCA), the FDA has played an increasingly large role in approving products intended for human consumption. Drug approvals specifically are managed by the Center for Drug Evaluation and Research (CDER).[1] The CDER must examine and approve all applications for new molecular entities (novel drug compounds), generic drugs, and over-the-counter (OTC) drugs. It takes these products, on average, about 10 years from discovery to reach the market.[2]

Once a new drug is discovered, there is a specific series of steps it must go through to acquire FDA approval. Before any applications are sent to the FDA, the drug developer must run pre-clinical laboratory tests, typically using animal samples, to rule out any significant adverse effects.[3] This process takes 3 years on average. Only 1 in 1,000 drugs will make it past this stage.[4]

After pre-clinical testing is finished, an Investigational New Drug (IND) application must be submitted to the FDA/CDER explaining the results of previous trials, how new studies will be conducted, how volunteer subjects will be chosen, the chemical structure of the new drug, how it is believed to work within human biochemistry, how it is manufactured, and any known or anticipated side effects.[5] If this application is not rejected within 30 days, it is considered approved. If approved, any clinical trials that are carried out must first be submitted to and approved by an Institutional Review Board (IRB)—a group of scientists, academics, and subject matter experts approved by the Department of Health and Human Services (HHS) who approve and oversee experiments done on human subjects to ensure they are performed ethically.[6]

Phase I clinical trials involve a small sample group of 30-80 typically healthy volunteers. The purpose is to determine safe dosages, absorption rates, duration of the drug’s effects, and any side effects. This stage typically takes about 1 year.[7]

If Phase I trials show the drug to be relatively safe, Phase II trials begin. These involve 100-300 patients and controlled studies to test minimum and maximum dosages and their effectiveness. This stage takes an additional 2 years.[8]

The final stage of pre-market testing, Phase III, is a randomized, controlled study of 1,000-3,000 patients testing safety and effectiveness of the drug. This process takes approximately 3 years.[9]

Once clinical testing has finished, the drug developer must submit a New Drug Application (NDA) including all relevant data and information about the drug – these applications often reach 100,000 pages. Approval or denial of these applications currently takes an average of 12 months. An NDA may be denied with explanation, approved outright, or approved on the condition that the manufacturer makes minor changes, such as labeling adjustments.[10]
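Taken together, the average stage durations cited above account for the roughly 10-year discovery-to-market figure. A quick check of the arithmetic:

```python
# Average duration (in years) of each pre-market stage, as cited above.
stage_years = {
    "pre-clinical testing": 3,
    "Phase I trials": 1,
    "Phase II trials": 2,
    "Phase III trials": 3,
    "NDA review": 1,  # currently averages about 12 months
}

total_years = sum(stage_years.values())
print(total_years)  # 10, matching the ~10-year discovery-to-market average
```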

All drugs with NDAs that make it to market are subject to Phase IV, or post-marketing studies, to track the voluntary reporting of adverse side effects. Less than 3 percent of approved drugs are ever recalled based on Phase IV reporting.[11]

Speeding Up the Process

Speeding up the process of drug approvals has been a major concern for pharmaceutical companies and patient advocacy groups, particularly since the beginning of the AIDS epidemic in the 1980s. Since then, several approaches to fast-tracking access to new drugs have emerged.

In 1992, Congress passed the Prescription Drug User Fee Act (PDUFA), which requires drug companies submitting new drugs (and biologics) for FDA approval to pay a fee that supports the CDER specifically, providing additional resources to facilitate faster drug approvals.[12] Fees are also applied to up to 2 years of post-marketing studies of approved drugs to aid in quality control. PDUFA has been widely considered a success. As a compromise between the companies paying the fees and the FDA, the CDER has used PDUFA funds to decrease the average approval time of NDAs from 30 months to 12 months. Since PDUFA's reauthorization in 2012, CDER has a new goal of lowering the average approval time to 10 months for a standard NDA.[13]

There is another PDUFA-related drug approval fast track for priority drugs that treat orphan diseases, rapidly spreading diseases with no other effective treatment, chronic or widespread illnesses with no more effective treatment alternative, and qualified antibiotics.[14] Fast-track approval provides drug companies with the opportunity for more frequent communication with the FDA regarding trial design and data collection, priority review, and rolling submission of NDA materials as they become available. Under PDUFA, priority review drugs typically receive a decision from CDER within 6 months of submission.[15]

Compassionate Use programs (also known as Right to Try) occasionally allow terminally ill individuals to try an experimental drug that is still going through clinical trials if the drug has been through Phase I and is not considered dangerous.

Post Approval Use of Drugs

Once a drug is approved by the FDA for any indication, its prescription and use is then left to the discretion of health care providers’ medical and ethical judgments. Off-label drug use is common and includes any use that is different from the intended and labeled indication, dose, mode of administration, patient age or gender, or duration of use.[16]

Although up to 70 percent of off-label drug use lacks scientific backing,[17] it is still incredibly prevalent, and in some cases an off-label use may even become the primary reason a drug is prescribed.[18] For example, pediatric antipsychotic drugs are prescribed off-label over 77 percent of the time.[19] Off-label use is especially common among specialties where patients are rare or unable to give consent, therefore making it difficult to secure a large enough volunteer base to run clinical trials. For example, oncology, pediatrics, geriatrics, and obstetrics all rely heavily on off-label drug uses.[20]

At first, the FDA prohibited drug manufacturers from promoting a drug's off-label uses. However, through the FDCA, Congress has allowed drug companies to distribute “enduring materials,” such as peer-reviewed journal articles discussing off-label use, under certain conditions, including a promise to seek FDA approval.[21]

Economic factors may discourage drug companies from seeking FDA approval. In many instances the patient population for off-label use is too small to produce statistically significant results, or to justify their expense for a drug whose exclusivity period is already running. To address this, Congress has granted an extension of exclusivity where a new use is approved, but this is not guaranteed to recover trial costs, nor is the new approval itself guaranteed.[22]

In the 1990’s, federal courts further expanded drug manufacturers’ interests in off-label uses by holding that promoting off-label uses is a constitutionally protected right, but that the company’s financial interest and lack of FDA approval must be disclosed.[23] Subsequent FDA guidance has attempted to better delineate the extent to which knowledge of off-label drug use may be promoted. That guidance, though on questionable constitutional ground, has not yet been challenged in the courts.


[1] http://www.fda.gov/AboutFDA/CentersOffices/OfficeofMedicalProductsandTobacco/CDER/ucm082585.

[2] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf

[3] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf

[4] http://www.medicinenet.com/script/main/art.asp?articlekey=9877.

[5] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf

[6] http://www.fda.gov/RegulatoryInformation/Guidances/ucm126420.htm

[7] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf; http://www.medicinenet.com/script/main/art.asp?articlekey=9877

[8] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf; http://www.medicinenet.com/script/main/art.asp?articlekey=9877

[9] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf; http://www.medicinenet.com/script/main/art.asp?articlekey=9877

[10] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf

[11] http://www.phrma.org/sites/default/files/pdf/rd_brochure_022307.pdf.

[12] http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm119253.htm.

[13] http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm119253.htm.

[14] http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm119253.htm

[15] http://www.fda.gov/ForIndustry/UserFees/PrescriptionDrugUserFee/ucm119253.htm

[16] http://www.consumerreports.org/cro/2012/05/off-label-drug-prescribing-what-does-it-mean-for-you/index.htm.

[17] http://archinte.jamanetwork.com/article.aspx?articleid=410250.

[18] http://healthaffairs.org/blog/2013/12/17/managing-off-label-drug-use/.

[19] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3538391/.

[20] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3538391/

[21] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2836889/

[22] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2836889/

[23] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2836889/


Introduction

Puerto Rico’s governor Alejandro García Padilla has declared that the Commonwealth’s debts are “not payable” and that the Island is confronting a “death spiral.” This has generated greater media and policymaker attention on the economic and fiscal conditions confronting Puerto Rico, and possible solutions to those challenges.

The Commonwealth’s difficulties pose interesting challenges to federal policymakers. A default by the Commonwealth on its debts would directly impact its creditors and investors. However, it would also impose hardship on nearly 4 million Puerto Rican American citizens. Should the Commonwealth’s struggles persist, the current outmigration of the Puerto Rican population to the U.S. mainland will continue or accelerate, and economic conditions on the island will deteriorate.

This primer seeks to describe the economic and budgetary context: specifically, the historical and recent trends as well as the projected trajectory for the Commonwealth’s economy and budget. This is an essential first step in order to properly evaluate the policy proposals that could ameliorate Puerto Rico’s outlook. An examination of the economic and budgetary woes confronting the Commonwealth reveals that its challenges have been long in coming and cannot be assigned to any specific federal policy, such as the shuttering of federal facilities or changes in highly preferential tax policies. Such an examination also reveals that while the island’s challenges are substantial, fundamental structural changes at the local level and sensible changes from the federal government to allow for economic growth would go a long way toward promoting a Puerto Rican recovery and a sustainable budgetary trajectory. In a forthcoming piece, we will examine policy prescriptions for the island.

Economic Outlook

The Puerto Rican economy has been in steady decline for over a decade, which has exacerbated, and will continue to exacerbate, the Commonwealth’s fiscal challenges. The strict mathematical and practical reality is that any sustainable solution to Puerto Rico’s fiscal crisis must involve an economic growth component that boosts output, employment rolls, and income.

Economic Output

Figure 1: Historical and Projected Output

Puerto Rico has seen real output plummet since 2005, while Moody’s predicts essentially stagnant territorial income for the next decade. This growth assumption reflects an expectation that the Commonwealth will be unable to strike the proper balance between fiscal consolidation and economic growth. Pure austerity – poorly targeted tax increases and indiscriminate cessation of services – would have anti-growth effects and could contribute to tepid growth in the future, while more innovative fiscal consolidation approaches could be buttressed by enhanced revenues from stronger growth.

Figure 2: Unemployment

The unemployment rate peaked in 2010, after rising nearly 50 percent from 2005. While this high rate of joblessness has attenuated, it is projected to remain persistently high – averaging nearly 13 percent over the next 10 years.

Figure 3: Labor Force Trends

A key contributor to Puerto Rico’s economic challenge has been net emigration. Over 300,000 Puerto Ricans have left since 2005, and net emigration and population decline are expected to persist for the next decade, albeit at a slower pace. This has significantly curtailed the Commonwealth’s labor force as a whole. Beyond the shrinking population, the labor force participation rate has declined apace, highlighting structural weakness in labor markets above and beyond net emigration.

Figure 4: Employment

Echoing other labor market indicators, payrolls have declined precipitously over the past ten years, while a policy-agnostic outlook assumes flat employment levels over time. Compared to recent experience, however, stable payrolls are a positive sign and could provide policymakers with a predictable wage and tax base in the contemplation of future fiscal policies.

Figure 5: Total Personal Income

Personal income growth has remained under 3 percent since 2008 and is projected to average about 2 percent over the next 10 years. While far from ideal, stable income growth paired with steady payrolls could again provide a predictable wage and tax base around which to design pro-growth fiscal consolidation.

Summary of Economic Outlook

Every major economic indicator of Puerto Rico’s economic wellbeing shows a Commonwealth in a protracted retrenchment.  However, there are indications that major emigration flows and steep economic declines may have passed, leaving a trajectory of tepid though predictable growth. How Puerto Rico addresses its fiscal challenges, however, could radically alter this outlook for good or for ill.

Budget Outlook

By any objective measure, the budget outlook for Puerto Rico is troubling. The Commonwealth, through policy choices compounded by economic difficulty, has seen its debt obligations grow inexorably over the past decade, and that growth is projected to continue. Addressing these dual challenges is essential to an improved budgetary trajectory, and doing so must reconcile immediate concerns over liquidity with the need for longer-term structural reforms.

Historical and Recent Developments

Puerto Rico has maintained structural deficits for some time, driven by economic conditions, policy choices, and unrealistic revenue and expenditure estimates.[1] Accounting for timing shifts and non-recurring budget events, structural deficits have been and will remain considerable.

Figure 6: Structural Deficits

Figure 7: General Fund Debt Service Burden

With persistent borrowing, debt service has grown as a share of general fund expenditures. More indicative of a borrower’s ability to repay debt, however, is debt service as a share of revenues. This simple metric incorporates many elements that signal a borrower’s wherewithal to manage debt: interest reflects not only past borrowing, but also the terms and creditworthiness of the borrower, as demonstrated by the interest rate underlying debt service costs; revenue reflects the strength of the economy and the ability of a government to harness national resources through tax policy. A common bright line for identifying distressed sovereign borrowers is when interest exceeds 10 percent of revenues. The Commonwealth reached this level as of March 2015.
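The bright-line test described above is simple to state precisely. A minimal sketch, with illustrative figures (the dollar amounts below are hypothetical, not the Commonwealth's actual accounts):

```python
def debt_service_ratio(interest: float, revenues: float) -> float:
    """Interest payments as a share of government revenues."""
    return interest / revenues

def is_distressed(interest: float, revenues: float,
                  threshold: float = 0.10) -> bool:
    """Common bright line: flag a sovereign borrower as distressed
    when interest exceeds 10 percent of revenues."""
    return debt_service_ratio(interest, revenues) > threshold

# Hypothetical, illustrative figures (in billions of dollars):
print(is_distressed(interest=1.1, revenues=9.5))  # True: ratio ~11.6% > 10%
print(is_distressed(interest=0.8, revenues=9.5))  # False: ratio ~8.4%
```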

Figure 8: Public Debt

General fund debt has risen rapidly, but it is only part of the problem. Puerto Rico’s debt portfolio is largely driven by entities other than the central government. The Commonwealth’s public sector debt has more than tripled over the last 15 years, rising to over $72 billion today.[2] The composition of this debt, however, is instructive, and reveals the complicated nature of financing the Commonwealth’s expenditures.

Table 1: Composition of Public Debt[3]

As of June 2015, the public sector debt for Puerto Rico stood at $71.1 billion. Over $48 billion of this total is attributed to debt issued by Puerto Rico’s public corporations[4].

Table 2: Debts of Public Corporations

These entities provide public and business-like services to Commonwealth residents, with the bulk of the indebtedness comprised of debt of the Sales Tax Financing Corp (COFINA), which in itself was originally created to finance existing debts; the Island’s power utility (PREPA), which has recently agreed to a restructuring of some of its mature obligations; the Highway and Transportation Authority (PRHTA); the Aqueduct and Sewer Authority (PRASA); and the Public Buildings Authority (PBA).

Table 3: Bond Ratings of Public Corporations

The major ratings agencies have continuously downgraded the general obligations of the Commonwealth as well as issuances of Puerto Rico’s principal public corporations, which are rated below investment grade by Fitch, Moody’s and S&P.[5]

Near Term Challenges and Outlook

Prospectively, the Commonwealth faces significant challenges in both the near and long term. Of most immediate concern is the claim that Puerto Rico is facing a liquidity crisis and will be unable to meet its payment obligations beginning in November of this year.[6] The financial advisor to the Government Development Bank for Puerto Rico estimates that this cash constraint will persist through December before improving until June.  At that point Puerto Rico is again expected to face a liquidity challenge. It is important to note that this advisor also expects that the Commonwealth will be able to manage the immediate challenge. Moreover, the June challenge does not appear insurmountable, as the cash flow analysis that underpins the June shortfall assumes certain payments, such as deferred tax refunds and “economic development fund” expenditures, that should not take priority over existing obligations. Given these near term challenges, taking action to address the island’s budgetary challenges and promote economic growth is imperative.

Longer Term Challenges

As noted above, the GDP outlook is poor, with growth remaining depressed over the foreseeable future. The Commonwealth’s future financing gap is also a source of serious concern. 

Figure 9: Projected Borrowing Needs

According to a recent estimate by Anne Krueger, former first deputy managing director of the International Monetary Fund, and others, under current policy and accounting for other factors, the Commonwealth faces a financing gap of over $64 billion over the next 10 years.[7] Under current conditions, capital markets are highly unlikely to supply this financing to the Commonwealth. Accordingly, long-lasting and meaningful policies must be pursued to confront the dual challenge of tepid growth and an unsustainable fiscal outlook and to assure future access to capital markets.

Conclusion

The Commonwealth of Puerto Rico has long struggled with tepid economic growth while continuing to issue debt at multiple layers of administration. These factors have conspired to force the Commonwealth into a series of reform initiatives that have as yet proven inadequate to fundamentally alter the island’s flat growth prospects and deteriorating labor force trends, or the structural imbalances in Puerto Rico’s finances, which, if left to persist indefinitely, would require unrealistic future borrowing. It is in this context that policy options for the Commonwealth must be evaluated. Policy options that would fundamentally put Puerto Rico on a sustainable path and promote much-needed growth may be difficult to implement, but they will be necessary to get the island back on its feet financially.


  • Since 2010, regulators have imposed $76.6 billion in regulations on the HVAC industry

  • Energy regulations could increase the price of consumer goods by more than $2,000

  • Since 2007, DOE has finalized rules with $8.2 billion in annualized regulatory costs, with a net present value impact exceeding $158 billion

In the regulatory world, the Environmental Protection Agency (EPA) generally receives the lion’s share of criticism and scrutiny. Sometimes the scrutiny comes from industry and business groups, and in other instances from progressives and environmentalists alleging lax regulation. As much as EPA is in the headlines, the Department of Energy (DOE) is typically buried somewhere in the classifieds of the regulatory arena. An examination of the data on regulatory costs, consumer impacts, and employment suggests that should change.

Since 2007, DOE has finalized rules with $8.2 billion in annualized regulatory costs, with a net present value impact exceeding $158 billion. The burdens are often justified by the agency since the purported benefits are said to exceed the costs. Yet, there have been few retrospective reviews analyzing whether the benefits of the energy savings exceed the costs to the manufacturer, and eventually, the higher prices to the consumer, such as $464 more for a new water heater.

This study (using publicly available DOE cost-benefit analyses) examines the cumulative impact of DOE regulations since 2007, including effects on consumers, various states, and industries. It looks specifically at the industry most often targeted by DOE rules, air conditioning and heating, and determines whether a past air-conditioning rule delivered on its promised benefits. The American Action Forum (AAF) found wide disparities between DOE’s projected level of product shipments versus actual figures, calling the agency’s benefit figures into question.

Cumulative Burdens

The Office of Information and Regulatory Affairs (OIRA) acknowledges that DOE has imposed the third-highest cost burden from 2002 to 2012, behind only EPA and the Department of Transportation. Given the recent push by the Obama Administration to increase energy efficiency across the economy, in an effort to curb emissions of greenhouse gases (GHG), the pace of DOE rules has increased substantially.

The chart below details the number of “major” DOE rules that OIRA has approved from 2007 to 2014, with the corresponding net present value (NPV)(unadjusted for inflation) published cost of the measures.

Year      Major Rules   NPV Cost (in millions)
2007      2             $504
2008      2             $92
2009      5             $22,736
2010      2             $32,776
2011      4             $38,351
2012      1             $5,033
2013      2             $6,561
2014      8             $37,400
Totals:   26            $143,455


As the chart displays, DOE has imposed substantial burdens on the manufacturing sector and consumers who must eventually pay higher prices. The above figure even excludes significant final rules from 2015. The agency is now averaging 3.25 major regulations annually since 2007 (compared to five a year from EPA). On an annual basis, all rulemakings (proposed and final) from the agency from 2007 to 2015 have imposed more than $9.5 billion in economic costs. This compares to an estimated $32 billion in benefits, but both figures are subject to a large amount of uncertainty on an ex ante basis (before-the-fact). The eight major DOE rules approved in 2014 were a record, according to OIRA, and there does not appear to be a slowdown anytime soon. The latest Unified Agenda outlined 11 new major rules from DOE that could be completed before 2016. For comparison, the Clinton Administration approved just six major DOE measures during its eight years in office.

Consumer Impact

Imposing $9 billion in annual economic costs since 2007 might sound like a striking headline figure, but what does that portend for the average consumer? It means, as DOE often concedes: higher prices and fewer choices. In 2014, AAF issued “The Consumer Price of Regulation,” detailing how 36 recent regulations could increase prices for everyday Americans by more than $11,000. Although corporations are often viewed as the targets of federal rules, the costs imposed must be borne by someone, and typically, consumers pay higher prices.

Energy regulations featured prominently in last year’s report and the administration routinely concedes that prices will rise from regulation. For example, in its 2011 rule requiring more efficient refrigerators, the administration noted that the average price could increase by $83. In addition, in its recent proposal for hearth products (heating equipment), the agency admitted per unit prices could escalate by $101. Here is the agency’s standard language: “Customers affected by new or amended standards usually incur higher purchase prices and lower operating costs.”

However, most of DOE’s analysis presumes an average homogenous consumer who is comfortable with a higher purchase price in exchange for keeping the product long enough to reap potential energy savings. As Sofie Miller of the George Washington Regulatory Studies Center has found, however, adjusting discount rates can turn a rule with benefits into a measure with net costs for society. Due to the higher purchase price, these efficiency regulations can have regressive effects, disproportionately burdening low-income households.

Looking broadly, a sample of the ten largest DOE rules since 2009 reveals that consumers could bear $2,380 in higher prices because of regulation. Below are the largest rules with the corresponding consumer impacts and links to the agency’s regulatory impact analyses:

Regulation                                      Annual Cost (in millions)   Price Increase
Refrigerator Efficiency Standards               $1,569                      $83
Water Heater Efficiency Standards               $1,285                      $464
Fluorescent Lamp Efficiency Standards II        $841                        $12
Fluorescent Lamp Efficiency Standards I         $700                        $3
Electric Motor Efficiency Standards             $517                        $313
Walk-In Cooler Efficiency Standards             $511                        $1,086
Lamp Ballast Efficiency Standards               $363                        $10
Residential Furnace Fan Efficiency Standards    $358                        $75
Small Electric Motor Efficiency Standards       $263                        $247
Commercial Refrigerator Efficiency Standards    $261                        $85
Totals:                                         $6,666                      $2,380


At more than $2,300 in escalating prices, the demands of regulation often result in a lighter wallet for consumers. Granted, few consumers will purchase all of the products outlined above, but a hypothetical purchase of a refrigerator, furnace fan, and water heater could easily equal a regressive “regulatory tax” of more than $620. In most instances, the consumer would have no knowledge that federal regulations drove up the price of the item.

Employment Impact

Beyond the cumulative impact and higher consumer prices, there are significant impacts on industry employment. DOE routinely admits that its rules could cause industry employment to decline and result in substantial outsourcing. In one recent air conditioning rulemaking, the administration wrote, “It is possible the small manufacturers will choose to leave the industry or choose to be purchased by or merged with larger market players.” In another proposed rule, this one for furnaces, DOE noted conversion costs would total 18 percent of revenue for small businesses and just three percent for large businesses. As a result, some entities “may re-evaluate the cost-benefit of staying in the MHGF [mobile home gas furnaces] market.” Only because these statements are buried in hundreds of pages of regulatory analysis do their implications escape broader public notice. The result is that many small entities will go out of business – and jobs will be lost – because of a federal rule.

Quantifying these statements is often difficult, but occasionally, the agency will put a number to these words. In one efficiency standards rule for hearth products, DOE predicted industry employment could drop by 51 to 908 employees. This might seem like a paltry number, but consider that overall employment in the hearth industry is projected to be 1,565 employees by 2021.

In a proposed rule for commercial refrigeration equipment, the administration outlined five industry employment scenarios. The best cases resulted in either no job losses or moderate gains of 253 jobs. In the remaining scenarios employment declines, by as much as 3,672 jobs if “all existing production were moved outside of the United States.” Indeed, outsourcing is a common theme in DOE regulatory impact analyses. And although one regulation rarely leads to 3,600 job losses immediately, DOE is keenly aware that its rules can have profound employment implications.

One theme through many of the rules highlighted here is the target industry: heating, ventilation, and air conditioning, or HVAC. This is a broad portfolio with more than 125,000 American jobs, but one that is subject to frequent DOE regulation. An examination of its employment levels during the last decade reveals that some factor, or combination of factors, has severely cut into its domestic labor totals. AAF took the following figure directly from the Bureau of Labor Statistics (BLS):

Since 2001, the HVAC and commercial refrigeration equipment industry has shed 55,572 jobs, or more than 30 percent of its total. Even more striking, the decline began well before the Great Recession, with substantial losses between 2001 and 2005, when the economy was growing. Furthermore, despite the economic recovery (albeit tepid), the industry has not witnessed a strong rebound in employment.

Undoubtedly, regulations are a factor in the employment declines. Since 2010, regulators have imposed $4 billion in final rule annualized costs on the industry, and this is solely the agency’s reported cost. It likely excludes secondary costs, the monetary impact of employment losses, and the burden of hiring regulatory compliance officers. Thus, DOE imposes a “regulatory tax” on the industry of at least $1 billion each year. That’s one billion dollars in new rules each year for an industry that generates $6.5 billion in annual wages for employees. Although the industry isn’t writing a check for this amount, someone must pay for these burdens: employees, shareholders, or consumers through higher prices. The NPV burden for final rules since 2010 is even more staggering, at $76.6 billion for the HVAC industry.

That’s hardly the end of new regulations on these companies, however. In proposed form, during the last two years, the administration projects $1.2 billion in additional annual burdens from just six new rules. On an NPV basis, this could add another $22.6 billion in costs to the industry. Tallying both final rules and measures still in their proposed form, DOE has imposed roughly $5.3 billion in annual burdens and nearly $100 billion in NPV costs on the HVAC industry. Given the president’s commitment to regulation and energy efficiency, it is likely these numbers will escalate, causing a combination of lower worker pay, diminished shareholder returns, or fewer employees.

A Retrospective: Questionable Assumptions

Every administration touts the benefits of its regulatory agenda. This is typically accomplished by adding the monetized benefits of the largest major rules (measures with annualized costs or benefits exceeding $100 million) and comparing that sum to monetized costs. The Obama Administration’s new Social Cost of Carbon (SCC) offers another tool to justify expensive regulation. For example, depending on the discount rate, its SCC calculation for the “Clean Power Plan” varies from $6.4 billion annually to $61 billion. With a few rare exceptions, most new standards proclaim that benefits always exceed costs. However, these figures are a prospective estimate of benefits and costs. Rarely do agencies or outside scholars dig through the actual, post-implementation data to determine if the projections are accurate. For two rules, the 2001 standard to raise air conditioning efficiency by 30 percent and a 2009 conservation standard for microwaves, AAF found significant discrepancies between agency projections and actual results.

It is difficult to untangle the effect of federal regulation on the economy after implementation, which is one reason why prospective estimates are widespread and retrospective studies are relatively few. However, a recent study has offered some evidence that energy efficiency programs fail to deliver the promised benefits. The study on home weatherization programs found, “[T]he costs still substantially outweigh the benefits; the average rate of return is approximately -9.5% annually.” However, it does not appear that these findings are giving regulators pause.

Rather than examine the macroeconomic impact of these efficiency rules, AAF examined estimates of product shipments. For example, if DOE projected 16 million shipments of new energy efficient microwaves, but shipments actually came in below nine million – whether because the regulation increased the price of the product or because of other macroeconomic forces – the benefits to the economy would be far less. This is because consumers hold their “inefficient” products longer, reducing new sales and cutting the possible energy savings and environmental benefits of the newer, more efficient products. Regulators are fully aware that regulations raise the price of goods, incentivizing consumers to purchase fewer products, but agencies appear to routinely discount this effect, overstating the actual benefits of regulation.

Air Conditioning Rule

The 2001 efficiency rule for air conditioners took a winding road on its way to boosting standards by 30 percent. The original 2001 rule raised efficiency by 30 percent, but a 2002 amendment set the achievable limit at 20 percent. After court action, the more stringent standards were adopted and set for implementation in 2006 (see footnote 216).

The 2006 standards claimed that they would save three quads (three quadrillion BTUs) of energy over the lifetime of the rule. Additional standards in a 2011 rule claimed to save up to 4.22 quads of energy by 2045. According to initial figures from the Energy Information Administration (EIA), however, residential cooling savings have been mixed, partly because the number of newer units is lower than what the agency originally predicted.

The 2006 standards helped to create a sharp drop in the number of air conditioning shipments. The agency anticipated a slight drop of 130,000 shipments. Instead, shipments declined by more than 1.55 million, according to agency and industry estimates. Thus, the energy required for residential cooling use likely didn’t decline as expected between 2007 and 2010; it increased. See below.

From 2007 to 2010, energy use for residential cooling increased from 0.87 quadrillion BTUs to 0.92 quadrillion BTUs, or 5.7 percent, despite the economic recession. For comparison, total U.S. energy use fell five percent from 2008 to 2009. Over the longer horizon, residential cooling use has declined slightly. Although it is difficult to attribute all of that decline to the two major regulations, the increase in residential cooling usage from 2007 to 2010 should at least invite scrutiny of the initial benefits of the rule, especially when projected shipments fell so precipitously.

The initial DOE analysis conceded that consumers would “forgo the purchase of more efficient air conditioners and heat pumps due to their higher purchase price.” The extent of this decline, and the shaky assumptions from DOE, are illustrative. See below.

The data above compare DOE’s analysis of its 2011 air conditioning rule to the 2006 standards. In the former, DOE included historical data on shipments of air conditioners. AAF compared this data to the initial projections from the earlier 2006 standards. DOE was hardly accurate with its projections. In 2005, the year before the new measures took effect, there was a tremendous surge in purchases. That year, purchases eclipsed 5.9 million, a record since 2000. DOE, by contrast, had initially projected just 3.7 million.

As noted, the agency predicted a slight drop in shipments when the rule was scheduled to take effect, but only a decline of 2.1 percent. What happened in reality? A decline of 26.1 percent, at a time when the average unemployment rate hovered between 4.4 and 4.8 percent. The following year, from 2006 to 2007, when the economy was still strong, shipments fell by another ten percent, compared to DOE’s projection of a two percent increase in shipments. For perspective, from 2005 to 2009, the agency projected new energy efficient air conditioner shipments would increase by 2.6 percent. Instead, they declined by 2.8 million shipments, or 47.2 percent.

This data should call into question the assumptions of one of the most prolific regulators in the nation, an agency that has added $158 billion in cumulative costs since 2007 alone. Consumers shouldn’t be blamed for forgoing the purchase of more efficient, but far more expensive air conditioners. The initial DOE rule projected price increases ranging from $144 to $213, with the expectation that the average consumer would keep the new unit for 18.4 years.

What does the extreme drop in shipments mean for the overall benefits of the regulation? According to OIRA, the 2006 standards will impose $1.1 billion in annual costs and just $1.2 billion in annual benefits. Thus, it wouldn’t take too many erroneous assumptions for the costs to easily trump the benefits of the regulation. Take 2008 and 2009 as examples. Between those two years, average shipments were 13.6 percent lower than projections. A crude way of addressing the benefits suggests that a 13 percent decline in shipments would yield just $1.07 billion in benefits during that time. If costs were as projected, or even 5 percent lower, then they likely exceeded benefits from 2008 to 2009. In other words, DOE’s shipment projections could easily turn a rule that barely had net benefits for society into a regulation that imposes more costs than benefits.
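The crude adjustment described above can be sketched by scaling projected benefits linearly with realized shipments. This linear scaling is an assumption for illustration (the $1.07 billion figure in the text implies a slightly different adjustment), but the qualitative conclusion is the same:

```python
# OIRA figures for the 2006 air conditioning standards.
ANNUAL_COSTS = 1.1e9      # annual costs
ANNUAL_BENEFITS = 1.2e9   # projected annual benefits

def scaled_benefits(projected_benefits: float, shipment_shortfall: float) -> float:
    """Crudely assume realized benefits fall in proportion to the
    shortfall in shipments versus projections."""
    return projected_benefits * (1.0 - shipment_shortfall)

# 2008-09: shipments ran 13.6 percent below projections.
realized = scaled_benefits(ANNUAL_BENEFITS, 0.136)
print(f"Scaled benefits: ${realized / 1e9:.2f}B vs. costs of ${ANNUAL_COSTS / 1e9:.2f}B")
print("Net benefits?", realized > ANNUAL_COSTS)
```

Under this assumption, realized benefits fall below the rule’s $1.1 billion in annual costs, which is the article’s point.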

Microwave Rule

In 2009, DOE finalized a rule covering various consumer products, including dishwashers, dehumidifiers, microwaves, ranges, and ovens. Although the rule’s annual costs and benefits were less than $100 million and thus not economically significant, the benefit-to-cost ratio was projected to be a positive 2:1. However, for the microwave portion of the rule, DOE’s initial estimates on shipments of newer, more efficient, machines were off the mark.

The following chart compares DOE’s projection of microwave oven shipments from 2006 to 2017. As detailed below, the agency’s projection, compared to industry data on shipments, is drastically different.

In 2009, the year of the rule, DOE projected 14.4 million microwave shipments. In reality, there were just 9.6 million shipments that year, a difference of 33 percent. In 2014, manufacturers were projected to ship 14.8 million efficient microwaves, compared to the actual amount of roughly ten million, a difference of 48 percent. Examining the history of microwave shipment projections versus reality yields an average disparity of 34 percent.
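The 33 percent and 48 percent figures appear to be measured against different bases – the projection in the first case, actual shipments in the second. A short sketch reproduces both from the shipment numbers above, assuming roughly 10 million actual 2014 shipments as stated in the text:

```python
def gap_vs_projection(projected: float, actual: float) -> float:
    """Shortfall measured against the projection."""
    return (projected - actual) / projected

def gap_vs_actual(projected: float, actual: float) -> float:
    """Shortfall measured against actual shipments."""
    return (projected - actual) / actual

# 2009: 14.4M projected vs. 9.6M actual shipments.
print(f"2009 gap vs. projection: {gap_vs_projection(14.4, 9.6):.0%}")  # 33%
# 2014: 14.8M projected vs. ~10M actual shipments.
print(f"2014 gap vs. actual: {gap_vs_actual(14.8, 10.0):.0%}")         # 48%
```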

What does this mean for benefits? Although the rule didn’t divide its original cost-benefit analysis among all of the product classes, it’s difficult to believe the original benefit claims are true if shipments are significantly lower than projections. However, if shipments among all regulated products were 34 percent lower than DOE’s original estimate, it’s not difficult to believe an actual cost-benefit ratio closer to 1:1, or half of the agency’s original projection.

Conclusion

Whether it’s air conditioning units or microwaves, actual data on deliveries reveal that DOE incorporates several false assumptions into its estimates, significantly over-counting benefits. Beyond the agency’s assumptions, there are real consequences from the cost side of the ledger. At more than $155 billion in total costs in recent years, these burdens have a profound impact on manufacturers’ employment and consumer prices.


  • Costs between $306.6 billion and $1.9 trillion per year to provide 16 weeks of paid family leave

  • $61.4 billion of the costs covered by the new payroll tax

  • $12,900 average cost per worker who takes 16 weeks of paid leave

  • 56.7 percent of benefits go to people making over $52,000 per year

Key Findings

The Washington D.C. Council recently introduced legislation to provide workers with up to 16 weeks of paid family leave. Hailed by some, it may serve as a model for national paid leave. But if the D.C. proposal were implemented nationwide, the costs and deficits would be immense. AAF analysis found it would cost between $306.6 billion and $1.9 trillion per year to provide 16 weeks of paid family leave nationwide. For each worker who takes 16 weeks paid leave, it would cost the government on average $12,900. Perversely, the higher the worker’s income, the larger the benefit received; 56.7 percent of benefits from the policy would go to workers who individually earn over $1,000 per week, or $52,000 per year.

Meanwhile, the financing would be a payroll tax on employers between 0.5 percent and 1 percent of wages and salaries. Implementing that tax nationwide would only raise about $61.4 billion in revenue.

What exactly is the D.C. Council proposing?

The bill would provide anyone who has held a job in the District (either full-time or part-time) within the past year with up to 16 weeks of paid time off to care for an infant, recover from an illness, readjust after military deployment, or assist an ill family member. For those 16 weeks, the district would provide benefits equal to 100 percent of earnings for those making up to $1,000 per week or $52,000 per year. Workers who make more than that would receive $1,000 per week plus 50 percent of their additional pay. The maximum weekly benefit would be $3,000.
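The benefit schedule described above can be sketched as a simple function. This is a minimal illustration of the schedule as described, not the bill’s statutory text:

```python
def weekly_benefit(weekly_pay: float) -> float:
    """Weekly paid-leave benefit as described in the bill: 100 percent
    of pay up to $1,000/week, plus 50 percent of pay above that,
    capped at $3,000/week."""
    if weekly_pay <= 1000:
        benefit = weekly_pay
    else:
        benefit = 1000 + 0.5 * (weekly_pay - 1000)
    return min(benefit, 3000.0)

print(weekly_benefit(800.0))   # 800.0  -- full wage replacement
print(weekly_benefit(2000.0))  # 1500.0 -- $1,000 plus half the extra $1,000
print(weekly_benefit(6000.0))  # 3000.0 -- the weekly cap binds
```

Note that the $3,000 cap first binds at $5,000 per week, the same break point used in Table 1 below.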

To receive paid leave benefits, workers in D.C. would submit claims and supporting documentation through an online portal that is run by the local government. Within 10 days, the Mayor’s office would determine whether a worker is eligible, when the paid leave period would start, the maximum duration of the leave, and the weekly benefit amount. Determinations could then be appealed if the worker is unsatisfied with the decision by the Mayor’s office.

To cover the cost of this expansive paid leave policy, the bill establishes a Family and Medical Leave Fund that would primarily receive revenue from a payroll tax on D.C. employers.[1] The law stipulates that if the fund’s balance does not exceed a year of projected expenses, then the tax on employers will equal 1 percent of all worker salaries. But, if the fund’s balance does exceed annual expenses, then the tax would be based on worker earnings. In particular, the tax would range from 0.5 percent of the annual pay of workers making $10,000 to $20,000 per year to 1 percent of the pay of workers making $150,000 or more per year. Employers would not have to pay a tax on worker salaries that are less than $10,000.
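The bracket structure can likewise be sketched as a per-worker tax function. The intermediate rates (0.6 and 0.8 percent) are taken from Table 2 below, since the prose gives only the endpoints, and applying the under-$10,000 exemption in both regimes is an assumption:

```python
def employer_tax(annual_salary: float, fund_funded: bool = True) -> float:
    """Per-worker employer payroll tax under the proposal. If the fund
    does not hold a year of projected expenses, a flat 1 percent applies;
    otherwise the rate steps up with the worker's salary."""
    if annual_salary < 10_000:
        return 0.0                      # salaries under $10,000 are exempt
    if not fund_funded:
        return 0.01 * annual_salary     # flat 1 percent fallback
    if annual_salary < 20_000:
        rate = 0.005
    elif annual_salary < 50_000:
        rate = 0.006
    elif annual_salary < 150_000:
        rate = 0.008
    else:
        rate = 0.01
    return rate * annual_salary

print(employer_tax(15_000))    # 75.0
print(employer_tax(60_000))    # 480.0
print(employer_tax(200_000))   # 2000.0
```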

The Cost of Implementing D.C.’s Paid Leave Program Nationwide

Providing paid family and sick leave is supported by the White House, and many policymakers would like to make paid family leave available across the entire United States. Using the D.C. program as a model, it is useful to examine the costs of a paid leave program nationwide.

The cost of a paid family leave program depends primarily on how many workers actually take paid time off and for how long. Since the goal is to identify the rough order of magnitude of the proposal, we provide a range of cost estimates using data from the Current Population Survey March 2015 Annual Social and Economic Supplement. For our lower bound estimate, we assume that 16 percent of workers each year would take 16 weeks of paid family leave, a take-up rate that matches the percent of covered workers who took unpaid leave under the Family and Medical Leave Act in 2012. We consider this a lower bound because take-up of paid leave would likely exceed that of unpaid leave. The upper bound is simple: we calculate the cost if all workers in the United States took 16 weeks of paid time off. We do not actually expect all workers to take time off, or the cost to reach the upper bound estimate, but we consider it the total cost exposure of the paid leave program.
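A minimal sketch of the bounding arithmetic. The workforce figure of roughly 148.5 million is an assumption implied by the article’s own totals, not a number stated in the text:

```python
AVG_LEAVE_PAY = 12_900    # average benefit per worker taking 16 weeks
WORKERS = 148.5e6         # assumed workforce, implied by the article's totals
FMLA_TAKE_UP = 0.16       # 2012 FMLA unpaid-leave take-up rate

lower_bound = WORKERS * FMLA_TAKE_UP * AVG_LEAVE_PAY   # 16 percent take leave
upper_bound = WORKERS * AVG_LEAVE_PAY                  # everyone takes leave

print(f"Lower bound: ${lower_bound / 1e9:.1f} billion per year")
print(f"Upper bound: ${upper_bound / 1e12:.1f} trillion per year")
```

These reproduce the roughly $306.6 billion and $1.9 trillion bounds reported in Table 1.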

Table 1 illustrates how much it would cost the government to provide 16 weeks of paid leave by weekly pay range.

Table 1: Cost of 16 Weeks Paid Leave

Weekly Pay Range     Average Leave Pay Per Worker   Lower Bound       Upper Bound
Below $1,000         $8,200                         $132.7 billion    $829.4 billion
$1,000 to $5,000     $22,000                        $162.5 billion    $1,015.4 billion
$5,000 and up        $48,000                        $11.4 billion     $71.4 billion
Average/Total        $12,900                        $306.6 billion    $1,916.2 billion

Overall, we estimate that it would cost between $306.6 billion and $1.9 trillion per year to provide 16 weeks of paid family leave nationwide. For each worker who takes 16 weeks paid leave, it would cost the government on average $12,900. Moreover, the benefit is somewhat perversely targeted as high income workers would receive the largest benefits. Those making between $1,000 and $5,000 per week would receive $22,000 in benefits on average and those making over $5,000 per week would receive $48,000. As a result, 56.7 percent of benefits from the policy would go to workers who individually earn over $1,000 per week or $52,000 per year.

Taken at face value, the scale of the program implies that the employer tax proposed by the D.C. Council would not come close to covering the program’s national expenses.

Table 2 contains the estimated annual revenue from each tax bracket proposed by the law.

Table 2: Tax Rates and Revenue

Individual Annual Salary    Tax      Revenue
$0.01 under $10,000         0%       $0
$10,000 under $20,000       0.5%     $1.7 billion
$20,000 under $50,000       0.6%     $12.6 billion
$50,000 under $150,000      0.8%     $30.8 billion
$150,000 and over           1%       $16.3 billion
Average/Total               0.62%    $61.4 billion

We estimate that implementing D.C.’s proposed employer tax structure nationwide would only raise about $61.4 billion in revenue. That is only 20 percent of the lower bound cost estimate and 3.2 percent of the upper bound cost estimate in Table 1. Even if the payroll tax were 1 percent for each bracket, which would occur in D.C. if the Family and Medical Leave Fund were not properly financed, it would still only raise a total of $79.9 billion in revenue. To just cover the lower bound cost estimate of $306.6 billion, the tax rate would need to be almost 4 percent at all pay levels.
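The “almost 4 percent” figure follows from the revenue numbers above: if a flat 1 percent tax raises $79.9 billion, the implied taxable payroll base is about $7.99 trillion. A sketch:

```python
FLAT_1PCT_REVENUE = 79.9e9                 # revenue from a flat 1% payroll tax
payroll_base = FLAT_1PCT_REVENUE / 0.01    # implied taxable payroll: ~$7.99T

LOWER_BOUND_COST = 306.6e9                 # lower bound annual program cost
required_rate = LOWER_BOUND_COST / payroll_base
print(f"Required flat payroll tax: {required_rate:.1%}")  # 3.8% -- "almost 4 percent"
```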

Of course, the D.C. program allows the Mayor’s office to scale requests below 16 weeks, and it may be the case that not all applicants would ask for the full benefit. However, at the full 1 percent rate for everyone, the receipts cover only about 25 percent of lower bound benefits. Put differently, the average length of the leave could only be 4 weeks.

Conclusion

The paid family leave bill proposed by the Washington D.C. Council is a new entitlement program. Nearly everyone would pay into the program and anyone who qualifies would be entitled to its benefits. Like our current entitlement programs, however, implementing this one for the entire United States could be extremely expensive. In particular, we estimate that it would cost between $306.6 billion and $1.9 trillion for the federal government to provide 16 weeks of paid leave. Moreover, D.C.’s employer tax of 0.5 percent to 1 percent of worker salaries would at most only cover one-fifth of the program’s expenses. In order to fund 16 weeks of paid family leave, the government would actually need to impose a nearly 4 percent payroll tax.



[1] According to the law, the fund could also receive revenue from the D.C. government, eligible individuals who are unemployed or self-employed and still want to participate in the program, interest earned from money in the fund, and “all other money received from any other source.”


Summary

  • If the U.S. was not subjected to the crude export oil ban 40 years ago, gross revenues realized could have exceeded $1 trillion dollars.
  • Before sanctions were set in 2012, Iran was grossing over $100 billion a year in oil revenue.
  • Lifting Iranian sanctions would have a severe impact on U.S. domestic oil production as the world market would become oversaturated resulting in even lower oil prices.

The ban vs the deal

The crude oil export ban, enacted as part of the Energy Policy and Conservation Act, was designed to conserve domestic supplies and reduce the nation’s reliance on foreign oil. The act was a response to the 1973 oil embargo, which saw oil prices rise from $3 to $12 a barrel practically overnight and was seen as the first time since the Great Depression that oil prices “[had] a lasting economic toll on the country.”[1] While the ban was considered an appropriate policy tool 40 years ago, today we live in a much different world. Simply put, the ban is past its expiration date.

A combination of events has positioned this issue at the forefront of domestic policy right now; one in particular is the Iranian nuclear deal. Under the deal brokered by the White House, Iran would be allowed to begin exporting its oil again on the world market. For the past 3 years, Iran has exported around 1.3 million barrels a day,[2] or over $115 billion in gross revenue over that period, down from 2.5 million barrels a day in 2012. Pre-sanctions, Iran was grossing over $280 million a day, or just over $100 billion a year. When the U.S. imposed sanctions on the country, it devastated the nation’s ability to sell. These losses have crippled the oil-dependent Iranian economy, creating unemployment spikes and rising inflation.

If Iran can export oil and reap the benefits once the Iran deal is final, why can’t the U.S.? An oil export ban for the United States is detrimental to the economy. The low price of oil is already creating tension both domestically and globally.

Turning the Tables

Let’s say for a moment that the U.S. had been given the same luxury of exporting oil, but instead of a 3-year hiatus from the market let’s use a 40-year benchmark. Using the average price of oil over the last 40 years with inflation factored in, if the U.S. had been allowed to export 1 million barrels a day since 1975, the gross figure would be roughly $790 billion. Using the same methodology, on the low end of the spectrum, 500,000 barrels per day would have generated roughly $395 billion over the same period, and at 1,500,000 barrels per day the U.S. would have seen nearly $1.2 trillion in gross revenue. To see the 40-year calculations based on barrels per year adjusted with March 2015 inflation rates click here.
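This arithmetic can be reconstructed roughly; the sketch below is an illustration, not the authors' exact model. The stated $790 billion total at 1 million barrels per day implies an average inflation-adjusted price of about $54 per barrel, and the other volume scenarios scale linearly from there.

```python
# Rough reconstruction of the 40-year export arithmetic above. The stated
# $790 billion at 1 million barrels per day over 40 years implies an average
# inflation-adjusted price of roughly $54 per barrel.
YEARS = 40
TOTAL_DAYS = 365 * YEARS
STATED_TOTAL_AT_1M_BPD = 790e9   # gross revenue figure from the text

implied_price = STATED_TOTAL_AT_1M_BPD / (1_000_000 * TOTAL_DAYS)

def gross_revenue(barrels_per_day: float) -> float:
    """Gross export revenue over the 40-year window at the implied price."""
    return barrels_per_day * implied_price * TOTAL_DAYS

for bpd in (500_000, 1_000_000, 1_500_000):
    print(f"{bpd:>9,} b/d -> ${gross_revenue(bpd) / 1e9:,.0f} billion over {YEARS} years")
```

At 500,000 b/d this yields half the baseline, about $395 billion, and at 1,500,000 b/d about $1.19 trillion, consistent with the figures above.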

According to a study released by the American Petroleum Institute[3] removing the ban would “lead to further increases in domestic oil production, resulting in lower gasoline prices while supporting nearly 1 million additional jobs at the peak . . . It would lead to a total of $746 billion in additional investment during the study period (2016-2030) and an average of 1.2 million barrels per day (M b/d) would be produced.” 

The Uncertain Future of Oil Prices

Of course, exporting oil from the U.S. is less advantageous when the price of Brent crude is low, as it is now.  Putting more supply on the market will lower prices, reducing the gross revenues that the U.S. is poised to realize. Likewise, if Iran is allowed to begin exporting again on the world market, the increased supply may well push prices down. The market reacts with a lag, so changes in oil prices do not always happen immediately. We can estimate the impact of Iran’s oil entering the market by looking at other instances when there were similar surges in supply:

  • Between November and December 2014, supply jumped by 0.6M b/d, consumption increased by 3.5M b/d, and prices dropped by roughly $17/barrel (about a 21 percent decrease).
  • Between February and March 2015, supply increased by 0.8M b/d, consumption decreased by 3.2M b/d, and prices dropped by about $2.30/barrel (about a 4 percent decrease).
  • Between September and October 2015, supply increased by 0.9M b/d, consumption increased by 1.2M b/d, and prices dropped by roughly $9.60/barrel (about a 10 percent decrease).[4]

The data indicates that prices will drop when Iran’s oil enters the market. Some estimates indicate that it would fall by as much as $10 a barrel by next year.

Conclusion

Lifting the ban on oil exports would have a beneficial impact on the U.S. economy. As policymakers consider ending the ban, they should consider these benefits as well as the impact and consequences that removing the sanctions on Iran will have on U.S. production. Lifting the crude oil export ban before Iranian sanctions are removed would give U.S. producers the opportunity to export on the global market without the added burden of an even lower price of oil. These two sets of export restrictions won’t coexist harmoniously. One will improve our domestic economy and strengthen our geopolitical relationships and the other will weaken our economy and negatively affect domestic production.


[1] http://www.history.com/this-day-in-history/opec-enacts-oil-embargo

[2] http://www.rferl.org/content/iran-could-produce-million-barrels-oil-without-sanctions/27052897.html

[3] http://www.api.org/~/media/files/policy/exports/economic-studies-crude-oil-exports.pdf

[4] EIA.gov


Executive Summary

  • Global insurers play a vital role in the international economy. Their depth and breadth enable them to provide specially-tailored products to their clients.
  • Regulation of global insurance is fragmented with different standards in different jurisdictions. Global insurers must comply with the varying regulatory regimes of each jurisdiction in which they operate.
  • The International Association of Insurance Supervisors is finalizing new sets of standards, but it does not have the authority to implement or enforce those standards. They still must be adopted by the individual jurisdictions. 

Introduction

With American headlines rife with financial news from China, Greece and other countries halfway around the world, there is no doubt that the globalized economy is propagating risks more than ever. For many, globalization brings great fear and uncertainty – the fact that international market forces affect financial stability is worrisome in and of itself. But in the insurance world, global insurers actually have an advantage. Though globalization increases interconnected risks, it allows global insurers to operate under the law of large numbers and mitigate those risks by creating expansive portfolios of diversified risks across the globe.

At the same time, as global insurers serve clients in hundreds of international jurisdictions, they are forced to operate under hundreds of regulatory environments. This fragmented regulatory regime raises costs, adds complexity, and otherwise limits market expansion for insurance groups trying to meet cross-border needs. Such hindrances reduce foreign direct investment (FDI), trade and growth in the global economy, effects that are particularly detrimental to emerging markets and to international growth opportunities for American businesses. The benefit, however, is that because local regulations have developed over the course of many years, countries and local entities are better able to tailor insurance requirements to the needs of their individual populations.

For insurance regulation to be truly effective, regulators must take into account the reality of globalization, along with the need to efficiently disperse capital, diversify risks, and curtail costs. If global insurers are able to operate under a more efficient and appropriately tailored regulatory regime, they will be better equipped to carry out their role in the international economy: providing companies of all sizes in all parts of the world with customized protection from interconnected risks. 

Current Regulatory Environment for Global Insurers

The United States is the largest insurance market in the world, capturing 27 percent of the international market at $1.3 trillion in premiums in 2013. While insurance companies operating in the U.S. are primarily regulated at the state level, global insurers that operate in multiple jurisdictions abroad must comply with each country’s regulatory structure. For companies like Prudential, which operates in over 30 countries, MetLife, which operates in 50 countries, and AIG, which operates in 95 countries, regulatory compliance is no easy task. 

The push to harmonize global insurance standards started in the aftermath of the financial crisis, when industry leaders called for insurance-specific solutions to the issues being discussed at the G20’s April 2, 2009 summit. Shortly thereafter, the G20 made global financial stability its top priority, determining that internationally active insurance groups (IAIGs) play an important role in achieving that stability. As a result, the Financial Stability Board (FSB) was formed and was charged with “monitor[ing] and assess[ing] vulnerabilities affecting the global financial system and propos[ing] actions needed to address them.” The FSB tasked the International Association of Insurance Supervisors (IAIS) with developing a global regulatory framework for IAIGs. 

FSB and IAIS have proposed revised capital standards, methods for identifying risks to financial markets, and generally streamlined insurance regulation. These initiatives aim to achieve three goals: enhancing financial stability, providing more effective and efficient jurisdictional coordination, and establishing consistent best practices. Many of the initiatives are similar to domestic efforts created in Dodd-Frank, but are targeted specifically at global insurers, IAIGs, and global systemically important insurers (G-SIIs). Each of the initiatives is summarized in the table below.

[Summary table not reproduced.]

Source: American Action Forum

IAIGs are defined by their size and their global footprint. Insurance groups with written premiums in at least three jurisdictions and with more than $50 billion in assets are considered IAIGs. Currently, IAIS has recognized over 50 IAIGs, only a handful of which are American companies.
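The size and footprint test just described can be sketched as a simple classification rule. This is a minimal illustration of the criteria as stated above (premiums written in at least three jurisdictions and more than $50 billion in assets); the field names are our own, and the example companies are hypothetical.

```python
# Minimal sketch of the IAIG size/footprint test described above: premiums
# written in at least three jurisdictions and more than $50 billion in assets.
# Field names are illustrative; the examples are hypothetical companies.
from dataclasses import dataclass

@dataclass
class InsuranceGroup:
    name: str
    jurisdictions_with_premiums: int
    total_assets_usd: float

def is_iaig(group: InsuranceGroup) -> bool:
    """Apply the IAIG size and footprint criteria."""
    return (group.jurisdictions_with_premiums >= 3
            and group.total_assets_usd > 50e9)

print(is_iaig(InsuranceGroup("GlobalCo", 12, 120e9)))   # broad footprint, large assets
print(is_iaig(InsuranceGroup("RegionalCo", 2, 80e9)))   # too few jurisdictions
```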

On the flip side, G-SIIs are defined by the systemic risk they pose. Nine companies have been identified as being sufficiently systemically risky to qualify as a G-SII. Three of those (AIG, Prudential, and MetLife) are American companies and have been simultaneously classified as systemically important financial institutions (SIFIs) by the Financial Stability Oversight Council (FSOC), a product of Dodd-Frank. The IAIS is expected to revise its G-SII assessment methodology by the end of this year, which could increase the number of insurance groups designated as systemically important.

While IAIS is responsible for setting and advocating for the standards discussed above, it is important to note that IAIS is just that: a standard-setting body. It has no legal authority to implement or enforce any standards in member jurisdictions. Each individual country must independently implement any final agreement under its own legal framework. In the United States, for example, without an act of Congress, state insurance commissioners are the only ones legally authorized to require insurance companies to follow IAIS standards. Even if IAIS ultimately proposes a comprehensive set of uniform standards that would regulate international insurers at the group level, these companies could continue to face fragmented regulation until those standards are adopted by each jurisdiction in which they serve. 

The Role of Global Insurers

Global insurers face risks of their own when it comes to regulation: ineffective capital standards reduce insurers’ ability to diversify risk; redundant regulations raise compliance costs and eventually customer costs; and regulations that fail to incentivize international engagement hinder insurance companies’ necessary role in foreign direct investment (FDI). The ability to operate efficiently helps global insurers play an increasingly vital role in the international economy, and the benefits of that role can be broken out into four key areas: 1) larger and more diverse balance sheets, which create a better ability to distribute risk; 2) the ability to handle interconnected risks; 3) the ability to offer products specifically tailored to a particular market; and 4) global insurers’ role in FDI.

Global insurers have larger and more diverse balance sheets 

Unlike banks, insurance companies, particularly global insurers, are driven by the law of large numbers in which diverse and unconnected risks are aggregated. Simply put, the larger the sample size, the more likely it is that actual losses reflect expected losses. Global insurers are best situated to take advantage of the law of large numbers due to their sheer ability to capture customers in markets that domestic insurers don’t have access to. 
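The law of large numbers at work here can be shown with a toy simulation: as the number of independent policies grows, the realized claim rate converges on the expected rate. The 5 percent claim probability is invented purely for illustration.

```python
# Toy simulation of the law of large numbers described above: with more
# independent policies, realized losses track expected losses more closely.
# The 5 percent claim probability is an invented illustrative figure.
import random

random.seed(7)

def realized_claim_rate(n_policies: int, claim_prob: float = 0.05) -> float:
    """Share of policies that file a claim in one simulated year."""
    claims = sum(1 for _ in range(n_policies) if random.random() < claim_prob)
    return claims / n_policies

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9,} policies -> realized claim rate {realized_claim_rate(n):.3f}")
```

With 100 policies the realized rate can swing well away from 5 percent; with a million it lands within a fraction of a point, which is the advantage a globally diversified book provides.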

Natural disasters provide a good illustration of the benefits of global insurers’ size and diversity. Take, for example, Hurricane Andrew. In the aftermath of the 1992 hurricane in South Florida, more than 930,000 policyholders lost coverage after eleven insurance companies went bankrupt, as a result of those companies’ balance sheets being too heavily weighted towards property insurance in the storm zone. Each of those eleven companies was a domestic insurer. The global insurers that served the area were able to stay afloat because of the law of large numbers and their diversification of interests. 

Global insurers are able to handle interconnected risks

Not unlike the logic behind the law of large numbers is the idea that global insurers are most capable of mitigating globally-interconnected risks. A recent study by Bloomberg found that 89 percent of executive level decision makers and 91 percent of business risk managers believe that the global risk landscape will become more closely intertwined over the next several years. These interconnected risks are not always easy to predict. That’s where global insurers’ expansive reach comes in handy. Their ability to use a global network to not only have experts on the ground to assess the risk environment but also to help companies set up reinsurance captives creates the optimal breadth and depth of coverage to protect their clients’ business activities around the world. 

To anyone paying attention, the 2010 volcanic eruption in Iceland didn’t come as much of a surprise. Nor did the airline industry’s $1 billion revenue hit, after months of ash spewing into one of the world’s busiest air spaces. What did catch the world off guard was the ripple effect on other industries. With thousands of flights grounded each day, companies shipping perishable goods around the world were unable to make deliveries on time and were left with inconsumable products and lost profits. Volcanic ash from Iceland affected food and flower companies as far away as Brazil and Japan. This is interconnected risk, and it was the insurers with a footprint in multiple jurisdictions affected by such world events that were able to keep a bad situation from getting much worse.

Global insurers can offer products specifically tailored to a particular market

Having “boots on the ground” in the communities they insure gives global insurers the ability to gain an intrinsic familiarity with specific markets. Insurance is a detail-intensive business that requires products specific to the businesses being insured. Oftentimes local insurers in high-risk areas aren’t equipped to offer coverage for certain political or supply chain risks. Global insurers have the double advantage of not only offering those types of products but also being able to have agents assess the exact needs of the customer within a particular community. Global insurers are also able to provide what is sometimes referred to as “sleep easy coverage”: protection that continues even after limits on a local policy have been exhausted. The advantages that come from a macro company taking a micro focus on its clientele enable FDI, which helps to facilitate trade and market expansion.

Global insurers encourage foreign direct investment

Foreign direct investment is a “category of cross-border investment made by a resident in one economy with the objective of establishing a lasting interest in an enterprise that is resident in an economy other than that of the direct investor.” FDI helps solve the age-old policymaker conundrum of wanting to protect the domestic economy from financial risks abroad, while encouraging foreign investment here at home in order to spur job creation and economic growth. FDI allows both goals to be achieved. 

Global insurance helps to promote FDI by offering direct investors each of the advantages discussed above. Additionally, global insurers offer direct investors the opportunity to establish their own captive insurance in an effort to enhance risk diversification not only across jurisdictions but also across different types of policies – property/casualty, life, etc. A captive is an insurance company that is usually wholly-owned by the individual or company that it is insuring. Captives are formed to specifically manage and insure the owner’s business risks and may retain shares of the underwriting profits and, at times, pool or reinsure that corporate risk. This additional layer of risk mitigation provides direct investors with the protection they need to effectively develop their businesses abroad, and thereby jumpstart economic development and job creation.

Looking Forward

It’s clear that global insurers play an important role in the global economy, and it’s becoming increasingly clear that there are a number of regulatory issues that must be overcome as IAIS moves forward with its recommendations. Although many of the initiatives are still years away from adoption, Congress has begun to take an interest in the policies affecting insurance companies both in the U.S. and overseas. Policymakers and regulators are in uncharted territory as globalization increases and the role of insurers in the global economy is growing right along with it. Perhaps the biggest looming question is not whether there will be a new regulatory structure, but which regime will make the rules. If the U.S. is first to create a set of regulatory standards, will the international community follow? Or vice versa? As long as industry works with regulators and regulators work with each other, we can be optimistic that the regulatory environment will be one in which insurers can continue to provide the global services that are so vital to the international economy.


Generally, regulators exist to address a fundamental market failure. However, American Action Forum (AAF) research has found that since 2001 regulators have had to issue 1,829 corrections to federal rulemakings, an average of 130 annually. Across all federal rules, this “regulatory error rate” exceeds 3.5 percent.

Analysis

Almost every week, the Federal Register contains documents that address past rulemakings by issuing corrections, sometimes typographical and often substantive. AAF has previously highlighted regulatory mistakes here and here, documenting the failures of Dodd-Frank and Affordable Care Act implementation. In one instance, the National Credit Union Administration (NCUA) erroneously stated it would impose 43 billion hours of paperwork. After AAF exposed the error, the agency corrected the number to 9.9 million hours, an embarrassment that languished in the regulatory system for months. However, the problem is endemic to the government as a whole, not just to health care or financial regulation.

The chart below tracks the total number of corrective documents contained in the Federal Register from 2001 to 2014.

These raw figures merely tally the number of final rules in the Federal Register that correct previous rulemakings. AAF also examined the data by calculating a “real regulatory error percentage”: the number of regulatory corrections divided by the number of final rules, less those corrections. This method focuses on the actual number of substantive rulemakings and compares them to the regulatory errors annually.

The graph below tracks the error rate over time.

This approach controls for the volume of rules issued annually and simply divides the number of errors by the number of substantive rulemakings. The rate has been fairly constant over time, with an average of 3.5 percent. There was a sharp uptick in 2014, with 141 total corrections and an error rate of 4.1 percent. The Obama Administration (2009-2014) has averaged an error rate of 3.3 percent, compared to 3.6 percent during the Bush Administration (2001-2008).
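The rate calculation above is simple enough to sketch directly. The 141 corrections for 2014 come from the text; the total final-rule count of 3,580 is a hypothetical figure chosen to reproduce the cited ~4.1 percent rate, not a number from the source.

```python
# The "real regulatory error percentage" described above: corrections divided
# by substantive final rules (total final rules less the corrections).
def regulatory_error_rate(total_final_rules: int, corrections: int) -> float:
    substantive_rules = total_final_rules - corrections
    return corrections / substantive_rules

# 141 corrections (2014, per the text) against a hypothetical ~3,580 total
# final rules reproduces the cited ~4.1 percent error rate.
print(f"{regulatory_error_rate(3580, 141):.1%}")
```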

That regulators make mistakes is probably not a surprise to many in the policy world. That they continue to make more than 100 annually might shock some. Given the talk of how “ossified” and lengthy the rulemaking process is today, one would imagine that a four-to-five-year process might help to eliminate these mistakes. Regulators offer their rulemakings up for public comment for roughly two to three months, submit the measures for review to the Office of Information and Regulatory Affairs (OIRA), and then present them for public inspection before official publication.

Some of the mistakes contained in corrective documents are typographical, but others are substantive. For example, the administration issued 16 corrections to the health exchange final rule. One section noted, “However, one sentence implies that any licensure standards for Navigators would cause Navigators to be agents and brokers, which is inaccurate.” Obviously, the exchanges had other problems during implementation, beyond these corrections.

In addition, there are significant consequences, both legal and economic, for errors in regulatory and statutory text. Indeed, the Clean Power Plan rests on legal footing that contains a drafting error from the 1990 Clean Air Act amendments. For a multi-billion regulation that affects 48 states, hundreds of power plants, the electricity grid, and every consumer in the nation, getting it right the first time is paramount.

Conclusion

Perhaps 1,800 corrections should be viewed as a positive development. Regulators were able to identify and correct errors in substantive regulation within the calendar year. Yet, when one out of every 30 rules contains some sort of error, private entities, Congress, and taxpayers should take note. With 141 corrections last year, there are no signs the error rate is declining.

At a congressional hearing last week, one of the most troubling points raised in opposition to the Department of Labor’s (DOL) proposed fiduciary rule was that many retirement savers would be forced to pay adviser fees for investment products and services for which they have already paid a commission-based fee. Considering that a majority (51 percent) of retirement accounts have balances below $25,000, duplicative fees would particularly hurt those who need saving and investment advice the most.

According to the Census Bureau, there were 115.6 million American households in 2013. Of those households, 49.2 percent had a retirement account. That leaves us with 56.9 million households holding retirement accounts with assets totaling $7.3 trillion. DOL’s Regulatory Impact Analysis argues that the proposed rule would put an end to the conflicts of interest that it claims come with commission-based retirement advice and that, as a result, the fiduciary rule would save investors an estimated $17 billion each year. That couldn’t be further from the truth. 

Of that $7.3 trillion in retirement assets, 86.2 percent is in a commission- or transaction-based account, meaning that, instead of the saver paying high fees directly to the adviser simply for his or her advice, the adviser takes a smaller fee that is a portion of the gains in the account. This translates to roughly $6.3 trillion in assets in retirement accounts on which a commission-based fee has already been paid. If DOL’s fiduciary rule is enacted as proposed, each of those accounts would be moved to a fee-based account (assuming they aren’t closed entirely for lack of a profitable minimum balance), forcing retirement savers to pay an adviser fee on top of the commission they’ve already paid.

Even with a fee of just 1.2 percent, that’s $75.6 billion in duplicative fees on American retirement accounts, or an average of over $1,500 per household account. (The lower the balance in the account, the higher the fee, and vice versa; with 91 percent of retirement accounts containing less than $250,000, the average adviser fee would likely be much higher.) People saving for retirement shouldn’t be forced into fee-based accounts that they don’t want, and they certainly shouldn’t be forced to pay twice for those services – especially when that second payment is $1,500 or more.
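The rough arithmetic behind that estimate can be laid out explicitly. The asset and share figures come from the text; dividing by commission-based accounts only is our assumption about how the per-account average was taken, and the total here comes to about $75.5 billion versus the $75.6 billion above because the text rounds the asset base to $6.3 trillion.

```python
# Rough arithmetic behind the duplicative-fee estimate above. Asset and share
# figures come from the text; dividing by commission-based accounts only is
# an assumption about how the per-account average was computed.
total_retirement_assets = 7.3e12   # all U.S. retirement assets
commission_share = 0.862           # share in commission-based accounts
adviser_fee = 0.012                # assumed 1.2 percent adviser fee

commission_assets = total_retirement_assets * commission_share   # ~$6.3T
duplicative_fees = commission_assets * adviser_fee               # ~$75.5B

households_with_accounts = 115.6e6 * 0.492                       # ~56.9M
commission_accounts = households_with_accounts * commission_share

print(f"duplicative fees: ${duplicative_fees / 1e9:,.1f} billion")
print(f"per account:      ${duplicative_fees / commission_accounts:,.0f}")
```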

The Medical Loss Ratio

A Medical Loss Ratio (MLR) is a calculation used to loosely gauge the efficiency and profitability of a health insurance plan. The measurement determines what portion of the money consumers pay in premiums is spent on providing health care services or improving the quality of care delivery. A higher MLR is thought to indicate a higher quality insurer because a larger portion of the company’s funds are spent on providing care. However, this is not necessarily the case if an insurer succeeds in keeping a healthier-than-expected risk pool.

The Affordable Care Act (ACA) defines the MLR as the share of adjusted premium dollars spent on medical claims and quality improvement activities. Insurers are also required by law to report their annual MLR, and any insurer that falls below the mandated minimum will be required to rebate the difference to payers or deduct the difference from the subsequent year’s premiums.[1]

The ACA imposed a minimum MLR of 80 percent on individual and small group health insurance plans, and 85 percent on large group plans. The purpose of this rule was ostensibly to control premiums by regulating profits rather than prices, limiting insurers’ allowable revenue and administrative costs.

Calculating MLR

An MLR is calculated as claims (payments made by or on behalf of policyholders for provision of medical treatment) plus quality improvement (activities designed to increase the likelihood of desirable health outcomes in ways that can be objectively measured), divided by premiums less taxes and fees.

Medical Loss Ratio = (claims + quality improvement expenditures) / [premiums - (taxes + fees)]

The MLR does not include loading costs such as administrative expenses or profit. This means that insurers’ profits are limited to 20 percent (or less) of total premiums, minus administrative expenses and overhead such as staff, offices, supplies, etc. While most insurers are capable of operating on these thin margins, the profit cap minimizes the incentive for insurers to offer innovative products covering new benefits or geographic regions, and may squeeze out insurers that have difficulty minimizing administrative costs.
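As an illustration, the calculation and rebate mechanism described above can be sketched in a few lines. The insurer’s figures here are hypothetical; the 80 percent minimum is the ACA threshold for individual and small group plans.

```python
# Minimal sketch of the MLR formula and rebate rule described above.
def medical_loss_ratio(claims, quality_improvement, premiums, taxes_and_fees):
    """MLR = (claims + quality improvement) / (premiums net of taxes and fees)."""
    return (claims + quality_improvement) / (premiums - taxes_and_fees)

def required_rebate(mlr, minimum, net_premiums):
    """Simplified rebate: the shortfall below the minimum, applied to net premiums."""
    return max(0.0, (minimum - mlr) * net_premiums)

# Hypothetical small group insurer: $80M in claims, $2M in quality
# improvement, $110M in premiums, $5M in taxes and fees.
net_premiums = 110e6 - 5e6
mlr = medical_loss_ratio(80e6, 2e6, 110e6, 5e6)
rebate = required_rebate(mlr, 0.80, net_premiums)
print(f"MLR: {mlr:.1%}, rebate owed: ${rebate/1e6:.1f} million")
```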

Effects

Prior to the passage of the ACA, the majority of insurers were already within the 80 percent MLR range. However, roughly one-third of insurers in the individual market were required to make some changes in order to comply with the new rule. Between 2011, when the rule went into effect, and 2013, the median MLR in the individual market increased by 2.4 percentage points, while the median MLRs in the small and large group markets increased by 0.4 and 0.1 percentage points, respectively. This shows a moderate shift in spending away from administrative expenses and profit, and toward provision of care, in the individual market, but less of an impact in the small and large group markets, which were largely already in compliance. This could indicate that individual market plans previously offered insurers exceptionally high profits, that these insurers found ways of limiting administrative expenses, or that insurers unable to adapt to the rule left markets where their profit margins would be substantially impacted.

The MLR rule puts pressure on insurers to limit administrative overhead in order to preserve profit without running afoul of the MLR minimum. Insurers may achieve this new goal by reducing expenditures on fraud protection measures, limiting “quality improvement” activities to those that satisfy the ACA requirements, limiting networks to reduce administrative burden, or shifting to more managed care-style plans which require care providers to take on responsibility for more administrative duties.

The inherent limitation on corporate revenues could also contribute to consolidation in the insurance market by discouraging investors and new entrants. Without the ability to rapidly recoup start-up costs, competition with already-established players in a market is less likely to prove financially worthwhile. The reduced number of insurance carriers entering new markets since 2011 may, in part, be attributed to this market restriction.

Conclusion

The MLR is an attempt by Congress to cap revenue for insurance providers by mandating how each dollar of income is spent. However, the rule creates unintended incentives that can have deleterious market impacts. As one economist put it, “The Medical Loss Ratio is an accounting monstrosity, a convolution of data from myriad products, distribution channels, and geographical regions that enthralls the unsophisticated observer and distorts policy discourse.”[2]


[1] Tom Baker, Health Insurance, Risk, and Responsibility after the Patient Protection and Affordable Care Act, 159 U. Pa. L. Rev. 1577, 1612-1614 (section on the MLR requirements, including a brief example of how the rules work).

[2] J.C. Robinson, Use and abuse of the medical loss ratio to measure health plan performance, Health Affairs 16, no. 4 (1997), 176-86.


Key Findings:

• 61 percent of community college students fail to earn a degree or credential in six years.

• Only $32 billion of the $80 billion combined federal and state investment for President Obama’s proposed free college program would result in a degree or credential. The majority, $48 billion, would be a loss.

Introduction

In this year’s State of the Union Address, President Obama announced an initiative to provide two years of free community college. States would be required to opt into the proposed program and commit 25 percent of the necessary funding. Schools receiving the dollars would be required to adopt evidence-based reforms to improve student outcomes, as well as create programs that provide occupational training or fulfill transfer requirements to 4-year schools. In return, the federal government would pick up the remaining 75 percent of funding for tuition and fees. The program is estimated to cost the federal government $60 billion ($20 billion for states) over 10 years.

Analysis

In general, federal investments in higher education are made in an effort to increase the degrees and credentials needed to ensure a productive workforce with lower unemployment rates and higher wages that support greater consumer spending and investment. Yet in November of 2014, just two months before the president unveiled his plan, the National Student Clearinghouse released an in-depth study that found the public 2-year colleges at the heart of the president’s plan typically fail to get students over the finish line with a degree or credential needed to reap any economic benefits.

Chart 1. Six-Year Outcomes by Starting Institution Type


Source: National Student Clearinghouse Research Center

As the figure shows, after six years only about 39 percent of public community college students end up completing a degree.[1] In other words, the president’s free college proposal would spend $36 billion of a $60 billion investment on up to 5.4 million students who will likely never receive a degree or credential. Add the $20 billion that states would be required to invest on top of the federal match and the total potential loss on an $80 billion federal and state investment could be close to $48 billion.
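A quick sketch of the loss arithmetic above, using the roughly 39 percent completion rate and the cost figures from the text (the exact products round to the roughly $36 billion and $48 billion cited):

```python
# Sketch of the loss estimate above, using the text's figures:
# a ~39 percent completion rate, $60B federal cost, $20B state match.
completion_rate = 0.39
federal_cost = 60e9
state_cost = 20e9

# Dollars flowing to students who are not expected to complete a degree.
federal_loss = federal_cost * (1 - completion_rate)
total_loss = (federal_cost + state_cost) * (1 - completion_rate)

print(f"Federal dollars at risk:  ${federal_loss/1e9:.1f} billion")
print(f"Combined dollars at risk: ${total_loss/1e9:.1f} billion")
```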

It’s important to note that the federal taxpayer’s $60 billion investment is not directed towards student services such as remedial support, counseling, or even childcare. This proposal quite plainly provides grant dollars to students regardless of whether they actually receive a degree. It provides aid whether the student is part of a family of four that lives below the poverty line or part of a family that earns just under $200,000 per year.

Conclusion

Federal and state budgets simply don’t have the surplus capacity to absorb billions of dollars of economic loss. College affordability is important, but should be addressed with efficient programs.

There are better ways to spend $48 billion that can improve affordability and graduation rates, such as the federal Pell Grant program. Pell grants target federal grant aid to needy students but are portable, so that students can pursue more advanced degrees at 4-year colleges and universities where there is an improved chance of receiving a degree.


[1] Shapiro, D., Dundar, A., Yuan, X., Harrell, A. & Wakhungu, P.K. (2014, November). Completing College: A National View of Student Attainment Rates – Fall 2008 Cohort (Signature Report No. 8). Herndon, VA: National Student Clearinghouse Research Center.


Introduction

Biopharmaceutical medications, commonly referred to as biologics, are a rapidly growing sector of specialty drugs. While the concept of using living cells for specialized medical treatment is not new—live vaccine treatments have been around for some time—recent innovations in genetic engineering have precipitated more complex and specifically tailored drugs that are grown, rather than manufactured, and advanced the landscape of modern medicine. However, these innovations require large investments in both research and production.[1] Recent estimates indicate that research and development costs for biopharmaceuticals totaled $140 billion worldwide in 2014 alone.[2] A biologic treatment course can cost between $10,000 and $50,000 over the course of a year, with some treatments exceeding $100,000, and both public and private insurance programs are struggling to maintain access in the face of high price tags and increasing demand for specialty drugs.[3] In 2013, the IMS Institute for Healthcare Informatics estimated that 28 percent of prescription drug expenditures in the US were spent on biologic drugs,[4] and the market share of biologics globally is projected to continue growing at an annual rate of 10.6 percent through 2019.[5]

Traditional, small-molecule pharmaceuticals face aggressive competition from generic drug manufacturers after patent protections on the brand-name drugs expire, which has led to $734 billion in savings for payers and patients over the past decade.[6] However, the existing regulatory pathway for small-molecule generic drugs is not sufficient to accommodate the complexity of new biologics, because identical biologics cannot be made like small-molecule products. In 2010, Congress passed the Biologics Price Competition and Innovation Act (BPCIA) as a part of the Affordable Care Act (ACA) to provide the FDA with new authority to approve follow-on biologic products—also known as “biosimilars.” These products are “highly similar” to the biologic reference product, but not identical copies as is the case with generic pharmaceuticals. The FDA has issued draft guidance on many aspects of the approval process, including quality and scientific considerations related to biosimilarity, the clinical data necessary to demonstrate biosimilarity, and the relevant exclusivity period for brand-name biologics. In March 2015, the FDA approved the first U.S. biosimilar, filgrastim-sndz (Zarxio), through the new pathway. But due to a lack of final guidance from the agency, uncertainty remains on a number of key policy issues, such as naming, extrapolation, and interchangeability, which will impact the entry of biosimilars to the market.

Studies indicate that savings associated with biosimilars will not reach the historically high level achieved in the small-molecule generic market. Unlike traditional generic drugs, it is expected that the relatively low discounts and high investment cost expected with biosimilars will create competition between biosimilars and brand-name biologic drugs that is more similar to competition between brands than between a brand and a generic. [7] Through an analysis of possible scenarios, the American Action Forum (AAF) estimates that biosimilar drugs approved through the BPCIA could save between $5.1 and $37.8 billion in total national biologic drug expenditures in the US over ten years. The federal government, which paid for roughly 36 percent of all prescription drug expenditures in 2013, has a significant stake in these savings.[8] In this paper, we further discuss the competitive implications and possible savings of the future biosimilar market.

Data and Methodology

Using publicly available Securities and Exchange Commission (SEC) filings, we collected information on $41.7 billion worth of biologic sales in the United States in 2013. Our data covers 37 brand-name drugs sold by 9 different pharmaceutical companies. Where applicable, we collected U.S. sales revenue for every year between 2009 and 2013. In 2009, 28 of the 37 drugs we examined were in production and accounted for $27 billion in U.S. revenue. Table 1 displays the 12 drugs that grossed over $1 billion in 2013. These 12 drugs accounted for 81 percent of all 2013 sales revenue in our data.
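As a cross-check, the 81 percent figure can be reproduced from the Table 1 sales figures below:

```python
# Cross-check of the claim that the 12 drugs in Table 1 account for
# 81 percent of the $41.7 billion in 2013 U.S. sales in the sample.
# Sales figures (in millions) are taken from Table 1.
sales_2013_millions = {
    "Humira": 5240, "Remicade": 4400, "Enbrel": 4260, "Neulasta": 3450,
    "Rituxan": 3410, "Avastin": 2640, "Epogen/Procrit": 2630,
    "Avonex": 1900, "Herceptin": 1830, "Lucentis": 1730,
    "Prolia, Xgeva": 1230, "Neupogen": 1170,
}
top12_total = sum(sales_2013_millions.values())
share = top12_total / 41700  # total sample sales, in millions
print(f"Top-12 share of 2013 sales: {share:.0%}")
```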

Table 1 - Biologic Drugs with over $1 Billion in US sales (2013)

Drug Name         Exclusivity and Patent Expiration[9]   2013 US Sales (millions)
Humira            2016                                   5,240
Remicade          2018                                   4,400
Enbrel            2028                                   4,260
Neulasta          2015                                   3,450
Rituxan           2013                                   3,410
Avastin           2019                                   2,640
Epogen/Procrit    2013                                   2,630
Avonex            2026                                   1,900
Herceptin         2019                                   1,830
Lucentis          2020                                   1,730
Prolia, Xgeva     2022                                   1,230
Neupogen          2013                                   1,170

Our methodology for estimating baseline biologic drug spending is an extrapolation of growth trends over the past 5 years. We compare the average growth in sales revenue for the first 12 years a drug is on the market to the average growth in sales revenue for drugs that have been on the market for longer than 12 years. This division attempts to capture the effects of competition from other drug brands—the FDA grants exclusivity for the first 12 years of most biologic drug approvals—as well as the general slowdown in sales over time as modern medicine develops new treatments that make older ones less valuable. We find that the average year-over-year growth in U.S. revenue for biologic drugs in our sample is 12.1 percent for the first 12 years and 8.9 percent thereafter. We use these rates to estimate the growth of total sales revenue for a given biologic drug over the next ten years.
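The projection approach described above can be sketched as follows. The starting revenue and years on market are hypothetical inputs; only the two growth rates (12.1 percent during the 12-year exclusivity window, 8.9 percent thereafter) come from the text.

```python
# Sketch of the baseline revenue projection described above.
def project_revenue(current_revenue, years_on_market, horizon=10):
    """Extrapolate a drug's annual U.S. revenue over `horizon` years,
    growing 12.1% per year through year 12 on the market and 8.9% after."""
    revenues = []
    revenue = current_revenue
    for year in range(1, horizon + 1):
        rate = 0.121 if years_on_market + year <= 12 else 0.089
        revenue *= 1 + rate
        revenues.append(revenue)
    return revenues

# A hypothetical $1B drug, 8 years into its exclusivity period:
path = project_revenue(1.0e9, years_on_market=8)
print(f"Year-10 revenue: ${path[-1]/1e9:.2f} billion")
```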

In order to estimate potential savings from biosimilars, we estimate two scenarios of biosimilar penetration, a low-savings and high-savings scenario, which are determined by assumptions on the average price of biosimilar drugs relative to the brand-name counterparts and the penetration of biosimilars into the total number of biologic prescriptions.

The Federal Trade Commission (FTC) estimates that producers of biosimilars will likely be limited to manufacturers that already produce biologic drugs of some kind and that discounts will likely be between 10 and 30 percent, rather than the 70 percent typical of small-molecule generics.[10],[11] Producers of traditional, small-molecule generics are able to exploit relatively cheap and easy-to-replicate chemical processes to realize steep discounts from the price of a reference product. However, the same economic efficiencies do not apply to biosimilars; the complexity of the manufacturing process remains a significant barrier to entry for biosimilar manufacturers. High start-up costs are likely to reduce the size of the discount a biosimilar manufacturer will be able to offer relative to the brand-name biologic. In AAF’s high-savings scenario, we assume that biosimilars offer an average discount of 30 percent relative to the reference brand-name product; in the low-savings scenario, we assume an average discount of 10 percent.

Biosimilars also differ from traditional generics through the very nature of “similarity.” A biosimilar cannot be manufactured—or grown—to be identical to the reference drug and relies on a determination by the FDA that the product is “highly similar,” and that switching between the reference product and the biosimilar does not lead to any “clinically meaningful differences in any given patient.” The issues of similarity and interchangeability will have a substantial effect on the ability of biosimilars to gain market share. Currently, generic drugs account for 86 percent of all prescriptions, which has led to large savings and increased access to small-molecule prescription drugs.[12] But the success of generic drugs has depended on the confidence of physicians, pharmacists, patients, and payers that those drugs were substitutable. In contrast, there is research to suggest that substitution between biosimilars and reference biologics may have unintended impacts on patients and clinical outcomes.[13] Physicians understand the uncertainty surrounding biosimilar treatments and may hesitate to substitute away from a successful biologic treatment in exchange for modest savings. In a recent study by the Biotrends Research Group, 86 percent of rheumatology and gastroenterology physicians surveyed indicate they would prevent biosimilar substitution for some, if not all, prescribed biologic medicines.[14] In our high-savings scenario, we assume that biosimilars enter the market with an average market share of 10 percent of the relevant prescriptions and grow to a maximum 30 percent market share over the first 5 years on the market.[15] In our low-savings scenario, we assume that biosimilars will capture an average of 10 percent of the relevant prescriptions throughout the analysis period.
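A simplified sketch of how these assumptions translate into annual savings (baseline brand spending times biosimilar penetration times discount). The baseline spending figure here is hypothetical; only the penetration and discount assumptions come from the text.

```python
# Simplified version of the two scenarios: annual savings equal
# baseline brand spending x biosimilar penetration x discount.
def annual_savings(baseline_spending, penetration, discount):
    return baseline_spending * penetration * discount

baseline = 50e9  # hypothetical annual U.S. spending on affected biologics

low = annual_savings(baseline, penetration=0.10, discount=0.10)
high = annual_savings(baseline, penetration=0.30, discount=0.30)
print(f"Low-savings case:  ${low/1e9:.2f} billion per year")
print(f"High-savings case: ${high/1e9:.2f} billion per year")
```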

Biosimilars in Europe

In 2003, the European Union (EU) became the first jurisdiction to establish a robust biosimilar approval pathway. Unlike the FDA, the European Medicines Agency (EMA) does not have a mandate to determine whether a biosimilar is “interchangeable” with a reference product.[16] The EMA only determines whether the product is biosimilar. The key to generating savings is left to national payers at the country level and national policymakers through substitution policies. Authorities in each EU member country may allow automatic substitution between biologics and biosimilars, but generally patients currently on biologics are not automatically substituted. The market for biosimilars and biologics in the EU also differs from the US in that European member nations generally have greater negotiating power to lower drug prices, in part due to heavier price regulations and payer structures.[17]

In total, biosimilars of six different molecules have been approved through the European pathway.[18] Biosimilar penetration over the past decade has been modest, accounting for only 6.6 percent of relevant sales according to one 2009 study.[19] The slow start in Europe is a result of exclusivity restrictions, uncertainty among physicians, and limited discounts—leading to limited data by which to estimate the impact of biosimilars on health care budgets. Analysts are still optimistic about total savings, projecting between €11.8 and €33.4 billion in potential savings across the continent by 2020, but documentation of those savings so far has been sparse.[20] In Germany, one biosimilar manufacturer claims the biosimilar versions of epoetin—for which there are 5 approved biosimilars—have saved €551 million, a small portion of projected national savings of between €4.3 and €11.7 billion by 2010.[21],[22] However, the success of biosimilars in the EU is highly variable across therapeutic categories and member nations, and pharmaceutical markets and regulations are still adapting.[23]

Potential Savings in the United States

Savings in the United States may accrue more rapidly than in Europe simply because biosimilar technology has advanced substantially in the last decade, and patent protection for many biologic drugs is set to expire over the next few years. However, approximations of the magnitude of potential savings are highly uncertain—demonstrated by widely varying estimates. Some industry projections are as high as $250 billion over the next ten years, while the Congressional Budget Office estimated a ten-year savings of about $25 billion.[24],[25] AAF estimates two savings scenarios over the next decade: a low-savings scenario characterized by relatively smaller discounts and prescription rates, and a high-savings scenario characterized by relatively larger discounts and prescription rates. These two scenarios encompass the range of savings that could be achieved, depending on the acceptance of biosimilars in the U.S. by regulators, health care providers, patients, and payers.

Table 2 - Change in Total Spending on Biologic Drugs in the United States (billions)

Scenario                2015   2016   2017   2018   2019   2020   2021   2022   2023   2024   2015-2024
Low-Savings Scenario       0   -0.2   -0.3   -0.4   -0.5   -0.6   -0.7   -0.7   -0.8   -0.9        -5.1
High-Savings Scenario      0   -0.7   -1.2   -2.0   -3.1   -4.4   -5.2   -6.1   -7.1   -8.0       -37.8

The low-savings scenario might occur if the FDA places significant requirements on biosimilar manufacturers to perform independent clinical trials to support claims of “similarity.” Such regulations would increase the fixed costs of developing biosimilars and likely reduce the discount that a manufacturer could offer relative to the reference product. This scenario could also arise if the market lacks significant competition and biosimilar manufacturers find it more profitable to set prices close to those of the reference product. The response of insurers, physicians, and patients will also have a significant impact on possible savings. The low-savings scenario reflects low enthusiasm among physicians for prescribing and filling biosimilar scripts and among payers for providing insurance coverage for biosimilars.

Alternatively, the high-savings scenario would be characterized by less burdensome requirements for data supporting treatment indications and little restriction on a pharmacist’s ability to automatically substitute biosimilars for the reference product. These regulations might encourage widespread adoption of biosimilars, but could come at the cost of the oversight and quality control that preserve physician and patient confidence.

Conclusion

Biologics, as well as traditional specialty drugs, are stimulating an important policy debate over maintaining access to high cost medications for populations in need, while preserving the incentives to innovate new treatments. Unfortunately, biosimilars cannot bring the same degree of savings and access to biologics that generic small-molecule drugs have for traditional pharmaceuticals. Regardless of the favorability of the regulatory environment, the prices of biosimilar treatments will likely be measured in the thousands and far exceed the maximum out-of-pocket expenses of any commercial or public health insurance product, which are capped at $6,850 per person in 2016. The high prices limit the incentives for patients to seek out biosimilars as pocket-book relief.

However, biosimilars do present an opportunity to establish meaningful competition and downward pricing pressure for many of the most expensive drugs, which could be instrumental in mitigating rising prescription drug costs in the future. Health insurance providers are eager to find ways to combat the rising cost of prescription drugs, and biosimilars may find success through favorable treatment by payers. Savings estimates exceeding $100 billion are improbable, but if the pathway is established clearly and provides an environment through which physicians can safely and confidently prescribe biosimilars, AAF expects savings of at least $5.1 billion and up to $37.8 billion over the next decade. Advancing policy that will seek to maximize biosimilar savings will depend on policymakers balancing the need to establish confidence in biosimilar products and the goal of reducing overall healthcare costs.


[3] Johnson, Judith, “FDA Regulation of Follow-On Biologics,” Congressional Research Service, April 26, 2010, available at: https://primaryimmune.org/advocacy_center/pdfs/health_care_reform/Biosimilars_Congressional_Research_Service_Report.pdf

[4] “Medicine Use and Shifting Costs of Healthcare: a review of the use of medicines in the United States in 2013,” IMS Institute for Healthcare Informatics, April 2014

[6] “Generic Pharmaceuticals Saved $734 Billion over Last Decade,” Generic Pharmaceutical Association, accessed on July 16, 2015, available at: http://www.gphaonline.org/gpha-media/press/generic-pharmaceuticals-saved-734-billion-over-last-decade

[7] “Emerging Health Care Issues: Follow-on Biologic Drug Competition,” Federal Trade Commission Report, June 2009.

[9] “Biosimilar Land Grab Molecules,” Credit Suisse Research Report, December 2, 2014

[10] “Emerging Health Care Issues: Follow-on Biologic Drug Competition,” Federal Trade Commission Report, June 2009.

[11] “Health Cost Containment and Efficiencies,” National Conference of State Legislators, Briefs for State Legislators, June 2010, available at: http://www.ncsl.org/portals/1/documents/health/GENERICS-2010.pdf

[12] “Medicine use and shifting costs of healthcare—A review of the use of medicines in the United States in 2013,” IMS Institute for Healthcare Informatics, April 2014, available at: http://www.imshealth.com/deployedfiles/imshealth/Global/Content/Corporate/IMS%20Health%20Institute/Reports/Secure/IIHI_US_Use_of_Meds_for_2013.pdf

[13] Mellstedt, H. et al., “The Challenge of Biosimilars,” Annals of Oncology, Volume 19, Issue 3, September 14, 2007, available at: http://annonc.oxfordjournals.org/content/19/3/411.full.pdf+html

[14] “Vital Specialized Biopharmaceutical Insights and Analytics for Experts from Experts,” Biotrends Research Group, 2014

[15] id.

[16] Blank, Tobias, “Safety and toxicity of biosimilars—EU versus US regulation,” Generics and Biosimilars Initiative Journal, July 2013, available at: http://gabi-journal.net/safety-and-toxicity-of-biosimilars-eu-versus-us-regulation.html

[17] Danzon, P. and Furukawa, M., “Price and Availability of Pharmaceuticals: Evidence from Nine Countries,” Health Affairs, October 2003, available at: http://content.healthaffairs.org/content/early/2003/10/29/hlthaff.w3.521.short

[18] “Biosimilars approved in Europe,” Generics and Biosimilars Initiative, November 2014, available at: http://www.gabionline.net/Biosimilars/General/Biosimilars-approved-in-Europe

[19] Rovira, Joan et al, “The impact of biosimilars’ entry in the EU market,” Andalusian School of Public Health, January 2011

[20] Haustein, Robert, “Saving money in the European healthcare systems with biosimilars,” Generics and Biosimilars Initiative Journal, October 2012

[21] id.

[22] “Biosimilars Can Help Lower Costs And Increase Access,” Sandoz Biopharmaceuticals, available at: http://www.sandoz-biosimilars.com/biosimilars2/importance.shtml

[23] “Assessing Biosimilar Uptake and Competition in European Markets,” IMS Institute for Healthcare Informatics, October 2014, available at: http://www.imshealth.com/imshealth/Global/Content/Corporate/IMS%20Health%20Institute/Insights/Assessing_biosimilar_uptake_and_competition_in_European_markets.pdf

[24] “The $250 Billion Potential of Biosimilars,” Express Scripts Insights, April 2013, available at: http://lab.express-scripts.com/insights/industry-updates/the-$250-billion-potential-of-biosimilars

[25] “S. 1695 Biologics Price Competition and Innovation Act of 2007,” Congressional Budget Office Cost Estimate, June 2008, available at: http://www.cbo.gov/sites/default/files/s1695.pdf


Recent air pollution data and regulatory actions by the Obama Administration demonstrate that Americans are paying more for less. Between 2005 and 2009, the nation experienced a decline of 11,116 days of moderate to hazardous air pollution (across all jurisdictions). During the Obama Administration, this decline slowed to 3,897 fewer days of moderate to hazardous pollution, despite the economic recession and billions of dollars more in regulatory costs. From 2009 to the present, EPA regulations, primarily to reduce air emissions, have added more than $295 billion in net present value costs, while air pollution’s decline is not nearly as pronounced as in the past. That’s roughly $75 million spent for each day of cleaner air.
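The per-day figure is a simple ratio of the two numbers above:

```python
# The "$75 million per day" figure is the ratio of added regulatory
# costs since 2009 to the decline in unclean-air days over that period.
regulatory_costs = 295e9              # net present value, per the text
fewer_unclean_days = 3_897

cost_per_day = regulatory_costs / fewer_unclean_days
print(f"Cost per day of cleaner air: ${cost_per_day/1e6:.1f} million")
```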

Methodology

The Environmental Protection Agency (EPA) tracks air quality for a variety of pollutants, including the six criteria pollutants and greenhouse gas emissions. The EPA categorizes each day’s air pollution on a scale from “good” to “hazardous,” based on concentrations of ground-level ozone and fine particulate matter. The American Action Forum (AAF) gathered air pollution data from 2005 to 2014, counting the days that EPA labeled moderate, unhealthy for sensitive groups, unhealthy, very unhealthy, or hazardous for any jurisdiction in the United States (defined here as “unclean air”). AAF also collected data on the major air quality regulations EPA issued during this time. AAF then compared the cost of reducing moderate to hazardous air pollution days from 2005 to 2014. Complete data for 2015 will be available by early next year.

Findings

Not surprisingly, air quality has improved as the number of unclean air days has steadily declined; the number of “good” days increased from 196 per jurisdiction to more than 251, a 28 percent increase in just one decade. This finding isn’t startling, because EPA strictly regulates the amount of particulate matter and ground-level ozone in the U.S. The number of moderately bad days declined by more than 20 percent, from an average of 84 days per jurisdiction to fewer than 67. These “moderate” days are not necessarily unhealthy; EPA cautions that “air quality is acceptable,” but “a very small number of people who are unusually sensitive” might have problems with the air.

Extreme, China-like days of air pollution are mostly a relic of decades past. The average jurisdiction saw just 1.4 “unhealthy” days in 2005 and now that figure has dropped to 0.77, a decline of almost 50 percent, but unhealthy days were already rare. For “very unhealthy days,” which EPA describes as “health warnings of emergency conditions,” the nation has not improved. In 2005, there were 46 “very unhealthy” days in the entire U.S. (not just for the average jurisdiction); in 2014, there were also 46 “very unhealthy days.” The conditions that contribute to these health emergencies revolve largely around local factors. For example, only four jurisdictions (located in Arizona, California, Nevada, and New Mexico) experienced more than one “very unhealthy” day in 2014. Nevertheless, despite billions of dollars in investments, there has been no improvement in these health emergency events.

Diminishing Returns for Obama Administration

Over the last decade, Americans have been breathing healthier air, but are the current investments as beneficial as previous actions? The graph below tracks the number of unclean air days per jurisdiction and the rate of change. From 2005 to 2009, the number of unclean air days per jurisdiction declined 20.7 percent. Compare this to the decline during the Obama Administration: 9.2 percent. The black line below is a linear trendline.

The air pollution investments, despite growing more expensive, are yielding fewer returns. Between 2005 and 2009, Americans enjoyed a total of 11,166 fewer days of unclean air. However, during the Obama Administration, there were only 3,897 fewer days of unclean air. In 2005, the average jurisdiction could expect roughly 100 unclean air quality days (moderate to hazardous). By 2009, that figure dropped to 78 unclean days, a decline of more than 20 percent. During the Obama Administration, by contrast, the decline has been just 9.2 percent.

The improvement in U.S. air quality has slowed, and Americans are paying more for these reduced gains. In other words, there are diminishing returns on increased investments, often in the billions of dollars, to reduce the six criteria pollutants. According to the Office of Information and Regulatory Affairs, EPA’s Office of Air and Radiation finalized 14 “economically significant” regulations from 2005 to 2009. During the Obama Administration, EPA has finalized 22 economically significant measures. Despite this steep increase, air quality gains have actually slowed.

To monetize these investments, AAF looked at five of the most significant air quality regulations of the last decade by their effective date:

  • 2006 Particulate Matter Rule: $5.4 billion in annual cost;
  • 2011 Heavy-Duty Truck Efficiency Rule: $600 million in annual cost;
  • 2012 Mercury Air Toxics Standard (MATS): $9.6 billion in annual cost;
  • 2013 Particulate Matter Rule: $350 million in annual cost; and
  • 2014 Tier 3 Fuel Sulfur Rule: $1.5 billion in annual cost.

Combined, these measures have imposed $17.4 billion in annual costs to achieve air pollution goals. Obama Administration regulators imposed $12 billion of this figure, or 69 percent. Yet the rate of air pollution decline continues to stagnate. This list of five major air regulations is hardly exhaustive; in EPA’s recent ozone proposal, the agency listed roughly a dozen major air regulations that have contributed to lower particulate matter and ground-level ozone. Still, there is little doubt that regulatory activity at EPA has increased substantially and that Americans are paying more to achieve only slight improvements in air quality.
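The totals in the paragraph above can be reproduced from the five rules listed:

```python
# Totals behind the paragraph above: five major air rules and the
# share imposed under the Obama Administration (the 2011-2014 rules).
annual_costs_billions = {
    "2006 Particulate Matter Rule": 5.4,
    "2011 Heavy-Duty Truck Efficiency Rule": 0.6,
    "2012 Mercury Air Toxics Standard (MATS)": 9.6,
    "2013 Particulate Matter Rule": 0.35,
    "2014 Tier 3 Fuel Sulfur Rule": 1.5,
}
total = sum(annual_costs_billions.values())
obama_era = total - annual_costs_billions["2006 Particulate Matter Rule"]
print(f"Total: ${total:.2f}B; Obama-era: ${obama_era:.2f}B "
      f"({obama_era/total:.0%})")
```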

Some may argue that it is too soon to judge the effectiveness of these rules, but it is important to note how quickly they can take effect. Although EPA’s cost-benefit analyses often examine the environmental impact decades later, affected entities generally adjust their practices even before the rule is finalized. For instance, even in light of the Supreme Court decision challenging MATS, experts agree companies have already largely implemented the regulation, regardless of its ultimate legal standing. Rules of this magnitude should demonstrate some noticeable additional benefits within a relatively short time.

One curious trend in the Obama Administration’s air pollution record is particulate matter. According to some in the administration, there is no safe amount for human inhalation, although this is hotly debated. With virtually every EPA air measure, the “co-benefits” result from reducing fine particulate matter or PM 2.5. For example, in EPA’s MATS rule, PM 2.5 was responsible for 99 percent of the benefits. In addition, for the proposed “Clean Power Plan,” which is supposed to directly regulate greenhouse gases, PM 2.5 “co-benefits” reductions contributed $24 billion to $56 billion in ancillary benefits or roughly 56 percent of all benefits.

Yet, with the other criteria pollutants continuing to fall, PM 2.5 is increasingly the number one air pollutant. The graph below displays this trend.

In 2005, PM 2.5 was the main pollutant 110 days annually for the average jurisdiction. By 2014, that figure rose 29 percent, to 142.5 days. As a result, this pollutant continues to be an attractive target for regulators looking to find potential benefits. Generally, if a regulation results in burning less fossil fuel, there will be resulting PM 2.5 reductions. The rise of particulate matter as the dominant source of air pollution continues to give regulators an attractive option for boosting benefits.

However, the current concentration of PM 2.5 is technically already legal, generally meeting EPA guidelines. The following chart displays the weighted average particulate matter concentrations in the U.S.

Particulate matter is declining, as it should under EPA regulations. Generally, all jurisdictions meet EPA's guidelines because the law requires it. Despite this legally "clean air," the number of days when PM 2.5 is the main pollutant continues to increase. It is no surprise that regulators routinely use this pollutant to justify unrelated regulatory measures.

Conclusion

Americans are paying more than $17.4 billion annually for air pollution benefits, with at least $12 billion from this administration alone. Those benefits are becoming increasingly expensive under the Obama Administration: gains in air quality are slowing, yet the pace of regulation continues to accelerate. With a major rule pending on ozone and the implementation of the Clean Power Plan, Americans can expect to pay more for the few remaining air quality benefits.

On the heels of President Obama's $8.4 billion "Clean Power Plan," policymakers have started offering more details about their energy and regulatory plans. One plan calls for a 700 percent increase in solar power, from about 20 gigawatts (GW) of projected capacity by 2020 to 140 GW. This seven-fold increase in solar capacity won't be cheap. According to American Action Forum (AAF) calculations, it will cost up to $240 billion to install this additional solar capacity by 2020. The climate benefits, always uncertain and dependent on the discount rate, vary between $6.1 billion and $18.3 billion annually.

Methodology

Some policymakers have called for a massive expansion of solar build-out, in addition to the likely gains made by the "Clean Power Plan." First, AAF found Energy Information Administration (EIA) data for the amount of projected solar capacity in 2020 under current policy: 27.57 GW. To reach a goal of 140 GW of solar capacity, the nation would need to add roughly 112 GW. But what would this solar build-out cost? Next, with data from the EIA, AAF used a cost of solar power of $114.3 per megawatt hour (MWh), which reflects the subsidized rate. Finally, AAF examined recent solar projects and installations for real-world prices.

Findings

Assuming solar remains at $114.3/MWh and the subsidies continue, the cost of a 140 GW goal could reach $246 billion. On an annual basis, this is approximately $67 billion. For perspective, that is the cost equivalent of passing another "Clean Power Plan" over a four-to-five-year time horizon, and perhaps far more. If this were a federal regulation, and not a budget appropriation, it would easily rank as one of the most expensive rules of the Obama Administration.

Unlike natural gas or coal generation, solar is not entirely cost dependent on the amount of time it generates energy. Much of the cost is up front, in manufacturing and installation. A recent example is illustrative: in Kenya, a Canadian solar firm plans to build a 1 GW solar project for $2.2 billion. At that price, 112 GW would cost $246 billion, or $67 billion annualized at a seven percent rate. This might seem like an incredible figure, but consider that the Solar Energy Industries Association recorded 6.2 GW of solar installations in 2014, at a cost of $13.4 billion. Using that installation rate per GW yields a cost of $242 billion for 112 GW of new solar, almost identical to the per-GW price of the Kenya project.
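The text does not spell out AAF's annualization method; a standard capital-recovery calculation at a seven percent rate over the four-to-five-year window brackets the reported $67 billion figure. The formula below is an assumption for illustration, not necessarily AAF's exact approach:

```python
# Annualizing a ~$246 billion build-out at a 7 percent rate using a
# capital recovery factor (an assumed method, for illustration only).

def annual_payment(principal, rate, years):
    """Level annual payment that repays `principal` at `rate` over `years`."""
    crf = rate / (1 - (1 + rate) ** -years)  # capital recovery factor
    return principal * crf

total_cost_bn = 246.0  # ~112 GW at roughly $2.2 billion per GW
for years in (4, 5):
    print(years, round(annual_payment(total_cost_bn, 0.07, years), 1))
# The 4- and 5-year payments (~$72.6B and ~$60.0B) bracket the ~$67B figure.
```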

Another factor to consider: unless the subsidy increases, the price of solar will probably escalate if demand increases by 700 percent from today’s levels. It would take an unprecedented government intervention or technological breakthrough to stabilize or decrease prices in light of such a demand surge. Further complicating matters is the time frame. This installed capacity would occur during a four-to-five year window, a short horizon for so much capacity to reach the power grid and for long-term capital investments. Part of the transmission challenge is already built into the cost of solar. For example, the “transmission investment” of solar is nearly three times that of advanced natural gas, which solar would presumably replace for generation.

The nation must also consider the incredible amount of land required to build 112 GW of new solar capacity. For example, the Topaz Solar Farm will produce 550 MW of solar power in an area covering 9.5 square miles. If this project is representative, 112 GW of solar power would consume an area of land roughly 1.5 times the size of Rhode Island. It is debatable whether the nation can easily free up that much land to devote to solar in the next four to five years.
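Scaling from the Topaz figures, the land requirement can be checked directly. Rhode Island's total area of roughly 1,214 square miles is an outside figure used for the comparison, not taken from the text:

```python
# Land required for 112 GW of new solar, scaled from the Topaz Solar
# Farm (550 MW on 9.5 square miles). Rhode Island's ~1,214 sq mi total
# area is an outside figure, not taken from the text.
topaz_mw, topaz_sq_mi = 550, 9.5
new_capacity_mw = 112_000

area = new_capacity_mw / topaz_mw * topaz_sq_mi
print(round(area))             # ~1,935 square miles
print(round(area / 1_214, 1))  # ~1.6 times Rhode Island's total area
```

Whether the multiple comes out near 1.5 or 1.6 depends on which Rhode Island area figure (total or land-only) is used.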

The map below depicts the amount of land required for a 700 percent increase in solar capacity.

This new solar capacity is likely to replace advanced natural gas that might be brought online or retired by 2020. According to EIA, the estimated levelized cost of advanced natural gas is roughly 42 percent cheaper than solar without the subsidy and 36 percent with the subsidy. These cost disparities will surely be a factor in the debate if this policy proposal receives more widespread scrutiny. 

Benefits

Despite costs of $240 billion for increasing solar capacity by 700 percent and the daunting transmission, capital, and installation challenges, there will be benefits. Here, AAF focuses primarily on the climate benefits, using the administration's "Social Cost of Carbon" (SCC). AAF uses the central figure, discounted at three percent ($40 per ton), and, for illustrative purposes, the 95th percentile from all administration models at a three percent rate ($120). The latter represents "higher-than-expected impacts from temperature change."

This gives the figure per ton, but there is the question of how many tons of carbon dioxide (CO2) 112 GW of solar would replace. For that, AAF used EPA's eGrid data, whose most recent information is unfortunately from 2010. Using eGrid, AAF sorted the most efficient natural gas plants by nominal heat rate (Btu/kWh). (The results from annual pounds of CO2 per MWh were similar.) From there, AAF determined the number of facilities required to generate 112 GW of nameplate capacity: 240. These plants serve as a representative sample of future generation because they are the most efficient in the nation, with an average CO2 emissions rate of 773 pounds of CO2 per MWh, almost identical to EPA's overall goal for natural gas by 2030 (771 pounds). Finally, AAF collected data on the annual CO2 emissions of these facilities: 169.6 million tons.

Assuming $40 per ton of CO2, climate benefits from ensuring efficient natural gas isn’t built or is retired are approximately $6.7 billion annually. This figure is actually less than the cost of installing that solar, which again, could reach $67 billion annually. Even if we assume the $120 SCC figure, climate benefits would reach $20.3 billion, still less than the installed cost of solar.
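Multiplying the avoided emissions by the two SCC values reproduces these benefit estimates; the small differences from the figures in the text reflect rounding:

```python
# Annual climate benefits of avoided CO2 at the two Social Cost of
# Carbon values cited above.
avoided_tons = 169.6e6  # annual CO2 emissions of the 240 gas plants

for scc in (40, 120):   # dollars per ton: central and 95th percentile
    print(scc, round(avoided_tons * scc / 1e9, 1))  # billions of dollars
# 40 -> 6.8 (reported as ~$6.7B); 120 -> 20.4 (reported as ~$20.3B)
```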

AAF does not attempt to incorporate the air quality "co-benefits" or other ancillary benefits of replacing natural gas with solar. However, assuming EPA continues to enforce its regulations, future air would already meet the agency's legal requirements. Furthermore, natural gas facilities are far cleaner than coal. AAF's sample of natural gas facilities emits about 52,000 tons of NOx and SO2, compared to approximately 2.2 million tons for an equivalent number of the most efficient coal plants. AAF also does not attempt to calculate higher solar costs if the subsidies expire, or what the potential cost would be if demand increases by 700 percent.

Conclusion

There is little doubt that seeking a 700 percent expansion in solar is an ambitious target. The question is what the costs and benefits are. It appears the cost of this solar expansion could reach $240 billion, while the climate benefits are likely far less. More importantly, can the world manufacture enough solar panels, and can the nation install 112 GW, in four to five years? History would suggest not.

  • Since 1964, productivity has increased 180 percent and real compensation has grown 161 percent

  • The assertion that growth in productivity has vastly outpaced growth in real compensation is simply not true

  • Real compensation has grown closely with labor productivity over the last fifty years

Overview

It is frequently asserted that growth in labor productivity is outpacing growth in compensation. This suggests that the economy is not properly compensating workers, and is one of the key reasons many believe it necessary to raise the minimum wage, expand overtime pay coverage, mandate paid family leave, and increase union membership. We examine this assertion and find that it is based on faulty statistical analyses that employ misleading tactics by comparing labor market data that are not directly comparable. In particular, these claims are based on an analysis that:

• Compares labor productivity of the entire economy to compensation for private sector production and nonsupervisory workers

• Uses two different price indexes, one to measure real productivity and another to measure real compensation.

After accounting for these issues, we find that real compensation has grown closely with labor productivity over the last fifty years.

Introduction

While advocating for policies like raising the minimum wage and expanding overtime pay coverage, many assert that compensation has not grown with labor productivity. They claim that since the 1970s there has been a stark divergence between the growth in worker pay and the growth in labor productivity. According to the Economic Policy Institute (EPI), productivity has grown eight times faster than worker compensation in recent years, as productivity rose 64.9 percent and hourly compensation only grew 8.2 percent from 1979 to 2013.[1] Taken at face value, these numbers are alarming. They suggest that rewards from productivity growth are going almost exclusively to company profits and those workers at the top, while regular workers are neglected. Fortunately, the putative trend is based on a number of analytical flaws, such as comparing labor productivity of the entire economy to compensation for private sector production and nonsupervisory workers. The result is a story that simply is not true.

Comparing the Purported Trend to Reality

Over the last few years, charts similar to Figure 1 have become increasingly prominent in labor market policy discourse.

Figure 1 is a reconstruction of a graph featured in a 2014 EPI paper.[2] Graphs similar to Figure 1 have been featured prominently by EPI, unions, and a presidential campaign, which use it to justify policies like raising the minimum wage, expanding overtime pay coverage, and increasing union membership. Figure 1 compares the growth in real hourly compensation to real labor productivity over time. The former represents what workers receive from the economy and the latter illustrates the value of what they produce in it every hour. According to the chart, the two metrics began to diverge during the 1970s and growth in labor productivity has been outpacing growth in hourly compensation ever since. As a result, since 1964 real labor productivity has increased 114.5 percent and real hourly compensation has only risen 15.8 percent.

Using official productivity and compensation data in the Bureau of Labor Statistics (BLS) multifactor productivity tables, we compared the growth in real productivity to the growth in real compensation since 1964.[3] As shown in Figure 2, the official data tell a completely different story.

While Figure 1 suggests that growth in real productivity and real compensation diverged during the 1970s, official data show that real compensation has followed productivity quite closely. In fact, a noticeable divergence does not occur until around 2005, which results in a much smaller productivity-compensation gap. From 1964 to 2013, real labor productivity of private sector workers grew 180.4 percent while growth in the real compensation received by those same workers tracked closely behind at 161.1 percent. 
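Expressed as average annual growth rates, the two cumulative figures are strikingly close; a short sketch makes the conversion explicit:

```python
# Converting cumulative 1964-2013 growth into average annual rates.

def annual_rate(cumulative_pct, years):
    """Compound annual growth rate implied by total percent growth."""
    return ((1 + cumulative_pct / 100) ** (1 / years) - 1) * 100

years = 2013 - 1964  # 49 years
print(round(annual_rate(180.4, years), 2))  # productivity: ~2.13 %/yr
print(round(annual_rate(161.1, years), 2))  # compensation: ~1.98 %/yr
```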

How could two measures of the same phenomenon be so different? In the following, we show step-by-step how the illustration of labor productivity and compensation in Figure 1 differs from official labor statistics.

Step 1: Figure 1 compares productivity for the entire economy to compensation for workers in the private sector

EPI calculated the growth in labor productivity for the entire economy and compared it to growth in compensation for only workers in the private sector. It did this by analyzing unpublished BLS productivity data. If the goal is to compare compensation and labor productivity in the private sector, then it is much more appropriate to compare growth in private sector compensation with growth in private sector productivity. Figure 3 below inserts into Figure 1 the official BLS measure of real private sector labor productivity (output per hour) located in the multifactor productivity tables.

It is important to note that official labor productivity data do not account for the depreciation of capital. This can create an imperfect comparison between productivity and compensation: the replacement of capital as it loses value contributes to overall output, but it does not contribute to any rise in national income or translate into higher compensation for workers.[4] Therefore, we likely overestimated the actual growth of labor productivity in the private sector. EPI’s measure of labor productivity, on the other hand, does account for capital depreciation, which actually reduces the real growth in labor productivity over time. So by not accounting for capital depreciation, the official private sector data in Figure 3 actually widen the gap between compensation and productivity growth. Since 1964, labor productivity net depreciation grew 114.5 percent and total labor productivity in the private sector grew 180.4 percent. From this point on, we use the official BLS measure of private sector labor productivity.

Step 2: Figure 1 Compares Labor Productivity of All Workers to Compensation of Production and Nonsupervisory Workers

After replacing total economy labor productivity with private sector labor productivity in Figure 3, we run into another problem. In particular, Figures 1 and 3 compare growth in productivity for one group of workers to growth in compensation for another group of workers. Official productivity data, shown in Figure 3, represent labor productivity of all private sector workers, including highly paid managers. Compensation growth in Figure 3, however, only represents compensation for a portion of private sector workers. In particular, it represents compensation for production and nonsupervisory workers, who account for about 82 percent of the private sector workforce. The excluded 18 percent of the workforce consists of more highly compensated workers and supervisors. Thus, Figure 3 compares productivity for all private sector workers to a measure of compensation that excludes highly paid workers and has a downward bias.

As can be seen in Figure 4, tracking growth in real compensation for all private sector workers significantly closes the gap between productivity and compensation.

As shown above, replacing real compensation for production and nonsupervisory workers with real compensation of all private sector workers, available in the BLS multifactor productivity tables, substantially raises the growth in compensation since the 1970s. Since 1964, real compensation for all private sector workers rose 70.3 percent, compared to just 15.8 percent for private sector production and nonsupervisory workers.

EPI asserts that it uses compensation for production and non-supervisory workers to analyze how typical workers’ earnings are growing relative to labor productivity.[5] If this is indeed the goal, it would then be more appropriate to compare compensation for production and nonsupervisory workers to labor productivity for the same group. However, since there is no official labor productivity data available for production and nonsupervisory workers, the best way to make a fair comparison is to use data representing the entire private workforce for both compensation and productivity.

Step 3: Figure 1 uses two different price indexes to adjust productivity and compensation for inflation

Labor productivity for the entire economy (Figure 1) and the private sector (Figures 2, 3, and 4) is adjusted for inflation using the Implicit Price Deflator (IPD). In all of the charts, however, compensation is adjusted using the Consumer Price Index (CPI). In this context, it is misleading to use two different price indexes, one to measure real productivity and another to measure real compensation.[6] Since CPI generally estimates higher inflation than IPD, the two price indexes are not comparable. As a result, even after using average compensation for the entire private sector workforce, the real growth in compensation measured in Figure 4 likely overstates the gap between growth in productivity and compensation. In order to capture the actual gap, it is best to use one measure of inflation when comparing real growth in compensation to real growth in productivity.

So which price index should we use?  The fundamental difference between IPD and CPI is that IPD measures changes in the prices of goods and services produced by businesses, while CPI measures changes in the prices of goods and services consumed in America, including those produced outside of the United States. According to Martin Feldstein, President Emeritus of the National Bureau of Economic Research, a "competitive firm pays a nominal wage equal to the marginal revenue product of labor, i.e., to the marginal product of labor multiplied by the price of the firm's product."[7] In other words, a worker's compensation is based on his or her benefit to the firm, which depends on the prices of the firm's products, not the prices of the goods consumed by all Americans. Thus, IPD is far more appropriate because the price changes of goods businesses sell have a more direct impact on employee pay than do price changes of other consumer goods.

Figure 5 shows that simply adjusting compensation for inflation with IPD instead of CPI closes almost the entire remaining gap.

When adjusted with CPI, real hourly compensation for all workers in the private sector increased by 70.3 percent since 1964. However, when adjusted with IPD, it grew by 161.1 percent. The blue and red lines in Figure 5 are the same as those shown in Figure 2.
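The mechanics of the deflator choice can be illustrated with a toy example: the same nominal compensation series deflated by a faster-rising index (CPI) shows much less real growth than when deflated by a slower-rising one (IPD). All numbers below are hypothetical, chosen only to mimic the magnitudes in the text:

```python
# Same nominal series, two deflators. All ratios are hypothetical.

def real_growth(nominal_ratio, deflator_ratio):
    """Cumulative real growth (percent) of a nominal series."""
    return (nominal_ratio / deflator_ratio - 1) * 100

nominal_comp_ratio = 12.0  # hypothetical: nominal compensation rose 12x
cpi_ratio = 7.0            # hypothetical: CPI rose 7x
ipd_ratio = 4.6            # hypothetical: IPD rose less than CPI

print(round(real_growth(nominal_comp_ratio, cpi_ratio), 1))  # ~71.4
print(round(real_growth(nominal_comp_ratio, ipd_ratio), 1))  # ~160.9
```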

Conclusion

Clearly, methodology matters in this discussion. When the private-sector growth of productivity and total compensation are compared using all private sector workers and the same output price deflator, it is apparent that compensation and productivity have grown hand-in-hand. As a result, the assertion that regular workers do not benefit from economic growth is based on a faulty analysis that essentially underestimates the growth in real hourly compensation during the last half century. Most troubling, these assertions inspire misguided policies, such as raising the minimum wage or expanding overtime pay, which have been shown to provide minimal benefit to those in need and often hurt the workers and families the policies aim to help.



[1] “Raising America’s Pay: Why It’s Our Central Economic Policy Challenge” Economic Policy Institute, available at http://www.epi.org/publication/raising-americas-pay/

[2] Ibid. We took data on productivity directly from the EPI paper. As in the EPI paper, compensation data are for production and nonsupervisory workers in the private sector. We calculated hourly compensation by multiplying average hourly earnings of production and nonsupervisory workers from the BLS Current Employment Statistics (CES) by a compensation-to-earnings ratio. We derived the compensation-to-earnings ratio by dividing average total compensation by average wages and salaries from the Bureau of Economic Analysis (BEA) National Income and Product Account (NIPA) tables. The recreation is not exact. In particular, EPI's chart begins in the 1940s, while relevant BLS data are only available from 1964 onwards. Regardless, the overall trend is very similar to the one illustrated in the original EPI graphs.

[3] “Multifactor Productivity Tables,” Bureau of Labor Statistics, http://www.bls.gov/mfp/mprdload.htm

[4] James Sherk, “Productivity and Compensation: Growing Together,” The Heritage Foundation, July 2013, http://www.heritage.org/research/reports/2013/07/productivity-and-compensation-growing-together

[5] Lawrence Mishel, “Inequality is Central to the Productivity-Pay Gap,” July 2015, http://www.epi.org/blog/inequality-central-productivity-pay-gap/

[6] Martin S. Feldstein, “Did Wages Reflect Growth in Productivity?” National Bureau of Economic Research, pg. 2, http://www.nber.org/papers/w13953.pdf

[7] Ibid., pg. 3

 

Introduction

Innovation and technological advancement offer tremendous opportunities for future improvements in the nation's standard of living. Recognizing this potential, the United States employs myriad federal policies to incentivize innovation, including support for basic research, patent protections, and tax policies that subsidize research and development. However, the United States is hardly unique in its desire to support new and existing research and development. A number of other nations have begun enacting preferential tax treatment for income derived from intellectual property (IP), including patents. Known as "IP boxes" or "patent boxes," these tax policy developments reflect not only nations' interest in locating innovation within their borders, but also in attracting firms with highly mobile IP.

Were the U.S. to design a patent box tax structure, several major determinations would have to be made about exactly how it would function and what rates and treatment certain types of companies and intellectual property would face. This paper examines current U.S. law as it relates to intellectual property, as well as policy considerations in the design of a "patent box" in the United States.

Designing a U.S. Patent Box in a Global Economy:

Current U.S. Tax Policy for Innovation

The United States subsidizes technological innovation through several channels. Broadly, these channels are expenditures on research and development, grants of patent protection for qualifying innovations, and research subsidization through the tax code.[1] With respect to tax policy, the United States has two primary policies designed to incentivize technological innovation: a tax deduction for research and experimentation and a tax credit against incremental increases in research.

Expensing of research and experimentation

For typical assets with useful lives beyond the current year, the costs of developing the asset must be capitalized and depreciated over its life. For research and experimentation, however, taxpayers can deduct these costs in the year in which they are incurred. The ability to immediately expense these costs is more valuable to a firm than capitalizing and depreciating them over time. Examples of qualifying costs include salaries for those engaged in research or experimentation efforts, amounts incurred to operate and maintain research facilities (e.g., utilities, depreciation, rent), and expenditures for materials and supplies used and consumed in the course of research or experimentation (including amounts incurred in conducting trials).[2] The Office of Management and Budget estimates that this tax preference will cost the United States $75.5 billion over the next 10 years.[3]

Research Credit

Federal tax policy also incentivizes research through a tax credit equal to 20 percent of an incremental increase in qualified research expenses. The incremental increase is calculated relative to a "fixed base percentage" that reflects the taxpayer's ratio of research expenditures to gross receipts over a historical period. There is also an alternative 14 percent credit that is more easily calculated.[4] The credit also interacts with the tax deduction for qualified research expenses: taxpayers must reduce their allowed deductions by the full amount of any research tax credit they receive, or they may claim the full deduction and elect a reduced research tax credit. Another feature of the research tax credit is its temporary nature. While the credit itself is over three decades old, it has routinely expired only to be reinstated on a short-term basis. The credit is currently expired and inapplicable to incremental research costs incurred after December 31, 2014, unless Congress acts to extend it. One recent proposal to simplify the credit and make it permanent was estimated to cost $181.6 billion over the next 10 years.[5]
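The regular credit's mechanics can be sketched as follows. This is a deliberately simplified model of the rules described above (IRC section 41 contains many further qualifications, including a floor on the base amount at 50 percent of current-year expenses), and the firm's numbers are hypothetical:

```python
# Simplified sketch of the regular 20 percent research credit.
# Hypothetical numbers; not the full section 41 rules.

def regular_research_credit(qre, fixed_base_pct, avg_gross_receipts):
    """20% credit on qualified research expenses (QRE) above a base.

    Base = fixed-base percentage x average gross receipts, floored at
    50 percent of current-year QREs.
    """
    base = max(fixed_base_pct * avg_gross_receipts, 0.5 * qre)
    return 0.20 * max(0.0, qre - base)

# Hypothetical firm: $10M of QREs, 3% fixed-base percentage,
# $200M average gross receipts over the lookback period.
credit = regular_research_credit(10e6, 0.03, 200e6)
print(round(credit))  # base = $6M, so credit = 20% x $4M = 800000
```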

The Rationale for Tax Subsidization for Innovation[6]

Economic literature supporting intellectual property falls into one of four related categories: (1) technological advances and growth; (2) tax incentives and technological advances; (3) effectiveness of tax incentives for research to develop technology; and (4) tax incentives for income derived from technology (e.g. patent boxes).[7]

Technological progress and economic growth

Technological progress emerges as the main driver of long-run economic growth in most economic research.[8]  Researchers view the knowledge generated from research activities as the foundation for technological progress.  One important feature of knowledge is that one firm's use does not preclude another firm from using the same knowledge, meaning that without patent laws and restrictions on use, others can commercialize the technologies to their own benefit.

Because of this feature, economists believe that the social return to knowledge and technological progress often exceeds the private returns.[9]  This discrepancy in returns may cause firms to underinvest in research (relative to what is socially optimal).

Patent laws exist to address this feature and provide the exclusive right to commercialize a technological advance for a fixed period of time.  Economists believe that patents offer a temporary monopoly that allows firms to capitalize on the application of this knowledge and encourages additional investment in technological research.[10]

Tax policy and innovation

Tax subsidies are a method of inducing firms to undertake additional research and development activities.[11]  As mentioned previously, the U.S. tax system generally offers two tax benefits for research activities: tax credits for research activity and current expensing of research-related expenditures.[12]  These two types of benefits carry different incentives, with potentially different effects on research activity.  For example, the research credit is incremental and only rewards the expansion of research expenditures over prior-year levels.  To the extent that firms respond to tax credits (which lower their costs), the research tax credit should increase research activities each year.  However, the present-law research credit contains certain complexities and compliance costs that diminish its usefulness, making expensing of research costs preferable to incremental credits.

Effectiveness of R&D Credits

Most published studies report that research credits induce increases in research spending.[13]  Generally, a review of these empirical studies suggests that an additional dollar of the research credit generates an additional dollar of investment in research.[14]  However, these studies report a range of estimates of the price elasticity of research.

One issue with evaluating the effectiveness of tax credits and deductions (or expensing) for research spending is that such analysis focuses exclusively on development costs.  Patent boxes differ from tax credits for research and development because patent boxes operate on the "back end" of the production cycle while R&D credits operate on the "front end."  Patent boxes apply after technologies are developed and in place, focusing on the sale and commercialization of existing IP assets.

Countries design patent boxes to stimulate research activities, maintain technological advances within their borders, stem the outflow of technology, and reap the benefits of increased productivity derived from domestic technological research.[15]

Research on Patent Box Effectiveness

While patent box tax regimes have been in place since 2001, their widespread use is a post-2007 phenomenon.[16]  This short history means there is little empirical evidence, which makes evaluating the policy's efficacy difficult.  However, prior to the implementation of patent boxes, a number of economic studies considered the potential for tax benefits to influence the location of IP, and since implementation, a limited number of studies have reviewed the available evidence.

Prior to the implementation of patent boxes, two studies considered the effect of tax policy on location decisions for intellectual property.  These studies, by Griffith, Miller, and O'Connel and by Bohm, Karkinsky, and Riedel, concluded that the tax rate was an important aspect of the location choice.[17]  The authors focus their analysis on intellectual property and patent boxes, but Grubert had previously established the economic theory of taxes and multinational location choices for intellectual property.[18]

Hassbring and Edwall evaluated data from 21 OECD countries and concluded that patent box regimes have a positive effect on the number of patent applications to the European Patent Office.[19]  Their analysis found that domestic inventors had a 14.6 percent increase, and foreign inventors a 20.6 percent increase, in their propensity to patent.[20]

Evers, Miller, and Spengel incorporate the existing patent box regimes into a measure of the cost of capital and average effective tax rates.[21]  Their results indicate that regimes allowing a deduction for research expenses at the regular corporate tax rate (rather than the lower patent box rate) could result in negative average effective tax rates.  They believe that this feature creates a subsidy to unprofitable projects and affects firm decision making, particularly when countries have significant differences in their patent box regimes. 
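The mechanism that Evers, Miller, and Spengel describe can be made concrete with a stylized calculation (all numbers below are hypothetical): when R&D expenses are deducted at the full corporate rate while the resulting income is taxed at the lower patent box rate, a marginally profitable project can face a negative average effective tax rate.

```python
# Stylized sketch (hypothetical numbers): deducting R&D expenses at the full
# corporate rate while taxing IP income at the patent box rate can turn the
# average effective tax rate negative, i.e. a net subsidy to the project.

def effective_tax(income, rd_expense, box_rate, corporate_rate):
    """Return (net tax, net tax as a share of pre-tax profit) when the
    deduction and the income are taxed at different rates."""
    tax_on_income = income * box_rate              # IP income at the box rate
    deduction_value = rd_expense * corporate_rate  # deduction saves tax at the full rate
    net_tax = tax_on_income - deduction_value
    profit = income - rd_expense
    return net_tax, net_tax / profit

# A marginally profitable project: 110 of IP income against 100 of R&D
# expense, a 10 percent box rate, and a 35 percent corporate rate.
net_tax, etr = effective_tax(income=110, rd_expense=100,
                             box_rate=0.10, corporate_rate=0.35)
print(net_tax)        # -24.0 -> the project receives a net subsidy
print(round(etr, 2))  # -2.4  -> effective rate of -240 percent of the 10 of profit
```

The more generous the deduction rate relative to the box rate, the larger the implicit subsidy to barely profitable projects.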

Other recent empirical studies show that European firms’ intellectual property is more likely than tangible assets to be held in low-tax subsidiaries (Dischinger and Riedel) and that the location of patents is responsive to corporate income tax (Griffith, Miller, and O’Connell).[22]

As empirical evidence on firm location choices, patent filings, and tax revenues becomes available, it is likely to demonstrate that patent boxes continue to have a significant influence on multinational corporation behavior.  However, it is also likely that, without proper design, countries may find themselves competing against one another to gain and retain the firms holding patents for intellectual property. The following sections identify the twelve existing patent box regimes in Europe and provide the framework for a U.S. system.

Summary of European Patent Box Regimes

The following table provides a side-by-side comparison of the current features of the existing European patent box regimes (prior to any changes to existing regimes).[23]

Table 1 – Patent Box Regimes

| Country | Year Enacted | IP Box / Corporate Income Tax Rates (%) | Tax Base | In Addition to Patents, What IP Qualifies for the Reduced Rate? | Limit on the Benefit? | Can the IP Be Contracted Out to a Third Party Outside the Border? | Can the IP Be Acquired? | Does Existing IP Qualify? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Belgium | 2007 | 6.80/33.99 | Gross Income | Supplementary Protection Certificates (SPC), certain know-how closely linked to a patent or SPC | Deduction limited to 100 percent of pretax income | Yes, with certain restrictions | No, unless further developed^a | No |
| Cyprus | 2012 | 2.50/10.00 | Net Income | Secret formulas, designs, models, trademarks, service marks, client lists, internet domain names, copyrights (including software), and know-how | No | Yes | Yes | No |
| France | 2001, 2005, 2010 | 15.50/34.43 | Net Income | SPC, patentable inventions, manufacturing processes associated with patents, improvements of patents | After 2011, limit on subcontracted expenses (€2m) | Yes, within the EU | Yes | Yes |
| Hungary | 2003 | 9.50/19.00 | Gross Income | Secret formulas and processes, industrial designs and models, trademarks, trade names, copyrights (including software), know-how, business secrets | Deduction limited to 50 percent of pretax income | Yes, no limitation | Yes | Yes |
| Liechtenstein | 2011 | 2.50/12.50 | Net Income | Designs, models, utility models, trademarks, copyrights (including software) | No | Yes, no limitation | Yes | No |
| Luxembourg | 2008 | 5.84/29.22 | Net Income | SPC, designs, models, utility models, trademarks, brands, domain names, copyrights on software | No | Yes | Yes | No, unless from a related company and acquired after 2007 |
| Malta | 2010 | 0.00/35.00 | Not Applicable | Trademarks, copyrights (including software) | Not Applicable | Yes | Yes | No |
| Netherlands | 2007, 2010 | 5.00/25.00 | Net Income | IP for which an R&D certificate has been obtained (includes inventions, processes, technical scientific research, designs, models, certain software) | No | Yes, within the EU | No, unless further developed^a | No |
| Portugal | 2014 | 15.00/30.00 | Gross Income | Models and industrial designs protected by IP rights (explicitly excludes trademarks and other IP) | No | Yes, with certain limits | Yes | Yes |
| Spain | 2008 | 12.00/30.00 | Net Income | Secret formulas and procedures, plans, models | Yes, 6 times the cost incurred to develop the IP | Yes, within the EU or European Economic Area | No^a | Yes |
| Nidwalden, Switzerland | 2011 | 8.80/12.66 | Net Income | Secret formulas and processes, trademarks, copyrights (including software), know-how | No | Yes | Yes | Yes |
| United Kingdom | 2013 | 10.00/23.00 | Net Income before Interest | SPC, certain other rights similar to patents | No | No | No, unless from a related group that developed the IP; the acquiring company must manage use of the patent^a | Yes |

^a By limiting the acquisition of the IP, these regimes attempt to ensure that the tax break relates to real economic activity.

 

Table 1, Continued – Patent Box Regimes

| Country | Year Enacted | Credit for Withholding Taxes on Royalties Received from Abroad? | Can the Firm Perform R&D Abroad? | Treatment of Past R&D Expenses Associated with the IP | Qualifying Income Includes Embedded Royalties? | Qualifying Income Includes Sales of Qualified IP? |
| --- | --- | --- | --- | --- | --- | --- |
| Belgium | 2007 | Yes | Yes, but only at qualifying centers | No Recapture | Yes | No |
| Cyprus | 2012 | Yes | Yes | No Recapture | Yes | Yes (four-fifths of the value) |
| France | 2001 | Yes | Yes, within the EU | No Recapture | No | Yes |
| Hungary | 2003 | Yes | Yes | No Recapture | No | Yes |
| Liechtenstein | 2011 | Not applicable (no WHT) | No | Recapture | No | Yes, tax exempt |
| Luxembourg | 2008 | Yes | Yes | Recapture (capitalized development costs) | Yes | Yes |
| Malta | 2010 | Not applicable | Yes | Income not eligible if R&D expenses previously deducted | Yes | Yes |
| Netherlands | 2007 | Yes, with limits | Yes, within the EU; strict conditions apply to R&D IP | Recapture | Yes | Yes |
| Portugal | 2014 | Not available | Not available | Capitalization of development costs (regular tax system) | Yes | No |
| Spain | 2008 | Yes | Yes, within the EU | No Recapture | No | No |
| Nidwalden, Switzerland | 2011 | Yes |  | No Recapture | Yes | Yes |
| United Kingdom | 2013 | Yes | No | R&D expenses allocated to patent income overall | Yes | Yes |

 

Designing a Patent Box for the United States

The basic feature of a patent box tax regime is a preferential rate on IP-derived income, but additional issues, including the scope of the income and the location and timing of the preferential treatment, must also be considered. In addition to behavioral concerns, the revenue implications of a given policy can limit the generosity of the preferential treatment of IP-derived income.

Preferential Rate

Ultimately, the goal of a patent box is to offer preferential rates on IP-sourced income. As noted above, IP location is sensitive to tax rates, so determining the appropriate rate in the context of revenue constraints is important. Among European nations that have implemented patent box regimes, tax rates range from 0 percent (Malta) to 15.5 percent (France). Further, these are effective rates that can be achieved through two separate approaches. The first applies a reduced tax rate to qualifying income (e.g., France, the Netherlands, and the UK).  The second provides an exemption for a portion of revenues attributable to the IP (e.g., Belgium, Hungary, Luxembourg, Spain, and Cyprus).  While these approaches differ in technical terms, the effects of the regimes are quite similar.[24]  What does create significant differences in the effects of a patent box regime is the revenue base on which the tax rate (or exemption) applies, and existing patent box regimes do not define tax-preferred income uniformly across forms of IP.
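The economic equivalence of the two approaches can be sketched with a short calculation. The statutory rates below come from Table 1; the 80 percent exemption fraction used for the Belgium-style example is an assumption chosen so that the arithmetic reproduces the 6.80 percent effective rate reported there.

```python
# Sketch of the two technically different but economically similar designs:
# (1) a reduced rate on qualifying income, (2) a partial exemption of that
# income taxed at the full corporate rate.

def reduced_rate(income, box_rate):
    """Approach 1: tax qualifying income directly at the lower box rate."""
    return income * box_rate

def partial_exemption(income, corporate_rate, exempt_fraction):
    """Approach 2: exempt a fraction of qualifying income and tax the
    remainder at the full corporate rate."""
    return income * (1 - exempt_fraction) * corporate_rate

income = 1_000_000
# UK-style reduced rate: 10 percent on patent income.
print(reduced_rate(income, 0.10))               # 100000.0
# Belgium-style exemption (assumed 80 percent exempt) at a 33.99 percent
# corporate rate yields roughly a 6.80 percent effective rate.
print(partial_exemption(income, 0.3399, 0.80))  # ~67980, i.e. about 6.80 percent
```

Either design can hit the same effective rate; what differs in practice, as noted above, is the base to which the rate or exemption applies.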

Additional factors can determine the scope and the revenue implications related to the preferential rate. For example, determining whether taxpayers can deduct losses and expenses associated with the qualified IP at the maximum corporate rate instead of the preferential rate could have significant implications, particularly in the U.S. where a rate-sensitive deduction is one of two primary IP-related tax policies. Limiting losses and expenses to the lower rate would serve to mitigate revenue losses. Other limitations could involve restricting the benefit to income of some multiple of the R&D expense or some ratio of a firm’s total income.

Location

The location of the IP development is an important consideration in the design of a patent box. Notionally, a patent box could encourage domestic research and development, and any domestic firm with income derived from IP would benefit from preferential rates. However, some patent box regimes do not require that the research be conducted within the nation’s borders. Such a design is concerned less with the location of the research activity than with attracting multinational firms that would benefit from preferential rates on IP-derived income. Determining the scope of the patent box would necessarily require determining whether income from foreign-based IP could benefit from preferential rates.

Timing

Timing must also be considered in the design of any potential U.S. patent box system. Timing in this context relates to the maturity of the IP from which tax-preferred income is derived. For example, would a new patent box regime benefit income from existing U.S. patents, or would the preference be granted only prospectively? These design choices would have important behavioral and revenue consequences. Granting preferential treatment to existing IP-derived income could act as a windfall to older IP still enjoying patent protection, but could also preclude the relocation of that IP to more favorable tax climates. Limiting patent box treatment to prospective IP-sourced income could incentivize greater innovation in the future and could mitigate the revenue loss from a full grandfathering of all qualified income. However, to the extent that IP is mobile, “old” IP would be disadvantaged under this new regime.  Timing and location must also be considered with respect to past income earned abroad that is derived from U.S. IP and has not been repatriated.

Conclusion

It has been longstanding federal policy to subsidize innovation and technological advancement through the tax code. Current U.S. policy achieves this through deductions and credits for research, which generally increase research activity beyond what would otherwise occur, benefiting society at large. However, other nations are reforming their tax codes to benefit from increased research activity, as well as from the domiciling of IP-intensive firms within their borders. While these new tax regimes, “IP boxes” or “patent boxes,” have yet to register as complete policy successes or failures, initial evidence suggests that IP is sensitive to their key features. In crafting a potential patent box for the United States, these features, specifically the appropriate rate, the location of the IP, and the timing of the preferential treatment, must be carefully weighed.



[1] Federal expenditures can include basic, applied, developmental, and acquisition of R&D facilities and equipment, https://www.whitehouse.gov/sites/default/files/omb/budget/fy2016/assets/ap_19_research.pdf

[7]  This section relies heavily on research conducted by the Joint Committee on Taxation.  For a more detailed discussion of these issues refer to Joint Committee on Taxation, Economic Growth and Tax Policy, (JCX-47-15), February 24, 2015.

[8]  Ibid. Cited as Francesco Caselli, Accounting for Cross-Country Income Differences, in Phillipe Aghion and Steven N. Durlauf (eds.), Handbook of Economic Growth, vol. 1A, North-Holland Publishing Co., 2005; and Charles I. Jones, Growth and Ideas, in Philippe Aghion and Steven N. Durlauf (eds.), Handbook of Economic Growth, vol. 1B, North-Holland Publishing Co., 2005.

[9]  Ibid. Cited as Bronwyn H. Hall, Jacques Mairesse, and Pierre Mohnen, Measuring the Returns to R&D, in Bronwyn H. Hall and Nathan Rosenberg (eds.), Handbook of the Economics of Innovation, vol. 1, North-Holland Publishing Co., 2010.

[10]  In addition to granting patent protection, governments have also addressed this market failure through direct spending, research grants, and favorable anti-trust rules.

[11]  The effect of tax policy on research activity is largely uncertain because there is relatively little consensus regarding the magnitude of the responsiveness of research to changes in taxes and other factors affecting its price.

[12]  As mentioned previously, the current tax credit expired in December of 2014 and has not yet been reinstated.  However, this pattern – expiring and retroactively reinstating the credit – has persisted for many years.  For more detail on federal tax benefits for research activities, see Joint Committee on Taxation, Background and Present Law Relating to Manufacturing Activities Within the United States (JCX-61-12), July 17, 2012.

[13]  See Nirupama Rao, Do Tax Credits Stimulate R&D Spending? The R&D Credit in Its First Decade, March 8, 2014. Available online at: http://wagner.nyu.edu/files/faculty/publications/RD032014.pdf.

[14]  Refer to Joint Committee on Taxation, Economic Growth and Tax Policy, (JCX-47-15), February 24, 2015, cited Bronwyn Hall and John Van Reenen, How Effective Are Fiscal Incentives for R&D? A Review of the Evidence, Research Policy, vol. 29, May 2000; Bronwyn H. Hall, R&D Tax Policy During the 1980s: Success or Failure? in James M. Poterba (ed.), Tax Policy and the Economy, vol. 7, MIT Press, 1993; James R. Hines, Jr., On the Sensitivity of R&D to Delicate Tax Changes: The Behavior of U.S. Multinationals in the 1980s in Alberto Giovannini, R. Glenn Hubbard, and Joel Slemrod (eds.), Studies in International Taxation, University of Chicago Press 1993; Ishaq Nadiri and Theofanis P. Mamuneas, R&D Tax Incentives and Manufacturing-Sector R&D Expenditures, in James M. Poterba, (ed.), Borderline Case: International Tax Policy, Corporate Research and Development, and Investment, National Academy Press, 1997.

[15]  Atkinson and Andes (2011) provide a good summary of the history behind patent boxes.

[16]  France implemented patent box policies in 2001 and Hungary followed in 2003.

[17]  Refer to Griffith, Rachel, Helen Miller, and M. O’Connell, Corporate Taxes and the Location of Intellectual Property, CEPR Discussion Paper #8424, 2011 and Bohm, Tobias, Tom Karkinsky, and Nadine Riedel, The Impact of Corporate Taxes on R&D and Patent Holdings, presented to the Norwegian-German Seminar on Public Economics, June 1, 2012.

[18]  Refer to Grubert, H., Intangible income, intercompany transactions, income shifting, and the choice of location, National Tax Journal Part 2, Vol. 56, 2003.

[19]  Refer to Hassbring, P. and E. Edwall, The Short Term Effect of Patent Box Regimes: A Study of the Actual Impact of Lowered Tax Rates on Patent Income, Stockholm School of Economics, Department of Economics, Spring 2013.

[20]  Refer to Bohm, Tobias, Tom Karkinsky, and Nadine Riedel, The Impact of Corporate Taxes on R&D and Patent Holdings, presented to the Norwegian-German Seminar on Public Economics, June 1, 2012.

[21]  Refer to Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, International Tax and Public Finance, Volume 21, Number 3, June 2014.

[22]  Refer to Dischinger, M., and N. Riedel, Corporate taxes and the location of intangible assets within multinational firms, Journal of Public Economics, Vol. 95, 2011 and Griffith, Rachel, Helen Miller, and M. O’Connell, Ownership of intellectual property and corporate taxation. Journal of Public Economics, Vol. 112, 2014.

[23] For more detail see: “Patent Boxes, Technological Innovation and Implications for Corporate Tax Reform;” prepared for AAF by Quantria Strategies

[24] The partial exemption of revenues or the exclusion of a portion of IP income can result in an increase in loss carry-forwards, which means the firm may be able to benefit from the patent box regime in later periods. See: Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, Discussion Draft 13-070, Center for European Economic Research, 2013.

This research was prepared for AAF by Quantria Strategies.

EXECUTIVE SUMMARY AND OVERVIEW

Technological innovation is a main driver of worldwide productivity growth. Intellectual property (IP) is an aspect of technological innovation for many multinational firms. Often the IP does not have a clear geographical location and is inherently mobile across borders. Multinational firms use this mobility to relocate their IP (and the associated income and expenses) to low-tax jurisdictions, reducing their overall tax liability.

Over the past 15 years, many countries recognized this mobility of IP and implemented tax systems that allow reduced tax rates on income derived from IP. These tax systems are designed with the goal of retaining IP within the country, which increases international competitiveness. Countries design patent boxes to stem the outflow of IP and reap the benefits of increased productivity and sustained international competitiveness brought on by domestic technological innovation.

A patent box, sometimes referred to as an “IP box,” is the main tool used to reduce the taxes paid on income from IP. Patent boxes differ from tax credits for research and development (R&D) because patent boxes operate on the “back end” of the production cycle while R&D credits operate on the “front end.” Patent boxes apply after technologies are developed and in place, by focusing on the sale and commercialization of existing IP assets.[1]

DESIGN OF EXISTING PATENT BOX REGIMES

A. Current Law Covering Intellectual Property   

Intangible property consists of things that do not necessarily have a physical form but can be commercially transferable, such as intellectual property or custom computer software. (All intellectual property is considered intangible property.) The purpose of intellectual property law is to facilitate innovation and knowledge while promoting fairness and certainty.

Intellectual property is a legal term describing creations of the mind.[2] A creation of the mind is a broad concept, and the term therefore covers a vast array of individual categories and activities.  In the U.S., a person has a legal right to protect their IP.[3]  There are many ways to infringe on a person’s IP rights, including but not limited to software piracy, plagiarism, licensing violations, and the stealing of corporate secrets. Of these, licensing violations are the most common infringement.[4] A person protects physical property with a lock or an alarm system, but a person protects IP with a copyright, trademark, trade secret, or patent.[5]

A patent provides the inventor with a limited-time monopoly over the use of the discovery in exchange for informing the public of the invention.[6] The inventor owns the rights, reaps the profits, and determines how and in what manner the invention is sold. The rationale for patent law is a social contract between the individual and the public: society should compensate a person who has created a beneficial service.[7] Simply put, patent protection is about fairness.

The United States Patent and Trademark Office (USPTO) defines a patent as “the grant of a property right to the inventor” that gives the owner the power to “exclude others from making, using, offering for sale, selling, or importing the invention.” The USPTO will only grant patents for inventions that are: 1) new; 2) not obvious to the average person working in the field of the invention; 3) not merely a natural phenomenon; and 4) possessed of some minimal utility.[8] There are three primary types of patents the USPTO will grant: utility patents, design patents, and plant patents.[9]

Legislation enacted in 2011, the Leahy–Smith America Invents Act (AIA), made both substantive and procedural changes to the U.S. patent process.[10] The act, which runs more than 150 pages and 137 sections, has been described as “the most comprehensive revisions of U.S. patent law in more than 50 years,”[11] and the USPTO has called it one of the most significant patent laws enacted since 1836.[12]

The reasoning behind the revision was to “promote harmonization of the United States patent system with… nearly all other countries with which the United States conducts trade and thereby promote greater international uniformity and certainty in the procedures used for securing the exclusive rights of inventors to their discoveries.”[13] Thus, Congress recognized that the U.S. was out of step with the rest of the world and that U.S. patent holders were vulnerable to conflict with international patent enforcement provisions.

Another reason for the passage of the AIA was to address speculative patent litigation – trolling – by parties that make unwarranted allegations of patent infringement in order to seek monetary gain through the threat of enforcement.[14] A patent troll is also known as a non-practicing entity, patent assertion entity, or patent holding company.[15]

Patent trolls typically exploit the complexity of patent law against businesses that may have an inferior understanding of it.[16] Thus, patent trolls usually avoid suing larger companies. A Boston University School of Law study found that small and medium-sized entities made up 90 percent of the companies sued and accounted for 59 percent of the defenses in 2011.[17]

However, the most important reform brought about by the AIA is the change in the right of priority from “first-to-invent” to “first-inventor-to-file.”[18] Patent priority rights deal with the situation in which two applicants file for nearly identical patents.[19] The USPTO must then decide which applicant is first in line and has the right to the patent.

Before the AIA, the United States was the only country to follow the “first-to-invent” system for priority of patent applications. When there were conflicting patent claims, this system sought to establish the original and true inventor.[20] This was a complex system involving extensive fact-finding, testimony, and a great deal of uncertainty. By contrast, the first-to-file system grants priority to the first inventor to file a patent application with the USPTO.[21] This greatly reduces transaction costs and increases certainty in patent applications. Congress was clear that priority goes to the first inventor to file with the USPTO, meaning a non-inventor applicant will not be granted a patent.[22]

Intellectual property rights have had, and continue to have, an important role in facilitating entrepreneurial growth and furthering scientific and economic progress. They protect the inventions of individuals and the work of businesses. Copyright, trademark, trade secret, and patent protections stand on the front line of protecting these rights. The USPTO is one of the United States’ oldest regulatory entities and continues to be active in protecting inventors and businesses.

The regulatory and structural changes that the AIA imposes help reduce transaction costs, increase certainty, and improve access for patent applicants. The AIA’s change from a first-to-invent to a first-to-file process brings U.S. patent law in line with the rest of international patent law. This alignment will better afford patent protection for inventors and businesses. The AIA continues the long U.S. trend of protecting domestic intellectual property both domestically and internationally.

While the AIA made significant improvements to align the regulatory and structural environment for patents with that of international patent law, it did not address the differences in the tax treatment of intellectual property. Tax policy influences the development and commercialization of intellectual property through the treatment of: (1) expenses and income associated with developing intellectual property and (2) multinational corporations that use intellectual property commercially. 

The first – domestic tax policy – influences the cost of innovation and the ability of researchers and innovators to recover their costs during (and after) the development phase.  The second – international tax policy – influences the location decision of many large multinational corporations during the commercialization phase.

1. Domestic Tax Policy Issues

U.S. tax policy offers incentives only for the costs of research and development of intellectual property. At this time, the deduction for current expenses is the only provision available to taxpayers. The tax credit (for current expenses) expired December 31, 2014 and has not yet been renewed by Congress. However, this credit expires on a regular basis and is typically retroactively reinstated.

Additionally, patents are generally considered depreciable assets and eligible for preferential capital gains treatment when they are transferred (i.e., assigned or sold) to another party.

Research and Development Expenses – Taxpayers may deduct current research and experimentation costs under Internal Revenue Code section 174. The taxpayer may claim the deduction on their income tax return for the first tax year in which the costs are paid or incurred. Taxpayers must reduce their deductions allowed under section 174 by the full amount of any research tax credit determined for the taxable year.[23] Alternatively, the taxpayer may claim the full deduction and elect a reduced research tax credit.

Research Credit – Taxpayers may claim a research credit equal to 20 percent of the incremental increase in the taxpayer’s qualified research expenses (Sec. 41(a)(1)).[24] A 20-percent research tax credit also is available with respect to the excess of (1) 100 percent of corporate cash expenses (including grants or contributions) paid for basic research conducted by universities (and certain nonprofit scientific research organizations) over (2) the sum of (a) the greater of two minimum basic-research floors plus (b) an amount reflecting any decrease in non-research giving to universities by the corporation as compared to such giving during a fixed-base period, as adjusted for inflation (Sec. 41(a)(2)); this is referred to as the basic research credit (Sec. 41(e)). Finally, a research credit is available for expenditures of energy research consortiums (Sec. 41(a)(3)), referred to as the energy research credit. Unlike the other research credits, the energy research credit applies to all qualified expenditures, not just those in excess of a base amount.
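A simplified sketch of the regular incremental credit and its interaction with the section 174 deduction may help. All amounts are hypothetical, the statutory base-amount computation is more involved than a single fixed number, and the reduced-credit calculation shown (credit times one minus the top corporate rate) is a common simplification of the election rather than a full statement of the rules.

```python
# Simplified sketch (hypothetical amounts) of the regular incremental research
# credit and the choice between a full credit with a reduced section 174
# deduction, or a reduced credit with a full deduction.

CREDIT_RATE = 0.20
CORPORATE_RATE = 0.35  # top corporate rate in effect at the time of writing

def regular_credit(qre, base_amount):
    """20 percent of qualified research expenses above the base amount."""
    return CREDIT_RATE * max(qre - base_amount, 0)

qre, base = 1_000_000, 600_000
credit = regular_credit(qre, base)
print(credit)  # 80000.0

# Option 1: claim the full credit; reduce the section 174 deduction by it.
deduction_option1 = qre - credit
# Option 2 (simplified): keep the full deduction; take a reduced credit.
reduced_credit = credit * (1 - CORPORATE_RATE)
print(deduction_option1, reduced_credit)  # 920000.0 52000.0
```

The incremental structure is visible here: only spending above the base amount generates any credit at all.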

To claim the credit, the research must satisfy the requirements of section 174 and must be undertaken for the purpose of discovering technological information. The use of this information must be intended to develop a new or improved aspect of the business. In addition, substantially all of the activities must constitute experimentation for functional aspects, performance, reliability, or quality of an aspect of the business activities.[25]

The research credit, including the basic research credit and the energy research credit, expired for amounts paid or incurred after December 31, 2014 and has not yet been extended for the current year (Sec. 41(h)).

Eligible Expenses – Qualified research expenses include:

  • in-house expenses of the taxpayer for wages and supplies attributable to qualified research;
  • certain time-sharing costs for computer use in qualified research; and
  • 65 percent of amounts paid or incurred by the taxpayer to certain other persons for qualified research conducted on the taxpayer’s behalf (so-called contract research expenses).

Qualified research expenses include 100 percent of amounts paid or incurred by the taxpayer to an eligible small business, university, or Federal laboratory for qualified energy research (after consideration of the limitation for contract research expenses).
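The eligible-expense rules above amount to a weighted sum, which can be sketched as follows (amounts are hypothetical):

```python
# Sketch of the qualified-research-expense total implied by the list above:
# wages and supplies count in full, contract research counts at 65 percent,
# and qualified energy research payments count at 100 percent.

def qualified_research_expenses(wages, supplies, contract_research,
                                energy_research_payments=0):
    return (wages + supplies
            + 0.65 * contract_research        # contract research limited to 65%
            + 1.00 * energy_research_payments)  # energy research counts in full

total = qualified_research_expenses(
    wages=500_000, supplies=100_000, contract_research=200_000)
print(total)  # 730000.0
```

Only $130,000 of the $200,000 of contract research enters the total, reflecting the 65 percent limitation.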

Capital Gains Treatment – If a patent is sold to another party, it will generally be eligible for preferential tax treatment under Section 1235. However, the lower tax rates applicable for capital gains are only available to individuals and pass-through entities (e.g., S-corporations and partnerships). Corporations are generally taxed on capital gain income at a top rate of 35%.

To be eligible for capital gain treatment, substantially all of the rights accorded through the patent must be transferred to the buyer. If some of the intrinsic rights are retained, then the asset is generally considered a license and income from the sale of a license is treated as ordinary income.

2.  International Tax Issues

U.S. multinational corporations are subject to tax on their worldwide income.  This system, despite foreign tax credits, tends to impose a higher rate of tax on income derived from foreign sources (relative to other countries). 

The U.S. worldwide system means that income earned abroad may be subject to tax both in the country where the income is earned and the taxpayer’s country of residence.  The intent of the foreign tax credit is to provide relief from the potential double tax (i.e., the U.S. tax may be offset by taxes paid in the source country).  However, the foreign tax credit rules are complex and include a number of limitations.[26]
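A stylized example can illustrate the foreign tax credit mechanics described above; the actual rules layer on further limitations and categories of income, so this is only a sketch.

```python
# Stylized sketch of the U.S. worldwide system with a foreign tax credit:
# foreign income bears U.S. tax only to the extent the U.S. rate exceeds the
# foreign rate, and excess foreign taxes generate no refund.

US_RATE = 0.35

def residual_us_tax(foreign_income, foreign_rate):
    """U.S. tax due on foreign income after the foreign tax credit, with the
    credit capped at the U.S. liability on that income."""
    us_tax = foreign_income * US_RATE
    foreign_tax = foreign_income * foreign_rate
    credit = min(foreign_tax, us_tax)  # credit limited to the U.S. liability
    return us_tax - credit

print(residual_us_tax(100, 0.20))  # 15.0 -> low-taxed income owes residual U.S. tax
print(residual_us_tax(100, 0.40))  # 0.0  -> excess credits, but no refund
```

The residual tax on low-taxed foreign income is one reason, as discussed next, that companies often leave such earnings abroad rather than repatriate them.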

The complexity of the foreign tax credit, combined with the relatively high U.S. corporate tax rate, often fails to provide relief to taxpayers.  Consequently, U.S. companies typically do not repatriate active foreign earnings.[27]  Estimates suggest that the accumulated foreign earnings of U.S. companies exceed $2 trillion.[28]  A significant portion of these earnings was reinvested to expand the foreign operations of U.S. businesses to serve emerging markets.

In addition, analysis of these unrepatriated earnings suggests that a significant portion derives from intellectual property (with over 20 percent of the $2 trillion attributable to one U.S. technology firm).  This is consistent with the significant growth over the past decade of income derived from such intangible assets as patents, know-how, and copyrights.  The highly mobile nature of these assets means that such IP is likely to be commercially viable in many global markets, which tends to increase its value.

However, the increase in intangible assets and growth in international trade means that (1) income earned within a country’s borders is more difficult to measure and tax and (2) economic growth in other global markets makes available more capital and more capital mobility.[29]

Recent testimony by Pam Olson states:

“Many foreign governments have recognized the global mobility of capital and intangible assets and have come to view business income taxes as a competitive tool that can be used to attract investment.  By reducing statutory corporate income tax rates, adding incentives for research and development, innovation, and knowledge creation, and adopting territorial systems that limit the income tax to activities within their borders, governments have sought to attract capital that will yield jobs, particularly high-skilled jobs for scientists, engineers, and corporate managers…U.S. international tax rules also are out of sync with the rest of the world…the vast majority of foreign governments have shifted their income taxes from a worldwide basis to a territorial basis that limits the tax base to income from activity within their borders; they have enacted anti-base erosion measures, but those measures are aimed at protecting their domestic tax base from erosion, not at preservation of a worldwide base.”[30]

A number of European countries adopted policies that subject income derived from intellectual property to a lower tax rate.  Countries implemented these policies, referred to as patent boxes or IP boxes, to ensure that future economic growth continues within their borders.  The following sections describe the economic literature regarding tax incentives for intellectual property as well as the existing patent box regimes.

B. Literature Review

The economic literature supporting tax incentives for intellectual property falls into one of four related categories: (1) technological advances and growth; (2) tax incentives and technological advances; (3) effectiveness of tax incentives for research to develop technology; and (4) tax incentives for income derived from technology (e.g., patent boxes).[31]

1. Technological progress and economic growth

Technological progress emerges as the main driver of long-run economic growth in most economic research.[32]  Researchers view the knowledge generated by research activities as the foundation of technological progress.  One important feature of knowledge is that one firm’s use does not preclude another firm from using the same knowledge, meaning that, without patent laws and restrictions on use, others can commercialize the technologies to their own benefit.

Because of this feature, economists believe that the social return to knowledge and technological progress often exceeds the private return.[33]  This discrepancy in returns may cause firms to underinvest in research (relative to what is socially optimal).

Patent laws exist to address this feature by providing the exclusive right to commercialize a technological advance for a fixed period of time.  Economists believe that patents offer a temporary monopoly that allows firms to capitalize on the application of this knowledge, encouraging additional investment in technological research.[34]

2. Tax policy and innovation

Tax subsidies are one method of inducing firms to undertake additional research and development activities.[35]  As mentioned previously, the U.S. tax system generally offers two tax benefits for research activities: tax credits for research activity and current expensing of research-related expenditures.[36]  These two types of benefits carry different incentives with potentially different effects on research activity.  For example, the research credit is incremental and only benefits the expansion of research expenditures over prior-year levels.  To the extent that firms respond to tax credits (by lowering their costs), the research tax credit should increase research activities each year.  However, the present law research credit contains certain complexities and compliance costs that diminish its usefulness, making expensing of research costs preferable to incremental credits.

3. Effectiveness of R&D Credits

Most published studies report that research credits induce increases in research spending.[37]  Generally, a review of these empirical studies of the research credit suggests that an additional dollar of research credit generates an additional dollar of investment in research.[38]  However, these studies report a range of estimates of the price elasticity of research.

One issue with evaluating the effectiveness of tax credits and deductions (or expensing) for research spending is that they focus exclusively on development costs.  Patent boxes differ from tax credits for research and development because patent boxes operate on the “back end” of the production cycle while R&D credits operate on the “front end.”  Patent boxes apply after technologies are developed and in place, focusing on the sale and commercialization of existing IP assets.

Countries design patent boxes to stimulate research activities, maintain technological advances within their borders, stem the outflow of technology, and reap the benefits of increased productivity derived from domestic technological research.[39]

4. Research on Patent Box Effectiveness

While patent box tax regimes have been in place since 2001, widespread use of patent boxes has been limited to the period after 2007.[40]  This limited experience means that there is little empirical evidence, which makes evaluating the policy’s efficacy difficult.  However, prior to the implementation of patent boxes, a number of economic studies considered the potential for tax benefits to influence the location of IP, and since implementation, a limited number of studies have reviewed the available evidence.

Prior to the implementation of patent boxes, two studies considered the effect of tax policy on the location decisions for intellectual property.  These studies, by Griffith, Miller and O’Connel and by Bohm, Karkinsky, and Riedel, concluded that the tax rate was an important aspect of the location choice.[41]  The authors focus their analysis on intellectual property and patent boxes, but Grubert had previously established the economic theory of taxes and multinational location choices for intellectual property.[42]

Hassbring and Edwall evaluated data from 21 OECD countries and concluded that patent box regimes have a positive effect on the number of patent applications to the European Patent Office.[43]  Their analysis found a 14.6 percent increase in domestic inventors’ propensity to patent and a 20.6 percent increase for foreign inventors.[44]

Evers, Miller, and Spengel incorporate the existing patent box regimes into measures of the cost of capital and average effective tax rates.[45]  Their results indicate that regimes allowing a deduction for research expenses at the regular corporate tax rate (rather than the lower patent box rate) could result in negative average effective tax rates.  They believe that this feature creates a subsidy to unprofitable projects and affects firm decision making, particularly when countries have significant differences in their patent box regimes.
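The mechanism behind a negative effective rate can be sketched with a break-even project.  The figures below are purely illustrative: a deduction taken at a 33.99 percent statutory rate paired with income taxed at a 6.8 percent box rate (Belgium’s quoted figures), applied to an assumed project whose R&D costs exactly equal the IP income they generate.

```python
# Illustrative break-even project: R&D costs equal the IP income they produce.
rd_outlay = 100.0        # deducted at the full statutory rate
ip_income = 100.0        # taxed at the preferential patent box rate

statutory_rate = 0.3399  # assumed regular corporate rate
box_rate = 0.068         # assumed patent box rate

tax_saved = rd_outlay * statutory_rate   # value of the deduction
tax_paid = ip_income * box_rate          # tax due on the box income
net_tax = tax_paid - tax_saved           # negative: a net subsidy

print(round(net_tax, 2))  # -27.19
```

A project that earns zero economic profit thus receives a net tax refund, which is the subsidy to unprofitable projects that the authors describe.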

Other recent empirical studies show that European firms’ intellectual property is more likely to be held in low-tax subsidiaries than tangible assets (Dischinger and Riedel) and that the location of patents is responsive to corporate income tax (Griffith, Miller, and O’Connel).[46]

As empirical evidence on firm location choices, patent filings, and tax revenues becomes available, it is likely to demonstrate that patent boxes have a significant influence on multinational corporation behavior.  However, it is also likely that, without proper design, countries may find themselves competing against one another to gain and retain the firms holding patents for intellectual property.  The following sections identify the twelve existing patent box regimes in Europe and provide the framework for a U.S. system.

C. Existing Patent Box Regimes

A number of countries have enacted patent box regimes.[47]  These tax regimes tax income attributable to intellectual property at a lower, preferential rate to promote domestic investment in research and development.  However, some of the patent box regimes adopted by countries do not require the company to develop IP in the country or to acquire domestic IP.  This means that the benefit of the patent box is not to encourage domestic investment in research and development, but rather to compete for multinational firms that have already commercialized their IP.

Policymakers in the European countries with patent box regimes often sought to implement this legislation to ensure that companies locate within their borders and to influence future investments relating to the IP.  Much of the research on patent boxes divides them into two broad categories.  The first applies a reduced tax rate to qualifying income (e.g., France, the Netherlands, and the UK).  The second provides an exemption for a portion of revenues attributable to the IP (e.g., Belgium, Hungary, Luxembourg, Spain, and Cyprus).  While these approaches differ in technical terms, the effects of the regimes are quite similar.  What does create significant differences in the effects of a patent box regime is the revenue base to which the tax rate (or exemption) applies.

In addition, the partial exemption of revenues or the exclusion of a portion of IP income can result in an increase in loss carry-forwards, which means the firm may be able to benefit from the patent box regime in later periods.[48]  In contrast, the regimes that apply a specific rate to IP income do not result in the carry-forward of current losses.

Other features that affect (limit or expand) the benefits of a patent box regime include: (1) the types of eligible IP; (2) the definition of qualifying income; and (3) the treatment of R&D expenses.
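Across the regimes described below, deducting or exempting a fraction of IP income is arithmetically equivalent to applying a reduced rate to all of it.  A minimal sketch of that conversion, checked against rates quoted in the country descriptions that follow:

```python
def effective_rate(statutory_rate: float, exempt_fraction: float) -> float:
    """Effective tax rate when a fraction of qualifying IP income is
    deducted or exempted and the remainder is taxed at the statutory rate."""
    return statutory_rate * (1.0 - exempt_fraction)

# Figures taken from the country descriptions in this section.
assert round(effective_rate(33.99, 0.80), 2) == 6.80  # Belgium: 80% deduction
assert round(effective_rate(12.50, 0.80), 2) == 2.50  # Cyprus: four-fifths exemption
assert round(effective_rate(19.00, 0.50), 2) == 9.50  # Hungary: 50% deduction
```

This is why the exemption-based and reduced-rate regimes have similar effects; the substantive differences come from the income base to which the rate or exemption applies.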

1. Belgium – In 2007, Belgium introduced the patent income deduction, which allows a Belgian firm (including a Belgian permanent establishment) to deduct 80 percent of qualifying gross patent income.  This deduction means that the 33.99 percent corporation tax decreases to an effective tax rate of 6.8 percent for qualifying income.

The regime applies specifically to qualifying patents, but does not apply to other rights (know-how, trademarks, designs, models, secret recipes or processes, and information concerning experience with respect to trade or science).  However, if the firm can establish that any of these other rights are closely related to the patent, then it may apply the patent income deduction.

Unlike other countries, Belgium limits the deduction to patents developed entirely or partially by the Belgian firm (or permanent establishment).  The deduction extends to patents developed outside the country only for improvements on acquired patents, and in this case a Belgian entity must own the research center, despite its being located outside the country.

In addition, the Belgian regime limits the ability of firms to

2. Cyprus – In 2012, Cyprus introduced measures to promote economic growth, including tax incentives for intellectual property rights (an IP box).  Cyprus took these steps to remain in sync with other European countries and to allow cross-border planning for highly mobile IP.

These provisions apply to all expenditure for the acquisition or development of intangible assets held by businesses located in Cyprus.  Eligible assets include all categories of intellectual property.  In addition, these assets may be developed internally or acquired.

Cyprus allows a four-fifths deduction of profits derived from intangible property.  The law effectively excludes 80 percent of income after deducting the costs associated with earning that income.  Stated another way, only 20 percent of profits are subject to tax.  This means that the effective tax rate falls from 12.5 percent to 2.5 percent for profits derived from IP assets.

3. France – The French patent box regime was first introduced in 2000 (effective in 2001) and amended twice, in 2005 and 2010.  France allows revenue or gain derived from the license, sublicense, sale, or transfer of qualified intellectual property to be taxed at 15.5 percent if it meets certain conditions.  The French regime differs from others in that it applies a separate rate, rather than a deduction or exemption, to income derived from IP.

Income from patents granted in France or the United Kingdom, or by the European Patent Office or in specified European countries, is eligible for the preferential tax treatment.  If the invention would have been patentable in France (subject to the conditions of the European Patent Office), the foreign patent is also eligible.  France does not include such intellectual property rights as trademarks, design rights, and copyrights.

French companies must own the intellectual property rights, or the French company must have full ownership of rights received under license agreements.   In other words, France allows both internally developed and acquired patents to qualify for the reduced tax rate – if the patent is held by a French company.         

4. Hungary – In 2003, Hungary introduced its patent box regime.  Hungary allows a deduction of 50 percent of royalties received, which reduces the corporate tax rate of 19 percent to an effective 9.5 percent tax rate.  In addition, Hungary limits the 50 percent deduction to 50 percent of the overall profits derived from IP.

Eligible intellectual property broadly includes patents, know-how, trademarks, business names, and copyrights.  The IP may be either developed internally or acquired.  However, the patent box regime does not apply to acquired IP held for less than two years.  Hungary also has a broad definition of qualifying income and allows income earned from third-party licensing.

5. Liechtenstein – In 2011, Liechtenstein introduced a tax reform that included changes to the tax imposed on intellectual property.  The patent box regime allows a business to deduct 80 percent of the profits derived from IP when calculating corporation tax, reducing the rate from 12.5 percent to an effective 2.5 percent tax rate.

Liechtenstein has a broad definition of eligible IP, covering both internally developed and acquired IP.  The definition of income derived from IP is similarly broad, including income earned from group companies as well as third-party licensing. 

IP profits means IP income less all expenses connected with the IP (including amortization and similar deductions), regardless of when the firm incurred these expenses.           

6. Luxembourg – Luxembourg provides an 80 percent tax exemption (resulting in a 5.76 percent effective tax rate) for net income derived from qualified intellectual property rights.  These property rights may be developed internally or acquired after December 31, 2007.  Eligible IP includes patents, trademarks, designs, domain names, models, and software copyrights.  (Know-how, copyrights not related to software, formulas, and client lists do not qualify.)

The Luxembourg company must own the intellectual property rights, and the rights must give the company exclusive exploitation rights.  Generally, existing IP acquired from a related company is eligible for the patent box regime if it is acquired after 2007.

7. Malta – In 2010, Malta began offering the most generous patent box policy in the EU, by fully exempting royalties and income from qualifying patents.  Apart from directly holding patents or other intellectual property, the Malta company may also own other corporate entities and maintain the tax benefit with respect to dividends received.

Both patents granted in Malta and those granted in another country are eligible (provided the same invention is considered patentable under Maltese Law or is the result of Fundamental Research, Industrial Research or Experimental Development).  Eligible patents do not have to be registered in Malta and the company is not required to conduct research, experimentation or development of the relevant invention in Malta.  

8. Netherlands – In 2007, the Netherlands introduced a patent box regime with a 10 percent tax rate.  They expanded the regime in 2010 to offer a reduced rate of 5 percent and changed the name to “Innovation Box.”  

Dutch resident companies and Dutch permanent establishments that are subject to tax in the Netherlands are eligible for the reduced tax on income derived from IP.  The Dutch company must be the economic owner of the intellectual property and bear the risks associated with that ownership.  The 5 percent rate applies to the income from a qualifying intangible to the extent the income exceeds certain expenses, including related research and development and amortization expenses.[49]

The innovation box covers income from both internally developed patents (which require a declaration from the Dutch government) and acquired patents.  However, the Netherlands does not include trademarks, non-technical design rights, and literary copyrights.  Losses from qualified intangible property are deductible at the full corporate tax rate, but must be recovered in future years before the lower rate applies.

9. Portugal – In 2014, Portugal introduced a new Corporate Income Tax Code, which includes a patent box regime for qualified IP activities.  Portugal exempts 50 percent of the gross income derived from patents, industrial designs, or models (or other protected IP rights) from corporate tax.  Portugal does not extend this tax treatment to trademarks or any other IP except those mentioned above.

In addition, the Portuguese regime provides that companies may deduct all costs associated with the development of the IP.  The regime applies to income received from related and unrelated parties (with certain restrictions); income paid by related parties is subject to the transfer pricing rules.  However, the regime does not apply to income if the transfer is from a country that Portugal deems a tax haven (or blacklisted jurisdiction).  The IP must be self-developed and used for business activities.  In addition, if the licensee is a related company, the IP cannot be used to generate deductible expenses for the taxpayer.

Generally, the corporation tax rate is 30 percent, with a 15 percent effective rate applied to income generated from IP.  For certain districts within Portugal (Madeira), the corporation tax was reduced to 5 percent until 2020, which means that the effective rate on income derived from IP is 2.5 percent in this region.

10. Spain – In 2007, Spain adopted a patent box regime that applies a reduced tax rate to corporate income derived from licensing the right to exploit intangible assets.  In 2013, legislation altered significantly the patent box regime to allow transfers and licensing activities, base the tax computation on net rather than gross income, and eliminate the limit on income that firms may exempt.[50]

Currently, Spain exempts 60 percent of net income derived from the license or transfer of the right to use qualifying intellectual property.  This reduces the corporate rate from 30 percent to a 12 percent effective tax rate on qualifying income.[51]

Intellectual property eligible for the preferential treatment includes patents, drawings or models, plans, secret formulas or procedures, and rights on information related to industrial, commercial, or scientific experiments.  The patent box regime does not distinguish between intellectual property income from foreign and domestic sources.

11. Switzerland – In 2011, the Swiss canton of Nidwalden introduced a patent box regime referred to as the “License Box.”[52]  The License Box exempts 80 percent of net license income, offering a net 8.8 percent effective tax rate on license income.  License income includes payments received for the use of certain intellectual property, including copyrights, patents, trademarks, designs or models, plans, secret formulas or processes, and information concerning industrial, commercial, or scientific experience.[53]

In addition, the net income calculation includes a deduction for a proportion of finance and administrative expenses, tax expenses, depreciation, and license payments to other companies.  R&D expenses are not included in the net income calculation, but they remain available for deduction against income subject to tax at the full corporate rate.

The preferential rate applies to existing as well as new IP, and to internally developed as well as acquired IP. 

12. United Kingdom – In 2013, the phase-in of the U.K.’s patent box regime began.  Partial benefits apply to profits of a U.K. company or a U.K. permanent establishment after April 1, 2013.[54]  The tax rate applied to income from patented inventions (and certain other innovations) is 10 percent.  The United Kingdom Intellectual Property Office or the European Patent Office must grant the patent.

In some cases, certain know-how, trade secrets and some software copyrights that are closely associated with a qualifying patent may also be eligible for the 10 percent tax rate on income generated from the IP.  However, trademarks and registered designs are not eligible for the tax treatment.

To qualify, a company must have legal ownership of the patent or qualifying intellectual property right or acquire an exclusive license to the intellectual property.  The patent or product which incorporates the patent must have been developed by a related company (in the worldwide corporate group).  However, the U.K. does not require that the research and development occur in the United Kingdom or by a U.K. company.[55]

Following the introduction of the U.K. regime, the European Commission announced that it was investigating the various patent box regimes, indicating that the schemes breached European Union codes of conduct for business taxation.[56]  However, since that time, the OECD and G20 member countries reached an agreement on a ‘modified nexus approach.’  This agreement means that most existing patent box regimes will need to implement changes to remain compliant.  The European Commission dropped its investigation just before the OECD and G20 member countries announced the terms of the agreement.[57]

Generally, the nexus agreement relies on a ‘substantial activity’ requirement, which means that the income receiving tax benefits must relate directly to the activity contributing to that income.  In other words, the agreement seeks to link income from the IP to the underlying research and development activity.  This essentially eliminates the outsourcing of research and development activities, which many countries now allow.  The new agreement has not yet been implemented, and there may be barriers to implementation (e.g., the agreement requires a track-and-trace system for the IP, which could be costly and complicated).  Further, the agreement includes a grandfather period that may induce businesses to enter existing patent box regimes to maximize the available benefits.

The following table provides a side-by-side comparison of the current features of the existing European patent box regimes (prior to any changes to existing regimes).

Table 1 – Patent Box Regimes

Country (year) | IP box rate / corporate rate (%) | Tax base | IP qualifying in addition to patents | Limit on the benefit? | IP development may be contracted out abroad? | Acquired IP eligible? | Existing IP eligible?

Belgium (2007) | 6.80 / 33.99 | Gross income | Supplementary Protection Certificates (SPCs), certain know-how closely linked to a patent or SPC | Deduction limited to 100 percent of pretax income | Yes, with certain restrictions | No, unless further developed(a) | No
Cyprus (2012) | 2.50 / 10.00 | Net income | Secret formulas, designs, models, trademarks, service marks, client lists, internet domain names, copyrights (including software), know-how | No | Yes | Yes | No
France (2001, 2005, 2010) | 15.50 / 34.43 | Net income | SPCs, patentable inventions, manufacturing processes associated with patents, improvements of patents | After 2011, a limit on subcontracted expenses (€2m) | Yes, within the EU | Yes | Yes
Hungary (2003) | 9.50 / 19.00 | Gross income | Secret formulas and processes, industrial designs and models, trademarks, trade names, copyrights (including software), know-how, business secrets | Deduction limited to 50 percent of pretax income | Yes, no limitation | Yes | Yes
Liechtenstein (2011) | 2.50 / 12.50 | Net income | Designs, models, utility models, trademarks, copyrights (including software) | No | Yes, no limitation | Yes | No
Luxembourg (2008) | 5.84 / 29.22 | Net income | SPCs, designs, models, utility models, trademarks, brands, domain names, copyrights on software | No | Yes | Yes | No, unless from a related company and acquired after 2007
Malta (2010) | 0.00 / 35.00 | Not applicable | Trademarks, copyrights (including software) | Not applicable | Yes | Yes | No
Netherlands (2007, 2010) | 5.00 / 25.00 | Net income | IP for which an R&D certificate has been obtained (includes inventions, processes, technical scientific research, designs, models, certain software) | No | Yes, within the EU | No, unless further developed(a) | No
Portugal (2014) | 15.00 / 30.00 | Gross income | Models and industrial designs protected by IP rights (explicitly excludes trademarks and other IP) | No | Yes, with certain limits | Yes | Yes
Spain (2008) | 12.00 / 30.00 | Net income | Secret formulas and procedures, plans, models | Yes, 6 times the cost incurred to develop the IP | Yes, within the EU or European Economic Area | No(a) | Yes
Nidwalden, Switzerland (2011) | 8.80 / 12.66 | Net income | Secret formulas and processes, trademarks, copyrights (including software), know-how | No | Yes | Yes | Yes
United Kingdom (2013) | 10.00 / 23.00 | Net income before interest | SPCs, certain other rights similar to patents | No | No | No, unless from a related group that developed the IP, and the acquiring company must manage use of the patent(a) | Yes

(a) By limiting the acquisition of the IP, these regimes attempt to ensure that the tax break relates to real economic activity.

Table 1, Continued – Patent Box Regimes

Country (year) | Credit for withholding taxes on royalties received from abroad? | Can the firm perform R&D abroad? | Treatment of past R&D expenses associated with the IP | Qualifying income includes embedded royalties? | Qualifying income includes sales of qualified IP?

Belgium (2007) | Yes | Yes, but only at qualifying centers | No recapture | Yes | No
Cyprus (2012) | Yes | Yes | No recapture | Yes | Yes (four-fifths of the value)
France (2000) | Yes | Yes, within the EU | No recapture | No | Yes
Hungary (2003) | Yes | Yes | No recapture | No | Yes
Liechtenstein (2011) | Not applicable (no WHT) | No | Recapture | No | Yes, tax exempt
Luxembourg (2008) | Yes | Yes | Recapture (capitalized development costs) | Yes | Yes
Malta (2010) | Not applicable | Yes | Income not eligible if R&D expenses previously deducted | Yes | Yes
Netherlands (2007) | Yes, with limits | Yes, within the EU; strict conditions apply to R&D IP | Recapture | Yes | Yes
Portugal (2014) | Not available | Not available | Capitalization of development costs (regular tax system) | Yes | No
Spain (2008, 20) | Yes | Yes, within the EU | No recapture | No | No
Nidwalden, Switzerland (2011) | Yes | | No recapture | Yes | Yes
United Kingdom (2013) | Yes | No | R&D expenses allocated to patent income overall | Yes | Yes
 

D. Designing a Patent Box for the United States

There are a number of features to consider for any patent box introduced in the United States.  The following list includes the primary features:

  1. Offer a reduced tax rate (ranging between 10 and 15 percent) on income derived from IP;
  2. Limit the patent box regime to commercialization activities conducted in the United States;
  3. Require that the patented products result from domestic R&D;
  4. Apply the preferential rate to existing patents issued by the USPTO, as long as they meet the domestic R&D requirement;
  5. Apply the patent box regime to worldwide income derived from the patent (developed domestically);
  6. Allow firms to deduct losses and expenses associated with the qualified IP at the maximum corporate rate; and
  7. Provide careful definitions of the types of income eligible for the rate and the method for calculating the income.

In addition, design issues should consider the degree to which the patent or IP influences the ultimate product.  For instance, in the United Kingdom, income from the sale of items that incorporate qualifying IP qualifies for the reduced rate.  (For example, if a company sells a car that has qualifying IP, the revenue generated from the car sale qualifies for the patent box regime in the United Kingdom.)  Other countries place limitations on the degree of the contribution.  However, a balance between the scope of qualifying IP and the contribution of the IP to related products must be considered.

Provisions or modifications to the above features that would limit the initial revenue losses associated with a U.S. patent box regime include:

  1. Phase out the preferential rate, when the income derived from the IP exceeds a certain multiple of the R&D expenditures (e.g., 5 times the R&D expenses);
  2. Limit the application of the tax regime to patents issued by the USPTO, prospectively; and
  3. Allow firms to deduct losses and expenses associated with the qualified IP from the income generated from the IP (deducted from income subject to the lower rate).
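As a sketch of how the first limitation might operate, the fragment below taxes IP income above a multiple of R&D spending at the regular rate.  The 10 percent box rate, 35 percent regular rate, and 5× cap are illustrative assumptions, not a scored proposal:

```python
def patent_box_tax(ip_income: float, rd_expenses: float,
                   box_rate: float = 0.10,    # assumed preferential rate
                   full_rate: float = 0.35,   # assumed regular corporate rate
                   cap_multiple: float = 5.0) -> float:
    """Tax on IP income when the preferential rate phases out above
    `cap_multiple` times the associated R&D expenditures."""
    qualifying = min(ip_income, cap_multiple * rd_expenses)
    excess = ip_income - qualifying   # taxed at the regular rate
    return qualifying * box_rate + excess * full_rate

# IP income of 100 with R&D of 10: only 50 qualifies for the 10 percent rate.
print(patent_box_tax(100.0, 10.0))  # 22.5
```

A cap of this kind ties the size of the tax benefit to domestic research effort, in the same spirit as Spain’s limit of six times the cost incurred to develop the IP.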

In addition, any U.S. system would have to consider how to treat past unrepatriated foreign earnings of U.S. companies that are attributable to U.S. patents (developed and licensed domestically, but commercialized abroad).

Appendix A: Primer on Intellectual Property Rights in the United States

Appendix B: Supporting Data and Modeling Assumptions

References


[1]  Atkinson and Andes (2011) provide a good summary of the history behind patent boxes.

[2]  What is Intellectual Property. WIPO Publication No. 450(E). World Intellectual Property Organization, Geneva, Switzerland (2015).

[3]  Stealing another person’s car is called theft, but taking a person’s IP is called infringement.  The courts decide what constitutes infringement.

[4] Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html.

[5]  Refer to Appendix A for a detailed description of these terms, as well as a detailed description of intellectual property in the United States.  Charmasson, Henri, Buchaca, John. Patents, Copyrights & Trademarks for Dummies, 2nd Edition, John Wiley & Sons, Inc. (2008).

[6]  John R. Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014).

[7]  A patent is granted by the USPTO and the grant will generally last for 20 years from the date of the application. Unlike all other types of patents, a design patent will only last fourteen years from the date of the application.

[8]  Robert Green Stern. Leahy Smith America Invents Act – Overview: Post Grant Review, Re- Examination, and Supplemental Examination, IEEE-USA Intellectual Property Professionals Initiative Presents First Seminar: The New Patent Law and What it Means to You (Oct. 22, 2011) (Cited in Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012)).

[9]  Types of Patents, U.S. Patent and Trademark Office, Electronic Information Products Division, Patent Technology Monitoring Team (03 October 2013). Retrieved from http://www.uspto.gov/web/offices/ac/ido/oeip/taf/patdesc.htm

[10]  Leahy–Smith America Invents Act, Pub. L. No. 112-29, 125 Stat. 284 (2011) (codified in scattered sections of 28 and 35 U.S.C.)

[11]  Patrick M. Boucher, Recent developments in US patent law, 65 Phys. Today, Jan. 2012, at 27. (Cited in Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012)).

[12]  See Eric A. Kelly, Is the Prototypical Small Inventor at Risk of Inadvertently Eliminating Their Traditional One-Year Grace Period Under the American Invents Act?—Interpreting “Or Otherwise Available to the Public” Per New § 102(a) and “Disclosure” Per New § 102(b), 21 Tex. Intell. Prop. L.J. 373, 374 (2013) (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[13]  Alexa L. Ashworth, Race You to the Patent Office! How the New Patent Reform Act Will Affect Technology Transfer at Universities, 23 ALB. L.J. SCI. & TECH. 383, 395 (2013); AIA § 3(p), 125 Stat. at 293. (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[14]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 9.

[15]  For example, if a company develops a mobile app that allows customers to use a “Buy” button to purchase inventory. Unknown to the company, a patent troll holds the design patent or utility patent on the “Buy” button. The patent troll can take the company to court, insisting it pay a licensing fee on every sale that was made using their button.  Refer to Morrow, Stephanie, Patent Trolls and Their Impact, Legal Zoom.com, Inc. Retrieved on June 1, 2015 from: https://www.legalzoom.com/articles/patent-trolls-and-their-impact

[16]  Morrow, Stephanie, Patent Trolls and Their Impact, LegalZoom.com, Inc. Retrieved on June 1, 2015 from: https://www.legalzoom.com/articles/patent-trolls-and-their-impact.

[17]  Bessen, James E. and Meurer, Michael J., The Direct Costs from NPE Disputes (June 28, 2012). 99 Cornell L. Rev. 387 (2014); Boston Univ. School of Law, Law and Economics Research Paper No. 12-34. Available at SSRN: http://ssrn.com/abstract=2091210 or http://dx.doi.org/10.2139/ssrn.2091210.   The study found that patent trolls cost American businesses more than $29 billion in 2011, up from $7 billion in 2005.

[18]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 16.

[19]  Ibid. at page 17.

[20]  Ibid. at page 17.

[21]  Ibid. at page 19.

[22]  Leahy–Smith America Invents Act (Cited in M. Cerro, Navigating a Post America Invents Act World, 34 J. National Association of Admin. L. Judiciary Issue 1 (2014)), 12.

[23]  As discussed below, the current R&D credit expired in 2014, and has not yet been extended for 2015.  However, most believe that the Congress will act to extend retroactively the credit before the end of 2015.

[24]  An alternative simplified research credit (with a 14 percent rate and a different base amount) may be claimed in lieu of this credit (Sec. 41(c)(5)).
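As a hedged illustration of the cited provision (a simplified sketch, not tax guidance; special cases in Sec. 41(c)(5), such as taxpayers with no prior-year expenses, are omitted), the alternative simplified credit computes 14 percent of qualified research expenses above a base equal to half the prior three-year average:

```python
# Simplified sketch of the alternative simplified research credit (ASC),
# Sec. 41(c)(5): 14% of qualified research expenses (QREs) exceeding 50%
# of the average QREs for the three preceding tax years. Omits special
# rules (e.g., taxpayers with no QREs in the prior three years).
def alternative_simplified_credit(current_qre, prior_three_year_qres):
    base = 0.5 * (sum(prior_three_year_qres) / len(prior_three_year_qres))
    return 0.14 * max(0.0, current_qre - base)
```

For example, $1,000,000 of current QREs against three prior years of $800,000 each yields a $400,000 base and a credit of about $84,000 under this sketch.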

[25]  According to the Joint Committee on Taxation, “research does not qualify for the credit if substantially all of the activities relate to style, taste, cosmetic, or seasonal design factors (Sec. 41(d)(3)).  In addition, research does not qualify for the credit if: (1) conducted after the beginning of commercial production of the business component; (2) related to the adaptation of an existing business component to a particular customer’s requirements; (3) related to the duplication of an existing business component from a physical examination of the component itself or certain other information; (4) related to certain efficiency surveys, management function or technique, market research, market testing, or market development, routine data collection or routine quality control; (5) related to software developed primarily for internal use by the taxpayer; (6) conducted outside the United States, Puerto Rico, or any U.S. possession; (7) in the social sciences, arts, or humanities; or (8) funded by any grant, contract, or otherwise by another person (or government entity) (Sec. 41(d)(4)).”  Refer to the Joint Committee on Taxation, JCX-38-14.

[26]  For a more complete explanation, refer to U.S. Congress, Joint Committee on Taxation, Present Law and Background Related to Proposals to Reform the Taxation of Income of Multinational Enterprises, (JCX-90-14), June 2014.

[27]  In addition to complex foreign tax credit rules, the United States has unusually broad and complex rules that impose current tax on the active income of foreign affiliates.

[28]  Refer to Richard Rubin, U.S. Companies Are Stashing $2.1 Trillion Overseas to Avoid Taxes, Bloomberg Business, March 4, 2015.  Available online at: http://www.bloomberg.com/news/articles/2015-03-04/u-s-companies-are-stashing-2-1-trillion-overseas-to-avoid-taxes.

[29]  Many countries deal with this problem by relying on consumption-based taxes, such as value-added or goods and services taxes, which apply to a tax base that is easier to measure and less mobile.

[30]   Refer to Olsen, Pamela F., Statement of Pamela F. Olsen to the Senate Finance Committee on Building a Competitive U.S. International Tax System, March 17, 2015.

[31]  This section relies heavily on research conducted by the Joint Committee on Taxation.  For a more detailed discussion of these issues refer to Joint Committee on Taxation, Economic Growth and Tax Policy, (JCX-47-15), February 24, 2015.

[32]  Ibid. Cited as Francesco Caselli, Accounting for Cross-Country Income Differences, in Phillipe Aghion and Steven N. Durlauf (eds.), Handbook of Economic Growth, vol. 1A, North-Holland Publishing Co., 2005; and Charles I. Jones, Growth and Ideas, in Philippe Aghion and Steven N. Durlauf (eds.), Handbook of Economic Growth, vol. 1B, North-Holland Publishing Co., 2005.

[33]  Ibid. Cited as Bronwyn H. Hall, Jacques Mairesse, and Pierre Mohnen, Measuring the Returns to R&D, in Bronwyn H. Hall and Nathan Rosenberg (eds.), Handbook of the Economics of Innovation, vol. 1, North-Holland Publishing Co., 2010.

[34]  In addition to granting patent protection, governments have also addressed this market failure through direct spending, research grants, and favorable anti-trust rules.

[35]  The effect of tax policy on research activity is largely uncertain because there is relatively little consensus regarding the magnitude of the responsiveness of research to changes in taxes and other factors affecting its price.

[36]  As mentioned previously, the current tax credit expired in December of 2014 and has not yet been reinstated.  However this pattern – expiring and retroactive reinstating the credit – has persisted for many years.  For more detail on federal tax benefits for research activities, see Joint Committee on Taxation, Background and Present Law Relating to Manufacturing Activities Within the United States (JCX-61-12), July 17, 2012.

[37]  See Nirupama Rao, Do Tax Credits Stimulate R&D Spending? The R&D Credit in Its First Decade, March 8, 2014. Available online at: http://wagner.nyu.edu/files/faculty/publications/RD032014.pdf.

[38]  Refer to Joint Committee on Taxation, Economic Growth and Tax Policy, (JCX-47-15), February 24, 2015, cited Bronwyn Hall and John Van Reenen, How Effective Are Fiscal Incentives for R&D? A Review of the Evidence, Research Policy, vol. 29, May 2000; Bronwyn H. Hall, R&D Tax Policy During the 1980s: Success or Failure? in James M. Poterba (ed.), Tax Policy and the Economy, vol. 7, MIT Press, 1993; James R. Hines, Jr., On the Sensitivity of R&D to Delicate Tax Changes: The Behavior of U.S. Multinationals in the 1980s in Alberto Giovannini, R. Glenn Hubbard, and Joel Slemrod (eds.), Studies in International Taxation, University of Chicago Press 1993; Ishaq Nadiri and Theofanis P. Mamuneas, R&D Tax Incentives and Manufacturing-Sector R&D Expenditures, in James M. Poterba, (ed.), Borderline Case: International Tax Policy, Corporate Research and Development, and Investment, National Academy Press, 1997.

[39]  Atkinson and Andes (2011) provide a good summary of the history behind patent boxes.

[40]  France implemented patent box policies in 2001 and Hungary followed in 2003.

[41]  Refer to Griffith, Rachel, Helen Miller, and M. O’Connell, Corporate Taxes and the Location of Intellectual Property, CEPR Discussion Paper #8424, 2011 and Bohm, Tobias, Tom Karkinsky, and Nadine Riedel, The Impact of Corporate Taxes on R&D and Patent Holdings, presented to the Norwegian-German Seminar on Public Economics, June 1, 2012.

[42]  Refer to Grubert, H., Intangible income, intercompany transactions, income shifting, and the choice of location, National Tax Journal Part 2, Vol. 56, 2003.

[43]  Refer to Hassbring, P. and E. Edwall, The Short Term Effect of Patent Box Regimes: A Study of the Actual Impact of Lowered Tax Rates on Patent Income, Stockholm School of Economics, Department of Economics, Spring 2013.

[44]  Refer to Bohm, Tobias, Tom Karkinsky, and Nadine Riedel, The Impact of Corporate Taxes on R&D and Patent Holdings, presented to the Norwegian-German Seminar on Public Economics, June 1, 2012.

[45]  Refer to Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, International Tax and Public Finance, Volume 21, Number 3, June 2014.

[46]  Refer to Dischinger, M., and N. Riedel, Corporate taxes and the location of intangible assets within multinational firms, Journal of Public Economics, Vol. 95, 2011 and Griffith, Rachel, Helen Miller, and M. O’Connell, Ownership of intellectual property and corporate taxation. Journal of Public Economics, Vol. 112, 2014.

[47]  At this time, China, Italy, and Gibraltar also have proposed patent box regimes or have ones that will be effective soon.  For additional details on the patent boxes adopted by selected countries, see Joint Committee on Taxation, Present Law and Background Related to Proposals to Reform the Taxation of Income of Multinational Enterprises (JCX-90-14), July 21, 2014.  According to Evers, Miller, and Spengel (2013), in 2010, Ireland withdrew its exemption of royalty income that had been available since 1973.

[48]  Refer to Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, Discussion Draft 13-070, Center for European Economic Research, 2013.

[49]  In order to qualify, the patent or intellectual property must be conducted at the risk of the Dutch company, but research activities can occur either in the Netherlands or elsewhere. For non-patented intellectual property, generally at least 50 percent of the research and development must be performed in the Netherlands and the Dutch company must play a key role in coordinating the development.

[50]  The Joint Committee on Taxation states that for licenses subject to the law prior to the 2013 changes, the exemption is no longer available when sales or revenues from exploitation of the intangible asset exceed six times the cost of developing the intangible asset.  Refer to JCX-90-14.

[51]  Some studies indicate that the maximum corporate rate for Spain is 28 percent (which would make the effective rate on patent boxes 9.6 percent).  However, most sources report a 30 percent rate.

[52]  In 2015, the Swiss government introduced legislation that would make patent boxes generally available to qualified IP beginning in 2019.

[53]  Refer to KPMG, International Corporate Tax, IP Location Switzerland, April 2011.  License income tends to be a broad concept in Nidwalden, since it is one of the few locations that includes copyright income.

[54]  The phase-in began in 2013, allowing 60 percent of the full benefit.  This share increases by 10 percentage points each year until the benefit is fully available in 2017.

[55]  However, for acquired rights, the U.K. company must make a significant contribution to developing a product using the intellectual property, or contribute to the method of applying the intellectual property.

[56]  Refer to Bevington, Mark, Nigel Dolman, and Michelle Blunt, Green Light for New Approach to Patent Boxes, Tax/Intellectual Property, Baker & McKenzie, March 2015.

[57]  The patent box agreement followed work on the Base Erosion and Profit Shifting (BEPS) Project in 2014.


Overview

The Department of Labor’s (DOL) proposed overtime pay rule has been advertised as a raise in the wages of millions of U.S. workers.  However:

  • Since the exemption requires worker duties to be managerial, administrative, or professional in nature, and since most newly covered workers do not regularly work overtime, the DOL’s proposed overtime rule would only benefit about 1 million of the 4.6 million newly covered workers;
  • For all 4.6 million newly covered workers, the average weekly pay increase would be only $5.96, while for those who regularly work overtime the increase would be a mere $19.60; and
  • U.S. businesses would face the cost of reclassifying salaried workers as hourly workers, and over 10 years, businesses would face about $2.5 billion in direct costs to comply with the new rule and billions more in lost productivity.

Introduction

When the White House unveiled important details regarding DOL’s long-awaited overtime pay rule, many analysts sought to understand the impact of the rule by attempting to estimate the number of workers affected. The DOL’s proposed overtime rule would raise the minimum weekly pay threshold legally required to exempt salaried workers from overtime pay from $455 per week to the 40th percentile of earnings for full-time salaried workers. According to DOL, the 40th percentile equated to $921 per week in 2013 and will be about $970 when the rule is implemented in 2016. While the White House stated that the proposed rule would impact 5 million workers, the Economic Policy Institute projected that it would affect at least 11 million workers and we estimated that only about 3 million would benefit. According to the text of the proposed rule itself, however, everyone vastly overestimated the rule’s impact on workers. In particular, the DOL estimates that the expansion in overtime pay will only impact about 1 million workers and the actual pay increase for those workers only amounts to about $20 per week. Meanwhile, over the next decade, employers would face billions in regulatory compliance costs.

The Number of Workers the Rule Would Impact is Minimal

Under DOL regulations, there are three primary requirements to exempt a worker from overtime pay: the worker must be salaried (the salary basis test), the salary must meet a minimum level (the salary level test), and the worker’s duties must align with the definition of an executive, administrator, or professional (the duties test).[1] The major discrepancy between the official DOL figures and everyone else’s exists because of the duties requirements. While most analysts have the data to determine the number of workers who would be exempt under the salary basis and salary level tests, only researchers at the DOL have the data and expertise needed to determine who would be exempt under the duties test as well. So in determining the number of workers who the proposed rule would impact, one must identify the number of workers who are currently exempt from overtime pay and would no longer be exempt because of the rule change. Most analysts simply identify the number of workers who are salaried and earn between $455 per week and the 40th percentile of earnings. However, the DOL is able to identify who among the salaried workers in that pay range fulfill the duties test and are actually currently classified as exempt from overtime pay.
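As a minimal, purely illustrative sketch (not DOL's actual methodology), the three-part test can be expressed as a predicate; "duties_exempt" is a hypothetical stand-in for the detailed duties analysis that only DOL's data supports:

```python
# Illustrative three-part overtime exemption test described in the text.
# Thresholds are dollars per week; the proposed figure is DOL's projected
# 40th percentile of full-time salaried earnings for 2016.
CURRENT_THRESHOLD = 455
PROPOSED_THRESHOLD = 970

def overtime_exempt(salaried, weekly_pay, duties_exempt,
                    threshold=CURRENT_THRESHOLD):
    # A worker is exempt only if all three tests are satisfied:
    # salary basis, salary level, and duties.
    return salaried and weekly_pay >= threshold and duties_exempt

# A salaried manager earning $700/week is exempt under the current threshold
# but would become overtime-eligible under the proposed one.
newly_covered = (overtime_exempt(True, 700, True) and
                 not overtime_exempt(True, 700, True,
                                     threshold=PROPOSED_THRESHOLD))
```

Under this sketch, the 4.6 million newly covered workers are exactly those who pass all three tests at the old threshold but fall below the new one.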

When the DOL takes into account the duties requirement, the size of the salaried workforce subject to the rule shrinks substantially. Table 1 compares the total number of salaried workers who earn over $455 per week to the actual number who under current rules the DOL estimates are exempt from overtime pay.

Table 1: Total Salaried Workers vs Exempt Salaried Workers

Worker Category                                        | Total Salaried[2] | Not Exempt because of Duties Requirements[3] | Overtime Exempt[4]
Earn over $455 per week                                | 49.1 million      | 27.7 million                                 | 21.4 million
Earn between $455 and $921 per week                    | 16.0 million      | 11.4 million                                 | 4.6 million
Earn between $455 and $921 per week and works overtime | 2.6 million       | 1.6 million                                  | 988,000

AAF estimates that in 2013, 49.1 million salaried workers made over $455 per week. 27.7 million of those workers, however, were not exempt because of the duties requirements. As a result, according to DOL, only 21.4 million workers were actually exempt from overtime pay. Moreover, AAF estimates that in 2013, 16.0 million workers were salaried and made between $455 and $921 per week. But, of those workers 11.4 million were not exempt because of the duties requirements, leaving only 4.6 million workers who were classified as exempt. So under the new rule, the DOL would only expand overtime pay coverage to these 4.6 million workers who were exempt in 2013.
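The table's arithmetic can be reproduced directly: exempt workers equal total salaried workers minus those who fail the duties test (figures below are the Table 1 estimates, in millions of workers):

```python
# Reproduce Table 1: Overtime Exempt = Total Salaried - Not Exempt (duties).
# Counts are in millions of workers, from the 2013 estimates in the text.
table1 = {
    "over $455/week":                    (49.1, 27.7),
    "$455-$921/week":                    (16.0, 11.4),
    "$455-$921/week and works overtime": (2.6, 1.6),
}
exempt = {group: round(total - not_exempt, 1)
          for group, (total, not_exempt) in table1.items()}
# exempt -> 21.4, 4.6, and 1.0 million (the last is 988,000 in DOL's
# unrounded data)
```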

Moreover, the proposed overtime rule would only impact these 4.6 million newly covered workers if they in fact regularly work overtime. AAF estimates that in 2013 only about 2.6 million salaried workers earning between $455 and $921 per week actually worked more than 40 hours per week. 1.6 million of those overtime workers were not exempt because of the duties requirements. This leaves only 988,000 overtime workers who were exempt and under the new rule would receive time-and-a-half pay for their overtime hours. So in reality, the DOL’s proposed overtime rule would only benefit about 1 million of the 4.6 million newly covered workers.

The Rule’s Impact on Workers’ Income Would Be Negligible

If one assumes a world in which labor mandates are costless, then the impact on worker pay would be significant. The DOL estimates that the proposed rule would lead to 4.6 million additional salaried workers being eligible for time-and-a-half pay for working more than 40 hours per week. Of those workers, only 988,000 regularly work overtime. If those working overtime simply started receiving time-and-a-half pay for all overtime hours, then the potential pay increase for these workers would be quite large. According to the DOL, these 988,000 regular overtime workers averaged 11.1 hours of overtime each week and earned an average weekly salary of $743 in 2013.[5] For a 40-hour workweek, this implies a regular hourly pay rate of $18.58. Under the proposed overtime rule, these workers would earn an average $27.86 per hour ($18.58 x 1.5) for their 11.1 weekly overtime hours. As a result, their weekly pay would increase by almost $310, and those 988,000 workers together would earn an additional $15.9 billion per year.
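The "no behavioral response" arithmetic in the paragraph above can be checked in a few lines (the inputs are DOL's 2013 averages; the variable names are ours):

```python
# Naive upper-bound calculation: regular overtime workers simply receive
# time-and-a-half for all overtime hours on top of their current salary.
workers = 988_000            # regular overtime workers
weekly_salary = 743.0        # average weekly salary (2013)
overtime_hours = 11.1        # average weekly overtime hours

hourly_rate = weekly_salary / 40                   # ~ $18.58, 40-hour week
overtime_rate = 1.5 * hourly_rate                  # ~ $27.86
weekly_increase = overtime_rate * overtime_hours   # ~ $309 per week
annual_total = workers * weekly_increase * 52      # ~ $15.9 billion per year
```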

Unfortunately, the effects of expanding overtime pay coverage are not quite that simple, which the DOL itself recognizes. In particular, based on previous research, the DOL predicts that employers would pass on the cost of expanding overtime pay by reducing the base wages and weekly hours of the roughly 1 million regular overtime workers.

To evaluate the proposed overtime rule’s impact on wages and hours, the DOL breaks the 4.6 million newly covered workers into four main types, which are shown in Table 2.

Table 2: The DOL's Worker Types[6]

Category | Number    | Percent
Total    | 4,646,000 | 100.0%
Type 1   | 3,478,000 | 74.9%
Type 2   | 180,000   | 3.9%
Type 3   | 920,000   | 19.8%
Type 4   | 67,000    | 1.4%

Type 1 workers are those who do not work more than 40 hours per week and would thus not be impacted by this rule change. DOL estimates that there are 3.5 million Type 1 workers, representing 74.9 percent of all newly covered workers. Type 2 workers are those who only occasionally work overtime and would be somewhat impacted by the rule. DOL estimates that 180,000 workers fall under this category. Type 3 workers are the 920,000 people who regularly work overtime and would see the largest declines in wages and weekly hours.  Finally, Type 4 workers also regularly work overtime, but since their weekly pay is close to the new salary threshold, the DOL predicts employers would increase their weekly pay so that they remain exempt. As a result, their wages would increase and their hours would not change. Only 67,000 workers fall under this category.

Table 3 illustrates the DOL’s findings for how expanding overtime pay would impact wages and hours by worker type.

Table 3: Overtime Rule's Impact on Wages and Hours[7]

Category | Wages | Hours
Total    | -0.9% | -0.2%
Type 1   |  0.0% |  0.0%
Type 2   | -2.3% | -0.2%
Type 3   | -5.3% | -0.8%
Type 4   |  2.0% |  0.0%

DOL estimates that, on average, wages and hours for all 4.6 million workers would decline by 0.9 percent and 0.2 percent, respectively. However, since the wage and hour effects would be concentrated on those who either occasionally or regularly work overtime, these averages hide the negative consequences for some workers. For instance, regular wages would fall by 2.3 percent for Type 2 workers (those who occasionally work overtime) and by 5.3 percent for Type 3 workers (those who regularly work overtime). Meanwhile, wages for Type 1 workers (those who do not work overtime) would not change, and wages for Type 4 workers (those whose weekly pay would rise) would increase 2.0 percent.

The reduction in wages and hours significantly limits the intended benefits of the overtime rule. For instance, take Type 3 workers, who regularly work overtime. According to DOL, their average hourly wage would fall by $0.78 and their workweek would decline by 0.4 hours. As a result, their weekly base pay would fall by about $45 from about $745 per week to $700 per week. DOL estimates that with the new overtime rule, these workers would still work 10.3 overtime hours.[8] But, since workers are $45 behind, 6.5 of their overtime hours would simply go towards breaking even.  This leaves only 3.8 overtime hours that would increase worker pay.

After accounting for changes in hours and wages, DOL finds that the actual net benefit from earning overtime pay would be very small. The average per worker net weekly income change is shown in Table 4.

Table 4: Overtime Rule's Net Impact on Weekly Pay[9]

Category

Change In Weekly Pay

Total

$5.96

Type 1

$0.00

Type 2

$45.97

Type 3

$19.60

Type 4

$20.47

According to DOL, for all 4.6 million newly covered workers, the average weekly pay increase would be only $5.96. While average weekly earnings would not change for Type 1 workers, they would only increase by $19.60 for Type 3 workers, the largest group of overtime workers. Meanwhile, weekly earnings would rise by $45.97 for Type 2 workers and by $20.47 for Type 4 workers.
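Table 4's overall figure can be verified as the worker-weighted average of the per-type changes, using the counts from Table 2 (the per-type counts sum to 4,645,000, within rounding of the 4,646,000 total):

```python
# Verify Table 4: the $5.96 overall change is the worker-weighted average
# of the per-type weekly pay changes (worker counts from Table 2).
types = {
    "Type 1": (3_478_000, 0.00),
    "Type 2": (180_000, 45.97),
    "Type 3": (920_000, 19.60),
    "Type 4": (67_000, 20.47),
}
total_workers = sum(n for n, _ in types.values())
weekly_gain = sum(n * change for n, change in types.values())
weighted_avg = weekly_gain / total_workers   # ~ $5.96 per worker per week
annual_gain = weekly_gain * 52               # ~ $1.4 billion per year
```

The annualized total also reproduces the $1.4 billion aggregate income gain discussed below.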

Since per worker weekly earnings would only rise by a small amount, the DOL’s estimated aggregate income gain for these workers is less than a tenth of the $15.9 billion implied by the simple calculation above. In fact, DOL estimates that the proposed overtime rule would increase income for these workers by only $1.4 billion annually.[10] While the majority of the increase in income would benefit the workers who regularly work overtime, none of it would benefit the 75 percent of newly eligible overtime workers who do not actually work more than 40 hours per week.

Finally, it is important to note that in evaluating the impact of the proposed overtime rule, the DOL ignores highly relevant labor market research on this topic. In particular, the DOL analyzes how expanding overtime pay would impact the hours of people who continue to work overtime. However, the DOL does not take into account how expanding overtime pay would impact the number of people who work overtime. Trejo (2003) examined this issue and found that expanding overtime pay coverage not only reduced the number of overtime workers, but also reduced the number of full-time workers. Specifically, the proportion of overtime and full-time work falls by 2.5 and 1.2 percentage points, respectively. Part-time work simultaneously rises 3.7 percentage points.[11] Taking into account the negative employment consequences for full-time and overtime workers would surely further dampen the DOL’s already minimal income estimates.

Business and the Economy Face Major Costs

While this proposed rule’s actual benefits for workers would be minimal, businesses would face significant compliance costs and the economy would suffer from lost productivity. These burdens include the costs of reclassifying workers from salaried to hourly, tracking worker hours and keeping time sheets, and rearranging work schedules. According to a previous AAF analysis, the proposed overtime rule would impose about $255 million in direct employer costs each year. Over 10 years, businesses would face about $2.5 billion in direct costs to comply with the new rule.[12]

Moreover, the overtime rule’s impact on hours would result in a significant reduction in productivity. According to the DOL, the proposed rule would cause the weekly hours of employees who regularly work overtime to fall by 0.4 hours per week, or 0.8 percent. This may seem small. But AAF previously found that, spread across the nearly 1 million workers who would lose hours, it adds up to over 20 million labor hours lost annually. As a result, the economy would lose about $1.3 billion every year in productivity.[13] So the $1.4 billion in additional worker pay would be entirely offset by the $1.3 billion in lost productivity and the more than $200 million in direct compliance costs every year.
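The lost-hours arithmetic can be reproduced as follows (the worker count and hours reduction are the DOL figures cited above; the net-effect line simply nets the three annual figures from this section):

```python
# Annual labor hours lost: ~1 million regular overtime workers each losing
# about 0.4 hours per week, over 52 weeks.
workers = 988_000
hours_lost_per_week = 0.4
annual_hours_lost = workers * hours_lost_per_week * 52   # ~ 20.6 million hours

# Netting the section's annual figures: $1.4B pay gain, $1.3B productivity
# loss, and ~$255M direct compliance costs -> the gains are fully offset.
net_annual_effect = 1.4e9 - 1.3e9 - 255e6
```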

Conclusion

The DOL’s new overtime rules have been celebrated as a way to increase incomes for hardworking low- and middle-income families. Unfortunately, a lack of proper data and expertise has led to a great deal of misinformation regarding the rule’s reach into the labor force, as most analysts vastly overestimated the number of workers the proposed rule would impact. A careful reading of the rule itself, however, reveals that the proposal is much ado about very little for workers: the DOL itself expects that only about 1 million workers would actually see an increase in weekly pay. Furthermore, the DOL projects that most of those 1 million workers would only receive an additional $20 per week, hardly a transformative raise. Meanwhile, the rule would impose billions in compliance costs and burden the economy with fewer work hours and lost productivity. Since 2009, the Obama Administration has issued over $700 billion worth of regulations that include 475 million paperwork burden hours. AAF has shown over and over again that these regulations cumulatively reduce employment and pay. Instead of issuing yet another costly regulation, perhaps the best way to promote employment and wage growth is to slow the rising cost of regulatory burdens.



[1] 29 CFR Part 541, “Defining and Delimiting the Exemptions for Executive, Administrative, Professional, Outside Sales and Computer Employees; Final Rule,” Preamble, Federal Register, Department of Labor, Wage and Hour Division, 2004, available at http://www.dol.gov/whd/overtime/preamble.pdf

[2] Author’s analysis of 2013 data from the 2008 Survey of Income and Program Participation

[3] Total Salaried minus Overtime Exempt

[4] 29 CFR Part 541, “Defining and Delimiting the Exemptions for Executive, Administrative, Professional, Outside Sales and Computer Employees; Proposed Rule,” Federal Register, Department of Labor, Wage and Hour Division, July 2015, pg. 38,564, http://www.regulations.gov/#!documentDetail;D=WHD-2015-0001-0001

[5] Proposed rule, pg. 38,564

[6] Proposed rule, pg. 38,574

[7] Author’s analysis of Tables 21 and 22 of the proposed rule, pg. 38,575.

[8] Proposed rule, pg. 38,575

[9] Proposed rule, pg. 38,576

[10] Proposed rule, pg. 38,577

[11] Stephen J. Trejo, “Does the Statutory Overtime Premium Discourage Long Workweeks?” Industrial and Labor Relations Review, Vol. 56, No. 3, April 2003, p. 542, http://digitalcommons.ilr.cornell.edu/ilrreview/vol56/iss3/10/?referer=http%3A%2F%2Famericanactionforum.org%2Fresearch%2Fprimer-overtime-pay-regulation

[12] Dan Goldbeck, “’White Collar’ Overtime Expansion,” American Action Forum, July 2015, http://americanactionforum.org/regulation-review/white-collar-overtime-expansion

[13] Ibid.

 
Related Research

Atkinson, R. D. and S. Andes, Patent Boxes: Innovation in Tax Policy and Tax Policy for Innovation, The Information Technology & Innovation Foundation, October, 2011.

Aylott, Colin, UK: What is the Patent Box Regime? Smith & Williamson, April 9, 2012.

Bevington, Mark, Nigel Dolman, and Michelle Blunt, Green Light for New Approach to Patent Boxes, Tax/Intellectual Property, Baker & McKenzie, March 2015.

Bohm, Tobias, Tom Karkinsky and Nadine Riedel, The Impact of Corporate Taxes on R&D and Patent Holdings, presented to the Norwegian-German Seminar on Public Economics, June 1, 2012.

Cant, Michael and Adam Singer, Patent Box: a new tax rate for IP, Tax Briefing, February 2011.

Deloitte, 2014 Global Survey of R&D Tax Incentives, March 2014.

Deloitte, Patent box regimes What’s inside? (Opportunities for Canada), R&D Tax Update, 12-2, March 5, 2012.

Devereux, M., B. Lockwood, and M. Redoano, Do Countries Compete over Corporate Tax Rates? Journal of Public Economics, Vol. 92, 2008.

Dischinger, M. and N. Riedel, Corporate taxes and the location of intangible assets within multinational firms, Journal of Public Economics, Vol. 95, 2011.

Ernst & Young, Patent Income Deduction, Attractive Tax Regime Confirms and Strengthens Belgium as a Prime Location for IP and R&D, 2011.

Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, International Tax and Public Finance, Volume 21, Number 3, June 2014.

Evers, L., H. Miller and C. Spengel, Intellectual Property Box Regimes: Effective Tax Rates and Tax Policy Considerations, Discussion Draft 13-070, Center for European Economic Research, 2013.

Freshfields Bruckhaus Deringer LLP, UK patent box: EU decision deferred, December 2013.

Griffith, Rachel, Helen Miller, and M. O’Connell, Ownership of intellectual property and corporate taxation. Journal of Public Economics, Vol. 112, 2014.

Griffith, Rachel, Helen Miller, and M. O’Connell, Corporate Taxes and the Location of Intellectual Property, CEPR Discussion Paper #8424, 2011.

Griffith, Rachel and Helen Miller, Patent Boxes: An Innovative Way to Race to the Bottom? VOX, CEPR’s Policy Portal, June 30, 2011.

Griffith, Rachel and Helen Miller, Support for Research and Innovation, in R. Chote, C. Emmerson, and J. Shaw, eds., The IFS Green budget: February 2010, IFS Commentary 112, 2010.

Griffith, Rachel, R. Harrison, and Van Reenen, J., How special is the special relationship? Using the impact of US R&D spillovers on UK firms as a test of technology sourcing. American Economic Review, Vol. 96, 2006

Grubert, H., Intangible income, intercompany transactions, income shifting, and the choice of location, National Tax Journal Part 2, Vol. 56, 2003.

Hall, Bronwyn, and John Van Reenen, How Effective Are Fiscal Incentives for R&D? A Review of the Evidence, Research Policy, vol. 29, May 2000.

Related Research

Summary

U.S. Patent and Trademark Office (USPTO) Activities

  • In 2014, the U.S. Patent and Trademark Office granted approximately 300,000 utility patents.
  • About one-half (48 percent) of these were of U.S. origin.
  • Corporations accounted for more than 90 percent of the patents the USPTO granted in 2014.
  • Japan was the foreign country with the most patents granted (about 54,000), followed by Germany and South Korea (about 16,500 each).
  • Thirty-two corporations with 1,000 or more patents granted in 2014 accounted for more than one-fifth of total patents.
  • IBM was granted the most patents (7,481).
  • Following IBM were: Samsung (4,936), Canon (4,048), Sony (3,214), Microsoft (2,829), Qualcomm (2,586), and Google (2,566).
  • In the U.S., California accounted for the largest share of patent grants (28.1 percent), followed by Texas (6.9 percent) and New York (6.2 percent).
  • Active solid-state devices were the technology class with the most patents granted in 2014.
  • Other technologies receiving large numbers of patents in 2014 involved communications, computers, chemistry, biology, and data processing applications.
  • For 2012, the last year for which NAICS industry breakdowns were available, the industries with the most patents were Computers (16.2 percent), Communications (12.7 percent), and Semiconductors (12.6 percent).
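The shares in these bullets can be checked directly against the counts in Tables 4, 5, and 7 below; a quick arithmetic sketch in Python:

```python
# Recompute the headline shares from the USPTO counts reported in the tables.
total_grants = 300_678            # all U.S. utility patents granted, 2014
us_origin = 144_621               # U.S.-origin grants (Table 4)
corporate = 136_355 + 143_719     # U.S. plus foreign corporations (Table 5)
top_32 = 65_345                   # organizations with 1,000+ grants (Table 7)

us_share = us_origin / total_grants       # about one-half
corp_share = corporate / total_grants     # more than 90 percent
top32_share = top_32 / total_grants       # more than one-fifth
print(f"{us_share:.1%}, {corp_share:.1%}, {top32_share:.1%}")  # 48.1%, 93.1%, 21.7%
```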

World Intellectual Property Organization (WIPO)

Forthcoming

Types of Patents

Utility Patent

Issued for the invention of a new and useful process, machine, manufacture, or composition of matter, or a new and useful improvement thereof, a utility patent generally permits its owner to exclude others from making, using, or selling the invention for a period of up to twenty years from the date of patent application filing, subject to the payment of maintenance fees. Approximately 90 percent of the patent documents issued by the USPTO in recent years have been utility patents, also referred to as "patents for invention."

Design Patent

Issued for a new, original, and ornamental design embodied in or applied to an article of manufacture, a design patent permits its owner to exclude others from making, using, or selling the design for a period of fourteen years from the date of patent grant. Design patents are not subject to the payment of maintenance fees. Note that the fourteen-year term of a design patent is subject to change in the near future.

Table 4 – U.S. Utility Patent Applications and Grants, 2014

Applications and Grants               Number     Percentage
Utility Patent Applications [1]
  U.S. Origin                        287,086       49.6%
  Foreign Origin                     291,716       50.4%
  Total, U.S. Patent Applications    578,802      100.0%
Utility Patent Grants
  U.S. Origin                        144,621       48.1%
  Foreign Origin                     156,057       51.9%
  Total, U.S. Patent Grants          300,678      100.0%

Source: U.S. Patent and Trademark Office

[1] Breakdown between U.S. and foreign origin is estimated using shares from 2013.

 
Table 5 – U.S. Utility Patents Granted, by Organizational Type, 2014

Organization                               Number     Percentage
U.S. Corporation                          136,355       45.3%
U.S. Government                             1,024        0.3%
U.S. Individual                            13,869        4.6%
Foreign Corporation                       143,719       47.8%
Foreign Government                            321        0.1%
Foreign Individual                          5,390        1.8%
Total, All U.S. Utility Patents Granted   300,678      100.0%

Source: U.S. Patent and Trademark Office

 
Table 6 – U.S. Utility Patents Granted, by Country of Origin (Foreign-Origin Grants), 2014

Country                                          Number    Percentage
JAPAN                                            53,849      34.5%
GERMANY                                          16,550      10.6%
KOREA, SOUTH                                     16,469      10.6%
TAIWAN                                           11,332       7.3%
FRANCE                                            6,691       4.3%
UNITED KINGDOM                                    6,487       4.2%
CANADA                                            7,043       4.5%
ITALY                                             2,628       1.7%
SWITZERLAND                                       2,398       1.5%
SWEDEN                                            2,767       1.8%
NETHERLANDS                                       2,505       1.6%
CHINA, PEOPLE'S REPUBLIC OF                       7,236       4.6%
ISRAEL                                            3,471       2.2%
AUSTRALIA                                         1,693       1.1%
FINLAND                                           1,338       0.9%
BELGIUM                                           1,220       0.8%
INDIA                                             2,987       1.9%
AUSTRIA                                           1,180       0.8%
DENMARK                                           1,051       0.7%
SINGAPORE                                           946       0.6%
SPAIN                                               789       0.5%
CHINA, HONG KONG S.A.R.                             606       0.4%
NORWAY                                              547       0.4%
RUSSIAN FEDERATION                                  445       0.3%
IRELAND                                             467       0.3%
NEW ZEALAND                                         261       0.2%
BRAZIL                                              334       0.2%
SOUTH AFRICA                                        152       0.1%
MALAYSIA                                            259       0.2%
MEXICO                                              172       0.1%
HUNGARY                                             159       0.1%
SAUDI ARABIA                                        294       0.2%
CZECH REPUBLIC                                      182       0.1%
ARGENTINA                                            71       0.0%
LUXEMBOURG                                           43       0.0%
POLAND                                              162       0.1%
Others (136)                                      1,273       0.8%
Total, Foreign-Origin Utility Patents Granted   156,057     100.0%

Source: U.S. Patent and Trademark Office

 
Table 7 – U.S. Utility Patents Granted, by Organization, 2014
Organizations with 1,000 or more Utility Patents Granted

Organization                                                  Number   Percent of Total U.S. Utility Patents
INTERNATIONAL BUSINESS MACHINES CORPORATION                    7,481      2.5%
SAMSUNG ELECTRONICS CO., LTD.                                  4,936      1.6%
CANON KABUSHIKI KAISHA                                         4,048      1.3%
SONY CORPORATION                                               3,214      1.1%
MICROSOFT CORPORATION                                          2,829      0.9%
QUALCOMM, INC.                                                 2,586      0.9%
GOOGLE, INC.                                                   2,566      0.9%
TOSHIBA CORPORATION                                            2,537      0.8%
LG ELECTRONICS INC.                                            2,119      0.7%
PANASONIC CORPORATION                                          2,079      0.7%
APPLE, INC.                                                    2,003      0.7%
GENERAL ELECTRIC COMPANY                                       1,858      0.6%
FUJITSU LIMITED                                                1,812      0.6%
SEIKO EPSON CORPORATION                                        1,660      0.6%
RICOH COMPANY, LTD.                                            1,634      0.5%
INTEL CORPORATION                                              1,573      0.5%
HEWLETT-PACKARD DEVELOPMENT COMPANY, L.P.                      1,573      0.5%
TELEFONAKTIEBOLAGET L M ERICSSON (PUBL.)                       1,537      0.5%
SAMSUNG DISPLAY CO., LTD.                                      1,500      0.5%
GM GLOBAL TECHNOLOGY OPERATIONS LLC                            1,470      0.5%
TAIWAN SEMICONDUCTOR MANUFACTURING CO., LTD.                   1,446      0.5%
TOYOTA JIDOSHA K.K.                                            1,362      0.5%
BLACKBERRY LIMITED                                             1,328      0.4%
AT&T INTELLECTUAL PROPERTY I, L.P.                             1,295      0.4%
BROADCOM CORPORATION                                           1,197      0.4%
SEMICONDUCTOR ENERGY LABORATORY CO., LTD.                      1,177      0.4%
BROTHER KOGYO KABUSHIKI KAISHA                                 1,142      0.4%
HONDA GIKEN KOGYO KABUSHIKI KAISHA (HONDA MOTOR CO., LTD.)     1,099      0.4%
CISCO TECHNOLOGY, INC.                                         1,095      0.4%
SHARP KABUSHIKI KAISHA (SHARP CORPORATION)                     1,082      0.4%
MICRON TECHNOLOGY, INC.                                        1,067      0.4%
SIEMENS AKTIENGESELLSCHAFT                                     1,040      0.3%
TOTAL                                                         65,345     21.7%

Note: U.S. utility patents totaled 300,678 in 2014.

Source: U.S. Patent and Trademark Office

 
Table 8 – U.S. Utility Patents Granted, by State, 2014

State                                         Number   Percent of Total
ALABAMA                                          500       0.3%
ALASKA                                            49       0.0%
ARIZONA                                        2,517       1.7%
ARKANSAS                                         204       0.1%
CALIFORNIA                                    40,661      28.1%
COLORADO                                       3,184       2.2%
CONNECTICUT                                    2,309       1.6%
DELAWARE                                         442       0.3%
FLORIDA                                        4,210       2.9%
GEORGIA                                        2,669       1.8%
HAWAII                                           136       0.1%
IDAHO                                          1,012       0.7%
ILLINOIS                                       5,106       3.5%
INDIANA                                        2,049       1.4%
IOWA                                           1,000       0.7%
KANSAS                                           960       0.7%
KENTUCKY                                         646       0.4%
LOUISIANA                                        434       0.3%
MAINE                                            212       0.1%
MARYLAND                                       1,851       1.3%
MASSACHUSETTS                                  6,725       4.7%
MICHIGAN                                       5,306       3.7%
MINNESOTA                                      4,626       3.2%
MISSISSIPPI                                      153       0.1%
MISSOURI                                       1,257       0.9%
MONTANA                                          115       0.1%
NEBRASKA                                         364       0.3%
NEVADA                                           834       0.6%
NEW HAMPSHIRE                                    889       0.6%
NEW JERSEY                                     5,036       3.5%
NEW MEXICO                                       423       0.3%
NEW YORK                                       8,904       6.2%
NORTH CAROLINA                                 3,411       2.4%
NORTH DAKOTA                                     104       0.1%
OHIO                                           3,755       2.6%
OKLAHOMA                                         572       0.4%
OREGON                                         2,391       1.7%
PENNSYLVANIA                                   4,091       2.8%
RHODE ISLAND                                     363       0.3%
SOUTH CAROLINA                                   907       0.6%
SOUTH DAKOTA                                     115       0.1%
TENNESSEE                                      1,060       0.7%
TEXAS                                         10,022       6.9%
UTAH                                           1,374       1.0%
VERMONT                                          578       0.4%
VIRGINIA                                       2,078       1.4%
WASHINGTON                                     6,448       4.5%
WEST VIRGINIA                                    134       0.1%
WISCONSIN                                      2,107       1.5%
WYOMING                                          122       0.1%
DISTRICT OF COLUMBIA                             161       0.1%
GUAM                                               3       0.0%
PUERTO RICO                                       38       0.0%
U.S. VIRGIN ISLANDS                                4       0.0%
Total, U.S.-Origin Utility Patents Granted   144,621     100.0%

Source: U.S. Patent and Trademark Office

 
Table 9 – Ten Largest Number of U.S. Patents Granted, by Technology Class, 2014

Class   Technology                                           Number
257     Active Solid-State Devices                           15,581
370     Multiplex Communications                             15,559
455     Telecommunications                                   13,011
708     Electrical Computers: Arithmetic Processing          11,257
438     Semiconductor Device Manufacturing                   10,746
514     Drug, Bio-Affecting and Body-Treating Composition     8,308
435     Chemistry: Molecular Biology and Microbiology         7,669
707     Data Processing: Database                             7,334
382     Image Analysis                                        7,299
705     Data Processing: Finance                              7,024
        Total                                               103,788

Source: U.S. Patent and Trademark Office

 
Table 10 – U.S. Utility Patents Granted, by NAICS Industry Class, 2012

NAICS Code              Industry                                                                      Number   Percentage
311                     Food                                                                             436      0.2%
312                     Beverage and Tobacco Products                                                    174      0.1%
313-316                 Textiles, Apparel and Leather                                                  1,586      0.6%
321                     Wood Products                                                                    412      0.2%
322, 323                Paper, Printing and Support Activities                                           767      0.3%
3251                    Basic Chemicals                                                                7,240      2.9%
3252                    Resin, Synthetic Rubber, and Artificial and Synthetic Fibers and Filaments     1,291      0.5%
3254                    Pharmaceuticals and Medicines                                                  7,859      3.1%
3253, 3255, 3256, 3259  Other Chemical Product and Preparation                                         6,924      2.7%
326                     Plastics and Rubber Products                                                   5,558      2.2%
327                     Nonmetallic Mineral Products                                                   2,579      1.0%
331                     Primary Metal                                                                    921      0.4%
332                     Fabricated Metal Products                                                      9,099      3.6%
333                     Machinery                                                                     26,601     10.5%
3341                    Computer and Peripheral Equipment                                             40,939     16.2%
3342                    Communications Equipment                                                      32,122     12.7%
3344                    Semiconductors and Other Electronic Components                                31,823     12.6%
3345                    Navigational, Measuring, Electromedical, and Control Instruments              25,250     10.0%
3343, 3346              Other Computer and Electronic Products                                         8,106      3.2%
335                     Electrical Equipment, Appliances, and Components                              16,255      6.4%
3361-3363               Motor Vehicles, Trailers and Parts                                             6,090      2.4%
3364                    Aerospace Product and Parts                                                    2,161      0.9%
3365, 3366, 3369        Other Transportation Equipment                                                 1,231      0.5%
337                     Furniture and Related Products                                                   684      0.3%
3391                    Medical Equipment and Supplies                                                 8,018      3.2%
339 (except 3391)       Other Miscellaneous                                                            9,029      3.6%
                        Total, All U.S. Utility Patents Granted                                      253,155    100.0%

Source: U.S. Patent and Trademark Office

Related Research

This research was prepared for AAF by Jordan F. Howard, J.D.*

The Basics of Intellectual Property Rights

In the United States there are two types of property: tangible and intangible. Tangible property includes physical objects, such as land or a house. Intangible property consists of things that do not necessarily have a physical form but can be commercially transferred, such as stocks or custom computer software. All intellectual property is intangible property. The purpose of intellectual property law is to facilitate innovation and the spread of knowledge, while promoting fairness and certainty.

Intellectual property (IP) is a legal term for creations of the mind.[1] "Creation of the mind" is a broad term, covering a vast array of individual categories and activities. In the U.S., a person has a legal right to protect his or her IP.

Stealing another person's car is theft, but taking another person's IP is called infringement, and courts decide what constitutes it. There are many ways to infringe on a person's IP rights, including but not limited to software piracy, plagiarism, licensing violations, and the theft of corporate secrets; of these, licensing violations are the most common.[2] A person protects physical property with a lock or an alarm system, but protects IP with a copyright, trademark, trade secret, or patent.[3]

A copyright is a safeguard designed to protect creators of "original works of authorship," whether published or unpublished.[4] For example, if a person found a printout of software code lying on a table at Starbucks, he could not use or sell its contents for financial gain; it remains the original author's work. Copyright protection applies the instant the original work is fixed in a tangible form, such as being written down (not merely conceived).[5] The moment the author writes it down, it is the author's property and copyright protected, though there must be some proof of authorship; simply stating "it was my idea" is insufficient. This type of IP right frequently protects academics and artists, and plagiarism is a common form of copyright infringement. A copyright lasts for the duration of the author's life plus seventy years.[6]

A trademark can be understood as a business's unique identifier that lets a consumer know precisely which business they are dealing with. The U.S. Patent and Trademark Office (USPTO) describes a trademark as "any word, name, symbol, or device, or any combination, used, or intended to be used, in commerce to identify and distinguish the goods of one manufacturer or seller from goods manufactured or sold by others."[7] This protection is particularly useful because the relationship between buyers and businesses is increasingly impersonal and at arm's length. It benefits businesses by protecting them from other sellers free-riding on the reputation they have established; think of the many knock-off items sold abroad, such as fake Apple products like iPhones® sold throughout China. Trademark protection can last as long as the trademark is used in commerce.

A trade secret protects a wide range of a company's particular methods of generating business or a product. The U.S. Code broadly defines a trade secret as "all forms and types of financial, business, scientific, technical, economic, or engineering information, including patterns, plans, compilations, program devices, formulas, designs, prototypes, methods, techniques, processes, procedures, programs, or codes, whether tangible or intangible, and whether or how stored, compiled, or memorialized physically, electronically, graphically, photographically, or in writing." Protection applies if three conditions are met: (1) the information is not generally known to the public; (2) it confers some economic benefit on its holder; and (3) its holder makes reasonable efforts to keep it secret.[8] Examples of trade secrets include Coca-Cola's recipe or a computer company's software source code. A trade secret may last indefinitely, so long as the secret is "commercially valuable, its value derives from the fact that it is secret, and the owner takes reasonable precautions to maintain its secrecy."[9]

A patent provides the inventor with a limited-time monopoly over the use of a discovery in exchange for informing the public of the invention.[10] The patent holder owns the rights, profits from the invention, and determines how and in what manner it is sold. The rationale for patent law is a social contract between the individual and the public: society should compensate a person who has created a beneficial service. Simply put, patent protection is about fairness. The USPTO defines a patent as "the grant of a property right to the inventor" that gives the owner the power to "exclude others from making, using, offering for sale, selling, or importing the invention." The USPTO will only grant patents for inventions that are: 1) new; 2) not obvious to the average person working in the field of the invention; 3) not merely a natural phenomenon; and 4) of some minimal utility.[11] There are three primary types of patents the USPTO will grant: utility patents, design patents, and plant patents.[12] The graph below shows the types of patents granted by the USPTO from 1790 to 2010[13]:

Graph 1 – Patents Granted by the USPTO, 1790 to 2010

A utility patent is issued for inventing a new and useful process, machine, manufacture, or composition of matter, or a new and useful improvement thereof.[14] The vast majority of all patents granted by the USPTO are utility patents. These are the inventions and methods one typically thinks of when thinking of patent protection; Thomas Edison's light bulb and Apple's silent-button feature are both examples. Design patents, by contrast, protect the way an invention looks: they are issued for a new, original, and ornamental design embodied in or applied to an article of manufacture.[15] Examples of design patents include computer icons and soft drink bottles. Lastly, plant patents are issued for a new variety of asexually reproduced plant. Drought-resistant plants created by Monsanto for use in farming are an example of a plant patent.

Patent Application

A patent is granted by the USPTO and generally lasts for 20 years from the date of the application; a design patent, however, lasts fourteen years from the date of grant. A patent is applied for by submitting a patent application to the USPTO, which must conform to the federal laws, rules, and guidelines outlined in the Manual of Patent Examining Procedure.[16] While a utility patent can last up to 20 years, three maintenance fees must be paid at 3 to 3.5 years, 7 to 7.5 years, and 11 to 11.5 years after the date of issue. These fees cannot be paid early, and if they are not paid on time the patent protection will expire.
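The fee windows just described can be computed mechanically from the issue date. A minimal sketch (the function names and the window semantics, six-month payment windows opening at 3, 7, and 11 years, are illustrative assumptions, not USPTO terminology):

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    # Advance a date by whole months, clamping the day to the month's length.
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

def maintenance_windows(issue_date: date) -> list[tuple[date, date]]:
    # Three fees, each payable during the six months before the 3.5-, 7.5-,
    # and 11.5-year due dates (i.e., windows opening at 3, 7, and 11 years).
    return [(add_months(issue_date, m), add_months(issue_date, m + 6))
            for m in (36, 84, 132)]

windows = maintenance_windows(date(2014, 1, 15))
print(windows[0])  # first window: 2017-01-15 through 2017-07-15
```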

Figure 1 – Patent Examining Procedure

The figure above, published by the USPTO, lists the steps to be taken in order to apply for a design, utility, or plant patent.[17] While not required, the USPTO recommends that an applicant hire a patent attorney when applying for a patent.

The USPTO estimates that the process of applying for and being granted a patent takes between one and four years, with the average wait being three years.[18] The USPTO decides whether to grant a patent by considering whether the application fully complies with the patent application requirements.[19] To be patentable, an invention or process must be "useful, novel and nonobvious."[20] An invention is useful if it is "operable and provides a tangible benefit."[21] It satisfies the novelty requirement if it is not "fully anticipated by prior patent, publication, or other state-of-the-art knowledge," which is known as prior art.[22] The nonobviousness requirement is met when the invention would not be readily apparent to an expert, or skilled person, in the same field.[23] Patent applicants must also include a definition of what the applicant believes would constitute infringement upon the invention, and the application must disclose the best mode, or favored way, to use the invention. There is a one-year grace period from when the inventor announces his invention to the public, during which the inventor can decide whether to file for a patent to protect his rights.[24] This grace period is designed to help "encourage early public disclosure of new inventions"[25] while also providing a safeguard for inventors.

The USPTO allows for the filing of a provisional patent application that secures the applicant's spot (priority) in line by establishing his filing date.[26] A provisional patent application need not be submitted with all supporting documents. However, the one-year clock begins at that moment, and the applicant must file a standard non-provisional patent application within the one-year period or the application will be deemed abandoned.[27] The figure below helps illustrate how both provisional and non-provisional applications fit into the patent process[28]:

Figure 2 – Patent Processes
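The one-year deadline in this process can be sketched in a few lines, assuming the conventional rule that a provisional filing must be followed by a non-provisional within one year (the function name is illustrative):

```python
from datetime import date

def nonprovisional_deadline(provisional_filing: date) -> date:
    # A provisional application holds its priority date for one year; if no
    # non-provisional application is filed by then, it is deemed abandoned.
    try:
        return provisional_filing.replace(year=provisional_filing.year + 1)
    except ValueError:  # provisional filed on February 29 of a leap year
        return provisional_filing.replace(year=provisional_filing.year + 1, day=28)

print(nonprovisional_deadline(date(2014, 3, 10)))  # 2015-03-10
```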

Patents are not self-enforcing. If there is a patent infringement, a lawsuit, or legal complaint, must be filed to enforce the patent holder's rights and stop the violator's behavior. State courts do not have jurisdiction to hear patent claims; it is the federal courts that have jurisdiction over patent cases and decide the outcome of patent lawsuits.[29] The table below lists the fees associated with patents for the different entities[30]:

Table 1 – Fees Associated with Patents

Fee Description                   General Fee   Small Entity   Micro Entity
Basic Utility Patent Filing            $280          $140            $70
Provisional Patent Filing              $260          $130            $65
Utility Patent Search                  $600          $300           $150
Utility Patent Examination             $720          $360           $180
3.5-Year Patent Maintenance Fee      $1,600          $800           $400
7.5-Year Patent Maintenance Fee      $3,600        $1,800           $900
11.5-Year Patent Maintenance Fee     $7,400        $3,700         $1,850
 

The International Property Rights Index (IPRI) reports on and compares the status of property rights in countries across the world.[31] The study is based on three core components: the legal and political environment, physical property rights, and intellectual property rights. The intellectual property rights component comprises the protection of intellectual property rights, patent protection, and copyright piracy. The IPRI ranks the United States seventeenth, in the second quartile, in the protection of intellectual property.

Table 2 – International Property Rights Rankings, 2014

Source: Intellectual Property Rights Index 

Figure 3 – International Property Rights Index, 2014


Source: Intellectual Property Rights Index

History of Patent Law

Patents are an old concept that predates the founding of the United States: the first known patent was issued in England in 1331,[32] and the Massachusetts Bay Colony issued the earliest known patent in America in the 1640s.[33] The congressional power to create patent laws derives from Article I, Section 8, Clause 8 of the U.S. Constitution, which states that Congress has the power "to promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries."[34] U.S. patent laws have been significantly revised only a few times since the first Patent Act of 1790, such as in 1793, 1836, 1839, 1939, 1952, 1994, 1999, and 2011.[35]

The first federal Patent Act of 1790 was entitled "An Act to promote the Progress of Useful Arts."[36] At the time, "useful arts" generally meant the work of skilled workers and artisans, particularly engineers and manufacturers.[37] Any two of the Secretary of War, Secretary of State, or Attorney General were given the power to grant a patent for a useful invention for up to fourteen years.

The Patent Act of 1793 expanded the definition of "useful arts" beyond the narrow scope of engineers and manufacturers to more general skills. The Act now covered "any new and useful art, machine, manufacture or composition of matter and any new and useful improvement on any art, machine, manufacture or composition of matter."[38] An applicant had to submit a clear written description of the invention and the manner of using or creating it, in order to "distinguish the same from all other things before known, and to enable any person skilled in the art or science, of which it is a [part], or with which it is most nearly connected, to make, compound, and use the same."[39] The Act was also the first to recognize potentially conflicting patents: it stated that a patent on a particular improvement of a previously patented invention did not give the holder of the improvement patent any right to use the invention that was the subject of the original patent, or vice versa.[40]

The Patent Act of 1836 stemmed from criticisms that patents were being granted for things that lacked novelty.[41] The Act now required the inventor to distinguish his invention, raising the required standard to "particularly specify and point out the part, improvement, or combination, which he claims as his own invention or discovery."[42] The Act also established the Patent Office as a distinct organization within the State Department.

In 1839, the law was amended to provide a "two-year grace period for publication, or use of the invention, by the applicant before the filing of his patent application."[43] In 1849, the Patent Office was transferred from the State Department to the Department of the Interior.[44] In 1850, the U.S. Supreme Court introduced the requirement that an invention be "non-obvious," along with the previous requirements of being both new and useful.[45]

In 1887, the United States joined the Paris Convention for the Protection of Industrial Property, requiring members to give patent applicants who were nationals of one member state the right to file an application in their own country and to have the date of that filing count as the effective filing date in the other member states.[46] In 1925, the Patent Office was transferred from the Department of the Interior to the Department of Commerce. In 1930, the Plant Patent Act created the patent for plants invented by the applicant. In 1939, the two-year grace period was reduced to one year.

The 1952 Patent Act was the "culmination of 160 years of developing patent law, selectively incorporating some of the provisions in prior statutes, codifying sensible judicial precedents."[47] The Act laid out the basic format of present-day patent law.[48] Patent applications now had to cover inventions that were "non-obvious" and include a definition of what the applicant believed would be infringement and a description of elements in functional terms; the Act also loosened the requirements for joint inventors to file.[49]

In 1994, the law was amended by the Uruguay Round Agreements Act, which implemented an international agreement imposing minimum standards of patent protection.[50] Among other things, the Act changed U.S. law with regard to the minimum duration of patent protection.[51]

America Invents Act

The Leahy–Smith America Invents Act (AIA), enacted in 2011, made both substantive and procedural changes to the U.S. patent process.[52] The act, which runs more than 150 pages and 137 sections, has been described as "the most comprehensive revision of U.S. patent law in more than 50 years,"[53] and even the USPTO has called it one of the most significant patent laws enacted since 1836.[54] The reasoning behind the revision was to "promote harmonization of the United States patent system with… nearly all other countries with whom the United States conducts trade and thereby promote greater international uniformity and certainty in the procedures used for securing the exclusive rights of inventors to their discoveries."[55] It thus recognized that the U.S. was out of step with the rest of the world and that U.S. patent holders were vulnerable to conflict with international patent enforcement provisions.

Another reason for the passage of the AIA was to address speculative patent litigation ("trolling") by people who might press unwarranted allegations of patent infringement in order to seek monetary gain through the threat of enforcement.[56] Patent trolls are also known as non-practicing entities, patent assertion entities, or patent holding companies. Suppose, for example, a company develops a mobile app that allows customers to use a "Buy" button to purchase inventory, and that, unknown to the company, a patent troll holds the design or utility patent on the "Buy" button. The patent troll can take the company to court, insisting it pay a licensing fee on every sale made using the button.[57]

The Boston University School of Law conducted a study, "The Direct Costs for Non-Practicing Entity Disputes," on the cost that patent trolls impose on the United States economy. The study found that patent trolls cost American businesses more than $29 billion in 2011, up from $7 billion in 2005.[58] Patent trolls typically use the complexity of patent law against businesses that may have an inferior understanding of it,[59] and thus usually avoid suing larger companies. The study found that small and medium-sized entities made up 90 percent of the companies sued and accounted for 59 percent of the defenses in 2011.[60]

The most important reform brought about by the AIA is the change in the right of priority from the old "first-to-invent" to the new "first-inventor-to-file" system.[61] Patent priority rights deal with the situation in which two applicants file for nearly identical inventions,[62] so the USPTO must decide which applicant is first in line and has the right to a patent. Before the AIA, the United States was the only country to follow the "first-to-invent" system of priority: when there were conflicting patent claims, this system sought to establish who the original and true inventor was.[63] It was a complex system involving extensive fact-finding, testimony, and a great deal of uncertainty. By contrast, the first-inventor-to-file system grants priority to the first inventor to file a patent application with the USPTO,[64] which greatly reduces transaction costs and increases certainty in patent applications. Congress was clear that priority goes to the first inventor to file with the USPTO, meaning a non-inventor applicant will not be granted a patent.[65] The figure below illustrates the priority system before and after the AIA for two inventors who apply for a patent on the same invention[66]:

Figure 4 – Patent Filing Priority System before and after the AIA
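The difference the figure illustrates reduces to which date controls. A toy comparison (the tuple layout and regime names are illustrative, not legal terms of art):

```python
from datetime import date

def priority(app_a, app_b, regime="first-inventor-to-file"):
    # Each application is (inventor, invention_date, filing_date). Pre-AIA,
    # the earlier invention date wins; post-AIA, the earlier filing date wins.
    # A sketch only: real priority disputes involved far more fact-finding.
    index = 1 if regime == "first-to-invent" else 2
    return min(app_a, app_b, key=lambda app: app[index])[0]

a = ("Inventor A", date(2012, 1, 5), date(2012, 6, 1))  # invented first
b = ("Inventor B", date(2012, 2, 1), date(2012, 4, 1))  # filed first
print(priority(a, b, "first-to-invent"))         # Inventor A
print(priority(a, b, "first-inventor-to-file"))  # Inventor B
```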

U.S. patent law has a long history of encouraging inventors to market their inventions and to notify the general public of their patent rights, and the AIA reaffirms this commitment.[67] After an inventor makes his invention known to the public, the AIA provides a one-year grace period during which the inventor can decide whether to file for a patent to protect his rights.[69] The grace period is designed to help "encourage early public disclosure of new inventions"[68] while still providing a safeguard for inventors to patent an invention that is now public knowledge.

A fundamental area of patent law is what falls under the definition of prior art.[70] The AIA added a somewhat vague phrase to the end of that definition. Prior to the AIA, a patent could be revoked by showing that the invention did not meet the requirement of being "non-obvious";[71] in determining obviousness, the USPTO would ask whether the invention was obvious to a person of reasonable, similar skill in the field.[72] The AIA added "or otherwise available" to the section dealing with prior art.[73] The definition, in part, now reads: "(a) Prior art: A person shall be entitled to a patent unless—(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention."[74] There is debate over whether this new phrase, "otherwise available to the public," creates an additional requirement or is simply a "catchall" category that is "consistent with the long held U.S. patent policy that inventions in public should not be given monopoly advantages of a patent."[75] The meaning of the addition is not yet settled and will likely be decided by the courts.[76],[77] A potential consequence of this confusion is unnecessary and costly litigation to resolve it.[78]

The AIA still requires that the application disclose the best mode, or favored way, of practicing the invention.[79] The rationale behind the best mode requirement is to guard against applicants or inventors who seek patent protection without making the full disclosure the patent application requires.[80] Failure to disclose the best mode of the claimed invention used to be grounds for invalidating a patent.[81] However, the AIA has largely defanged the requirement: failure to disclose the best mode is no longer a basis on which a patent may be invalidated or canceled.[82] It remains possible that an application could be denied on this ground, but likely only in the most extreme cases.[83]

The AIA altered the structure of fees based on the size of the business and added a third category to better reflect the type of entity applying for a patent.[84] Patent applicants are classified as large (or general), small, or micro entities. The general fee, attributed to large entities, applies to all patent applications unless the applicant can show that he is a small entity or a micro entity.[85] The small entity category generally covers businesses with fewer than 500 employees. The AIA added the third category, micro entity, in order to better represent small independent inventors.[86] Micro entities are applicants who would qualify as small entities but who also 1) are named inventors on no more than four prior patent applications; 2) have a gross income of no more than three times the median household income; and 3) are not stepping into the shoes of another applicant who would not otherwise qualify.[87] The figure below shows the change in fees[88]:

Figure 5 – Patent Application Fees
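The three-tier classification described above can be sketched as a simple rule-based function. This is a hypothetical, simplified illustration of the statutory criteria (the function name, parameters, and threshold handling are assumptions for illustration, not legal advice; the statute and USPTO rules contain conditions not modeled here):

```python
def classify_entity(employees, prior_applications, gross_income,
                    median_household_income, stands_in_for_nonqualifier=False):
    """Simplified sketch of the AIA's three patent-fee tiers (illustrative only)."""
    # Small-entity status: generally, businesses with fewer than 500 employees.
    is_small = employees < 500

    # Micro-entity status: a small entity that also (1) is a named inventor on
    # no more than four prior applications, (2) has gross income of no more
    # than three times the median household income, and (3) is not stepping
    # into the shoes of an applicant who would not otherwise qualify.
    is_micro = (is_small
                and prior_applications <= 4
                and gross_income <= 3 * median_household_income
                and not stands_in_for_nonqualifier)

    if is_micro:
        return "micro"
    if is_small:
        return "small"
    return "large"
```

Under this sketch, an independent inventor with two prior applications and income below the cap would pay micro-entity fees, while a 50-person firm whose inventor holds ten prior applications would pay small-entity fees.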

Tax Strategy Patents 

In the late 1990s, the USPTO began issuing patents on methods and strategies for minimizing taxpayers’ state and federal tax liability.[89] The AIA now holds that tax strategies alone are insufficient to support a patent, but it did not completely close the door: the ban does not extend to “a method, apparatus, technology, computer program product, or system used solely for financial management, to the extent it is severable from any tax strategy or does not limit the use of any tax strategy by any taxpayer or tax adviser.”[90] The AIA also explicitly states that “[n]othing in this section shall be construed to imply that other business methods are patentable or that other business method patents are valid.”[91]

Satellite USPTO Offices                               

The AIA required the USPTO, for the first time, to establish a minimum number of satellite offices spread across the United States,[92] the purpose being to ensure geographic diversity and better access for patent applicants. Prior to this law, the only USPTO office was a fourteen-building complex in Alexandria, VA. The USPTO decided to open additional offices in Dallas, TX; Denver, CO; Detroit, MI; and San Jose, CA. As of the writing of this paper, not all of these offices were open.

Conclusion

Intellectual property rights have had, and continue to have, an important role in facilitating entrepreneurial growth and furthering scientific and economic progress. They protect the inventions of individuals and the work of businesses. Copyright, trademark, trade secret, and patent protections stand on the front line of that protection. The USPTO is one of the United States’ oldest regulatory entities and continues to be active in protecting inventors and businesses. It has been reformed numerous times since its creation in 1790 to keep up with the fluid nature of IP. The recent passage of the America Invents Act exemplifies this continuous evolution.

The regulatory and structural changes that the AIA imposes help reduce transaction costs, increase certainty, and increase access for patent applicants. The satellite USPTO offices will give inventors better access and alleviate the burden on the USPTO’s main office in Alexandria, VA. The AIA’s change from a first-to-invent to a first-to-file system brings United States patent law in line with the rest of the world, and that alignment will better afford patent protection for inventors and businesses. The AIA thus continues the long U.S. trend of protecting domestic intellectual property both at home and internationally.

 


[1]  What is Intellectual Property. WIPO Publication No. 450(E). World Intellectual Property Organization, Geneva, Switzerland (2015).

[2]  Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html.

[3]  Charmasson, Henri, Buchaca, John. Patents, Copyrights & Trademarks for Dummies, 2nd Edition, John Wiley & Sons, Inc. (2008).

[4]  Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html

[5]  Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html

[6]  Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html.

[7]  Understanding the Different Kinds of Intellectual Property. Dummies.com. John Wiley & Sons, Inc. (2015). Retrieved on May 25, 2015 from: http://www.dummies.com/how-to/content/understanding-the-different-kinds-of-intellectual-.html.

[8]  18 U.S. Code § 1839

[9]  How long does patent, trademark or copyright protection last? Retrieved from www.stopfakes.gov on June 7, 2015.

[10]  John R. Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014).

[11]  Robert Green Stern. Leahy Smith America Invents Act – Overview: Post Grant Review, Re- Examination, and Supplemental Examination, IEEE-USA Intellectual Property Professionals Initiative Presents First Seminar: The New Patent Law and What it Means to You (Oct. 22, 2011) (Cited in Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012)).

[12]  Types of Patents, U.S. PATENT AND TRADEMARK OFFICE, Electronic Information Products Division, Patent Technology Monitoring Team (03 October 2013). Retrieved from http://www.uspto.gov/web/offices/ac/ido/oeip/taf/patdesc.htm

[13]  US patents 1790-2008. Retrieved from http://en.wikipedia.org/wiki/File:US_patents_1790-2008.png on May 23, 2015.

[14]  U.S. PATENT AND TRADEMARK OFFICE, www.uspto.gov (2015).

[15]  Types of Patents, U.S. PATENT AND TRADEMARK OFFICE, Electronic Information Products Division, Patent Technology Monitoring Team (03 October 2013).

[16]  Checklist for Filing a Non Provisional Utility, U.S. PATENT AND TRADEMARK OFFICE, www.uspto.gov (2015).

[17]  Process for Obtaining a patent, U.S. PATENT AND TRADEMARK OFFICE, http://www.uspto.gov/patents-getting-started/patent-basics/types-patent-applications/utility-patent/process-obtaining (2015).

[18]  How Long Does It Take, U.S. PAT. & TRADEMARK OFF. (Mar. 4, 2012); Patent Backlog Frustrates Inventors, CBSNEWS (Aug. 8, 2010). (Cited in Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012)).

[19]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §101), 4.

[20]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §112)), 6.

[21]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §154(a)(2)), 6.

[22]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 4.

[23]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 4.

[24]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 4-5.

[25]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 5.

[26]  Patent FAQ. LegalZoom.com, Inc. (2015). Retrieved on May 26, 2015 from https://www.legalzoom.com/knowledge/patent/faq/pap-versus-nap-filing.

[27]  Patent FAQ. LegalZoom.com, Inc. (2015). Retrieved on May 26, 2015 from https://www.legalzoom.com/knowledge/patent/faq/pap-versus-nap-filing.

[28]  The Illustrated Patent Process. Retrieved on May 30, 2015 from http://www.macdrifter.com/2014/01/on-patents-applications-and-prior-art.txt.

[29]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing P.L. 112-29 at §9), 8.

[30]  Melissa Cerro, Navigating a Post America Invents Act World: How the Leahy-Smith America Invents Act Supports Small Businesses, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014).

[31]  "International Property Rights Index 2014". International Property Rights Index. Property Rights Alliance. Archived from the original on 13 January 2015. Retrieved on June 1, 2015 from: http://internationalpropertyrightsindex.org/ipri_analysis

[32]  Terrell on Patents, 8th edition edited by J R Jones, London (Sweet & Maxwell) 1934. (cited in History of Patent Law, Wikipedia.com (2015), retrieved on May 15, 2015 from: http://en.wikipedia.org/w/index.php?title=History_of_patent_law&oldid=661669078)

[33]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, (2015). Retrieved on May, 29 2015 from http://ladas.com/a-brief-history-of-the-patent-law-of-the-united-states-2/.

[34]  U.S. Const. art. I, § 8, cl. 8.

[35]  Melissa Cerro, Navigating a Post America Invents Act World: How the Leahy-Smith America Invents Act Supports Small Businesses, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014).

[36]  “The First United States Patent Act” Patent Act of 1790, Ch. 7, 1 Stat. 109 (April 10, 1790) CHAP. VII. – An ACT to promote the Progress of uſeful Arts.  Retrieved on May 29, 2015 from docs.law.gwu.edu.

[37]  Quick, Patent It! The New York Times Company. New York, New York. November 8, 2009.

[38]  Patent Act of 1793, Ch. 7, 1 Stat. 109-112 (April 10, 1790) CHAP. VII. – An ACT to promote the Progress of uſeful Arts.  Retrieved on May 29, 2015 from docs.law.gwu.edu.

[39]  Patent Act 1793, § 3. (as cited in A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015.) Retrieved on May, 29 2015 from http://ladas.com/a-brief-history-of-the-patent-law-of-the-united-states-2/.

[40]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, (2015). Web. May, 29 2015.

[41]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015

[42]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015.

[43]  Patent Act 1836, § 6. (as cited in A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015.)

[44]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015

[45]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015 (with reference to Hotchkiss v. Greenwood, 52 U.S. 248 (1850)).

[46]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015

[47]  See Charles E. Miller & Daniel P. Archibald, Beware the Suppression of District-Court Jurisdiction of Administrative Decisions in Patent-Validity Challenges Under the America Invents Act: A Critical Analysis of a Legislative Black Swan in an Age of Preconceived Notions and Special-interest Lobbying, 95 J. PAT & TRADEMARK OFF. SOC’Y 124, 135 (2013); Case, supra note 11, at 60, n.205 (citing Renee Kaswan et al., Patent Reform: Effects on Medical Innovation Businesses, 2 MED. INNOVATION & BUS. 11, 11 (2010) (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[48]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015.

[49]  A Brief History of Patent Law of the United States, Ladas & Parry LLP, 2015. Web. 29 May 2015.

[50] A Brief History of Patent Law of the United States, Ladas & Parry LLP, (2015). Web. 29 May 2015.

[51] A Brief History of Patent Law of the United States, Ladas & Parry LLP, (2015). Web. 29 May 2015.

[52]  Leahy–Smith America Invents Act, Pub. L. No. 112-29, 125 Stat. 284 (2011) (codified in scattered sections of 28 and 35 U.S.C.) [AIA]

[53]  Patrick M. Boucher, Recent developments in US patent law, 65 PHYS. TODAY, Jan. 2012, at 27. (Cited in Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012)).

[54]  See Eric A. Kelly, Is the Prototypical Small Inventor at Risk of Inadvertently Eliminating Their Traditional One-Year Grace Period Under the American Invents Act?—Interpreting “Or Otherwise Available to the Public” Per New § 102(a) and “Disclosure” Per New § 102(b), 21 TEX. INTELL. PROP. L.J. 373, 374 (2013) (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[55]  Alexa L. Ashworth, Race You to the Patent Office! How the New Patent Reform Act Will Affect Technology Transfer at Universities, 23 ALB. L.J. SCI. & TECH. 383, 395 (2013); AIA § 3(p), 125 Stat. at 293. (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[56]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 9.

[57]  Morrow, Stephanie, Patent Trolls and Their Impact, Legal Zoom.com, Inc. Retrieved on June 1, 2015 from: https://www.legalzoom.com/articles/patent-trolls-and-their-impact

[58]  Bessen, James E. and Meurer, Michael J., The Direct Costs from NPE Disputes (June 28, 2012). 99 Cornell L. Rev. 387 (2014); Boston Univ. School of Law, Law and Economics Research Paper No. 12-34. Available at SSRN: http://ssrn.com/abstract=2091210 or http://dx.doi.org/10.2139/ssrn.2091210

[59] Morrow, Stephanie, Patent Trolls and Their Impact, Legal Zoom.com, Inc. Retrieved on June 1, 2015 from: https://www.legalzoom.com/articles/patent-trolls-and-their-impact.

[60]  Bessen, James E. and Meurer, Michael J.,  The Direct Costs from NPE Disputes (June 28, 2012). 99 Cornell L. Rev. 387 (2014); Boston Univ. School of Law, Law and Economics Research Paper No. 12-34. Available at SSRN: http://ssrn.com/abstract=2091210 or http://dx.doi.org/10.2139/ssrn.2091210

[61]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 16.

[62]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 17.

[63]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 17.

[64]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 19.

[65]  Leahy–Smith America Invents Act (Cited in M. Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014)), 12.

[66]  Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012).

[67]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §101), 17.

[68]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §112.), 12.

[69]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014).

[70]  See Hung H. Bui, An Overview of Patent Reform Act of 2011: Navigating the Leahy–Smith America Invents Act Including Effective Dates for Patent Reform, 93 J. PAT. & TRADEMARK OFF. SOC’Y 441, 467 (2012). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[71]  “Disclosure” Per New § 102(b), 21 TEX. INTELL. PROP. L.J. 373, 374 (2013); Jennifer L. Case, How the America Invents Act Hurts American Inventors and Weakens Incentives to Innovate, 82 UMKC L. REV. 29, 30–31, 62 (2013) (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[72]  Christopher Brown, Survey: Intellectual Property Law: Developments in Intellectual Property Law, 45 IND. L. REV. 1243, 1243 (2012). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[73]  Brown, supra note 35, at 1248. (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[74]  See Heath W. Hoglund et al., A Different State of Grace: The New Grace Period Under the AIA, 5 No. 6 LANDSLIDE 48, 48–49 (July/Aug. 2013). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[75]  See Heath W. Hoglund et al., A Different State of Grace: The New Grace Period Under the AIA, 5 No. 6 LANDSLIDE 48, 48–49 (July/Aug. 2013). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[76]  Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014)

[77] See 35 U.S.C. § 102 (2011). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[78]  Joshua D. Sarnoff, Derivation and Prior Art Problems with the New Patent Act, 2011 PATENTLY-O PATENT L. REV. 12, 25 (2011). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[79]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing 35 U.S.C. §112).

[80]  Quinn, Gene. America Invents Act: Traps for the Unwary: AIA Oddities: Trade Secrets, Rep-patenting and Best Mode. Ipwatchdog.com. September 18, 2013. (2013)

[81]  Quinn, Gene. America Invents Act: Traps for the Unwary: AIA Oddities: Trade Secrets, Rep-patenting and Best Mode. Ipwatchdog.com. September 18, 2013. (2013)

[82]  Quinn, Gene. America Invents Act: Traps for the Unwary: AIA Oddities: Trade Secrets, Rep-patenting and Best Mode. Ipwatchdog.com. September 18, 2013. (2013)

[83]  Quinn, Gene. America Invents Act: Traps for the Unwary: AIA Oddities: Trade Secrets, Rep-patenting and Best Mode. Ipwatchdog.com. September 18, 2013. (2013)

[84]  See generally AIA § 10, 125 Stat. at 316–17. (as cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[85]  Bui, supra note 14, at 447–48 n.52 (citing the Small Business Act, 15 U.S.C. §§ 631–657). (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[86]  Bui, supra note 14, at 447–48 n.52. (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[87]  AIA § 10(g)(1)(a)(1)–(4), 125 Stat. at 318. (Cited in Cerro, Navigating a Post America Invents Act World, 34 J. Nat’l Ass’n Admin. L. Judiciary Iss. 1 (2014))

[88] Sherman, C. & Anderson, P. M., What students and independent inventors need to know about the America Invents Act. Southern Law Journal. (2012).

[89]  See CRS Report RL34221, Patents on Tax Strategies: Issues in Intellectual Property and Innovation, by John R. Thomas. (Cited in Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014)), 7.

[90]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014), 16.

[91]  See CRS Report RL34221, Patents on Tax Strategies: Issues in Intellectual Property and Innovation, by John R. Thomas. (Cited in Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014)), 6.

[92]  Thomas, The Leahy-Smith America Invents Act: Innovation Issues, Congressional Research Service, (2014) (citing P.L. 112-29 §14.), 9.


* Mr. Howard is an attorney residing in Arlington, VA, and a consultant to Quantria Strategies, LLC. He can be reached at jorhowar@gmail.com.

 


Executive Summary

Electronic medical records (EMRs), as a cornerstone of a more intelligent, adaptive, and efficient health care system, have the potential to improve the overall health of our society and begin to rein in the trillions of dollars spent on health care each year. However, implementation and utilization of such record systems bring their own significant costs and challenges, which must be carefully considered and overcome in order to fully realize the potential benefits.

Implementing an EMR system could cost a single physician approximately $163,765. As of May 2015, the Centers for Medicare and Medicaid Services (CMS) had paid more than $30 billion in financial incentives to more than 468,000 Medicare and Medicaid providers for implementing EMR systems. With a majority of Americans now having at least one if not multiple EMRs generated on their behalf, data breaches and security threats are becoming more common and are estimated by the American Action Forum (AAF) to have cost the health care industry as much as $50.6 billion since 2009.

Introduction

It may seem obvious that in 2015 most health care providers in the U.S. are tracking patient encounters through an EMR system. However, that was not the case a few years ago. While roughly three quarters of Americans had a computer in their home in 2009,[1] only 21.8 percent of office-based physicians and 12.2 percent of non-federal acute care hospitals were using a “basic” EMR system.[2] Seeking to speed along adoption throughout the rest of the industry in order to take advantage of the many benefits which could be made possible by system-wide utilization, the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 (enacted as part of the American Recovery and Reinvestment Act) provided financial incentives for Medicare and Medicaid providers who become “meaningful users” of EMRs.

Some of the initial benefits of EMR use include better patient care coordination and disease management, fewer medical errors, increased productivity, and the reduced costs which could result if all of these objectives were achieved. The long-term benefits include more targeted public health initiatives; more effective preventive health measures; personalized, predictive medicine; significant reductions in national health expenditures as we are able to determine the most effective treatment options for the lowest cost; and ultimately a healthier society. However, none of these benefits will be achieved without providers, the federal government, and patients incurring significant upfront costs for both implementation and information security.

Adoption Rates and Implementation Costs

Only a few years after passage of the HITECH Act, adoption has increased significantly and much more information is being gathered and reported electronically. An estimated 78 percent of office-based physicians were using some form of EMR system in 2013, and 48 percent were using a qualified “basic” system.[3] Among non-federal acute care hospitals, 76 percent were using a “basic” system by 2014.[4] CMS reports that, as of May 2015, more than 468,000 Medicare and Medicaid providers (87 percent) have received payments through the HITECH Act totaling approximately $30.4 billion.[5] That amounts to roughly $65,000 per provider in federal subsidies.[6] Notably, the subsidies were not exclusively available to new EMR adopters, nor were they available to all EMR adopters; rather, they went to providers whose EMR systems met “meaningful use” standards determined by CMS. As a result, some providers who had already adopted such technology received payments rewarding something they had already done. Meanwhile, providers who cannot afford the upfront costs of such systems not only lose out on the subsidy but, as of January 1, 2015, also face financial penalties for not meeting the new standards.[7]

Based on research by Dr. Neil Fleming et al., the following chart shows the average cost for a physician practice transitioning to EMR use between 2009 and 2011, both for a physician practicing alone (total cost of $163,765) and for a practice with five physicians ($233,298).[8] Because many of the costs, such as the software and much of the hardware, are fixed, it is much cheaper to share them among multiple providers in a single practice.
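The fixed-cost sharing is easy to see with quick arithmetic on the figures above (a back-of-envelope sketch based on the reported totals, not a calculation from the Fleming study itself):

```python
# Average EMR transition costs reported above (2009-2011).
solo_practice_cost = 163_765    # one physician bears all fixed costs
five_physician_cost = 233_298   # total for a five-physician practice

# Per-physician cost when fixed costs (software, shared hardware) are split.
per_physician = five_physician_cost / 5
print(f"Per-physician cost in a five-physician practice: ${per_physician:,.0f}")

# Share of the solo cost avoided by each physician in the larger practice.
savings = 1 - per_physician / solo_practice_cost
print(f"Savings versus a solo practice: {savings:.0%}")
```

Each physician in the five-member practice pays roughly $46,660, a bit more than a quarter of what a solo practitioner pays.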

Is It Worth It? Evidence Shows Mixed Results So Far

While the costs for many providers transitioning to an EMR system have been largely offset by the federal incentive payments, the evidence thus far suggests that most providers are not yet seeing the payoff. Research by David Dranove et al., analyzing hospitals using EMR systems from 1996 to 2009, found that these early adopters, on average, had increased costs, at least for the first three years after adoption.[9] There were differences, though, based on the strength of the local information technology (IT) labor supply. In areas without a strong IT workforce, costs increased 4 percent, but in IT-intensive areas, hospitals with basic EMR systems saw cost decreases of 3.4 percent three years after adoption. As the number of workers in IT-related jobs continues to increase and EMR technology is adapted and improved, all areas may begin to see cost decreases.[10]

Research by Michael Howley et al., examining thirty ambulatory practices for two years after EMR implementation, found that productivity declined by an average of 15 patients per physician per quarter following implementation.[11] At the same time, reimbursements to the practices actually increased. The researchers found that this was not a result of upcoding or more generous reimbursements per charge, but rather of a significant increase in the number of ancillary procedures billed following EMR implementation. While the policy goal is to increase access to care and simultaneously decrease costs, this study finds that physicians are instead receiving more money for treating fewer patients, which runs counter to the intended result.

It is not surprising that increased productivity has not yet occurred. A study by the Agency for Healthcare Research and Quality (AHRQ) found that only 14 percent of providers in 2013 were sharing data with health care providers outside their organization, hindering the ability to improve patient care coordination as desired.[12] The “meaningful use” requirements are being implemented by CMS in various stages, gradually increasing the number and type of advanced functionalities which must be included in qualifying EMR systems. For now, EMRs are still primarily viewed as an administrative tool. A survey analysis from Software Advice finds that the most commonly requested functionality for an EMR system continues to be billing (45 percent) followed by claim support (27 percent) and patient scheduling (23 percent).[13] Additionally, 60 percent of EMR purchasers in 2015 are replacing current EMR systems, which may delay full interoperability and the use of the more advanced functions as providers continue to spend time learning new systems.[14] As more providers progress through the various stages, each requiring more advanced functionality and use among a greater percentage of patients, the systems should begin to provide more comprehensive benefits. Researchers have found significant reductions in medication errors and, consequently, reductions in mortality rates for hospitalized patients, with use of computerized provider order entry (CPOE, a mandatory functionality of EMR systems in Stage 1 of the meaningful use requirements); the reductions increased as CPOE was used for larger percentages of patients.[15]

Security Costs

Besides the costs to the federal government and to providers themselves to implement these new electronic systems, increased security threats and privacy concerns add even greater costs. While the average cost of a data breach per compromised record has held relatively steady over the last several years, according to the Ponemon Institute, the average cost of data breaches in the health care industry has been more volatile and has risen sharply in the last two years, as shown below. The average cost of a data breach in the U.S. in 2014 was $217 per compromised record, compared with $398 in the health care industry.[16]

In 2013, 90 percent of hospitals claimed to have a computerized system capable of conducting or reviewing a security risk analysis.[17] Despite this, the number of data breaches, and the number of records compromised, continues to climb, as shown in the charts below. This is costing the industry more and more money and costing patients their peace of mind. 

Given the nearly 135 million health care records that have been compromised in more than 1,200 separate data breaches since October 2009,[18] AAF estimates the total cost of these breaches at $50.6 billion in less than six years.[19] It is important to recognize that the majority of this cost comes from the exceptionally high number of records compromised so far this year, despite there being fewer than half as many data breach incidents to date in 2015 as in 2014. Even when the 2015 outlier in which nearly 79 million records were compromised in a single breach is removed, the average number of records compromised per breach rose 160 percent from 2014 to 2015. As the year continues, the number of records compromised and the resulting cost will rise further. Already, more has been spent responding to security breaches of health care records in the first six months of 2015 than the total amount of federal incentives paid through the HITECH Act to make this transition happen.
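The scale of the estimate can be sanity-checked with a back-of-envelope calculation from the figures above (an illustrative check, not AAF's actual methodology, which uses year-specific per-record costs):

```python
# Figures cited above.
records_compromised = 135_000_000   # health care records breached since Oct. 2009
total_cost_estimate = 50.6e9        # AAF's estimated industry cost, in dollars

# Implied average cost per compromised record.
implied_per_record = total_cost_estimate / records_compromised
print(f"Implied cost per compromised record: ${implied_per_record:.0f}")
```

The implied figure of roughly $375 per record falls between the $217 all-industry and $398 health-care-industry per-record averages cited earlier, which is consistent with breach costs accruing across several years at different rates.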

The dramatic increase in the average number of records compromised in a single breach is alarming and may be a consequence of the more connected health care system for which we are striving. With the growing number of electronic records and increased sharing among providers, the number of records potentially accessed in a single incident is growing exponentially. Recognizing this, Blue Cross Blue Shield has just announced that it will offer customers identity protection beginning in 2016.[20]

Conflicting Interests

While the low rate of data sharing mentioned previously may largely be due to the reportedly high fees EMR vendors charge for such capabilities (which the Obama Administration is trying to curb), it is important to understand that some of this “data blocking” may be intentional. Despite the myriad benefits that could accrue to the health care system as a whole through access to the trove of information being collected, the conflicting interests of the various stakeholders, namely providers and payers, mean that sharing information is often not in a stakeholder’s self-interest. Essentially, under current payment models, one person’s revenue gain is another person’s revenue loss. It will therefore likely require a complicated policy solution to bring all of the players together for the benefit of society as a whole. Even federal agencies are conflicted about how best to tackle these issues. The Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology is working on a plan to improve electronic information exchange by creating industry standards, but the Federal Trade Commission warns of unintended consequences that could stifle competition.[21]

Conclusion

Widespread use of electronic medical records could bring beneficial change to the health care system in a variety of ways, largely because they are the foundational piece of many technologies and analyses that could change health care delivery. Storing every patient’s data electronically, in a standardized form that allows easy transfer and comparison of data among providers, insurers, and researchers, would make it possible to recognize patterns that support smarter, more targeted personal, population, and public health measures. Potential applications include the development of not just personalized medicine but predictive medicine; reductions in medical errors; better disease management and treatment adherence; predicting and potentially preventing disease outbreaks; elimination of insurance fraud; identification of the most effective treatments for the fewest dollars; and identification of the best treatments that are worth the extra money.

All of these potential advances could greatly improve health outcomes and help bend the health care cost curve. Unfortunately, these advances come with significant costs, both financially and in terms of personal privacy. Going forward, policymakers should work to ensure limited resources are used in a more cost-effective manner. Changes to EMR policy have been part of recent legislative and executive action. Efforts to align various conflicting interests were included in the recently passed H.R. 2, Medicare Access and CHIP Reauthorization Act of 2015, for example. CMS recently announced that entrepreneurs and innovators will be given access to Medicare data for research purposes. The House-passed H.R. 6, the 21st Century Cures Act, encourages greater access to and use of health care data for research purposes.[22] As EMR adoption continues to increase along with the type of information gathered, policymakers should work with experts and the public to ensure that the appropriate balance is struck between sharing information to allow advancements and providing necessary privacy protections.


[2] To qualify as a “basic” EHR system, the program must be able to: record patient history and demographics, track patient problem lists, store physician clinical notes, record a comprehensive list of patients’ medications and allergies, provide for computerized ordering of prescriptions, and allow for electronic viewing of lab and imaging results. http://www.cdc.gov/nchs/data/databriefs/db143.htm, https://www.healthit.gov/sites/default/files/oncdatabrief9final.pdf

[6] Calculating average amount per provider is difficult given that a “provider” may be a hospital, which is eligible to receive up to $6.1 million over the course of the program. Medicare physicians may receive up to $44,000 and Medicaid physicians are eligible to receive up to $63,750.

[7] Providers are allowed to apply for a hardship exemption if unable to meet the criteria; if approved, these providers would not be penalized.

[18] https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf (Data available through June 26, 2015, for breaches in which 500 or more records were compromised; thus, the actual total is even higher.)

[19] This estimate assumes cost per record breached in 2015 is the same as the cost in 2014: $398. A conservative estimate for average cost per record compromised in 2015 of $350, to account for possibility that costs may decline as has happened in previous years, would result in a cost of more than $46 billion.

[22] http://www.cms.gov/Newsroom/MediaReleaseDatabase/Press-releases/2015-Press-releases-items/2015-06-02.html


Summary

  • The Iran deal will provide the Islamic Republic with an estimated $140 billion in sanctions relief and unfrozen assets.
  • Iran currently spends 3.4 percent of its total budget on defense.
  • Iran spends 65 percent of its defense budget on the IRGC, its elite paramilitary force that actively supports terrorist organizations throughout the Middle East.
  • If current budget trends persist, the Iran deal would mean at least $4.8 billion in additional Iranian defense spending and a 50 percent budget increase for the IRGC.

Economic Impact of the Deal

The recent Iran deal provides relief from international economic sanctions in exchange for certain limitations on Iran’s nuclear program. Upon implementation of the deal, a substantial amount of money would flow to the Iranian government, although the total amount is the subject of much debate.

According to initial reports from U.S. officials, Iran would have access to $100 billion of frozen assets. The Under Secretary of Treasury for Terrorism and Financial Intelligence cited this same figure in congressional testimony earlier this year. Secretary of State John Kerry later walked this number back to around $50 billion, reasoning that half of Iran’s frozen assets were already obligated to various projects. While Iran may have already decided how to allocate some of the windfall, that does not reduce the amount it will receive from sanctions relief.

President Obama himself has used a much larger figure, however, citing Iran’s $150 billion in offshore assets in a recent interview. The Israeli Ambassador to the United States has also publicly stated that the deal would give Iran $150 billion.

The ultimate unfrozen assets figure is likely to be somewhere between $100 and $150 billion. A reasonable estimate from Foreign Policy is that Iran will receive an initial influx of $120 billion in unfrozen foreign assets and $20 billion in additional annual oil revenues – totaling $140 billion in the first year.

Iran’s Defense Budget & the IRGC

Iran’s military spending is also unclear. President Obama has said Iran’s defense budget is $30 billion. Two independent estimates, however, have concluded Iran spends anywhere from $12 to $14 billion (the United States Institute of Peace) to almost $18 billion (the Center for Arms Control and Nonproliferation) annually on its military.

While it is unlikely that Iran accurately reports its military spending, it is informative to consider the country’s official budget as a minimum. This year, the Iranian government reported that it will spend 3.4 percent of its total budget on defense, which amounts to almost $10 billion. Iran reports that it spends 65 percent of its defense budget to fund the Islamic Revolutionary Guard Corps (IRGC), the Iranian elite paramilitary force. That works out to 2.2 percent of its total budget or over $6 billion.

The IRGC actively supports terrorist organizations throughout the Middle East, such as the Houthis in Yemen, Hezbollah in Lebanon, and Hamas in Gaza. Iran also sends billions of dollars and provides military assistance to Syria. A spokeswoman for the United Nations recently estimated that Iran spends $6 billion per year to support the regime of Bashar al-Assad.

Conclusion

The international nuclear agreement with Iran would initially provide a $140 billion economic windfall to the Islamic Republic. Of course, not all of the money would go directly to military spending and terror finance, as Iran has a variety of existing contract obligations and domestic spending needs.

It is important to remember that much of Iran’s financial support for terror remains off the books. Iran’s official budget, however, reveals the minimum amount the country spends on the military and the IRGC. If current spending trends continue, the $140 billion windfall would mean at least an additional $4.8 billion in defense spending. Of this, $3.1 billion would go specifically to the IRGC – amounting to a 50 percent budget increase.
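The $4.8 billion and $3.1 billion figures above follow directly from applying Iran's reported budget shares to the estimated windfall. A back-of-envelope sketch, using the figures stated in the text (the $9.8 billion stand-in for the "almost $10 billion" defense budget is an assumption):

```python
# Applying Iran's reported budget shares (figures from the text) to the
# estimated $140 billion first-year windfall. A back-of-envelope check,
# not an independent estimate.
WINDFALL = 140e9            # Foreign Policy estimate, first year
DEFENSE_SHARE = 0.034       # Iran reports 3.4% of its budget goes to defense
IRGC_SHARE = 0.65           # IRGC's reported share of the defense budget

extra_defense = WINDFALL * DEFENSE_SHARE   # ~$4.8 billion
extra_irgc = extra_defense * IRGC_SHARE    # ~$3.1 billion

# Current IRGC funding: 65% of an "almost $10 billion" defense budget
# (assumption: $9.8 billion used as a stand-in for that figure).
current_irgc = 9.8e9 * IRGC_SHARE
increase_pct = 100 * extra_irgc / current_irgc   # roughly 50 percent
print(round(extra_defense / 1e9, 1), round(extra_irgc / 1e9, 1), round(increase_pct))
```

The result, roughly a 50 percent increase in IRGC funding, treats the official budget as a floor, consistent with the text's caution that much of Iran's spending is off the books.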

Nothing in the deal would prevent Iran from spending more than that to fund its military or terrorist organizations and authoritarian regimes throughout the Middle East. Already this summer, Iran’s supreme leader, Ayatollah Ali Khamenei, ordered that the Islamic Republic increase defense spending to at least 5 percent of its budget.


  • 125,800 jobs could be eliminated directly due to the President's #CleanPowerPlan

  • The new #CleanPowerPlan could shrink the coal industry by nearly half by 2030.

  • Just since 2008, #coal has lost 47,500 jobs

This week, the Environmental Protection Agency (EPA) released its final greenhouse gas (GHG) standards for existing power plants. According to American Action Forum (AAF) research, the final plan will shutter 66 power plants and eliminate 125,800 jobs in the coal industry. All of these figures are based on EPA data. Perhaps more alarming, using the 2012 baseline for coal generation and projections for 2030 output, the industry could shrink by 48 percent.

Here is a breakdown of the final “Clean Power Plan” (CPP) compared to the proposal:

  • Annual Cost: $8.4 billion ($8.8 billion in proposed)
  • Annual Benefit: $37 billion ($71 billion in proposed)
  • Paperwork Hours: 821,000 (316,217 in proposed)
  • Initial Electricity Price Increase: 3.2 percent (6.5 percent in proposed)
  • Total Costs: $11.9 billion ($21.7 billion in proposed)

The map below tracks the average emissions rate each state must meet by 2030. For perspective, the average emissions rate is currently 1,665 pounds of carbon dioxide per megawatt hour. By 2030, the average national rate will be 1,077, a 35 percent reduction. South Dakota, Montana, and North Dakota must make the steepest cuts (average of 46 percent); Maine, Idaho, and Connecticut (average of nine percent) conceivably have the easiest path.
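The 35 percent national reduction cited above is straightforward arithmetic on the two emissions rates; a quick check:

```python
# The 35 percent figure compares the current average national emissions
# rate with the 2030 target (both in pounds of CO2 per megawatt hour),
# using the rates cited in the text.
CURRENT_RATE = 1665   # lbs CO2/MWh, current national average
TARGET_2030 = 1077    # lbs CO2/MWh, 2030 national average under the rule

reduction_pct = 100 * (CURRENT_RATE - TARGET_2030) / CURRENT_RATE
print(round(reduction_pct))  # ~35 percent
```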

Employment and the CPP

EPA acknowledges that its CPP will cut employment in coal extraction, coal generation of electricity, and oil-fired generation. “For 2030, the estimates of the net decrease in job-years are 31,000 under the rate-based approach and 34,000 under the mass-based approach.” The question is how accurate and comprehensive those figures are. Incredibly, for the proposed rule, EPA claimed it would support 78,800 new jobs for “demand-side energy efficiency.” The agency has now partially cut the building block that supported that demand-side growth, yet EPA still projects 83,300 more energy efficiency jobs because of the rule. EPA qualifies this figure as merely “illustrative.”

A topline number of 34,000 lost jobs might sound somewhat insignificant in the broader scope of the economy, but it is unlikely to capture the total economic loss from the regulation. In AAF’s last measurement of the economic loss of the rule, we used a PricewaterhouseCoopers study that found one energy job supports 3.7 additional jobs. If we take that multiplier and apply it to EPA’s figure of 34,000 jobs, it nets a total employment impact of 125,800 fewer workers. Given that this rule will remake the generation and transmission of energy in the U.S., that is probably a conservative figure.
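As the text applies it, the 125,800 figure comes from multiplying EPA's estimate directly by the PwC factor:

```python
# AAF's total-employment figure applies the PricewaterhouseCoopers
# multiplier directly to EPA's 2030 job-loss estimate.
EPA_JOB_LOSS = 34_000   # EPA's net job-year decrease, mass-based approach
MULTIPLIER = 3.7        # PwC factor applied in AAF's calculation

total_impact = EPA_JOB_LOSS * MULTIPLIER
print(round(total_impact))  # 125800
```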

In addition, AAF found that the agency estimates roughly 33 GW of coal-fired power being eliminated by the rule (from 207 GW of coal by 2030 to 174 GW). This figure is a steep drop from the 49 GW estimate that EPA used for its proposed rule, but there’s more to it than that. Actual coal generation in 2012 was 336 GW. EPA eventually expects coal generation to decline to 174 GW. This is a loss of 162 GW, or 48 percent from coal’s 2012 base. The graphs below show the cumulative impact of aggressive EPA action and the coal industry’s rapid descent.
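The 48 percent decline is the simple ratio of the loss to the 2012 base:

```python
# The 48 percent figure compares EPA's expected 2030 coal generation
# with the actual 2012 level cited in the text.
COAL_2012 = 336   # GW, actual 2012 coal generation
COAL_2030 = 174   # GW, EPA's expected 2030 level under the rule

loss_gw = COAL_2012 - COAL_2030             # 162 GW
decline_pct = 100 * loss_gw / COAL_2012     # ~48 percent
print(loss_gw, round(decline_pct))
```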

Using a similar methodology from AAF’s past work where we sorted the least efficient coal power plants in the nation and then determined which states would be most affected, AAF now predicts at least 66 coal-fired power plants will close because of the CPP. The average generation for these facilities is 514 MW and their average emissions rate (pounds of carbon dioxide output per megawatt hour) is 2,708. Compare that emissions rate to EPA’s 2030 average goal for the entire U.S.: 1,077. This disparity between the 2030 benchmark and the emissions rate of the least efficient plants explains why these 66 facilities will likely be the first to go.

These figures shouldn’t be too surprising. The hit that local communities in the south and Midwest have taken, in part due to EPA regulations, is staggering. The following charts display recent trends in state coal power plant and coal mining jobs across the U.S.

In West Virginia (1,318 lost jobs), Kentucky (705 lost jobs), Pennsylvania (1,096 lost jobs), and Ohio (1,395 lost jobs), fossil-fired generation employment fell by more than 4,500 workers from 2008 to 2014. The news is even bleaker for Kentucky, as it has also lost 6,333 coal mining jobs during this period or 37 percent of the state’s coal industry. Overall, U.S. coal mining has shed more than 8,900 jobs since 2008. Combine coal extraction losses with coal generation declines nationwide and the coal industry has lost more than 47,500 jobs already, with the promise of more to come by 2030.

These troubling figures are also static, one-time snapshots of industry employment. They hardly capture the true economic costs to the region and the local community of losing so many jobs so quickly. The PricewaterhouseCoopers study implying that one energy job supports 3.7 additional jobs hints at the total economic damage, but remaking an entire industry in one administration is no small feat.

Best Estimates?

Much of the debate surrounding the plan has focused on EPA’s incredible annual benefit figure, compared to its still significant cost. Juxtaposed, it seems difficult to dismiss EPA’s estimated benefits, but that assumes the agency’s estimates translate into reality. Too often in the past, EPA’s initial benefit figures have failed to materialize.

Take EPA’s toxic air pollution rule for power plants, which was recently remanded by the U.S. Supreme Court. In its original projections for coal-fired capacity, EPA estimated that by 2013 the industry would generate 341,000 MW of power. What actually happened? The graph below compares EPA’s projected coal generation output (the purple line) with actual data from the Energy Information Administration (EIA).

In sum, EPA’s projections were not close. By 2013, EPA’s projection was 12,000 MW off from the actual result. If EPA were accurate, today’s coal plants would produce 339,000 MW of electricity. They do not; they produce approximately 315,000 MW, according to EIA data.

EPA’s toxics air pollution projections missed the mark, but what happens if the agency’s estimates for the CPP are also erroneous? The graph below tracks the EIA projected trend (red line) for coal generation between 2011 and 2030 without the rule. The green line represents EPA’s projected generation under the rule. The blue line represents where EPA expects coal generation by 2030. The yellow line reflects the same “error rate” from EPA’s projection in its toxics air pollution rule.

If EPA is as wrong as it was with its previous landmark air pollution rule, expect coal generation to be at roughly 162 GW by 2030, not 174 GW. This error on EPA’s part would of course translate into more coal generation retirements and additional coal extraction losses. The agency has missed its estimates in the past and there is little reason to believe its 2030 figures will be accurate.

Conclusion

Once published, the CPP will push total regulatory costs for the year past $145 billion. On an annualized basis, final rules have already imposed $4.8 billion in costs. However, the most expensive current measure (efficiency standards for fluorescent lamps) will add just $841 million in burdens. At $8.4 billion, the CPP is almost ten times more expensive than the most burdensome rule of 2015 to date. It is no wonder the measure will likely close at least 66 plants and eliminate 125,800 jobs.

DOL’s Proposed Fiduciary Rule: Not in the Best Interest of Investors

Executive Summary

·       If the Department of Labor’s (DOL) proposed fiduciary rule is implemented, almost all retail investors will see their costs increase by 73-196 percent due to a mass shift toward fee-based accounts.

·       Firms providing investment advice will see an average of $21.5 million in initial compliance costs and $5.1 million in annual maintenance costs.

·       Up to 7 million current Individual Retirement Accounts (IRAs) would fail to qualify for an advisory account due to the balance being too low to be profitable for the adviser.

Introduction

On the five-year anniversary of Dodd-Frank, the comment period ended on DOL’s controversial proposed “fiduciary rule.” The rule sets standards for providers of financial advice, but its most likely impact would be to hurt retirement savers – especially low and middle income retirement savers – and many small businesses that provide investment advice. This short paper contains an explanation of the proposal, a survey of the literature on its likely impacts, and an assessment of two comparable rules abroad that DOL used in its proposal.

2010 Proposal

DOL first proposed a fiduciary regulation back in 2010. That proposal set new definitions for when an adviser was considered to be giving investment advice and when a person was acting as a fiduciary as those actions relate to employee benefit plans under the Employee Retirement Income Security Act of 1974 (ERISA). Specifically, investment advice would be held to a fiduciary standard (as opposed to a suitability standard) even if it was just a single recommendation given once, or if the adviser was a Registered Investment Adviser (RIA), among other relaxed provisions. That fiduciary standard is the highest standard of care under the law and would require investment advisers to act in the absolute best interest of their clients or risk being in breach of that standard of care and facing harsh punishment under the law. A suitability standard, on the other hand, requires only that the adviser recommend investments that are “suitable” to their clients’ needs. In response to the proposal, DOL received over 300 comments, the large majority of which criticized the proposal and its costs and effects, and urged DOL to seek other means for implementing such a standard. Congress responded in turn with the Retail Investor Protection Act, which would prohibit DOL from issuing any rules regarding fiduciary standards under ERISA until at least 60 days after the Securities and Exchange Commission (SEC) issues a final rule regarding fiduciary standards for broker-dealers. The SEC rulemaking was required by Dodd-Frank, but had to be issued after an in-depth study of its costs and benefits, which is discussed below. By September 2011, DOL announced that it would withdraw the proposed rule and go back to the proverbial drawing board.

2015 Proposal

In April 2015, DOL announced its new proposal which stipulates the types of advice that qualify as fiduciary in nature, carves out certain categories of investment advice that aren’t subject to the rule, and amends several existing exemptions from the classes of prohibited transaction while adding two more.

Types of Advice Held to a Fiduciary Standard

This proposed rule differs from the 2010 proposal in two important ways: 1) advice is no longer considered fiduciary just because the adviser is an RIA – the rule now takes a “functional approach” under which advice must meet both prongs of the definition; and 2) advice doesn’t have to be tailored specifically to the needs of the advice-receiver; it only has to be directed to the plan, participant, or beneficiary.  Specifically, under the new proposed rule, a person renders investment advice that is subject to a fiduciary standard by: “(1) providing investment or investment management recommendations or appraisals to an employee benefit plan, a plan fiduciary, participant or beneficiary, or an IRA owner or fiduciary and (2) either (a) acknowledging the fiduciary nature of the advice, or (b) acting pursuant to an agreement, arrangement, or understanding with the advice recipient that the advice is individualized to, or specifically directed to, the recipient for consideration in making investment or management decisions regarding plan assets. When such advice is provided for a fee or other compensation, direct or indirect, the person giving the advice is a fiduciary.” Put more simply, an investment adviser would be held to this heightened standard if they give any investment advice under the mutual understanding that they are taking their clients’ best interests into consideration and acting on those interests.

Carve Outs from the Fiduciary Standard

The proposed rule lays out several categories of “carve-outs” for “communications that the Department [of Labor] believes Congress did not intend to cover as fiduciary ‘investment advice’ and that parties would not ordinarily view as communications characterized by a relationship of trust or impartiality.” It also specifies that “[n]one of the carve-outs apply where the adviser represents or acknowledges that it is acting as a fiduciary under ERISA with respect to the advice.” These carve-outs fall into seven broad categories:

1)     Statements or recommendations made to a “large plan investor with financial expertise” by a counterparty acting in an arm’s length transaction;

2)     Offers or recommendations to plan fiduciaries of ERISA plans to enter into a swap or security-based swap that is regulated under the Securities Exchange Act or the Commodity Exchange Act;

3)     Statements or recommendations provided to a plan fiduciary of an ERISA plan by an employee of the plan sponsor if the employee receives no fee beyond his or her normal compensation;

4)     Marketing or making available a platform of investment alternatives to be selected by a plan fiduciary for an ERISA participant-directed individual account plan;

5)     The identification of investment alternatives that meet objective criteria specified by a plan fiduciary of an ERISA plan or the provision of objective financial data to such fiduciary;

6)     The provision of an appraisal, fairness opinion or a statement of value to an ESOP regarding employer securities, to a collective investment vehicle holding plan assets, or a plan for meeting reporting and disclosure requirements; and

7)     Information and materials that constitute “investment education” or “retirement education.”

These carve-outs are a step in the right direction, taking into account many of the comments received after the 2010 proposal.  Unfortunately, there are also exceptions to the carve-outs, which translate into an even greater need for compliance personnel and procedures for the individuals and businesses affected by this rule.

Prohibited Transaction Class Exemptions

ERISA contains a handful of “prohibited transactions” that are automatically considered to be in breach of a fiduciary duty along with several exemptions to those prohibitions. This rule proposes two new exemptions from the prohibited transaction provisions of ERISA along with amendments to exemptions that were already adopted. Per DOL’s submissions to the Office of Management and Budget, “[t]he proposed exemptions and amendments would allow, subject to appropriate safeguards, certain broker-dealers, insurance agents and others that act as investment advice fiduciaries to nevertheless continue to receive a variety of forms of compensation that would otherwise violate prohibited transaction rules and trigger excise tax.” The two proposed new exemptions are significant:

1)     Best Interest Contract Exemption – One of the biggest criticisms of the 2010 proposal was that it would effectively ban commission-based and other indirect compensation. The Best Interest Contract Exemption (BIC) is DOL’s response to that concern. BIC allows an adviser to receive otherwise prohibited forms of compensation in connection with the purchase, sale or holding of certain investment products provided that the adviser “contractually acknowledge fiduciary status, commit to adhere to basic standards of impartial conduct, warrant that they will comply with applicable federal and state laws governing advice and that they have adopted policies and procedures reasonably designed to mitigate any harmful impact of conflict of interest, and disclose basic information on their conflicts of interest and on the cost of their advice” with the investor. Such a contract isn’t a “get out of jail free card” for an adviser should an investor later find the advice unsuitable. Rather, the BIC subjects the adviser to private actions under general contract law instead of fiduciary claims under ERISA.

2)     Principal Transaction Exemption – Investment advisers often sell fixed-income securities out of their own inventory, triggering taxes and other legal sanctions, unless they are exempt. In DOL’s eyes, these principal transactions raise conflict of interest concerns because the adviser is better able to manipulate the price. However, because the practice is so widespread, DOL decided to exempt these transactions from ERISA prohibitions provided they execute a contract containing the same provisions as the BIC as detailed above and further include precise terms related to the price of the security in the transaction. Specifically, the adviser would be required to obtain two outside price quotes for a same or similar security, then provide that security to the investor at a price “at least as favorable” as those two quotes. This exemption is similar to the SEC’s principal transaction guidance for broker-dealers.

Possible Effects of the Fiduciary Rule

Although DOL’s proposal is potentially months (or even years) from being implemented, several studies have been conducted to assess the potential impacts that such a rule would have on the market. A few of the most relevant are discussed below. Each of which is broken out, based on their findings, into two categories: 1) costs to retirement savers and 2) burden on businesses giving investment advice.

Costs to Retirement Savers

Section 913 of Title IX of Dodd-Frank required the SEC to examine the regulatory regimes covering broker-dealers and investment advisers. The SEC was also directed to examine the potential impacts on retail customers if those regulatory requirements are changed or eliminated and to make appropriate recommendations as to what sort of fiduciary standards should govern the industry. This mandate has caused many to question why DOL, rather than the SEC, is proposing this fiduciary rule. Nonetheless, the SEC study provides some useful analysis of what would happen should such a rule be put in place.

The SEC study examined the impact on investors of a shift to fee-based accounts. It found that such a shift could increase costs and reduce investment gains: a 1 percent increase in annual fees reduces an investor’s return by approximately 18 percent over 20 years. The shift to a fee-based model would reduce cumulative returns to “small investors” (those with $200,000 or less in assets) by $20,000 over the next 20 years.
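The 18 percent figure is consistent with simple compounding: an extra 1 percent fee skimmed off each year for 20 years leaves (1 − 0.01)^20 ≈ 82 percent of the balance an investor would otherwise have, regardless of the underlying return. A minimal sketch:

```python
# An extra 1% annual fee compounds: after n years the investor keeps
# (1 - fee)^n of what they would otherwise have had, independent of
# the underlying rate of return.
fee = 0.01
years = 20

remaining = (1 - fee) ** years          # ~0.818 of the no-fee balance
reduction_pct = 100 * (1 - remaining)   # ~18 percent lower return
print(f"{reduction_pct:.1f}% lower return over {years} years")
```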

Another study examined “the impact on the U.S. retirement market, and quantif[ied] differences in investing behavior and outcomes between advised and non-advised individuals” by surveying over 4,300 retail investors and 1,200 small businesses.

The study found that DOL’s proposed rule would likely reduce individual retirement savings. Other key findings include:

·       7 million IRAs would fail to qualify for an advisory account due to the balance being too low; as a result, individual investors with small-balance accounts will likely lose access to retirement advice and support.

·       Almost all retail investors will see average increased costs of 73-196 percent due to a mass shift toward fee-based accounts.

·       As many as 360,000 fewer IRAs would be opened each year as a result of the rule.

·       As a result of the lack of a carve-out that would allow providers to market self-directed plans with fewer than 100 participants, financial advisers would be forced to stop providing workplace retirement plan set-up and support services to small businesses which would cause many small businesses to close existing plans or never establish a plan in the first place.

Lastly, several comments received by DOL questioned the department’s regulatory impact analysis (RIA). One comment by the Financial Services Institute (FSI) relied on a study conducted in conjunction with Oxford Economics that quantitatively criticizes the RIA.  In particular, the comments say that the RIA failed to take into account the impact of the rule both on savers who have an IRA and on those who do not. For those savers who do have an IRA, the analysis failed to assess whether the current system already suits them well.

Second, FSI explains how the rule’s resulting compliance costs outlined in the RIA are understated or, at times, absent entirely. Those costs include:

·       Elimination of choice for investors as firms reduce their catalog of investment options

·       Homogenization of investing strategies that will create greater risk for investors

·       The push into “robo-investing” which will particularly hurt inexperienced investors

·       Information overloads, most often in the form of the BIC, that will be faced by investors and inhibit their ability to properly comprehend their investment choices

·       Loss of access to commission based accounts and products that are more appropriate for some investors

·       Extensive disclosure requirements

·       Recordkeeping costs

·       Implementation costs of BICs for both new and existing clients

·       Supervisors, compliance and legal oversight costs

·       System interface development costs related to the need to accept new data feeds required by the proposal

·       Training and licensing costs

·       Litigation costs, including potential class action lawsuits stemming from the BIC

·       Each of the above new costs to investment advisers will result in increased direct costs to investors in the form of pass-through costs

Costs to Businesses Providing Investment Advice

With regard to the impact of DOL’s proposal on businesses that provide investment advice, the SEC study found that, as the rule decreases savings over time for retirement savers, the cost of services becomes prohibitively high for the advisers forced to deal with increased compliance costs, and as a result retail investors would face limited choices. It even goes so far as to say that “the increase in costs to broker-dealers could cause many to decide to no longer offer certain products and services to retail customers (e.g., due to risk of litigation under a new fiduciary standard or due to restrictions on principal trading), or would only offer them at increased prices, thereby limiting retail customers’ access to the currently available range of products and services.” The footnote to that section predicts that “middle class Americans – especially those unwilling or unable to pay upfront fees for guidance – will effectively lose access to competent financial guidance and certain investment products and services.” In sum, as fiduciary duties are levied on advisers who previously were under a suitability standard, advisers will be hit with compliance costs that will eventually be passed onto low and middle income investors, thereby limiting their access to investment accounts and retirement savings.

Earlier this month Deloitte released a report which examined the operational expenses for companies complying with the new rule. The study yielded “five themes” which indicate that broker-dealers will face a great deal of operational and financial challenges in implementing the proposed rule. Those five themes are as follows:

1)     It will be unfeasible or impossible to operationalize certain requirements.

2)     Significant personnel, process and technology changes and investments to operations, business and compliance will be required to comply with the rule.

3)     Rule requirements will create disruptions to business operations and customer experience.

4)     Rule requirements may conflict with existing regulatory obligations.

5)     The rule is ambiguous and broad in certain areas, which challenges the operationalization of the rule’s requirements.

The report also included the costs firms would incur as a result of the rule. The estimates follow, broken down by firm size in terms of net capital: small (less than $50 million), medium ($50 million to $1 billion), and large (greater than $1 billion):

·       Small firms: $3.4 million in initial compliance costs; $2.6 million in ongoing maintenance costs.

·       Medium firms: $23.1 million in initial compliance costs; $5 million in ongoing maintenance costs.

·       Large firms: $38.1 million in initial compliance costs; $9.5 million in ongoing maintenance costs.

Another study, conducted by HCC, surveyed over 600 retirement plan decision makers at various small businesses in an effort to determine “how the rule could impact assistance provided to small businesses and their employees.” HCC’s research examined three general topics: 1) the current environment of investment selection and monitoring, 2) how small businesses currently offering retirement plans feel about the proposed rule and its impact, and 3) how small businesses currently not offering plans but considering doing so feel about the rule and its impact. Like Deloitte’s study, HCC’s results were five-fold:

1)     Thirty percent of small businesses with a plan indicated that it was at least “somewhat likely” that they would drop the plan if the rule were to be implemented.

2)     Fifty percent of small businesses with a plan indicated that it was at least “somewhat likely” that the rule, if it were implemented, would cause them to reduce their matching contribution, offer fewer investment options, and increase fees charged to plan participants.

3)     Fifty percent of small businesses without a plan, but considering implementing one, indicated that the rule would reduce the likelihood of their offering a plan, with thirty-six percent indicating that it would greatly reduce that likelihood.

4)     Forty percent of small businesses without a plan, but considering implementing one, indicated that the regulation would be at least somewhat likely to cause them to charge higher fees to participants and not offer matching contributions.

5)     Over eighty percent indicated that they believed their current adviser did a “very good” or “excellent” job of investment selection and over ninety percent indicated that they were at least “somewhat satisfied” with the plan’s current investment options.

As such, HCC found that DOL’s proposed rule, if implemented as-is, “could have a negative impact on the number of employees who got offered plans and a profound negative impact on the quality and generosity of those plans.”

Finally, in terms of the impact on small businesses, the aforementioned FSI study explains that there are about 240,000 broker-dealer and investment adviser firms in operation today. Of those, approximately 60,000 are large national companies. The other 180,000 are smaller firms whose business consists mostly of $30,000-$50,000 accounts held by low- to middle-income investors. FSI estimates that if DOL’s proposal is implemented, the costs required to comply would be significant enough to make those $30,000-$50,000 accounts unprofitable, forcing firms to drop those investors as customers, which, for many small firms, would put them out of business entirely.

Results of Comparable Regulations Abroad

DOL’s own RIA uses examples of recently-implemented regulations in the United Kingdom and in Australia that it says mirror its proposal. In doing so, DOL argues that the United States should take a lesson from each of these countries since their regulatory regimes were comparable to the DOL proposal, and, at least in the case of the United Kingdom, there had been positive results. That contention is simply not true.

Australia’s Future of Financial Advice (FOFA) Reforms

FOFA does closely resemble DOL’s proposal: it prospectively banned conflicted payment structures, including commission and volume-based payments; it imposed a “best-interest” duty on advisers acting on behalf of their clients; and it required various disclosure and client-agreement duties while giving the Australian government more power over broker-dealers and investment advisers. However, Australia’s retirement system looks nothing like that of the United States. FOFA did in fact dramatically reduce the size and accessibility of the investment adviser industry, but Australian retirement savers were not harmed because of a mandatory 9.5 percent employer retirement contribution. Those contributions go into a “Superannuation Fund” that can then be allocated among various products with differing assets and diversification levels. Even if an employee is not advised as to which product to choose, or fails to choose at all, the deposits go into what is called a “MySuper” product, which is required by law to hold appropriately diversified assets. Because Australia’s retirement system offers a backup for those investors who are unable to access an investment adviser, it does not offer a meaningful comparison to DOL’s proposal and should not be part of its RIA, even though it does highlight the negative effects on businesses providing investment advice.

United Kingdom’s Retail Distribution Review

In 2006 the United Kingdom introduced its Retail Distribution Review (RDR) which, at its heart, requires advisers to: 1) explicitly disclose and separately charge clients for their services; 2) disclose to clients whether they are providing independent or restricted advice; 3) subscribe to a code of ethics; 4) hold an appropriate qualification; 5) complete at least 35 hours of continuing professional education annually; and 6) hold a “Statement of Professional Standing” from an accredited institution. DOL states in its RIA that “the results [of RDR implementation] were positive and show material improvements” and that “[a]s a result, clients should be in a better position.”  However, a 2014 Europe Economics “post implementation review” of RDR finds just the opposite.

At over 100 pages, the review quantifies several downfalls of RDR, but most alarming – and one of the biggest worries about the DOL proposal – is the effect it notes on low- and middle-income investors. Per Europe Economics: “Pre-RDR there was a concern that the ban on commissions and introduction of more transparent adviser charging would lead some firms to remove previous implicit cross-subsidies between customer groups. As a result, customers with low levels of investable wealth and simple ongoing advice needs may no longer be profitable for some firms, at least not at fee levels which customers would be willing to pay…Post-RDR there is evidence that firms are indeed segmenting their client books…There is also evidence of a move among some advisers towards higher net-worth customers. Although sources suggest minimum thresholds vary by firm, some firms have moved to minimum wealth levels of between £50,000 and £100,000.” The review also cites decreased options for investors, not only as a result of minimum wealth thresholds, but from firms and banks exiting the market altogether. “This exit appears to have been driven by a mix of factors. Barclays, for example, cited ‘a decline in commercial viability for such services over recent years’…HSBC, RBS, Lloyds and Santander also re-structured their advisory businesses, leading overall to a partial withdrawal from the advice market – and a decline in adviser numbers.” It is hard to imagine a scenario in which the same or similar negative effects would not occur in the United States as a result of DOL’s proposal.

Conclusion

Many experts share the concern that DOL’s proposal will do more harm than good by reducing choice and limiting access for investors. While nearly everyone involved favors some sort of fiduciary standard, DOL should take into account what many experts warn will be the impact of the rule and, at the very least, modify the proposal.

Ideally DOL would leave the setting of broker-dealer and investment adviser fiduciary standards to the SEC. DOL will conduct public hearings on the issue the week of August 10, after which it will accept further comments and possibly modify the proposed rule. Implementation is still a long way off, and hopefully DOL will use that time to provide a more appropriate proposal.

Introduction

 

The United States continues to experience tepid economic growth, despite the official end of the recession more than five years ago. The U.S. economy remains exposed to contraction, with negative growth rates in the first quarters of 2014 and 2015. More worrisome is the U.S.’s projected growth rate of 2.3 percent over the next 10 years. Essential to improving this outlook is a pro-growth policy agenda that includes reforming the nation’s broken tax code.

 

The House and Senate are both in the early stages of considering tax reform, characterized by committee-level working groups, discussions, and hearings. The Senate Finance Committee recently released the results of a bipartisan effort to examine 5 separate elements of the tax code.[1] The congressional budget resolution also assumes a fundamental rewrite of the tax code. While promising, history indicates that there have only been a handful of tax overhauls of the modern tax regime, and a total overhaul remains a difficult policy goal. In the near-term, it will likely remain elusive, but reforms of a smaller scope, such as business-only reform, or reform that focuses only on taxation of U.S. multinational corporations, are more achievable. Of course this approach would also be less desirable in terms of the implications for overall growth and the manufacturing sector, which remains a vital component of the U.S. economy.

 

This analysis reviews the need for tax reform, the characteristics of a successful tax reform, and the policy merits of recent tax proposals. It concludes that comprehensive tax reform would offer the largest opportunity for U.S. firms, particularly domestic firms, including domestic manufacturers, and that a robust business tax reform that includes essential rate reduction could yield 2.5 million new jobs and as much as $9,000 in additional annual income per American. Less optimal for the economy in general and manufacturing in particular would be a narrower reform that does not include business pass-throughs, while still less optimal would be a reform that only addresses the international tax system.

 

The Need for Tax Reform: The Corporate Tax Rate

 

The single most important characteristic of the U.S. corporate tax is that the rate is too high. The combined federal-state U.S. corporate tax rate of 39 percent is the highest among all major developed economies.[2] The high U.S. rate is seemingly not a matter of deliberate choice. Instead, it stems from a failure to acknowledge and keep abreast of broader global trends.

 

The U.S. corporate tax rate is essentially unchanged since 1986, when a significant rate reduction was enacted. Prior to 1986, the U.S. levied corporate taxes in excess of the Organization for Economic Cooperation and Development (OECD) average. By 1988, when the 1986 reform was fully implemented, the combined U.S. statutory rate had fallen below the OECD average.

 

Since 1988, however, the U.S. has again become a corporate tax outlier. According to OECD data from 1988-2011, every OECD nation, except the U.S., reduced its combined statutory corporate tax rate. On average, these nations saw a decline of 18 percentage points in combined statutory rates. The only OECD nation that saw a net increase in the combined statutory rate was the United States – the result of a 1-percentage point increase in 1993.

 

Additional Measures of the U.S. Tax Rate

 

Just as taxing wages creates a disincentive to work, taxing capital creates a disincentive to invest. The tax system that currently exists in the United States is full of these distortions, which leads to an inefficient amount and distribution of capital. But measuring the effect of taxes on capital investment is more difficult than just looking at the statutory rate found in the U.S. tax code. Instead, it is useful to consider two additional measures of taxation: the average effective tax rate and the effective marginal tax rate.

 

While statutory tax rates are critical to firm investment decisions, other measures of corporate taxation also warrant consideration.[3] A firm’s average effective tax rate includes other facets of the corporate tax code, such as credits and deductions, which figure in the determination of a firm’s tax burden. While less stark than top statutory rates, an international comparison of effective corporate rates still paints the U.S. in an unfavorable light. According to a study by PricewaterhouseCoopers, “companies headquartered in the United States faced an average effective tax rate of 27.7 percent compared to a rate of 19.5 percent for their foreign-headquartered counterparts. By country, U.S.-headquartered companies faced a higher worldwide effective tax rate than their counterparts headquartered in 53 of the 58 foreign countries.”[4]

 

The effective marginal tax rate measures the true value of taxes paid on each additional dollar of investment. Firms make their marginal investment decisions based on the expected return from that investment. A high effective marginal rate lowers that return, making investment less attractive.
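This wedge can be made concrete with the standard King-Fullerton formulation (a conventional definition, not one drawn from the sources cited here): letting p denote the pre-tax rate of return an investment must earn at the margin and s the post-tax return ultimately received by the saver, the effective marginal tax rate is the share of the pre-tax return absorbed by taxes:

```latex
\mathrm{EMTR} = \frac{p - s}{p}
```

For example, if a marginal investment must earn 10 percent before tax for the saver to net 6.5 percent after all layers of tax, the effective marginal rate is (10 − 6.5)/10 = 35 percent.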

 

Making investment even less attractive are the marginal tax distortions created by the U.S. tax code. For example, publicly traded C-corporations face a higher marginal rate than sole proprietorships and S-corporations. Similarly disruptive, debt-financed investments face a lower rate than equity-financed investments. Finally, investments in assets such as inventory, computers, and manufacturing buildings typically face a higher effective marginal rate than investments in petroleum and natural gas structures.[5]

 

Because these rates vary based on asset type and financing structure, the U.S. tax code inadvertently incentivizes certain types of investment over others. This contributes to inefficient investment decisions and limits economic growth. Pro-growth tax reform would address these inefficiencies through a lower rate and a corresponding reduction in distortions.

 

Alternative Approaches to Tax Reform

 

Comprehensive Tax Reform

 

A fundamental tax reform would address both the business and individual tax systems, which have gone without a complete reform since 1986. Since then, rates and complexity in both systems have increased. A fundamental overhaul would likely face revenue-neutrality constraints, as was the case in 1986. Such a constraint would require important tradeoffs, but it could still allow for a pro-growth tax reform that increases GDP growth, employment, and wages, and yields budgetary savings.[6]

 

Comprehensive tax reform, however, has proven to be an elusive public policy goal. The most recent attempt, advanced by the former Chairman of the Ways and Means Committee, gained little traction in Congress, while the administration has expressed little interest in pursuing a tax reform that would pair business and individual rate reduction.

 

Business-Only Reform

 

The United States has a highly uncompetitive tax code that harms growth and disadvantages U.S. firms at home and abroad. U.S. corporate tax reform is therefore essential, but any effort to reform “business” taxation that focused solely on the corporate code would fall short of the full opportunity for improved economic policy.

 

The code taxes businesses in two distinct ways: either at the entity level or the individual level. Whether the business is taxed directly or the taxable income is passed through and then taxed at the individual level is determined by the legal form that the business takes.

 

There are four major organizational forms available to non-farm businesses: C-corporations (named for subchapter C of the tax code), non-farm sole-proprietorships, S-corporations (subchapter S of tax code), and partnerships. A business may elect to organize as a particular type of entity for a number of reasons that are beyond the scope of this discussion.[7] Broadly, these organizational forms can be separated by how related income is taxed. This division separates C-corporations, where income is taxed at the business or entity level (and again when passed along to shareholders as a dividend), and the other major forms of organization broadly referred to as pass-through entities because related business income is said to “pass-through” the organization to be taxed at the individual level through the personal income tax.

 

As of 2012, there were 32.8 million non-farm businesses filing tax returns: 1.6 million C-corporations, 23.6 million sole-proprietors, 4.2 million S-corporations, and 3.4 million partnerships (including LLCs). The past several decades have seen the relative growth of non-farm sole proprietors, S-corporations and partnerships, and the associated diminution of the C-corporation.[8]

 

This dates to the 1986 tax reform legislation, which raised the corporate rate above the top individual rate. S-corporations quickly gained popularity, followed later by partnerships. As the Joint Committee on Taxation notes, “1986 was the last year in which the number of C-corporation returns exceeded the number of returns from pass-through legal entities…. while the number of C-corporations has generally declined in the United States since 1986 by a third, the number of pass-through entities has nearly tripled.”[9]

 

When the tax code drives a firm’s choice of organizational form, it produces an inefficient distortion that ultimately drags on economic growth.[10] There is evidence that the corporate tax code can reduce the incentive to organize business activity as a C-corporation in favor of other forms of organization; this has been found to impose a cost by misallocating entrepreneurial talent in the economy.[11]

 

A sound tax reform would mitigate this effect. Any attempt to reform business taxation generally must therefore be neutral with respect to the legal form of organization. Pursuing corporate tax reform, including needed rate reduction, while leaving the individual code untouched would exacerbate existing distortions.

 

International-Only Reform

 

The U.S. corporation income tax applies to the worldwide earnings of U.S. headquartered firms. U.S. companies pay U.S. income taxes on income earned both domestically and abroad. Active income earned in foreign countries is generally only subject to U.S. income tax once it is repatriated, giving an incentive for companies to reinvest earnings anywhere but the U.S., owing to its high corporate tax rate.

 

This system distorts the international behavior of U.S. firms and essentially traps foreign earnings that might otherwise be repatriated back to the U.S.

 

While the U.S. has maintained an international tax system that disadvantages U.S. firms competing abroad, many U.S. trading partners have shifted toward a territorial system; that system exempts entirely, or to a large degree, foreign source income. Of the 34 economies in the OECD for example, 28 have adopted such systems, including recent adoption by Japan and the United Kingdom.

 

Maintaining the U.S. worldwide system compounds the incentive for firms to keep earnings offshore in the face of high domestic rates. The combination of high rates and an increasingly outmoded worldwide tax system disadvantages U.S. firms abroad, where market opportunities are growing. 

 

The 1990s and early 2000s saw a series of corporate “expatriations” whereby U.S. firms re-domiciled abroad to reduce their tax burden, a phenomenon that has seen a recent resurgence. Maintaining the U.S.’s antiquated tax system will, as previous research by the American Action Forum has demonstrated, continue to put domestic headquarters at risk.[12] This activity has drawn attention to international inversions and spurred potential reforms that would tap international tax policy as a revenue source for additional infrastructure spending.[13]

 

However, a modernization that includes only an international reform would come at the expense of reforms that address the primary failure of the U.S. business tax code – the rate. An international-only reform would not benefit C- and S-corporations that operate solely in the United States, including domestic manufacturers. Moreover, such a reform would not benefit the millions of domestic S-corporations that do not conduct overseas operations. An international-only reform falls well short of the goals of both a comprehensive and an ideal business tax reform.

 

Benefits of Reform

 

Early research on the economic effects of corporate taxes was largely focused on closed economies.[14] Despite this limitation, this early work revealed many of the pernicious effects of corporate taxes, and laid the foundation for better understanding of the tax’s effects. While the U.S. corporate tax code had remained largely unchanged for decades, there has been significant global economic and geopolitical change in the intervening years. In an increasingly interdependent global economy, corporate taxes must be considered in the context of high capital mobility, a world that only amplifies flaws observed in the early literature. In a global economy where investment can more easily shift, the implications for economic growth from corporate tax policy can be significant.

 

There is a strong body of research identifying the negative effect of the corporate tax on investment and capital formation.[15] Recent work has furthered the understanding that a high corporate tax rate increases the user cost of capital, which slows investment, productivity, and economic growth. Djankov et al. present a robust finding that “the effective corporate tax rate [has] a large adverse impact on aggregate investment, FDI, and entrepreneurial activity.”[16] Among the more telling examples is a recent study by the OECD that notes “corporate income taxes have the most negative effect on GDP per capita.”[17]

 

Another OECD study found that reducing the statutory corporate tax rate from 35 percent to 30 percent increases the ratio of investment to capital by approximately 1.9 percent over the long term.[18] This is consistent with findings from the JCT, which observed that reducing corporate income taxes has the greatest effect on long-term growth by increasing the stock of productive capital, which leads to higher labor productivity.[19]

 

In a 2008 OECD study of how corporate taxes affect investment decisions, Arnold and Schwellnus conclude that corporate taxes lower the rate of return on innovative, risky investments, reducing innovation and risk-taking.[20] To the extent that the corporate income tax discourages risk-taking, it acts like a “success tax” on firms with higher-than-average productivity, a finding consistent with Gentry and Hubbard.[21]

 

Among the most clearly stated observations of the growth implications of corporate tax reform is that of Lee and Gordon, who found that cutting the corporate tax rate by 10 percentage points can increase the annual growth rate by between 1.1 and 1.8 percentage points.[22]

 

The Tax Foundation also recently published estimates of the potential growth effects from corporate rate reduction, finding that reducing the “federal corporate tax rate from 35 percent to 25 percent would raise GDP by 2.2 percent, increase the private-business capital stock by 6.2 percent, boost wages and hours of work by 1.9 percent and 0.3 percent, respectively, and increase total federal revenues by 0.8 percent.”[23]

 

There is a clear consensus that the high U.S. corporate rate is a detriment to the economy and should be addressed through a reform that results in lower rates. As the research literature indicates, lower rates would have a significant and positive effect on economic growth, and therefore on employment and wage growth.

 

Employment and Income Effects

 

The American Action Forum has previously estimated that a pro-growth corporate reform could yield a significant improvement in annual economic growth.[24] This effect is predicated on the GDP effects of a lower corporate rate, in keeping with findings in the economic literature. A reform that did not include these effects – such as an international-only reform – would therefore preclude these gains. Over the long term, a 1 percent increase in trend economic growth would accrue to workers, first as new jobs as the U.S. economy returns to full employment, then as wages. One recent estimate of an improvement in trend economic growth of this magnitude finds that it could yield 2.5 million new jobs and nearly $9,000 in annual income growth for Americans.[25]

 

Assessing the Benefits of Tax Reform for U.S. Manufacturing

 

The U.S. manufacturing sector is essential to a vibrant national economy over the long term and to a robust continued recovery in the near term. While a comprehensive tax reform is unlikely in this Congress or the next, business tax reform merits policymakers’ consideration for its impact not just on the economy broadly, but on the manufacturing sector specifically.

 

Ranking alternative approaches to tax reform requires assessing the impact of a given reform on a sector, which in turn requires gauging that sector’s footprint in the overall economy. Manufacturing contributed $2.1 trillion to the U.S. economy on an annual basis in the first quarter of 2015, or 12 percent of GDP.[26] Taken alone, the manufacturing sector would stand as the world’s 9th largest economy – surpassing the GDP of India.[27] According to the U.S. Census Bureau, 11.3 million Americans are employed in the manufacturing sector.[28] Manufacturing is thus a critical sector of the U.S. economy, and the incidence of any tax reform on manufacturing should be considered in this context.

 

As noted above, comprehensive tax reform is unlikely in the near term, but a reform that focuses on business income has gained increased attention from lawmakers over the past year. U.S. firms face disproportionately high business tax rates compared to other economies – especially manufacturers. Indeed, according to one study, U.S. manufacturing faces one of the highest effective tax rates among large economies, second only to Japan.[29] Rate reduction is therefore essential to any business tax reform that addresses the anticompetitive taxation of U.S. manufacturing.

 

The scope of any business tax reform is also essential. Corporations tend to be larger than other forms of organization in terms of payrolls. This also holds true in manufacturing, where 64 percent of manufacturing employment is concentrated among corporations. However, other forms of organization comprise the bulk of manufacturing employers in the U.S. Indeed, of the 292,094 manufacturing establishments in 2013, 63 percent were organized as S-corporations, sole proprietorships, partnerships, or other forms of organization.[30] Failure to address these pass-through entities as part of business tax reform would thus leave the bulk of U.S. manufacturing firms exposed to anticompetitive rates. The optimal tax reform approach for the manufacturing sector would therefore achieve needed rate reduction applied broadly to all firms – C-corporations as well as pass-through entities.

 

The least competitive tax reform approach for U.S. manufacturers would be one that left the statutory rate untouched but altered the taxation of foreign-source income. Many U.S. manufacturers have robust overseas operations; indeed, the majority of manufacturing employees in the U.S. are employed by U.S. multinational corporations.[31] These firms must confront a tax regime that is increasingly out of step with trading partners. While this includes the U.S. system of taxing worldwide profits, the problem is exacerbated by the high U.S. statutory rate. Even a tax reform that incorporated some rate reduction could come at the expense of domestic manufacturers that do not have overseas operations or that would lose certain tax benefits in exchange for international reforms.[32] Indeed, some of the international tax reforms that have been considered as part of a financing mechanism for infrastructure spending could harm the pass-through entities that form the majority of manufacturing establishments in the U.S.[33]

 

Conclusion

 

The United States is on a pathway to grow at middling rates for the next decade. In the absence of major policy reforms, the U.S. will increasingly cede the global marketplace to foreign competitors. One essential area of reform is tax policy. The U.S. has not undertaken a major tax reform in nearly 30 years, while other major trading partners have pursued pro-growth reforms to attract new investment and opportunities. As a result, the U.S. imposes an antiquated system of tax on its firms, including the manufacturing sector, which accounts for over 10 percent of the nation’s income. Reforming the tax code to be more pro-growth should be the goal of policymakers in the near term. While a fundamental overhaul of the tax code is most desirable, alternative approaches to tax reform would have varying effects on the economy as a whole and the manufacturing sector in particular. A business tax reform that includes robust rate reduction and enhances the competitiveness of U.S. pass-through entities represents the optimal approach for the U.S. economy – improving trend growth and incomes.



[2] "Table II.1. Corporate income tax rate (2015)." OECD.org. The Organisation for Economic Co-operation and Development  July 2015 Web.

[3] Auerbach, Alan J., Michael P. Devereux, and Helen Simpson, “Taxing Corporate Income.” National Bureau of Economic Research Working Paper No. 14494 November 2008

[7] For more detail on the features of business legal forms of organization see: Joint Committee on Taxation, “Selected Issues Relating to Choice of Business Entity,” JCX-66-12 (July 27, 2012), pp. 20-31

[9] Joint Committee on Taxation, op. cit., p. 6

[10] Goolsbee, Austan, “The Impact of the Corporate Income Tax: Evidence from State Organizational Form Data,” Journal of Public Economics Vol. 88 No. 11 (2004) :2283-2299 and Robert Carroll and David Joulfain, “Taxes and Corporate Choice of Organizational Form,” Office of Tax Analysis Working Paper 73, October 1997

[11] Jane G. Gravelle and Laurence J. Kotlikoff, "The Incidence and Efficiency Costs of Corporate Taxation When Corporate and Non-Corporate Firms Produce the Same Good," Journal of Political Economy, Vol. 97, No. 4 (August 1989), pp. 749-780.

[14] Harberger, Arnold C., “The Incidence of the Corporation Income Tax.” Journal of Political Economy Vol. 70 No. 3 (1962): 215-240

[15] See Hassett, Kevin A. and R. Glenn Hubbard, “Tax Policy and Business Investment,” Handbook of Public Economics, Vol. 3, Amsterdam: North-Holland, pp. 1293-1343.

[16] Djankov, Simeon, Tim Ganser, Caralee McLiesh, Rita Ramalho, and Andrei Shleifer: "The Effect of Corporate Taxes on Investment and Entrepreneurship." American Economic Journal: Macroeconomics, 2(3): 31–64, July 2010

[17] Arnold, Jens, “Do Tax Structures Affect Aggregate Economic Growth? Empirical Evidence from a Panel of OECD Countries.” Organisation for Economic Co-operation and Development Economics Department Working Paper No. 643, October 2008. Web. http://www.oecd.org/officialdocuments/displaydocumentpdf/?cote=eco/wkp(2008)51&doclanguage=en

[18] Johansson, Asa, Christopher Heady, Jens Arnold, Bert Brys, and Laura Vartia, “Tax and Economic Growth.” Organisation for Economic Co-operation and Development Economics Department Working Paper No. 620, July 2008. Web. http://www.oecd.org/dataoecd/58/3/41000592.pdf

[19] “Macroeconomic Analysis of Various Proposals to Provide $500 Billion in Tax Relief.” Joint Committee on Taxation JCX-4-05 March 2005

[20] Arnold, Jens and Cyrille Schwellnus, “Do corporate taxes reduce productivity and investment at the firm level? Cross-Country evidence from the Amadeus dataset.”             OECD Economics Department Working Paper No. 641 September 2008

[21] Gentry, W. and R.G. Hubbard, “Success Taxes, Entrepreneurial Entry, and Innovation.” National Bureau of Economic Research NBER Working Paper No. 10551 June 2004

[22] Young Lee and Roger Gordon, “Tax Structure and Economic Growth," Journal of Public Economics, Vol. 89, Issues 5-6 (June 2005), pp. 1027-1043

[23] Schuyler, Michael, “Growth Divided from a Lower Corporate Tax Rate.” Tax Foundation March 2013 Web. http://taxfoundation.org/article/growth-dividend-lower-corporate-tax-rate

[26] http://www.bea.gov/iTable/index_industry_gdpIndy.cfm

[30] http://censtats.census.gov/cgi-bin/cbpnaic/cbpdetl.pl

[31] http://www.finance.senate.gov/legislation/download/?id=45b8dcf2-b1a4-493b-b8cf-bec770815d18

[32] http://www.ey.com/publication/vwluassets/ey-bna-article/$file/ey-bna-article.pdf

[33] http://www.finance.senate.gov/legislation/download/?id=dd80cb90-fce6-4588-b843-1454c0ae374e

  • The ride sharing industry has brought in $519 million in economic activity. #Uber #Lyft #Sidecar

  • 22,000 jobs have been created just from the #GigEconomy's ride sharing industry since 2009

Summary

The rise of the so-called “gig economy” and the increasing use of independent contractors has captured the attention of policymakers. Nontraditional work arrangements are hardly a new development in the American economy, yet there is the perception that smartphone apps and online marketplaces have led to a sharp increase in nontraditional work and even a new class of jobs. The goal of this short paper is to survey the limited available data to shed light on the magnitude of these labor force developments. 

Our analysis covers two aspects of work arrangements: (1) the gig economy – workers with alternative work arrangements, and (2) the online gig economy – workers who utilize new technologies, markets and platforms for alternative work arrangements. In addition we touch on related developments in product markets giving rise to the “sharing economy” – goods and services that employ under-utilized assets via online marketplaces or decentralized networks.

We find that the number of workers in the gig economy grew between 8.8 and 14.4 percent from 2002 to 2014. For comparison, overall employment increased by 7.2 percent over the same period. Independent contractors constitute a significant portion of gig workers; their ranks grew by 2.1 million from 2010 to 2014, accounting for 28.8 percent of all jobs added during the recovery. The online gig economy has experienced significant growth as well. Faster growth in taxi services and room rentals since the arrival of companies like Uber, Lyft, and Airbnb indicates that online gig jobs are transforming the labor force. In particular, the data suggest that the ride sharing industry has helped bring in an additional $519 million in economic activity from 2009 to 2013, and created 22,000 jobs in the sector.

Defining the Gig and Sharing Economies

The gig economy is a common phrase for the businesses and workers characterized by alternative jobs that are usually temporary and enabled by technology. Gig workers are mostly independent contractors and freelancers, though agency temps, on-call workers, contract company workers, self-employed workers, and standard part-time workers could also be considered gig workers.

A related concept in product markets is the sharing economy: goods and services that employ under-utilized assets via online marketplaces or decentralized networks for both monetary and non-monetary benefits.[1] For example, if you have extra time, you can contribute to Wikipedia or Linux. If you have an idle computer, you can download the Folding@home program and help advance medical research and cancer drug design.[ii]

Though still in the early stages of development, some sharing economy business models adopt the alternative worker arrangements of the gig economy. Uber, Lyft, and Airbnb are just a few examples of how companies are merging the temporary work arrangements of the gig economy with the sharing aspects of an online marketplace.[iii] The chart below illustrates the relationships between the concepts. The employment conditions for the workers who fall under this category are frequently the topic of policy discussions. So in addition to evaluating the overall gig workforce, we analyze the trends in this subset of the gig economy, which we will refer to as the online gig economy.

 

Quantifying the Gig Economy Workforce

There is no official classification for gig economy workers. We can, however, use estimates of the size of the contingent workforce to provide insights into these workers. The Government Accountability Office (GAO) recently examined the entire contingent workforce. Under its broadest definition, GAO found that in 2010, 40.4 percent of workers were contingent employees.[iv] This definition of contingent workers included anyone who was not a standard full-time employee; i.e., it encompassed agency temps, on-call workers, contract company workers, independent contractors, self-employed workers, and standard part-time workers. Under its narrower definition, GAO estimated that 7.9 percent of workers were “core” contingent workers. This category included only agency temps, on-call workers, and contract company workers.[v]

The actual number of gig economy workers likely falls between these two estimates. For instance, the majority of standard part-time workers are in construction, manufacturing, retail, education services, and accommodation and food services, which are not necessarily the types of industries associated with smartphone apps and software specialists. Meanwhile, independent contractors and freelancers, who are excluded from GAO’s core contingent category, likely constitute the bulk of gig economy workers.

Using data from the General Social Survey (GSS)[vi], a statistical survey conducted by University of Chicago’s National Opinion Research Center, we can proxy the workforce trends of the contingent worker categories that are most representative of the gig economy. These trends are illustrated in Table 1.[vii] For our most narrow measurement of gig workers (labeled Gig 1) we simply include independent contractors, consultants, and freelancers. Our middle measurement (Gig 2) includes all Gig 1 workers plus temp agency workers and on-call workers. Our broadest measurement (Gig 3) includes all Gig 2 workers plus contract company workers.

Table 1: Alternative Workers

Year    Gig 1           Gig 2           Gig 3
2002    18.9 million    22.7 million    26.0 million
2006    19.7 million    24.8 million    30.0 million
2010    18.5 million    25.4 million    29.7 million
2014    20.5 million    25.8 million    29.7 million

 

 Overall, we estimate that in 2014 there were 20.5 million to 29.7 million people in the alternative work arrangements that are often featured in the gig economy, constituting 14.0 percent to 20.3 percent of all employed people.

Growth in Gig Workers in the 21st Century

From 2002 to 2014, gig workers grew at a rapid pace, despite taking a hit during the recession.[viii]

Table 2: Growth In Workforce, 2002 to 2014

Worker Type         Percent Growth
Total Employment    7.2%
Gig 1               8.8%
Gig 2               13.9%
Gig 3               14.4%

 

During that time period, total employment increased by 7.2 percent. Meanwhile workers in the gig economy expanded between 8.8 percent and 14.4 percent. As a result, while independent contractors (Gig 1) made up 14.0 percent of the workforce in 2014, they represented 16.9 percent of all new jobs added during the previous 12 years. Our broadest measure of gig workers (Gig 3) indicates that they accounted for up to 38.2 percent of job growth.

Growth in independent contractors was particularly rapid during the recent recovery. From 2010 to 2014, we estimate that independent contractors increased by 2.1 million workers, accounting for 28.8 percent of all jobs added during that time period.
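For illustration, these growth shares can be approximated from the rounded figures in Table 1. The exact percentages in the text were computed from unrounded survey data, so the results below differ slightly:

```python
# Approximate the growth shares using Table 1's rounded values.
# Total 2014 employment is backed out from the statement that
# Gig 1 workers were 14.0 percent of all employed people in 2014.
gig1_2002, gig1_2014 = 18.9e6, 20.5e6
gig3_2002, gig3_2014 = 26.0e6, 29.7e6

total_2014 = gig1_2014 / 0.140      # implied total employment, 2014
total_2002 = total_2014 / 1.072     # total employment grew 7.2% over the period
job_growth = total_2014 - total_2002

print(f"Gig 1 growth: {gig1_2014 / gig1_2002 - 1:.1%}")
print(f"Gig 1 share of job growth: {(gig1_2014 - gig1_2002) / job_growth:.1%}")
print(f"Gig 3 share of job growth: {(gig3_2014 - gig3_2002) / job_growth:.1%}")
```

With rounded inputs, the shares come out a fraction of a percentage point below the 16.9 and 38.2 percent figures reported above.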

Trends in Non-employer Establishments and the Online Gig Economy

Standard labor data also do not easily reveal employment changes due to the online gig economy. For example, Etsy, the online marketplace, found through surveys that only 48 percent of its sellers consider their work on the site independent work, whether full time, part time, or temporary.[ix] About 24 percent of all sellers identify themselves as unemployed. Another 26 percent of sellers have full time jobs apart from their Etsy store. Standard labor market surveys would probably underreport these workers, even though they receive income via one of the biggest companies in the online gig economy. Etsy also found that only 2 percent of sellers are incorporated. Meanwhile, 90 percent of sellers are sole proprietorships, which by definition have no employees.

To proxy the growth of the online gig economy, we use Census data for businesses that have no paid employees and are subject to federal income tax. These nonemployer establishments illustrate the rise of the online gig economy. Table 3 shows recent data for nonemployer establishments and their average receipts.

Table 3: Trends in Nonemployer Establishments

Year    Total Nonemployer Establishments    Receipts (in $1,000s)    Average Receipts per Establishment
2002    17,646,062                          770,032,328              $43,638
2003    18,649,114                          829,819,228              $44,496
2004    19,523,741                          887,001,820              $45,432
2005    20,392,068                          951,206,297              $46,646
2006    20,768,555                          970,384,137              $46,724
2007    21,708,021                          991,791,563              $45,688
2008    21,351,320                          962,791,527              $45,093
2009    21,695,828                          923,018,039              $42,544
2010    22,110,628                          950,813,840              $43,003
2011    22,491,080                          989,628,512              $44,001
2012    22,735,915                          1,030,932,886            $45,344
2013    23,005,620                          1,052,025,268            $45,729

 

Since the beginning of the recovery in 2009,[x] nearly 1.3 million new nonemployer establishments were created, outpacing growth in total employer establishments to become nearly 75 percent of all businesses. From 2009 to 2013, these companies had $129 billion in additional receipts, which translates to a 14 percent increase over this time period. Of the nearly 270,000 nonemployer businesses added between 2012 and 2013, three sectors accounted for 60 percent of the growth: other services; real estate, rental and leasing; and transportation and warehousing.[xi]

The growth in companies like Airbnb, Lyft, and Uber coincides with the significant gains in the transportation and warehousing sector, suggesting that these companies have likely driven nonemployer establishment growth. For instance, from 2002 to 2008, the number of new taxi and limousine companies increased at an average rate of 4.3 percent per year. After Uber was established, the average annual growth rate in taxi and limousine companies jumped to 7.0 percent from 2009 to 2013.[xii] Similarly, the total receipts from 2002 to 2008 increased at an average annual rate of 8.3 percent, and then jumped to 10.1 percent from 2009 to 2013. Moreover, growth in receipts has accelerated in recent years as the number of new nonemployer establishments and the total receipts each increased by 11 percent in 2013 alone. The faster growth since ride sharing companies entered the market has led to significant economic advances in the taxi and limousine industry. In 2013 taxi and limousine receipts were about $519 million higher than they would have been had the pre-ride sharing growth continued since 2009. That’s about 19.5 percent of the $2.7 billion in growth in receipts since 2009. Similarly, there were 22,000 more nonemployer establishments, which probably equates to 22,000 unique jobs. That means the accelerated growth since ride sharing companies formed accounts for 41.3 percent of nonemployer establishments created since 2009.
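The counterfactual behind the $519 million figure can be sketched as a simple trend extrapolation. Receipts are normalized to 1.0 in 2009 below because the exact Census dollar inputs are not reproduced here; the growth rates are the averages quoted above:

```python
# Compare observed post-2009 receipt growth in taxi/limousine services
# against a counterfactual that extends the pre-ride-sharing trend.
# Receipts are normalized to 1.0 in 2009 (illustrative, not actual dollars).
base_2009 = 1.0
actual_2013 = base_2009 * 1.101 ** 4          # 10.1% average annual growth, 2009-13
counterfactual_2013 = base_2009 * 1.083 ** 4  # 8.3% pre-2009 trend extended

excess = actual_2013 - counterfactual_2013
share = excess / (actual_2013 - base_2009)
print(f"Share of 2009-13 receipt growth above trend: {share:.1%}")
```

Under these rounded growth rates the above-trend share comes out near 20 percent, consistent with the 19.5 percent share computed from the underlying dollar figures.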

 

The ride sharing industry is not the only part of the online gig economy to see rising usage. Before Uber, Lyft, Sidecar, and others were turning the taxi industry on its head, VRBO and others invited regular people to rent out their property as alternative living arrangements for vacationers. Soon after Airbnb was founded in late 2008, the total number of people offering their houses and apartments to others increased by 18 percent and total receipts increased by nearly 20 percent.

Conclusion

The gig economy is of rising importance in the overall U.S. economy. From 2002 to 2014, workers in the gig economy expanded between 8.8 percent and 14.4 percent. Independent contractors, most notably, helped put 2.1 million people to work, accounting for 28.8 percent of all jobs added from 2010 to 2014. The ride sharing industry alone helped bring in an additional $519 million in economic activity from 2009 to 2013 for independent workers, while injecting 22,000 jobs into the sector. Though it is early, the online gig economy also seems to be a growing piece of the United States’ economy in the 21st century.



[1] Rachel Botsman, Defining The Sharing Economy: What Is Collaborative Consumption—And What Isn't?, http://www.fastcoexist.com/3046119/defining-the-sharing-economy-what-is-collaborative-consumption-and-what-isnt

[ii] Folding@home, The Software, https://folding.stanford.edu/home/

[iii] It is important to note that some analysts exclude Uber and Lyft from the sharing economy because they believe there is little sharing and collaboration involved in the service. However, these on-demand services do connect buyers and sellers in real time via a centralized market. Moreover, for the drivers who don’t work full time, about 81 percent of all drivers, the cars would otherwise sit idle, which seems to qualify them for the sharing economy.

[iv] Government Accountability Office, Contingent Workforce: Size, Characteristics, Earnings, and Benefits, http://www.gao.gov/assets/670/669899.pdf

[v] Government Accountability Office, Contingent Workforce, http://www.gao.gov/assets/670/669766.pdf

[vi] NORC at University of Chicago, General Social Survey, http://www.norc.org/Research/Projects/Pages/general-social-survey.aspx

[vii] Authors’ analysis of GSS data using GSS Data Explorer, https://gssdataexplorer.norc.org/, and Bureau of Labor Statistics Household Survey Data, http://www.bls.gov/cps/cpsaat01.pdf

[ix] Etsy, Redefining Entrepreneurship: Etsy Sellers’ Economic Impact, http://extfiles.etsy.com/Press/reports/Etsy_RedefiningEntrepreneurshipReport_2013.pdf

[x] National Bureau of Economic Research, US Business Cycle Expansions and Contractions, http://www.nber.org/cycles.html

[xi] United States Census, Nation Gains More than 4 Million Nonemployer Businesses Over the Last Decade, Census Bureau Reports, http://www.census.gov/newsroom/press-releases/2015/cb15-96.html

[xii] The 2009 reduction and deceleration in establishments and receipts that occurs in both taxi and limousine services and room and boarding was likely caused by the national economic recession from 2007 to 2009. This is also around the time when the initial online gig economy businesses were established. To analyze the impact of the online gig economy, we use 2009 as their initial period and measure the growth in their industries since.

  • Reducing the percentages of beneficiaries reaching later stages of disease could reduce total MA payments

  • Medicare Advantage health plans can, and should, be paid to keep patients healthy.

Executive Summary

One of the great dilemmas in health policy is that almost no one in the health care fields is actually paid to keep patients healthy. Health care providers are usually paid for performing specific services, while health plans are usually paid a specific amount per month for each beneficiary covered. As a result, providers have an incentive to perform more services and health plans have an incentive to cover fewer services. Most approaches to health policy focus on balancing these two interests, rather than incentivizing anyone to deliver the right services – both in terms of quantity and type. However, it is possible in at least some common situations to create incentives to do the job right, rather than simply to do “more” or “less.” It is long past time to begin paying to keep patients healthy rather than paying for specific treatments.

There are many common, chronic diseases which are progressive in nature, and whose progression can be slowed or stopped by appropriate disease-specific preventive care.[1] Allowing a disease to progress is harmful to patients, as well as financially costly in the long run, because more intensive treatment is then required. Proper preventive care is also costly, but less (often much less) costly than allowing a disease to progress.

Health plans should be paid to keep patients healthy. That is, when some percentage of patients with a chronic progressive disease may be expected to progress to the next stage of that disease in the course of a year, health plans should be paid if they can keep the percentage of patients progressing significantly below the expected level. Depending on the level of success in preventing disease progression, and the payment required to achieve that level of success, preliminary estimates suggest that annual program savings could range from several hundred million to almost $3 billion from chronic kidney disease alone. This is just the tip of the iceberg, since there are many other chronic progressive conditions that would be amenable to this sort of incentive system. Most important, because the payment for reducing disease progression can be calibrated to be lower than the payment for more severe stages of the diseases, it can be guaranteed that program costs will not increase.  In other words, there is great potential benefit, but no downside risk to the taxpayer.

Risk Adjustment and Incentives

Medicare Advantage (MA), and several other government programs, currently pay health plans a monthly fee to provide health care to enrollees. The base monthly fee is determined according to a well-known formula, but there is also a process for adjusting the fees paid to plans based on the health history and status of the beneficiaries actually enrolled in each particular plan. This process, known as “risk adjustment,” is intended to increase payments for enrollees who will cost more to take care of, and decrease payments for healthier enrollees. This has two related benefits: first, it decreases the risk to each plan of attracting a disproportionate number of relatively unhealthy enrollees, and second, it decreases the incentive for plans to attempt to disproportionately attract healthier enrollees.

While risk adjustment is necessary to ensure that plans don't try to game the system by disproportionately attracting healthier enrollees, it does not necessarily create an incentive to keep patients healthy; if an enrollee gets sicker and therefore more expensive to care for, the enrollee will be given a higher risk score the following year, and the MA plan will be paid more to offset the increased care required.

Instead of just paying an additional amount for each enrollee with a particular disease stage, plans should also be paid for reducing the percentage of patients advancing to the next disease stage.

Suppose there is a chronic disease with stages A and B. Stage A is not all that severe and requires little treatment to maintain normal activities, but without any treatment, say 20 percent of patients progress to Stage B each year. Stage B is severe, and requires substantial, very costly treatment to avoid becoming fatal. However, with appropriate preventive care, only 5 percent of patients progress to Stage B each year.

The current practice is to add a small amount, or no amount, to an enrollee's risk score (and therefore the MA payment) if that enrollee has the disease at Stage A, since little or no treatment is required, but to add a large amount if the enrollee has the disease at Stage B, since much treatment is required.

The American Action Forum (AAF) proposes providing an additional payment to MA plans if they are able to substantially reduce the rate of progression from Stage A to Stage B. For example, if the “usual” percentage progressing annually to Stage B is 20 percent, an additional per-patient payment should be provided for each percentage point by which the plan's progression rate falls below (say) 16 percent, multiplied by the number of Stage A patients. This per-patient amount should be calibrated so it is higher than the amount expected to be spent on appropriate preventive care, but lower than the amount that would be spent if those patients had progressed to Stage B. This will make “keeping patients healthy” profitable for both the government and the insurer.

The payment should be made in year two based on the number of Stage A enrollees at the beginning of year one who have not progressed to Stage B (and are not deceased) by the beginning of year two. The payment should be made to the MA plan that covered the enrollee in year one, regardless of whether the enrollee is still enrolled in that plan in year two.
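A minimal sketch of the proposed bonus arithmetic follows. The threshold and dollar amount below are illustrative placeholders, not proposed values:

```python
def progression_bonus(n_stage_a, actual_rate, threshold=0.16,
                      payment_per_patient=3000.0):
    """Year-two bonus for Stage A patients kept from progressing to Stage B.

    payment_per_patient is calibrated to sit above the cost of appropriate
    preventive care but below the extra payment a Stage B patient triggers,
    so both the government and the plan come out ahead.
    """
    if actual_rate >= threshold:
        return 0.0
    # Patients kept below the threshold earn the per-patient bonus.
    patients_below_threshold = (threshold - actual_rate) * n_stage_a
    return patients_below_threshold * payment_per_patient

# 1,000 Stage A enrollees; progression cut from the usual 20% to 10%,
# i.e. 60 patients beyond the 16% threshold did not progress.
print(progression_bonus(1000, 0.10))
```

A plan that fails to push progression below the threshold receives nothing, which is what guarantees the program cannot increase costs.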

Example: Chronic Kidney Disease

A disease particularly suited for this type of incentive is chronic kidney disease (CKD). CKD is characterized by five well-defined stages objectively measurable in laboratory tests, defined primarily by the patient's glomerular filtration rate (GFR). Left untreated, a patient can progress from Stage 1, which is not all that serious, through Stage 5, and ultimately to end stage renal disease (ESRD), requiring lifetime dialysis or a transplant. Table 1 displays the various stages of CKD and how they are defined. Early-stage diagnosis and appropriate treatment can reduce (though not eliminate) the probability of a patient progressing to more severe stages.

 

Table 1.

Stage of Kidney Disease             ICD-9 Code    Criteria*
Chronic Kidney Disease Stage 1      585.1         GFR ≥ 90 with any sign of kidney damage
Chronic Kidney Disease Stage 2      585.2         GFR range 60-89
Chronic Kidney Disease Stage 3**    585.3         GFR range 30-59
Chronic Kidney Disease Stage 4      585.4         GFR range 15-29
Chronic Kidney Disease Stage 5      585.5         GFR < 15
End Stage Renal Disease             585.6         Requires indefinite dialysis or transplant

* “Clinical Practice Guidelines for Chronic Kidney Disease: Evaluation, Classification, and Stratification,” National Kidney Foundation, http://www2.kidney.org/professionals/KDOQI/guidelines_ckd/p4_class_g1.htm. GFR is glomerular filtration rate, measured in ml/min per 1.73 m2.

** Some clinicians and health systems distinguish between stage 3a (GFR between 45 and 59) and stage 3b (30-44), but Medicare Advantage does not.

 

In the current (2014 going forward) CMS-HCC risk model, there is no risk score or additional payment to MA plans associated with stages 1, 2, or 3. Stages 4 (HCC 137) and 5 (HCC 136) receive a risk factor of 0.230 for community beneficiaries. This corresponds to an average additional payment of approximately $2,375 per year, based on the national average benchmark.[2] For institutionalized beneficiaries, stage 4 (HCC 137) is assigned a risk factor of 0.302, corresponding to an average additional payment of approximately $3,119, and stage 5 (HCC 136) has a risk factor of 0.521, corresponding to an average additional payment of approximately $5,380. An MA patient with ESRD is paid according to an entirely separate payment system, with an annual payment approximately $57,225 higher than the average annual MA benchmark.[3]
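Each incremental payment above is simply the risk factor times the national average benchmark. The benchmark value below is not quoted in the text; it is backed out from the paper's own stage 4/5 community figure, which implies roughly $10,326 per year:

```python
# Back out the implied national average benchmark from the stage 4/5
# community figure ($2,375 at a 0.230 risk factor), then reproduce the
# other incremental payments quoted in the text.
benchmark = 2375 / 0.230   # implied benchmark, ~$10,326 per year

for label, factor in [("Stage 4/5, community", 0.230),
                      ("Stage 4, institutional", 0.302),
                      ("Stage 5, institutional", 0.521)]:
    print(f"{label}: ~${factor * benchmark:,.0f} per year")
```

The computed amounts match the $2,375, $3,119, and $5,380 figures above to within a dollar of rounding.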

Consequently, reducing the rate of disease progression would reduce MA program spending by those amounts for each patient whose disease was prevented from progressing.

Preliminary estimates imply that approximately a quarter of patients with stage 3 advance to stage 4, about a third advance from stage 4 to stage 5, and approximately a quarter advance from stage 5 to ESRD.[4] Table 2 displays the prevalence of each of the 5 stages in the Medicare population, and associated payments for treatment provided.

Table 2.

Stage         ICD-9   2014 HCC   Prevalence in          Risk Factor            Incremental Outlay per Beneficiary
                                 Medicare Population*   Community / Inst.      Community / Inst.
CKD Stage 1   585.1   None       0.232%
CKD Stage 2   585.2   None       0.774%
CKD Stage 3   585.3   None       4.589%
CKD Stage 4   585.4   137        0.920%                 0.230 / 0.302          $566 / $743
CKD Stage 5   585.5   136        0.198%                 0.230 / 0.521          $889 / $2,013
ESRD          585.6   N/A        0.013%                 N/A                    $16,011 / $16,011

* U.S. Renal Data System Coordinating Center, 2014 USRDS Annual Data Report, http://www.usrds.org/adr.aspx

 

Potential Savings

Clearly, reducing the percentages of beneficiaries reaching later stages of disease could reduce total MA payments. Based on current prevalence of each disease stage and current Medicare enrollment, a ten percentage point reduction in the rate of progression from one stage to the next could reduce MA program payments by approximately $384 million annually. Paying a bonus to MA plans out of this savings would benefit all parties – patients would live longer, healthier lives, taxpayers would save money, and MA plans would benefit financially as well.
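A back-of-the-envelope version of this estimate can be sketched under simplifying assumptions: MA enrollment of roughly 16 million (an assumed round figure, not the paper's input), community payment rates only, and payment changes only at the stage 3-to-4 and stage 5-to-ESRD transitions, since stages 4 and 5 share a community risk factor:

```python
# Rough annual savings from a 10-percentage-point cut in progression rates.
# Enrollment and the set of payment-changing transitions are assumptions
# of this sketch, not the paper's exact inputs.
ma_enrollment = 16_000_000        # assumed MA enrollment (approximate)
reduction = 0.10                  # 10-point cut in annual progression rate

stage3 = 0.04589 * ma_enrollment  # stage 3 prevalence from Table 2
stage5 = 0.00198 * ma_enrollment  # stage 5 prevalence from Table 2

savings = (stage3 * reduction * 2375      # avoided stage 4 payments (community)
           + stage5 * reduction * 57225)  # avoided ESRD payments
print(f"Approximate annual savings: ${savings / 1e6:,.0f} million")
```

This simplified version lands in the mid-$300 millions; the gap from the $384 million figure reflects the rounded assumptions here rather than the fuller inputs behind the estimate.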

This of course represents only a small portion of the potential savings. Clinical experts indicate that reduction in CKD progression could substantially exceed this level, with a theoretical maximum savings of over $3 billion annually at current disease prevalence rates and MA enrollment levels. With both CKD prevalence and MA enrollment increasing over time, the savings will also increase over time.

Most importantly, because the payment for reducing progression rates can be set to be less than the additional payment when the disease progresses, the potential for savings is not offset by any risk of spending increases. In the worst-case scenario, MA spending does not change at all. In other words, an incentive program tied to slowing disease progression has absolutely no downside risk to the taxpayer.

Additional Applications

Furthermore, CKD is only one of many chronic diseases whose progression can be mitigated by appropriate treatment and measured objectively. Another example is diabetes, whose management can be evaluated by a patient's HbA1C measurement. Insufficient diabetes management increases the probability of complications, and complications can result in severe consequences for patients. Complications also increase health care costs, and thus MA payments. A patient who progresses from uncomplicated diabetes to diabetes with complications generates an increased MA expenditure averaging about $2,520 per year.

Conclusion

There are many common, chronic diseases which are progressive in nature, and whose progression can be slowed or stopped by appropriate disease-specific care. Allowing a disease to progress is harmful to patients, as well as financially costly in the long run, because more intensive treatment is then required. Proper preventive care is also costly, but less (often much less) costly than allowing a disease to progress. While clinicians are surely not intentionally allowing the quick progression of disease stages, the payment structure is working against providers and health plans who are proactive about preventive care.

MA health plans can, and should, be paid to keep patients healthy. This can be accomplished by rewarding plans when they reduce the progression rates of their patients below the level that would normally be expected based on population data.

If the reward to keep patients healthy is lower than the additional risk adjustment payment that would be made in the event of disease progression, there is no downside risk to the taxpayer, and the prospect of significant financial benefits to both taxpayers and health plans. Plans would be paid for keeping patients healthy rather than simply treating them when they are sick. Most importantly, patients would live healthier and longer lives. 

 


[1] Here the term “preventive care” is used in the linguistic sense of “health care intended to prevent otherwise likely adverse health outcomes” rather than in the regulatory sense of “services designated by the U.S. Preventive Services Task Force or the Secretary of Health and Human Services.”  Unlike the latter, in the case of  health care intended to prevent otherwise likely adverse health outcomes, there is no requirement in the Affordable Care Act to provide such services at all, let alone without cost-sharing.

[2] Robert A. Book, “Medicare Advantage Cuts in the Affordable Care Act: April 2014 Update,” American Action Forum, April 17, 2014, at  http://americanactionforum.org/research/medicare-advantage-cuts-in-the-affordable-care-act-april-2014-update.

[3] While not all Medicare beneficiaries with ESRD are able to enroll in an MA plan, those who are already enrolled in an MA plan when diagnosed with ESRD are able to stay in the same plan. Certain other categories of ESRD beneficiaries are also able to enroll in MA.

[4] Further research is planned to refine these estimates. Other results and figures in this paper do not depend on these preliminary estimates.


  • Raising the minimum wage to $15 per hour would cost 6.6 million jobs

  • Only 6.7 percent of income gains from a $15 minimum wage would go to workers in poverty

  • Raising the minimum wage to $12 per hour would cost 3.8 million jobs

 


Research published by the American Action Forum and the Manhattan Institute

Foreword

Economists have long understood that, by eliminating jobs and/or reducing employment growth, adoption of a higher minimum wage can harm the very poor whom it is intended to help. Nonetheless, a political drumbeat of proposals—including from the White House—now calls for an increase in the $7.25 minimum wage to levels as high as $15 per hour.

Such demands assume that the additional income for lower-income households would come from flush firms or wealthy households. But this groundbreaking paper by Douglas Holtz-Eakin, president of the American Action Forum and former director of the Congressional Budget Office, and Ben Gitis, director of labor-market policy at the American Action Forum, comes to a strikingly different conclusion: not only would overall employment growth be lower as a result of a higher minimum wage, but much of the increase in income that would result for those fortunate enough to have jobs would go to relatively higher-income households—not to those households in poverty in whose name the campaign for a higher minimum wage is being waged.

Specifically, using time-tested modeling techniques, such as those that Holtz-Eakin used while at the CBO, the authors found that a $15-per-hour minimum wage could mean the loss of 6.6 million jobs. What’s more, despite the fact that there would be some Americans whose wages would be lifted by a higher minimum wage, the effect on the poor would be minimal—of the increase in income for low-wage workers, only 6.7 percent would go to families in poverty. In other words, this is reverse–Robin Hoodism: taking jobs and income from the poorest to give to those who are better-off. The wealthy, whom demagogues now attack, would be untouched.

As the minimum-wage debate proceeds, it’s important to keep in mind that work itself benefits those of modest means. The first job, even at relatively low pay, provides that first step on the ladder of upward mobility. Eliminating those rungs on the ladder threatens the future of workers who are starting out today. There are far better ways—including the Earned Income Tax Credit, targeted wage supplements, and, of course, a more effective public-education system—to assist low-income Americans and to make work pay, while not reducing job growth. As this paper makes clear, the poor cannot afford counterproductive initiatives advanced in their name but harmful to their lives.

Lawrence Mone

President, Manhattan Institute for Policy Research


Executive Summary

We examine the employment effects and antipoverty implications of raising the federal minimum wage to $12 per hour and to $15 per hour, respectively, by 2020. We focus on how raising the federal minimum wage would affect the very low-wage workers whom the policy is intended to help. Overall, we find significant trade-offs in raising the federal minimum wage.

While a minimum-wage hike would benefit millions of workers with higher earnings, it would also hurt millions of others who would lose earnings because they cannot attain or retain a job. Our estimates show that raising the federal minimum wage to $12 per hour by 2020 would affect 38.3 million low-wage workers. Using our central estimate, we find that raising the minimum wage would cost 3.8 million low-wage jobs. In total, income among low-wage workers would rise by, at most, $14.2 billion, of which only 5.8 percent would go to low-wage workers who are actually in poverty.

Labor-Market Effects of Raising Federal Minimum Wage to $12 per Hour

Workers Affected                                        38.3 million
Jobs Lost                                               3.8 million
Net Income Change                                       $14.2 billion
Percent of Income Gained Going to Workers in Poverty    5.8%

Similarly, we find that increasing the federal minimum wage to $15 per hour by 2020 would affect 55.1 million workers and cost 6.6 million jobs. Aggregate income among low-wage workers would rise by $105.4 billion, after accounting for income declines from job losses. However, only 6.7 percent of the increase in income would go to workers who are actually in poverty.

Labor-Market Effects of Raising Federal Minimum Wage to $15 per Hour

Workers Affected                                        55.1 million
Jobs Lost                                               6.6 million
Net Income Change                                       $105.4 billion
Percent of Income Gained Going to Workers in Poverty    6.7%

Because the exact effect of the minimum wage on employment remains unsettled, we check the robustness of our results by employing a range of estimates from the literature that imply modest, moderate, and severe employment consequences. In each case, we analyze how the change in earnings resulting from a minimum-wage increase would be distributed across income levels.

I. Introduction

In recent years, American policymakers and labor advocates have argued for—and, in many cases, successfully enacted—increases in the minimum wage at federal, state, and local levels. At the federal level, President Obama initially proposed raising the minimum wage to $9 per hour in his February 2013 State of the Union address and later embraced a proposal in Congress to raise it to $10.10.

Now, lawmakers are proposing to raise the federal minimum wage to $12 per hour by 2020. Others recently introduced a new bill that would raise the federal minimum wage to $15 per hour by 2020, which would more than double the $7.25 federal minimum wage. Several cities, such as Los Angeles, Seattle, and San Francisco, have approved raising the minimum wage to $15 per hour.

In 2014, the Congressional Budget Office (CBO) analyzed the employment and income effects of raising the federal minimum wage to $9.00 and to $10.10 per hour.[1] In this paper, we estimate the employment and income effects of increasing the minimum wage to $12 and to $15 per hour, focusing on the low-wage workers whom such raises would be intended to assist. In doing so, we project a range of job losses that would occur if lawmakers were to raise the federal minimum wage to $12 or to $15 per hour; the net change in total income for all low-wage workers in the United States; and how the net change in earnings for low-wage workers would be distributed across income levels.

II. Previous Research

The central policy goal of raising the federal minimum wage is to increase incomes for less affluent Americans. This goal, in turn, raises two questions: How does raising the minimum wage affect the income of low-wage workers? Are those who would be affected by an increase in the minimum wage those who are most in need of assistance?

A higher minimum wage’s impact on annual income depends on how it affects employment. For instance, if the minimum wage increased to $12 per hour, many of those earning $7.25 per hour today would benefit from a wage increase of at least $4.75. Other workers who earn below $12 per hour, however, could lose their jobs and thus see their wage fall to $0 per hour. Additionally, those who are looking for work might not get hired and would suffer the same fate. To estimate the total net impact of raising the minimum wage on the income of low-wage workers, one must project the total income gained by workers who remain employed, minus total income lost by those who do not attain or retain a job.
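The accounting described above can be sketched in a few lines. The worker counts, hours, and wages below are hypothetical inputs chosen for illustration, not figures from this paper:

```python
# Sketch of the net-income accounting described above: income gained by
# workers who keep their jobs, minus income lost by those who do not.
# All inputs below are illustrative, not the paper's estimates.

def net_income_change(workers_affected, jobs_lost, old_wage, new_wage,
                      annual_hours):
    """Net annual income change across all affected low-wage workers."""
    retained = workers_affected - jobs_lost
    gain_per_worker = (new_wage - old_wage) * annual_hours
    total_gain = retained * gain_per_worker
    # Those who become jobless see their annual earnings fall to $0.
    total_loss = jobs_lost * old_wage * annual_hours
    return total_gain - total_loss

# Hypothetical example: 1,000,000 workers at $7.25, 100,000 jobs lost,
# survivors raised to $12.00, 2,000 hours worked per year.
change = net_income_change(1_000_000, 100_000, 7.25, 12.00, 2_000)
print(f"Net income change: ${change / 1e9:.2f} billion")  # -> $7.10 billion
```

The sign of the result depends entirely on how large the job losses are relative to the wage gains, which is why the three employment scenarios below produce such different income estimates.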

To estimate the impact of a $12 and a $15 minimum wage on employment and income, we utilize studies by the CBO (2014),[2] Meer & West (2015),[3] and Clemens & Wither (2014),[4] which provide a range of estimates. These studies examined different labor-market aspects of the minimum wage, resulting in different conclusions regarding the policy’s impact on employment and income. Using these three studies, we consider the effects of the minimum wage under modest, moderate, and strongly negative employment scenarios.

CBO

In 2014, the CBO examined the impact of raising the federal minimum wage to $9.00 or $10.10 per hour, two of the most popular proposals at the time. For the $10.10 proposal, the CBO found that the policy would result in employment falling by 500,000 jobs relative to its projected 2016 baseline. The CBO assumed that, in addition to those earning between $7.25 and $10.10 getting a raise, those earning just above $10.10 would also see their wages increase. Specifically, workers earning up to 50 percent of the hike amount above the new minimum would see their hourly earnings rise. As a result, people earning below $11.50 who stayed employed would benefit from a wage increase of some sort.

The CBO concluded that net earnings for low-wage workers would increase by $31 billion: 19 percent of those additional earnings would go to families below the poverty threshold; 52 percent to families with incomes one to three times the poverty threshold; and 29 percent to families with incomes more than three times the poverty threshold. We employ these findings when assuming our lower-bound employment consequences of raising the federal minimum wage.

Meer & West

While there is an ongoing debate regarding the impact of the minimum wage on the level of employment, Meer & West suggest that the negative impact of the minimum wage is best isolated by focusing on employment dynamics. Specifically, they find that a 10 percent increase in the real minimum wage is associated with a 0.30 to 0.53 percentage-point decrease in the net job-growth rate.

Previously, the American Action Forum (AAF) applied Meer & West’s work to California’s recent law that raises the state’s minimum wage to $10 per hour (effective 2016). Using Meer & West’s result, the AAF found that this wage increase in California means a loss of 191,000 jobs that will never be created.[5] In addition, the AAF found that if every state followed suit, more than 2.3 million new jobs would be lost across the United States. We employ the estimates found in Meer & West’s study to characterize the most moderate employment consequences of raising the federal minimum wage.
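As a rough illustration of how such an estimate works, the Meer & West finding can be applied mechanically. The employment base, wage increase, elasticity, and horizon below are our own hypothetical inputs, not the AAF's model:

```python
# Back-of-the-envelope application of the Meer & West result that a
# 10 percent real minimum-wage increase lowers the net job-growth rate
# by 0.30-0.53 percentage points. Inputs are illustrative only.

def foregone_jobs(employment, pct_wage_increase, pp_per_10pct, years):
    """Jobs never created over `years` due to a slower net job-growth rate."""
    # Convert the elasticity into an annual drag on the growth rate.
    growth_drag = (pct_wage_increase / 10.0) * pp_per_10pct / 100.0
    lost = 0.0
    for _ in range(years):
        lost += employment * growth_drag
        employment *= 1.0 - growth_drag  # the base itself grows more slowly
    return lost

# Hypothetical: 15 million covered jobs, a 38% real wage increase
# ($7.25 -> $10), mid-range elasticity of 0.4 pp, over four years.
print(f"{foregone_jobs(15_000_000, 38.0, 0.4, 4):,.0f} jobs never created")
```

Note that this channel measures jobs that are never created rather than existing jobs that are destroyed, which is why Meer & West-style estimates sit between the CBO and Clemens & Wither scenarios in this paper.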

Clemens & Wither

In late 2014, Jeffrey Clemens and Michael Wither of the University of California at San Diego released research examining what happened to low-wage workers the last time that the federal government raised its minimum wage, which rose in three steps during 2007–09 from $5.15 to $7.25 per hour. Using data from the Survey of Income and Program Participation (SIPP), they focused on how the minimum-wage hike affected employment and income among those whom it affected most: low-wage workers earning below $7.50 per hour.

Clemens and Wither found significant, negative consequences for low-wage workers. From 2006 to 2012, employment in this group fell by 8 percent, translating to about 1.7 million jobs.[6] The job loss in this low-wage group accounted for 14 percent of the national decline in employment during this period.[7] The minimum-wage hike also increased the probability of working without pay (e.g., unpaid internships) by 2 percentage points. Workers with at least some college education were 20 percent more likely to work without pay than before the minimum wage rose.

As a result of the reduction in employment and paid work, net average monthly incomes for low-wage workers fell by $100 during the first year after the minimum wage increased and fell by an additional $50 in the following two years. We use the Clemens & Wither estimates as the upper bound of the employment consequences from raising the federal minimum wage.

III. Methodology

Of the many minimum-wage proposals espoused, two stand out: the Raise the Wage Act,[8] which would increase the federal minimum wage to $12 per hour by 2020; and the Pay Workers a Living Wage Act,[9] which would increase the federal minimum wage to $15 per hour by 2020. In this paper, we analyze the labor-market effects of raising the minimum wage to $12 or to $15 per hour by 2020.

In estimating the number of workers whom the minimum-wage hike would affect, we use a methodology similar to that employed in the 2014 CBO report. We assume that those who would be most directly affected by the minimum-wage increase are the workers who, we project, would earn between $7.25 per hour and the new minimum-wage level in 2020 under current law.[10] For the $12 minimum wage, this includes all hourly workers who would earn between $7.25 and $12 per hour; for the $15 minimum wage, it includes everyone who would earn between $7.25 and $15 per hour. These workers stand to see the largest wage hikes. However, consistent across all minimum-wage studies, this low-wage group would also bear, almost entirely, the job losses. Like the CBO, we assume that all job losses occur only among those who, under current law, would earn between $7.25 and the new minimum-wage level in 2020.

The CBO anticipated that a minimum-wage hike would also increase earnings for those who earn just above the new minimum-wage level. In particular, the CBO assumed that workers earning up to 50 percent of the hike amount above the new minimum would see their hourly earnings rise. This means that for the $10.10 option, the CBO projected that workers earning between $10.10 and $11.50 per hour would see an increase in hourly earnings. The CBO assumed, however, that the minimum-wage hike would not affect this group’s employment.

To identify the number of workers who will earn just above the new minimum-wage level and would still be affected by the minimum-wage hike, we use the same method as the CBO. For the minimum-wage hike to $12 per hour, we assume that workers earning, under current law, $12–$14.40 per hour would see earnings rise to $14.40 per hour—without any negative employment consequences. For the wage hike to $15 per hour, we assume that those earning $15–$18.90 per hour under current law would get a raise to $18.90—without losing their jobs. While it is possible that workers in this group could experience a wage increase, it is unlikely that everyone will experience a raise all the way up to $14.40 or $18.90 per hour. Our results likely overestimate the income gains resulting from each minimum-wage increase.

Finally, for identifying those who will be affected by the minimum-wage hike and the resulting net effect on annual income, the CBO employed two approaches. In the first, the CBO used monthly Current Population Survey (CPS) wage and hours data in 2012 to isolate those who would be affected by the hike. In the second approach, the CBO used the 2013 March CPS Annual Social and Economic Supplement, which surveyed a much larger sample of workers and collected detailed information on annual income and earnings data for 2012. In the latter approach, the CBO estimated hourly earnings by dividing total earnings in 2012 by total hours worked. While the first approach (the “wage approach”) has the benefit of directly recording hourly earnings, the second approach (the “annual earnings approach”) is based on a much larger sample of workers and more directly relates to annual income.

We present our estimates using the wage approach and use the regular monthly wage and hour data from the 2014 March CPS Annual Social and Economic Supplement.[11] The wage approach produces more favorable estimates of raising the minimum wage than the annual earnings approach because it yields lower hourly earnings for each worker. As a result, the wage approach projects a much larger number of workers subject to the effects of raising the minimum wage. Thus, we view our net income figures as upper-bound estimates.

Our estimates using the annual earnings approach can be found in the appendix. In the annual earnings approach, we use the supplemental annual income and earnings information from the same 2014 March CPS supplement.

IV. Workers

$12 Minimum Wage

We estimate that raising the federal minimum wage to $12 per hour would affect 38.3 million workers. These are the hourly workers who, we project, will earn between $7.25 and $14.40 in 2020 under current law (Figure 1).

Figure 1. Workers Affected by $12 Minimum Wage

Wage Range        Workers
$7.25–$12.00      25.8 million
$12.00–$14.40     12.5 million
Total             38.3 million

We project that, under current law, about 25.8 million hourly workers will earn between $7.25 and $12 per hour in 2020. An additional 12.5 million workers will earn between $12 and $14.40 per hour. Consequently, we project that a minimum-wage hike to $12 per hour would affect 38.3 million hourly workers in total.

$15 Minimum Wage

We project that raising the federal minimum wage to $15 per hour would affect 55.1 million workers. These are the hourly workers who, we project, will earn between $7.25 and $18.90 in 2020 under current law (Figure 2).

Figure 2. Workers Affected by $15 Minimum Wage[12]

Wage Range        Workers
$7.25–$15.00      40.6 million
$15.00–$18.90     14.6 million
Total             55.1 million

We project that about 40.6 million hourly workers will earn between $7.25 and $15 per hour in 2020. An additional 14.6 million will earn between $15 and $18.90 per hour. In total, we project that a minimum-wage hike to $15 per hour will affect 55.1 million hourly workers.

V. Employment

Many like the idea of increasing the minimum wage. The potential employment consequences of mandating a minimum-wage hike, however, call the merits of this policy into question. When the federal government increases the minimum hourly pay for workers, it effectively increases the per-hour cost of low-wage labor. Employers have three main mechanisms to pay for this additional labor cost: lower profits, higher prices, and fewer workers.

While many minimum-wage advocates hope that employers pay for the additional cost with their own profits, the evidence suggests that the vast majority of low-wage workers are in industries that have razor-thin profit margins, such as retailers and restaurants.[13] In these industries, businesses tend to pay for minimum-wage hikes by increasing prices, reducing current and future employment, or both. While the exact impact of a minimum-wage hike on employment is debated, extensive literature, from the 1950s to today,[14] concludes that raising the minimum wage damages the labor market.[15] Moreover, the literature shows that the workers who tend to become jobless are the low-skilled, low-wage workers whom the policy intends to help.

The CBO, Meer & West, and Clemens & Wither demonstrate negative labor-market consequences of raising the minimum wage, with varying degrees of severity. In this section, we apply their findings to the proposals to increase the federal minimum wage to $12 and to $15 per hour by 2020. As mentioned, we follow the CBO’s methodology by assuming that all job losses occur within the group of workers who, under current law, will earn between $7.25 and the new minimum-wage level in 2020. Specifically, we assume that no one projected to be earning above the new minimum-wage level would suffer employment loss.

To preview our estimates, we find that increasing the minimum wage to $12 per hour would cost 1.3 million–11.4 million jobs. Raising the minimum wage to $15 per hour would cost 3.3 million–16.8 million jobs.

$12 Minimum Wage

Overall, we estimate that low-wage employment would be 1.3 million to 11.4 million lower than under current law if the federal government were to raise the minimum wage to $12 per hour (Figure 3).

Figure 3. Jobs Lost from $12 Minimum Wage

Model              Jobs Lost
CBO                1.3 million
Meer & West        3.8 million
Clemens & Wither   11.4 million

Using the CBO report, our lower-bound employment scenario, we find that raising the minimum wage to $12 per hour by 2020 would cost about 1.3 million jobs nationwide. This means that there would be 1.3 million fewer workers than the 25.8 million workers who, we project, will earn between $7.25 and $12 per hour, absent the minimum-wage increase.

In our middle-range negative employment scenario, derived from Meer & West, this minimum-wage increase would reduce the net job-growth rate significantly, costing 3.8 million low-wage jobs. As a result, almost 4 million fewer low-wage jobs would be created than under current law.

The Clemens & Wither estimate indicates severe labor-market consequences. With this model, we estimate that there would be 11.4 million fewer low-wage jobs than under current law. The Clemens & Wither estimate results in such a large decline in employment because they find that the last federal minimum-wage hike actually caused low-wage employment to fall from its initial level, whereas the CBO projected the reduction in employment relative to current law, and Meer & West measured the minimum wage’s impact on net job growth.

Under the Clemens & Wither estimate, one finds that low-wage employment in 2020 would be 4.1 million jobs below today’s level. When one compares the resulting employment level with what is projected under current law, including jobs created from economic growth, the minimum wage ends up costing 11.4 million jobs that would be lost or never created.

$15 Minimum Wage

We estimate that 3.3 million to 16.8 million fewer low-wage jobs would exist in 2020 if policymakers increased the federal minimum wage to $15 per hour (Figure 4).

Figure 4. Jobs Lost from $15 Minimum Wage

Model              Jobs Lost
CBO                3.3 million
Meer & West        6.6 million
Clemens & Wither   16.8 million

Using the CBO estimate, we find that increasing the minimum wage to $15 per hour would cost 3.3 million low-wage jobs. The reduction in job creation captured by the Meer & West estimate reveals that in 2020, the U.S. would have 6.6 million fewer low-wage jobs than under current law. Using the Clemens & Wither estimate leads to 16.8 million fewer low-wage jobs in 2020 than under current law.

VI. Income

In this section, we project how increasing the federal minimum wage to $12 and to $15 per hour by 2020 would affect total annual income earned by low-wage workers. This involves calculating the total earnings increase for those employed and the total earnings loss for the jobless. After subtracting total income lost from total income gained, we derive the net income change for all low-wage workers.

Methods and Assumptions

For all workers who keep their jobs and will earn between $7.25 per hour and the new minimum-wage level under current law, we assume that their hourly pay rate would increase to the new minimum-wage level. In the $12 minimum-wage scenario, for all hourly workers who, we project, will earn between $7.25 and $12 per hour in 2020 under current law, we assume that their wages would rise to $12 per hour—if they stay employed. For all who will earn between $12 and $14.40 per hour in 2020, we assume that their wages would rise to $14.40.

Likewise, in the $15 minimum-wage scenario, we assume that all hourly workers who will earn between $7.25 and $15 per hour in 2020 under current law would see their wages rise to $15 per hour—if they stay employed. For all who will earn between $15 and $18.90 per hour, we assume that their wages would rise to $18.90. Under both the $12 and $15 minimum-wage scenarios, we assume that the minimum-wage increase itself would have no impact on hours worked per week and weeks worked per year—for those who keep their jobs. Finally, we assume that all who are jobless as a result of the minimum-wage increase would see their individual annual earnings fall to $0.
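The assignment rules in the two paragraphs above can be summarized in one function. This is a sketch for the $12 scenario; the $15 scenario is identical with a $15.00 minimum and an $18.90 ceiling:

```python
# Sketch of the wage-assignment assumptions described above for the
# $12 scenario, applied to a worker's projected current-law 2020 wage.

def new_wage_2020(current_law_wage, employed, new_min=12.00, ceiling=14.40):
    """Hourly wage under the hike, given the projected current-law wage."""
    if not employed:
        return 0.0                 # jobless: annual earnings fall to $0
    if current_law_wage < new_min:
        return new_min             # raised to the new minimum
    if current_law_wage < ceiling:
        return ceiling             # "ripple" group raised to the ceiling
    return current_law_wage        # above the ceiling: unaffected

print(new_wage_2020(8.50, True))    # -> 12.0
print(new_wage_2020(13.00, True))   # -> 14.4
print(new_wage_2020(13.00, False))  # -> 0.0
print(new_wage_2020(20.00, True))   # -> 20.0
```

Hours per week and weeks per year are held fixed for those who stay employed, so annual income changes follow directly from these hourly wage changes.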

$12 Minimum Wage

The impact of raising the federal minimum wage to $12 per hour on the income of low-wage workers depends largely on how many become jobless (Figure 5).

Figure 5. Net Change in Total Income from $12 Minimum Wage

Model              $7.25–$12.00      $7.25–$14.40
CBO                $30.2 billion     $49.6 billion
Meer & West        –$5.2 billion     $14.2 billion
Clemens & Wither   –$112.5 billion   –$93.1 billion

Figure 5 illustrates the net income effect for those who, under current law, will earn between $7.25 and $12 per hour in 2020 and for everyone who will earn between $7.25 and $14.40 per hour. Consider the former (i.e., those directly affected by the law).

In the modest CBO employment scenario, the income gains for those who stay employed outweigh the income losses for those who lose their jobs. On net, income would increase in this group by $30.2 billion. However, in the two other employment scenarios, the earnings gained by those who would keep their jobs would be outweighed by the earnings lost by those who would become jobless. As a result, a $12 minimum wage would cause net income to fall in both these employment scenarios. In the middle-range Meer & West scenario, total income would decline by $5.2 billion. The income loss in the severe Clemens & Wither scenario would be even larger, as total net income would decline by $112.5 billion. These results highlight the importance of labor-market policies that do not harm employment.

When one assumes that everyone earning just above the new minimum wage would also get a significant wage bump—and suffer no employment loss—the net income changes become more positive. In the case of a $12 minimum wage, this means including the income increases for those who, under current law, will earn between $12 and $14.40 per hour in 2020. For this group, we assume that workers’ wages rise without any negative employment consequences. In the modest CBO scenario, raising the minimum wage to $12 per hour would, on net, increase income for low-wage workers by $49.6 billion. In the moderate Meer & West scenario, the minimum-wage increase would slightly increase low-wage income, by $14.2 billion. However, in the severe Clemens & Wither scenario, raising the minimum wage still has a net negative impact on income for low-wage workers, as their income would fall by $93.1 billion.

$15 Minimum Wage

Figure 6 illustrates how raising the minimum wage to $15 per hour would, on net, affect total income for low-wage workers.

Figure 6. Net Change in Total Income from $15 Minimum Wage

Model              $7.25–$15.00      $7.25–$18.90
CBO                $118.8 billion    $171.3 billion
Meer & West        $52.8 billion     $105.4 billion
Clemens & Wither   –$153.2 billion   –$100.6 billion

For the $15 minimum-wage proposal, we project the impact on net income just for those who, under current law, will earn $7.25 to $15 per hour in 2020, as well as for all low-wage workers who will earn between $7.25 and $18.90 per hour. For those who will earn just above $15 per hour under current law ($15 to $18.90 per hour), we assume that their wages would increase without job losses.

Looking first at those who will earn between $7.25 and $15 per hour in 2020 under current law, the modest CBO and moderate Meer & West scenarios both yield positive income changes. In the former, increasing the minimum wage to $15 per hour would, on net, increase total incomes in this group by $118.8 billion. In the Meer & West scenario, we find that increasing the minimum wage would result in much smaller net income gains. In this case, the minimum-wage hike would increase incomes by $52.8 billion. In the severe Clemens & Wither scenario, however, the drastic employment losses suggest that raising the minimum wage to $15 per hour would cause a significant reduction in earnings for low-wage workers. Under this scenario, net earnings would fall by $153.2 billion for hourly workers who will earn less than $15 per hour under current law.

As Figure 6 illustrates, including the income increases for those who would earn just above $15 per hour under current law significantly increases the net income gains under both the CBO and Meer & West scenarios. Yet in the Clemens & Wither scenario, net income for low-wage workers would still decline, by $100.6 billion.

VII. Net Income Changes by Income Level

While many hope that raising the minimum wage will greatly assist those in poverty, we find little evidence that raising the federal minimum wage would substantially increase incomes for those with family incomes below the poverty threshold. Specifically, we find that in most of the cases that result in net income gains from a minimum-wage increase, only 10 percent or less would go to workers currently in poverty.

$12 Minimum Wage

Figure 7 displays our estimates for how raising the federal minimum wage to $12 per hour would increase (or decrease) earnings for low-wage workers, by income level.

Figure 7. $12 Minimum Wage’s Resulting Net Pay Change, by Income Level[16]

Poverty Level   CBO             Meer & West    Clemens & Wither
1x              $4.0 billion    $0.8 billion   –$8.8 billion
1x–3x           $23.3 billion   $5.7 billion   –$47.4 billion
3x–6x           $16.5 billion   $5.7 billion   –$27.2 billion
6x plus         $5.8 billion    $1.9 billion   –$9.7 billion

On net, earnings would increase for low-wage workers at all income levels in the modest CBO and moderate Meer & West employment scenarios and decrease at all income levels in the severe Clemens & Wither scenario. For instance, in the Meer & West scenario, we find that the net income of low-wage workers would increase by $0.8 billion for those with family incomes below the poverty threshold; by $5.7 billion for those with incomes one to three times the poverty threshold; by $5.7 billion for those three to six times the poverty threshold; and by $1.9 billion for workers with incomes over six times the poverty threshold.

As a result, in the Meer & West scenario, only 5.8 percent of all the income gained from increasing the minimum wage to $12 per hour would go to families in poverty; 40.5 percent would go to families with incomes one to three times the poverty threshold; 40.1 percent would go to families with incomes three to six times the poverty threshold; and 13.7 percent would go to families with incomes over six times the poverty threshold. Figure 8 illustrates the percentage distribution of income gained or lost in each employment scenario.

Figure 8. Percentage Distribution of Net Pay Change, by Income Level, from $12 Minimum Wage[17]

Poverty Level   CBO     Meer & West   Clemens & Wither
1x              8.1%    5.8%          9.5%
1x–3x           46.9%   40.5%         50.9%
3x–6x           33.3%   40.1%         29.2%
6x plus         11.7%   13.7%         10.4%


$15 Minimum Wage

Figure 9 highlights our estimates for how raising the federal minimum wage to $15 per hour would change net earnings for low-wage workers, by income level.

Figure 9. $15 Minimum Wage’s Resulting Net Pay Change, by Income Level[18]

Poverty Level   CBO             Meer & West     Clemens & Wither
1x              $11.9 billion   $7.0 billion    –$8.4 billion
1x–3x           $77.2 billion   $44.9 billion   –$56.2 billion
3x–6x           $59.9 billion   $38.0 billion   –$30.2 billion
6x plus         $22.2 billion   $15.4 billion   –$5.8 billion

As with raising the minimum wage to $12 per hour, raising it to $15 would result in net earnings increasing at all income levels in the CBO and Meer & West employment scenarios and net income decreases in the Clemens & Wither scenario. Using the moderate Meer & West scenario, we find that raising the minimum wage to $15 per hour would increase the income of low-wage workers by $7.0 billion for those in poverty; by $44.9 billion for those with incomes one to three times the poverty threshold; by $38.0 billion for those with incomes three to six times the poverty threshold; and by $15.4 billion for those with incomes over six times the poverty threshold.

As a result, only 6.7 percent of the net income increase from raising the minimum wage to $15 per hour would go to families in poverty; 42.6 percent would go to families with incomes one to three times the poverty threshold; 36.1 percent would go to families with incomes three to six times the poverty threshold; and 14.7 percent would go to families with incomes over six times the poverty threshold.

As illustrated in Figure 10, in every model we run, only a small minority of the income benefits (or costs) from increasing the minimum wage to $15 per hour would actually go to families in poverty.

Figure 10. Percentage Distribution of Net Pay Change, by Income Level, from $15 Minimum Wage[19]

Poverty Level   CBO     Meer & West   Clemens & Wither
1x              7.0%    6.7%          8.4%
1x–3x           45.1%   42.6%         55.9%
3x–6x           35.0%   36.1%         30.0%
6x plus         13.0%   14.7%         5.8%


VIII. Conclusion

Many lawmakers continue to debate the merits of a $12 federal minimum wage, while others advocate for $15 per hour. In this paper, we find that any potential benefits from raising the minimum wage would be greatly offset by the negative labor-market consequences of the policy.

For the $12 federal minimum wage, when assuming moderate negative employment consequences, we find that the policy would cost 3.8 million jobs—at most, it would increase the earnings of low-wage workers by only $14.2 billion. Further, only a small portion of that income gain would benefit families in poverty: we find that only 5.8 percent of the increase in pay would go to workers in poverty.

For the $15 federal minimum wage, when assuming moderate negative employment consequences, we find that the policy would cost 6.6 million jobs. On net, it would raise the earnings of low-wage workers by $105.4 billion, at most. Again, however, only a small minority of that additional income would benefit families in poverty. In particular, only 6.7 percent of the increase in earnings would go to workers in poverty.

Overall, the income gains from raising the minimum wage would come at a significant cost to the large number of workers who would become jobless. In effect, raising the minimum wage transfers income from the low-wage workers who are unfortunate enough to become jobless to the low-wage workers who remain employed. It accomplishes this without effectively helping those who are most in need.




[1] Congressional Budget Office, “The Effects of a Minimum-Wage Increase on Employment and Family Income,” February 2014, https://www.cbo.gov/publication/44995.

[2] Ibid.

[3] Jonathan Meer and Jeremy West, “Effects of the Minimum Wage on Employment Dynamics,” January 2015, http://econweb.tamu.edu/jmeer/Meer_West_Minimum_Wage.pdf.

[4] Jeffrey Clemens and Michael Wither, “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers,” National Bureau of Economic Research, December 2014, http://www.nber.org/papers/w20724.

[5] Ben Gitis, “The Steep Cost of a $10 Minimum Wage,” American Action Forum, October 2013, http://americanactionforum.org/research/the-steep-cost-of-a-10-minimum-wage.

[6] The 1.7 million jobs figure is based on authors’ analysis of Clemens & Wither (2014) estimates.

[7] Clemens & Wither (2014) accounted for the effects of the recession by using state, time, and individual effects and controlling for the Federal Housing Finance Agency (FHFA) House Price Index. For more information on their methodology, see http://www.nber.org/papers/w20724.

[10] The CBO’s baseline projection has low-wage-worker hourly earnings rising at an average annual rate of 2.9 percent from 2013 to 2016. We assume the same and project wages to increase by 2.9 percent each year until 2020.

[11] Current Population Survey, 2014 Annual Social and Economic Supplement, retrieved from the National Bureau of Economic Research, http://www.nber.org/data/current-population-survey-data.html.

[12] Numbers may not add to total due to rounding.

[16] Figures may not add to total reported in Figure 5 due to rounding.

[17] Red indicates distribution of income lost.

[18] Figures may not add to totals reported in Figure 6 due to rounding.

[19] Red indicates distribution of lost income.


Executive Summary         

The Medicare Trustees issued their annual report detailing the financial state of America’s entitlement programs. The report echoed past conclusions: Medicare and Social Security are still going bankrupt.

At its current pace, Medicare will be bankrupt in 2030 and Social Security will go bankrupt in 2034 (a year later than last year’s projection).

Despite what many will herald as good news for Medicare, a deeper look at the data proves just how broken our current entitlement programs are. An American Action Forum analysis of the data found other startling statistics, including:

  • Medicare’s Annual Cash Shortfall in 2014 was $308.9 billion
  • Payroll taxes would have to increase 18% to pay for Medicare Part A just this year
  • Over the next 75 years, Social Security will owe nearly $11 trillion more than it is projected to take in

What You Need to Know About the Medicare Trustees Report includes one-pagers and relevant statistics on:

  • The solvency of Medicare
  • President Obama’s stewardship of Medicare
  • The solvency of the Social Security Trust Fund
  • The solvency of the Social Security Disability Insurance (DI) program
  • The solvency of the Social Security Old-age and Survivors Insurance (OASI) Program

The Solvency of Medicare

Treasury Secretary Jacob Lew recently released the 2015 Medicare Trustees Report. This annual rite delivered yet another reminder to the American public that Medicare is undeniably going bankrupt. 

The report estimated that the Medicare Hospital Insurance Trust Fund will be bankrupt by 2030.  While the bankruptcy projection may snag the headlines, there are 3 key budgetary numbers that shouldn’t go unnoticed:

$308.9 Billion

Medicare’s Annual Cash Shortfall in 2014

  • In 2014, Medicare spent $613.3 billion on medical services for America’s seniors but only collected $304.4 billion in payroll taxes and monthly premiums.
  • This cash shortfall represents 60 percent of the federal deficit in 2014.

$3.6 Trillion

Medicare’s Cumulative Cash Shortfall Since 1965

  • Medicare has had a cash shortfall every year since its creation except two: 1966 and 1974.
  • Medicare covers these cash shortfalls by “borrowing” unrelated tax revenues from other programs. 

29.6%

Medicare’s True Contribution to the National Debt

  • America’s fiscal trajectory is unsustainable and Medicare is the primary source of red ink driving this trajectory.
  • The cash shortfall is responsible for over one-fourth of the federal debt.

Continuing with the Medicare status quo is unacceptable. Balancing Medicare’s annual cash shortfalls under the existing system would prove devastating to seniors and require:

18% Increase

Annual Payroll Tax Increase Needed to Balance Medicare Part A

  • In 2014, the Medicare Part A (hospitals) cash deficit was $42 billion.
  • To balance, payroll taxes would increase from 1.45 percent to 1.71 percent. 

$4,057 Increase

Annual Premium Increase Needed to Balance Medicare Part B

  • In 2014, the Medicare Part B (physicians) cash deficit was $200 billion.
  • To balance, seniors’ premiums for physicians would need to rise to 405 percent of current levels, meaning the typical annual physician premium would climb from $1,330 to $5,387 – an increase of $4,057.

$1,641 Increase

Annual Premium Increase Needed to Balance Medicare Part D

  • In 2014, the Part D (drugs) cash deficit was over $66.7 billion.
  • To balance, seniors’ premiums for prescription drugs would need to rise to 684 percent of current levels, meaning the annual drug premium would climb from $281 to $1,922 – an increase of $1,641.
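The balancing arithmetic above can be verified directly. The sketch below takes the current levels and ratios from the text; the helper function is purely illustrative.

```python
# Arithmetic check on the Part A, B, and D balancing figures above.
# All inputs are taken from the text; "ratio" is the balanced level
# divided by the current level.

def balanced_level(current, ratio):
    """Return the balanced rate or premium and the increase over current."""
    new = current * ratio
    return new, new - current

# Part A: payroll tax rises 18 percent, from 1.45 percent of wages
part_a_rate, _ = balanced_level(1.45, 1.18)

# Part B: premiums rise to 405 percent of current levels ($1,330 today)
part_b_premium, part_b_increase = balanced_level(1330, 4.05)

# Part D: premiums rise to 684 percent of current levels ($281 today)
part_d_premium, part_d_increase = balanced_level(281, 6.84)

print(f"Part A rate: {part_a_rate:.2f} percent")  # ~1.71
print(f"Part B premium: ${part_b_premium:,.2f}")
print(f"Part D premium: ${part_d_premium:,.2f}")
```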

 

President Obama’s Stewardship of Medicare

“Now, these steps will ensure that you -- America's seniors -- get the benefits you've been promised.  They will ensure that Medicare is there for future generations.”

President Barack Obama
Remarks to a Joint Session of Congress
September 9, 2009

An Evaluation of President Obama’s Medicare Stewardship

The 2015 Trustees Report provides a non-partisan evaluation of President Obama’s Medicare stewardship.  Prepared annually for Congress by the Office of the Chief Actuary, the Trustees Report offers unparalleled detail on the financial operations and actuarial status of the Medicare program.  In short, it’s where every President’s soaring Medicare rhetoric meets fiscal reality:

MEDICARE FINANCIAL OPERATIONS UNDER PRESIDENT OBAMA

                       2009     2010     2011     2012     2013     2014     2015*    2009-2014
Medicare Revenue      $253 B   $240 B   $261 B   $272 B   $293 B   $309 B   $325 B   $1,953 B
Medicare Spending     $509 B   $523 B   $549 B   $574 B   $583 B   $613 B   $649 B   $3,351 B
Cash Deficit         -$256 B  -$282 B  -$288 B  -$302 B  -$289 B  -$304 B  -$324 B  -$1,723 B

*2015 Projections

For President Obama’s Medicare policies, the fiscal reality is that they all but guarantee bankruptcy.  Since taking office, President Obama has run a Medicare cash flow deficit of over $1.7 trillion (2009-2014).  This includes $1.5 trillion in red ink accumulated since the passage of the President’s signature healthcare reform law.  By the end of 2015, the trustees project that the Obama Administration will have overseen a $2 trillion Medicare cash shortfall.
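Summing the per-year cash deficits in the table above (in billions of dollars) reproduces the cumulative shortfall figures cited here; the small gap against the table's $1,723 billion total reflects rounding in the annual entries.

```python
# Quick check: cumulative Medicare cash deficits from the table above,
# in billions of dollars. The 2015 value is a projection.

cash_deficit = {2009: 256, 2010: 282, 2011: 288, 2012: 302,
                2013: 289, 2014: 304, 2015: 324}

total_2009_2014 = sum(v for y, v in cash_deficit.items() if y <= 2014)
total_through_2015 = sum(cash_deficit.values())

print(total_2009_2014)     # 1721 -> roughly $1.7 trillion for 2009-2014
print(total_through_2015)  # 2045 -> roughly $2 trillion through 2015
```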

At such unprecedented levels of cash shortfalls, it’s evident that President Obama and the Affordable Care Act have failed to ensure that Medicare will be there for today’s seniors, let alone the next generations of older Americans. The law’s Medicare reforms have failed to meaningfully control costs and are harming seniors in the process:

13.3% Cuts to Medicare Advantage Benefits

The ACA has cut Medicare Advantage benefits per-enrollee by over $1,500

  • Between this year and next, Medicare Advantage benefits are being reduced by 3.1 percent.
  • In 2015, Medicare Advantage per-enrollee benefits will be 13.3 percent less than they would otherwise have been without the ACA, a value of over $1,500.

Medicare and Medicaid will cost $2 trillion in 8 years

Medicare costs are going to continue to rise

  • At the current pace, the Medicare and Medicaid programs will reach an annual cost of $2 trillion in 8 years.
  • Since 1970, Medicare spending per beneficiary has tripled.

 

The Solvency of the Social Security Trust Fund

On July 22nd, the board of trustees that oversees the Social Security program released their annual report. The report shows that the nation’s primary safety net for retirees, survivors, and the disabled remains in financial distress and proves that, absent reform, the program will fail to meet its promises to future seniors.

Wednesday’s report estimated that the combined (retirement and disability) Social Security Trust Funds will be bankrupt by 2034.  Two decades until bankruptcy is only the beginning of the bad news.  The Trustees reported more critical data that make clear the program’s structural imbalances:

$73.1 Billion

Social Security’s Contribution to the Debt in 2014

  • In 2014, Social Security spent $859.2 billion but only collected $786.1 billion in non-interest income.
  • This is the fifth year in a row that Social Security has been in cash deficit, with the program running a cumulative deficit of $292.8 billion since 2010.

$10.7 Trillion

Social Security’s Unfunded 75 Year Liability

  • Social Security’s promised benefits exceed projected payroll taxes and Trust Fund redemptions by over $10 trillion – $100 billion larger than estimated last year.
  • But for the last two years, Social Security faces the largest imbalance as a share of taxable payroll – 2.68 percent – since the program was overhauled in 1983.

19 Years

Years Until the Trust Funds are Exhausted

  • This is the shortest horizon to exhaustion since 1982.
  • The Trust Funds’ exhaustion date is unchanged from last year’s estimate but follows the 4 consecutive years of deterioration in the program’s actuarial balance.

The Trustees’ report paints a distressed picture of Social Security’s financial health and proves that the present course is unsustainable.  Social Security is now contributing to the annual deficit, while promised benefits vastly exceed planned funding. The implications of failing to reform the status quo are:

21 Percent

Reduction in Benefits in 2034

  • After the projected exhaustion of the Social Security Trust Funds, Social Security revenue will fund only 79 percent of promised benefits.
  • This deteriorates further, to 73 percent, by 2089.

21 Percent

Payroll Tax Increase

  • Absent reform, to meet promised benefits over the long-term payroll taxes would have to be immediately increased by 21.1 percent, from a rate of 12.4 percent to 15.02 percent.
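The payroll-tax arithmetic above checks out as a simple percentage calculation:

```python
# Check of the Social Security payroll-tax arithmetic above: raising the
# combined 12.4 percent rate by 21.1 percent.

current_rate = 12.4   # combined employer/employee payroll tax rate (percent)
increase = 0.211      # 21.1 percent immediate increase (from text)

required_rate = current_rate * (1 + increase)
print(round(required_rate, 2))  # 15.02
```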

 

The Solvency of Social Security Disability Insurance (DI)

On July 22nd, the board of trustees that oversees the Social Security program released their annual report. The report demonstrates the impending collapse of the Disability Insurance (DI) program, which will require near-term attention.

The report estimated that Social Security Disability Insurance Trust Fund will be bankrupt in 2016. This is not the first time the DI program has faced near-term shortfalls. To avoid trust fund exhaustion, in 1994 Congress increased the allocation of payroll taxes devoted to the DI Trust Fund. However, as experience makes clear, absent long-term reform, similar measures will only provide short-term solvency.

$33.6 Billion

DI’s Contribution to the Debt in 2014

  • In 2014, DI spent $145.1 billion but only collected $111.5 billion in non-interest income.
  • This is the 10th year in a row that DI has been in cash deficit, with the program having added over $213.4 billion to the debt since 2005.

$1.2 Trillion

DI’s Unfunded 75 Year Liability

  • Social Security’s promised disability benefits exceed projected payroll taxes and Trust Fund redemptions by over $1 trillion.

1 Year

Years Until the DI Trust Fund is Exhausted

  • This is the shortest horizon to exhaustion since 1994, when Congress passed legislation to increase the payroll tax allocation to the DI Trust Fund.
  • The Trust Funds’ exhaustion date is unchanged from last year’s estimate.

 

11 Million

Number of Beneficiaries in 2015 

  • Over 11 million Americans are projected to receive DI benefits in 2015, the nearest year provided by the Trustees Report to the projected exhaustion date.
  • This figure is comprised of over 9 million disabled workers and nearly 2 million spouses and children receiving auxiliary benefits.

The Trustees’ report makes clear that the nation’s primary assistance program for disabled workers is facing imminent financial distress. Absent long-term reform, the program will remain on a financially precarious trajectory, undermining a critical feature of America’s safety net.

Solvency of Social Security Old-Age and Survivors Insurance (OASI)

On July 22nd, the board of trustees that oversees the Social Security program released their annual report. The report shows that the Old-age and Survivors Insurance (OASI) program remains in distress following the material deterioration in its finances in the prior year, and will be unable to meet the needs of future beneficiaries absent reform.

The report estimated that Social Security’s retirement and survivors’ Trust Funds will be bankrupt by 2035. The report also makes clear several additional structural challenges that endanger the millions of current and future retirees and survivors who rely on this program.

$39.6 Billion

OASI’s Contribution to the Debt in 2014

  • In 2014, OASI spent $714.2 billion but only collected $674.6 billion in non-interest income.
  • This is the fifth year in a row that OASI has been in cash deficit, with the program having added $118.1 billion to the debt since 2010.

$9.4 Trillion

OASI’s Unfunded 75 Year Liability

  • Social Security’s promised retirement and survivor benefits exceed projected payroll taxes and Trust Fund redemptions by over $9.4 trillion.

20 Years

Years Until the OASI Trust Fund is Exhausted

  • This is the same horizon to exhaustion as last year, and is the shortest duration since 1982.
  • The Trust Funds’ exhaustion date has improved by one year from last year’s estimate, but follows 5 consecutive years of deterioration in the program’s actuarial balance.

78 Million

Number of Beneficiaries in 2035 (Trust Fund Exhaustion Year) 

  • Nearly 78 million Americans are projected to receive OASI benefits in the year the Fund is projected to become exhausted.
  • This figure is comprised of nearly 72 million retirees and nearly 6 million survivors (based on 2035 estimate).

The Trustees’ report makes clear that the principal federal retirement program is facing its worst financial outlook since the program was last overhauled. On its present course, the program is on track to slash the benefits of nearly 78 million Americans or significantly raise taxes on future workers.


Executive Summary

Medicare Advantage (MA) offers seniors a one-stop option for hospital care, outpatient physician visits, and prescription drug coverage. MA is popular; enrollment has increased every year since 2004 and reached 16 million individuals in 2014, which represents 30 percent of the Medicare population. Since 2008 MA plan performance has been rated on a 5-star scale to inform beneficiaries of the quality of plan options, and since 2012 plans with higher ratings receive bonuses that are in part returned to beneficiaries.

A disproportionately high number of enrollees are lower-income and minority beneficiaries. Among minority beneficiaries, Hispanics are twice as likely and African–Americans are 10 percent more likely to enroll in MA. Concern has been raised that the Stars rating system penalizes plans that have large enrollments of Low-Income Subsidy (LIS) and dual-eligible beneficiaries.

In this short paper, we investigate this concern. Our primary findings are that:

  • On average, MA contracts with over 30 percent LIS enrollment are rated 0.5 Stars lower than other plans;

  • Using a more refined statistical technique does not eliminate this finding; we continue to find a statistically and quantitatively significant penalty to observed Stars ratings from the presence of significant LIS enrollment;

  • These findings imply a significant financial penalty for those MA plans with significant LIS enrollment, which in turn reduces the resources available to those vulnerable populations; and

  • At the upper end of the spectrum, we estimate a total of more than $470 million in bonus payments lost due to reduced ratings. For individuals, our estimates range from a reduction of $380 to $410 per senior.

Introduction

Medicare Advantage (MA) offers seniors a one-stop option for hospital care (Part A in traditional Medicare), outpatient physician visits (Part B), and prescription drug coverage (Part D). MA is popular; enrollment has increased every year since 2004 and reached 16 million individuals in 2014, which represents 30 percent of the Medicare population.

Among those who enroll, a disproportionately high number of lower-income and minority beneficiaries opt for MA. This may be in part because MA plans tend to have lower co-pays and deductibles than traditional Medicare. Also, low-income enrollees are less likely to have supplemental coverage (employer plans or Medigap plans) that covers these costs. Among minority beneficiaries, Hispanics are twice as likely and African–Americans are 10 percent more likely to enroll in MA.

As an attempt to identify and reward quality care, since 2008 the Department of Health and Human Services (HHS) has rated Medicare Advantage Organizations’ (MAOs’) performance on a 5-star scale.  Beginning in 2012, payment adjustments have been made to plans based on their star rating, with higher rated plans receiving bonuses.  Over time the rating needed to receive bonus payments has risen. This year is the first in which an MA plan has to meet 4 Stars in order to receive the bonus (a 5 percent increase in the benchmark payment).[1]

In the MA Stars program, ratings are assigned at the “contract” level. An MA contract contains one or more MA plans across all areas of the country in which each plan is available. As a result, Star ratings for a contract may reflect quality measures for beneficiaries served by many different hospital systems and providers, with varying levels of MA per-beneficiary funding.

A concern has arisen that Stars ratings may reflect features other than the quality of care provided to beneficiaries.  In particular, plan carriers suggest that plans with significant numbers of low-income beneficiaries (those who receive the Low-Income Subsidy, or LIS) receive lower ratings than their peer plans with fewer low-income enrollees. On this view, the lower ratings are due not to differences in plan quality but to socio-economic barriers low-income enrollees face in achieving health outcomes. There is significant reason to believe that LIS enrollment could be the source of the rating deficit: individuals with low income have higher mortality rates, higher incidences of disease, and poorer outcomes from health care.[2]

As detailed below, the data support this claim. In our data, the average Star rating for plans with low LIS enrollment is 3.81, and the average for plans with high LIS enrollment is 3.28 – a difference of over one-half Star. (For purposes of our investigation, we define “significant number of low-income beneficiaries” as at least 30 percent of a contract’s total enrollment.)

A rating deficit that results from low-income enrollment would have important public policy implications. The Affordable Care Act (ACA) includes substantial, phased cuts to MA. As those cuts continue, MA contracts that do not meet the criteria for a bonus payment will continue to be squeezed financially, further restricting the contract’s ability to provide the benefits and care management necessary to care for low-income seniors and improve quality measures.  In the extreme, plans may simply withdraw from MA. Alternatively, CMS has the authority to terminate at the end of 2016 any MA contracts that score consistently below 3 Stars, which may disproportionately impact low-income beneficiaries. There is a strong case for extending bonus payments to contracts for which the current targets are unfairly out-of-reach as a result of low-income enrollment.

At the same time, there are a number of reasons why this rating deficit may not solely be the result of low-income beneficiaries, some of which are still unrelated to plan quality. Plans with low ratings may be the lowest cost plans and attract low-income enrollees. It could also be the case that the areas in which low-income seniors seek care are generally poor performing areas. These problems should also be public policy concerns.

CMS Research

The Centers for Medicare & Medicaid Services (CMS) examined the relationship between LIS enrollment and low Star Ratings. (In addition to LIS, they studied dual-eligibility and Special Needs Plan (SNP) enrollment as possible sources of the rating deficit.) CMS found that LIS and non-LIS beneficiaries within poor-performing plans did not have significantly different outcomes at the individual level, and claims that if a high-LIS contract were rated solely on its non-LIS beneficiaries, its ratings would not substantially improve. Still, CMS identified 7 of the 46 total measures that might plausibly be affected by LIS enrollment. As an interim step, CMS proposed to reduce the weighting on these measures in the Star calculation, thereby increasing the importance of the other 39 measures. The proposal was not implemented and CMS continues to study the issue.

Data and Analysis

We obtained data on 692 contracts for the 2015 plan year. CMS makes publicly available detailed information on contract Star Rating scores for each individual measure and overall results of the CMS Star Rating calculation. We combined this information with other CMS data sets that describe where a given contract’s beneficiaries live, the number of low-income enrollees in a given contract, and the county-level benchmarks used to determine per-beneficiary payment to MA contracts. We also used information published by the Health Resources and Services Administration on Health Professional Shortage Areas (HPSA) to control for regional health care disparities. Our analysis focuses on MA plans that offer a drug benefit and have at least 1,000 beneficiaries.[3] Our data do not provide information on dual-eligibility and we are unable to investigate their impact on Star ratings.

To begin, we define “significant number of low-income beneficiaries” as at least 30 percent of a contract’s total enrollment. As noted above, there is a clear difference in Star ratings between contracts with and without a significant number of LIS beneficiaries. In our data, the average Star rating for plans with low LIS enrollment is 3.81, and the average for plans with high LIS enrollment is 3.28 – a difference of over one-half Star.

The remaining question is whether this difference is a statistical artifact or a durable difference that merits policy attention.  To address this, our analysis divides the 46 rating measures into 23 measures that are unaffected by beneficiary behavior and 23 measures that could be plausibly affected by beneficiary behavior.[4],[5] We identified these measures based on whether the measure criteria required proactive action by the beneficiary, such as coming into the office for a flu shot or adhering to medication prescriptions, rather than criteria that were mostly in the control of hospitals, such as prescribing the correct medication in the event of a certain heart condition. We also assume that case-mix adjusted measures are unaffected by beneficiary behavior. This set of measures is not the only division of the measures that one could choose, but explorations of other approaches suggest that our results are relatively robust to these decisions.

Our strategy is to assume that MA contracts have inherent quality that can be measured by the weighted average of a contract’s ratings on the set of measures identified as unaffected by beneficiary behavior.[6] That is, we use this metric as a good prediction of the “true” quality of an MA contract. We find that contracts with high LIS enrollment perform worse, on average, than low LIS enrollment plans in this measure of quality.

Our analysis then employs a linear regression to see whether – contingent on our measure of inherent quality – there is a statistically significant reduction in the observed Stars ratings. In addition to plan quality, we also control for the number of beneficiaries that live in an HPSA. Since the overall Star Rating provided by CMS is rounded to the nearest half star, we reconstruct an un-rounded version of the overall star rating according to the CMS formula. Our reconstruction is not perfect. We correctly estimate 91 percent of the overall star ratings and estimate a rating within 0.5 of the remaining plans. We also performed our analysis using the CMS rounded star ratings and found similar results.
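The estimation strategy described above can be sketched with ordinary least squares. The toy data, variable names, and coefficients below are hypothetical stand-ins for the contract-level CMS data; the block only illustrates the regression design (an inherent-quality control, a high-LIS indicator, and an HPSA control).

```python
# A minimal sketch of the regression described above, using ordinary least
# squares via numpy. The simulated data build in a 0.18-star LIS penalty,
# which the regression should recover. All names and values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 300

quality = rng.uniform(2.5, 4.5, n)      # inherent-quality metric (weighted average)
high_lis = rng.integers(0, 2, n)        # 1 if >= 30 percent LIS enrollment
hpsa_share = rng.uniform(0.0, 0.4, n)   # share of enrollees living in an HPSA
noise = rng.normal(0.0, 0.1, n)

# Simulated un-rounded Star rating with a built-in -0.18 star LIS effect
stars = 0.2 + 0.95 * quality - 0.18 * high_lis - 0.6 * hpsa_share + noise

# OLS: regress the Star rating on a constant, quality, LIS, and HPSA share
X = np.column_stack([np.ones(n), quality, high_lis, hpsa_share])
coef, *_ = np.linalg.lstsq(X, stars, rcond=None)

print(f"estimated LIS effect: {coef[2]:.2f} stars")
```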

Our most important finding is that, in the presence of our other controls, LIS enrollment has a negative, statistically significant impact on the overall Stars Rating.[7] That is, the more sophisticated statistical approach confirms the finding in the raw data that contracts serving this demographic are at a disadvantage in receiving bonuses.[8]

Policy Implications

The fundamental concern is that the bias in Stars ratings will translate into fewer resources for LIS beneficiaries and reduced access to quality care. To get a sense of the magnitudes, we estimate that there are $7.3 billion in total possible payments if every MA plan received a bonus payment. Of this, roughly $2.3 billion in payments are not made because plans achieve less than 4 Stars, while 107 out of 128 high-LIS plans receive less than 4 stars.[9] Focusing more closely on those plans that might not receive bonuses strictly because of the LIS-induced bias, it appears that $470 million in payments are forfeited by high-LIS enrollment plans by only achieving 3.5 stars.

How large is the financial impact? Our analysis does not provide much insight into the remaining portion of the half-star gap between plans with high- and low-LIS enrollment. There are many variables that could influence a contract’s rating that we cannot observe. We do not have a perfect measure for plan quality, and we also lack detailed statistics on the demographics of contract enrollment, such as ethnicity, gender, age, disability, and detailed income information. In light of this uncertainty, we focus on a range of possible outcomes rather than a single number.

At the upper end of the spectrum, we estimate that in 2015, 44 MA contracts with high-LIS enrollment—covering nearly 1 million beneficiaries—will lose a total of more than $470 million in payment due to missing the bonus payment cutoff by less than a half-star. That payment reduction will correspond to benefit reduction of roughly $380 for each senior.
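Dividing the forfeited payments by the affected enrollment gives the per-senior upper bound; the text's roughly $380 figure is lower because, as noted earlier, bonus payments are only in part returned to beneficiaries as benefits.

```python
# Back-of-the-envelope version of the per-senior figure above. Both inputs
# are taken from the text; this is an upper bound before accounting for the
# share of bonus dollars that is not returned to beneficiaries.

lost_bonuses = 470e6    # forfeited bonus payments (from text)
beneficiaries = 1.0e6   # enrollees in the affected high-LIS contracts

per_senior_upper_bound = lost_bonuses / beneficiaries
print(round(per_senior_upper_bound))  # 470 dollars per senior, at most
```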

In our regression framework, moving from low-LIS to high-LIS enrollment reduces the Star rating by an estimated 0.18 Stars.[10] In addition, we find that having 20 percent of a contract’s beneficiaries living in HPSAs could lower its star rating by as much as 0.12 stars.  Using our regression-based estimate would translate to average additional benefits of $410 per senior.

These results suggest that there is some merit to adjusting the Star Rating calculation. A simple adjustment, which would be the easiest to implement as a temporary legislative solution, would apply a correction to the overall rating calculation. A straightforward approach would be to increase the Star Rating of any MA contract on the basis of its fraction of LIS enrollment, ranging from a minimum adjustment of 0.18 Stars to a maximum of 0.5 Stars.
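One purely illustrative way to implement the sliding adjustment described above is to scale linearly with LIS enrollment between the 0.18-star minimum and the 0.5-star maximum. The linear interpolation rule and the 30 percent threshold at which it kicks in are assumptions for illustration; the text specifies only the endpoints.

```python
# Illustrative sliding Star-rating adjustment. The interpolation rule and
# threshold are assumptions; only the 0.18 and 0.5 endpoints come from the text.

def lis_star_adjustment(lis_fraction, threshold=0.30,
                        min_adj=0.18, max_adj=0.50):
    """Star-rating adjustment for a contract with `lis_fraction` LIS enrollment."""
    if lis_fraction < threshold:
        return 0.0
    # Interpolate from min_adj at the threshold up to max_adj at 100% LIS.
    scale = (lis_fraction - threshold) / (1.0 - threshold)
    return min_adj + (max_adj - min_adj) * scale

print(lis_star_adjustment(0.30))           # 0.18
print(round(lis_star_adjustment(1.00), 2)) # 0.5
```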

A better solution could establish a method for case-mix adjusting specific measures, rather than reducing weights as suggested by CMS. We identify six measures that are candidates for such an adjustment.[11] (The set of measures that we identify has some overlap with the set identified by CMS.) Adjusting specific measures for high-risk populations comes with a risk of reducing the incentive for hospitals and insurance companies to improve the care for those populations. Accordingly, these efforts should be carefully researched and tested.


[1] As a practical matter, the Stars bonus payments have served to in part offset ongoing reductions in the benchmark payments to MA plans under the Affordable Care Act.

[2] Centers for Disease Control and Prevention, “REACH 2010 Surveillance for Health Status in Minority Communities --- United States, 2001—2002,” August 24, 2004, available at: http://www.ncbi.nlm.nih.gov/pubmed/15329648; Smith GD et al, “Socioeconomic differentials in mortality risk among men screened for the Multiple Risk Factor Intervention Trial: I. White men,” April 1996, available at: http://www.ncbi.nlm.nih.gov/pubmed/8604778

[3] Out of 394 MA contracts that include a drug benefit, we drop 10 contracts for low enrollment.

[4] The 23 measures unaffected by beneficiary behavior are Monitoring Physical Activity, Adult BMI Assessment, Care for Older Adults – Medication Review, Osteoporosis Management in Women who had a Fracture, Rheumatoid Arthritis Management, Improving Bladder Control, Reducing the Risk of Falling, Plan All-Cause Readmissions, Getting Needed Care, Getting Appointments and Care Quickly, Customer Service, Rating of Health Care Quality, Rating of Health Plan, Care Coordination, Plan Makes Timely Decisions about Appeals, Reviewing Appeals Decisions, Appeals Auto–Forward, Appeals Upheld, Rating of Drug Plan, Getting Needed Prescription Drugs, MPF Price Accuracy, High Risk Medication, and Diabetes Treatment. The remainder are Colorectal Cancer Screening, Care – Cholesterol Screening, Diabetes Care – Cholesterol Screening, Annual Flu Vaccine, Improving or Maintaining Physical Health, Improving or Maintaining Mental Health, Special Needs Plan (SNP) Care Management, Care for Older Adults – Functional Status Assessment, Care for Older Adults – Pain Assessment, Diabetes Care – Eye Exam, Diabetes Care – Kidney Disease Monitoring, Diabetes Care – Blood Sugar Controlled, Diabetes Care – Cholesterol Controlled, Controlling Blood Pressure, Complaints about the Health Plan, Choosing to Leave the Plan, Plan Quality Improvement, Complaints about the Drug Plan, Choosing to Leave the Plan, Plan Quality Improvement, Medication Adherence for Diabetes Medications, Medication Adherence for Hypertension (RAS antagonists), and Medication Adherence for Cholesterol (Statins).

[5] We tested the sensitivity of our results to classifying Plan All-Cause Readmissions as unaffected by beneficiary behavior. The results do not change based on this classification.

[6] Throughout the analysis, we weight particular measures according to the CMS Star Rating formula.

[7] Our results are not sensitive to the inclusion or exclusion of a measure of special needs or the HPSA variable.

[8] We focus on LIS status. CMS has the ability to identify dual-eligibles in contracts, allowing a broader investigation of the impacts of socioeconomic factors.

[9] Our estimate is somewhat smaller than one in a McKinsey study that indicates plans with less than 4 stars forfeit $3.7 billion. http://healthcare.mckinsey.com/sites/default/files/2015%20MA%20Stars%20Intel%20Brief%20-%20McKinsey%20Reform%20Center%20-%20110514B.pdf

[10] We tested whether there was an extra sensitivity of the Stars rating to very high (over 50 percent) LIS enrollment. The data show no difference in the impact on the Stars rating for those plans.

[11] We believe that the 6 candidate measures for adjustment are Colorectal Cancer Screening, Annual Flu Vaccine, Diabetes Care – Eye Exam, Diabetes Care – Blood Sugar Controlled, Diabetes Care – Cholesterol Controlled, and Medication Adherence for Hypertension. 



Summary

  • $175 billion in economic activity has been untapped due to the nearly 7 year delay in approving the Keystone XL pipeline.
  • The Keystone XL pipeline could gross over $15 billion in revenue a year.
  • Since 2009, the U.S. has paid over $1 trillion dollars to the top 5 countries from which the U.S. imports, including Russia and Venezuela.

Untapped Revenue

The Keystone XL pipeline has the potential to bring huge gains to the United States, including energy independence, increased security, and jobs. The $8 billion, 1,179-mile line, to be operated by Canadian firm TransCanada, would run from Montana to Nebraska and deliver an estimated 830,000 barrels a day of crude to refineries along the Gulf Coast. At today’s crude price of $51.76 per barrel, this would gross over $42 million a day, or roughly $15 billion per year.
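The gross-revenue figures above follow directly from the throughput and price quoted in the text:

```python
# Check of the Keystone XL gross-revenue arithmetic above.

barrels_per_day = 830_000
price_per_barrel = 51.76   # crude price quoted in the text

daily = barrels_per_day * price_per_barrel
annual = daily * 365

print(f"${daily / 1e6:.1f} million per day")    # ~$43.0 million
print(f"${annual / 1e9:.1f} billion per year")  # ~$15.7 billion
```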

TransCanada has waited since September 2008 for authorization of the pipeline. Even with crude oil prices at a 10-year low, approximately $175 billion in economic activity has gone unrealized due to the delay.

Lessening Dependence

The U.S. would benefit significantly from increased oil imports from Canada as it would lessen our reliance on imports from more unstable areas of the world such as the Middle East, Russia and Venezuela.

According to the Energy Information Administration, the U.S. imported approximately 9 million barrels per day of petroleum in 2014 from 80 countries, with the bulk of its oil imports coming from Canada, Mexico, Saudi Arabia, Venezuela, and Russia.

Since 2009, the U.S. has paid over $1 trillion to these top five countries with just over half of it going to Russia, Venezuela and Saudi Arabia.

Executive Authority

In February 2015, President Obama vetoed legislation that would have authorized the construction of the Keystone XL Pipeline.  However, the president still has the authority to approve it through an executive order. The president has stated that he would sign off on the proposal only if it “does not significantly exacerbate the climate problem.”

Currently, the fate of the pipeline is in the hands of the State Department, which must review the proposal to determine whether it is in the national interest. The State Department has jurisdiction over the project because the pipeline crosses the U.S.-Canada border. The State Department sought the views of eight additional federal agencies on the question of national interest: the Departments of Defense, Justice, Interior, Commerce, Transportation, Energy, and Homeland Security, and the Environmental Protection Agency. The review was set to be completed on February 2, 2015; however, the permitting process has no set deadline.[1]

In January 2014, the State Department released the Final Supplemental Environmental Impact Statement for the Keystone XL Project, in which it concluded that the proposed pipeline would not increase greenhouse gas emissions by a significant amount. Previous AAF research also indicated that carbon emissions from the Keystone XL pipeline would be significantly lower than those from rail transport. Additionally, the risk of oil spills is greatly reduced, as pipelines have a lower spill rate than rail.

More recently, the State Department backtracked, noting that the original study assumed oil prices in the $75-plus range and that prices below that range would pose a significant challenge to the project’s economic viability.

From the supplemental review:

“Over the long term, lower-than-expected oil prices could affect the outlook for oil sands production, and in certain scenarios higher transportation costs resulting from pipeline constraints could exacerbate the impacts of low prices….Above approximately $75 per barrel for West Texas Intermediate (WTI)-equivalent oil, revenues to oil sands producers are likely to remain above the long-run supply costs of most projects responsible for expected levels of oil sands production growth. …Oil sands production is expected to be most sensitive to increased transport costs in a range of prices around $65 to $75 per barrel….Prices below this range would challenge the supply costs of many projects, regardless of pipeline constraints, but higher transport costs could further curtail production.”

Conclusion

Fiscally, economically, environmentally, and strategically, this project is a win. After nearly 7 years and $175 billion in lost economic activity, it is time for the construction of Keystone to come to fruition so that Americans can experience the economic benefits.



[1] http://www.c2es.org/energy/source/oil/keystone

  • Our education system is not sustainable at its present level of productivity. Disruption by way of funding reforms is needed.

  • Simply providing funding to schools does not produce results; however, providing funding to schools that produce results is game-changing.

  • Significant improvements to incentives and academic performance can be made with even small changes to the funding structure.

  • There are a number of ways to incorporate PBF into the ESEA – especially through Title I.

Introduction

Nearly all public education funding is driven by how many and what type of students show up on so-called ‘count days.’  In other words, funding is determined by student attendance rather than outcomes and results. This unquestioned practice of funding schools based almost exclusively on attendance misaligns incentives, rewards sub-par performance, and diminishes the imperative for significant and sustained educational outcomes. The value of opportunities provided to students must consider equity of student outcomes – not just tax dollars allocated to education systems. Disruption is needed, for simply providing funding to schools does not produce results; however, providing funding to schools that produce results is game-changing. 

Government budgets pay for inputs rather than outcomes.  It has been almost impossible to redirect scarce public resources based on the comparative performance of different schools, or to stop spending money on programs that don’t produce results.  As Eric Hanushek and Alfred Lindseth wrote, “Questions as to whether or how to change existing ways of doing things to make education dollars more effective are rarely addressed.”[1]

Our education system is not sustainable at its present level of productivity. The more than $600 billion U.S. taxpayers spend every year on public elementary and secondary education equates to at least 5.4 percent of the nation’s GDP.[2]  Globally, according to George Mason University’s Mercatus Center, U.S. spending per pupil is second highest among the 34 member countries of the Organization for Economic Cooperation and Development (OECD). Yet, U.S. student performance on the OECD’s Program for International Student Assessment (PISA) was 17th in reading, 20th in science, and 27th in mathematics.[3]

Fortunately, however, there has recently been a greater focus among the states on measurable academic results to drive school funding.  As Michigan Governor Rick Snyder wrote, “school funding should be based upon academic growth and not just whether a student enrolls and sits at a desk.”[4] Michigan and other states are implementing Performance-Based Funding (PBF), which seeks to better align funding for schools based on improved performance of schools individually and systemically.

PBF provides an opportunity to make strategic investments in schools by focusing school funding on desired results. This paper examines the potential of performance-based funding at the federal level.

LABORATORIES OF INNOVATION: PBF IN THE STATES

There is an emerging bipartisan consensus that it is no longer acceptable to throw good money after bad on ineffective government programs, because special interests manipulate government budgets to protect their interests regardless of actual results. 

PBF models are being implemented by some states in the context of higher education, as well as career and technical education. Additionally, some states are utilizing PBF as a funding mechanism in elementary and secondary education. Arizona began a statewide PBF program in 2013, called “Student Success Funding,” which expanded in 2014.[5]  Arizona Governor Doug Ducey is leveraging these past efforts with a new approach using an achievement school district model.[6]  Michigan has been implementing a PBF model since 2012.[7]  Pennsylvania took a slightly different approach, providing funding flexibility in exchange for performance-based outcomes.[8]  Florida, Wisconsin, and Oregon have all recently begun exploring PBF. In each case, the amounts of funding are modest, but the potential impact promises to be significant.

Colorado is the latest state to join the PBF movement. Colorado Governor John Hickenlooper signed into law a bill based on the Social Impact Bond approach, which is a variation on the notion of paying for success (i.e., PBF).  The Colorado law enables the state to partner with service providers and private sector investors or philanthropists to fund and provide interventions in order to increase economic opportunity, support healthy futures, and promote child and youth development.[9]  While the Colorado approach differs from other state approaches, it focuses on fundamentally altering the delivery of funding to one based on results.

These state initiatives focus education funding on key outcomes, encourage experimentation with different interventions to produce results, and systematize continuous improvement.

PBF DESIGN PRINCIPLES

While states are leading the way in adopting PBF strategies, it should be considered for federal education funding as well. When doing so, lawmakers need to consider key design principles that would apply to any PBF approach at the federal level.

Measuring Academic Achievement

To ensure there is no disincentive for schools to serve the students who need them most, a PBF model must measure student performance growth, which acts as a counterweight to a sole focus on absolute performance and thereby includes students who are significantly behind. Looking at growth longitudinally further protects against penalizing schools that serve higher numbers of struggling students.

The vast majority of weight in a PBF model should be given to measurable academic metrics, with less weight given to non-academic factors. In addition, some weight could be given to mission-specific goals for schools focused on a particular student population or academic area.[10]

Funding Subject to PBF

Another key principle to consider is the amount of funding subject to performance consideration. Significant improvements to incentives and academic performance can be made with even small changes to the funding structure. Any PBF model that encompasses all – or significantly more than 10 percent – of a district or school’s funding would put that institution in serious jeopardy of not being able to operate, either because there is not sufficient cash flow to cover ongoing costs or because budget planning would be so contingent as to render resource allocation impossible. An effective PBF model still relies on fixed operating costs being borne by existing funding streams, while variable costs become subject to meeting performance targets on defined metrics.[11]

For example, both Michigan and Arizona have included a relatively small amount of funding in their PBF models. In Michigan, every school is potentially eligible for up to $100 per pupil in PBF.[12] In Arizona, under the previous Student Success Formula, the maximum per-pupil achievement payment would be $500. In addition to achievement payments, the maximum per-pupil improvement payment would also be $500, for a total of $1,000 per student.[13]

PBF IN THE ELEMENTARY AND SECONDARY EDUCATION ACT

While the non-financial policy of the Elementary and Secondary Education Act (ESEA) has been radically overhauled since the law’s inception, there has not been a similar focus on the equally important financial policy in the ESEA.  Title I of the ESEA is the single largest K-12 investment that the federal government makes. At the heart of Title I is a bargain: a significant investment of federal dollars for students in poverty coupled with a demand for significantly improved outcomes for those students.

There are a number of ways to incorporate PBF into the ESEA – especially Title I – and the following examples are a few of the different options Congress could take toward a truly revolutionary reauthorization.

Title I Funding Formulas

While there have been changes on the margins of Title I formulas, there has not been a comprehensive rethinking of how federal funding is provided to states, districts, and schools. This is a significant missed opportunity given that Title I provides supplemental education funding to nearly 24 million students in more than half of all public schools, including 68 percent of elementary schools.[14] With serious efforts to reauthorize the ESEA underway, now is the time to systemically reform how Title I dollars are allocated.

Currently, Title I funds go to states and districts based on four separate formulas: the Basic, Concentration, Targeted, and Education Finance Incentive Grant formulas. Once these funds reach districts, they are combined into one funding allocation to be used for the same Title I program purposes.  All four formulas focus on the number and/or percentage of students in poverty.  The four Title I formulas are overly complicated and fundamentally flawed.  They are the result of political compromise and outdated thinking, and do not take into account any consideration of actual performance.

Providing additional resources does not necessarily result in student success; moving from inputs to outputs by focusing on measurable achievement has characterized the modern era of education reform.  And while the No Child Left Behind Act moved in this direction with its non-fiscal policy changes, it left the job unfinished by failing to factor achievement results into the Title I formulas.

While some have suggested various tweaks to the Title I formulas, they all miss the basic point that funding should contain a performance component.  As policymakers consider reauthorization of the ESEA, incorporating PBF into Title I could deliver long-term, positive results.  The most fundamental way to achieve performance-based funding in Title I of the ESEA would be to consolidate the four Title I formulas into two funding streams: one that provides the vast majority of funding based on students in poverty and one that rewards performance.   

Impact of Title I Formula Changes

Simplifying the Title I formulas would create greater transparency and offer better accountability for federal funding.  Title I could be far simpler by giving states the ability to distribute Title I funds to districts through per-pupil allocations based on the actual number of poor students they serve.  This approach is simple, easy to understand, and consistent with the movement to personalize education.  Personalizing the Title I formulas by ensuring that every student in poverty is counted and receives an equitable share of Title I funding would create a “student-centered” funding formula that is not subject to progressives’ notions about what constitutes an appropriate concentration of funding.

The second formula would then incorporate PBF, and could potentially offset some of the funding changes under the core, student-centered funding approach by allowing those districts to recoup funding by outperforming other districts in the state. 

The recommendation is that not less than 90 percent of Title I funding be distributed through the student-centered formula already present in the current structure.  To provide an initial view of the impact of such a change, we used data analyses by the Education Trust[15] and the Center for American Progress.[16][17] This student-centered formula can be illustrated with the following table, which shows the total gain or loss to districts in a particular state, by district child poverty rate:

Table 1. Estimated Change in Title I Allocations due to Student-centered Title I Formula

Gain or Loss to Districts by District Poverty Rate

State          <15%            15 – 30%        >30%
Colorado       $8,413,174      ($7,611,341)    ($801,833)
Louisiana      ($80,304)       $6,809,941      ($6,729,635)

Source: Center for American Progress

Then, the remaining Title I funds (up to 10 percent) could be distributed using a performance component.  The impact of building in this second performance-based formula can be illustrated by adjusting the examples above for the change in distribution of funds.  Specifically, incorporating PBF in this way would result in smaller funding increases for districts gaining under the student-centered funding approach and bigger deficits for districts losing under the student-centered formula (e.g., in Colorado, the districts with greater than 30 percent poverty would have 10 percent less available, or approximately $80,000 less in Title I funds).

Table 2. Estimated Change in Title I Allocations due to Student-centered and PBF Title I Formulas

Gain or Loss to Districts by District Poverty Rate

State          <15%            15 – 30%        >30%            PBF $ Available (10% of Title I)
Colorado       $7,571,856      ($8,372,475)    ($882,016)      $15,245,104
Louisiana      ($88,334)       $6,128,947      ($7,402,599)    $29,204,762

*Original Table Edited by Authors

As can be seen from this table, the amount of funding available through a PBF Title I formula could potentially offset the reduction in funding to districts moving from the current formulas to the student-centered model, provided the districts show academic improvement.
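
Mechanically, the adjustment behind Table 2 amounts to scaling each district group’s gain down, and each loss up, by the 10 percent diverted to the performance pool. The sketch below is our inference from the published figures, not the authors’ published method:

```python
def apply_pbf_share(delta, pbf_share=0.10):
    """Adjust a district group's gain or loss under the student-centered
    formula for a performance-based set-aside: gains shrink by the
    set-aside share, and deficits deepen by it."""
    if delta >= 0:
        return delta * (1 - pbf_share)
    return delta * (1 + pbf_share)

# Colorado, <15% poverty: an $8,413,174 gain shrinks to roughly $7.57 million.
print(round(apply_pbf_share(8_413_174)))

# Colorado, >30% poverty: an $801,833 loss deepens to roughly $882,000.
print(round(apply_pbf_share(-801_833)))
```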

Regarding what criteria should be used to gauge academic performance, the academic design principle described above is a starting point.  Policymakers could allow states to best define the academic performance criteria, only ensuring the criteria are simple, transparent, and focused on academic growth.

Finally, in order to maintain the accountability of this new funding structure, there should be no hold-harmless provisions to soften the impact of this fundamental transformation.  If students aren’t held harmless when they receive an inferior education, why should districts or schools be held harmless when they provide inferior results? Since the purpose of the funding change is to produce better results by creating the right incentives, those changes have to be allowed to take place.

PBF Title I Reservation/Set-Aside

Another approach to including PBF in the ESEA is by creating a reservation, or set-aside, of Title I funds to allow states like Arizona, which are focusing on expanding or replicating very high performing low-income schools, to have access to some Title I funds to drive performance. Too often, Title I funds go to low performing schools year after year, giving them little pressure to improve. Providing a set-aside is a good option if the goal is to promote a transition to high-performing low-income schools.

As an example, Arizona Governor Ducey’s Arizona Public Schools Achievement District initiative extends the reach of excellent schools through expansion, replication, or teaching other schools to do what they do, with half of the district’s projects reserved for low-income schools, which is where Title I funding would play a role.[18]

In fact, some Senators are trying to introduce this approach to the ESEA by allowing states to reserve a percentage of Title I funding to be directed toward statewide efforts to expand high performing, low income public district and charter schools.  This would allow states like Arizona to give funding to the district and charter school leaders who are producing results.

Alternatively, policymakers could consider amending Section 1117 of Title I in the current law, which deals with school support and recognition.  This section contains the seeds of PBF that, with modification, could become a possible solution.

The current Section 1117 allows a state to set aside a portion of its Title I funding to reward successful schools.  First, this language could be strengthened into a performance-based approach by requiring, rather than merely allowing, funds to be used to reward schools.  Second, the statute should be further modified to require a 10 percent set-aside of Title I funds regardless of funding increases (or decreases) year-over-year. Finally, the reward criteria should be aligned with state accountability systems, but should focus on growth in achievement over time, consistent with the design principle articulated above.

While this approach does not address the fundamentally flawed Title I formulas, it could achieve the same purposes. 

Impact of PBF Title I Reservation/Set-Aside

Using the 2014 Title I allocation tables from the U.S. Department of Education,[19] initial modeling shows that the potential impact of setting aside 10 percent of state Title I funds under Section 1117 could be significant.  For example, a state set-aside of 10 percent in Arizona would make over $32 million available, while in Indiana almost $26 million would be available to incentivize districts and schools to raise achievement and compete for these performance-based funds.

Table 3.  Estimated Impact of 10 percent Title I PBF Set-aside on Selected States

State      FY 2014 Actual Allocation for Districts    10% Set-aside for PBF    Revised FY 2014 Title I Allocation for Districts
Arizona    $325,175,411                               $32,517,541              $292,657,870
Indiana    $259,897,172                               $25,989,717              $233,907,455

To dive deeper, a state could require districts to set aside 10 percent of their Title I funds for PBF.  To illustrate, district Title I allocations in Arizona and Indiana were modified to model the impact of such a set-aside.  At the district level in Arizona, this set-aside could yield almost $2.6 million in total in Phoenix, while in Indiana it could provide $1.4 million in Gary and $3.2 million in Indianapolis, if those districts showed significant improvement.  These amounts, while modest in the scheme of overall Title I budgets, could yield better, faster improvement if made available in a performance-based context.

Table 4.  Estimated Impact of 10 percent Title I PBF Set-aside on Selected Districts

District                       FY 2014 Title I Allocation    10% Set-aside for PBF    Revised FY 2014 Title I Allocation
Phoenix School District        $25,916,398                   $2,591,640               $23,324,758
Gary Community Schools         $14,049,304                   $1,404,930               $12,644,374
Indianapolis Public Schools    $32,427,259                   $3,242,726               $29,184,533
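
The set-aside arithmetic behind Tables 3 and 4 is a straightforward 10 percent split; a minimal sketch:

```python
def pbf_set_aside(allocation, share=0.10):
    """Split a Title I allocation into a PBF set-aside (rounded to the
    nearest dollar) and the revised base allocation."""
    set_aside = round(allocation * share)
    return set_aside, allocation - set_aside

# Arizona's FY 2014 allocation for districts (Table 3)
print(pbf_set_aside(325_175_411))   # (32517541, 292657870)

# Phoenix School District (Table 4)
print(pbf_set_aside(25_916_398))    # (2591640, 23324758)
```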

The bottom line is that a 10 percent Title I set-aside would serve as a major incentive for states, districts, and schools to improve performance.

Social Impact Bonds

Another approach to include PBF in the ESEA would be to create a pilot program utilizing Social Impact Bonds (SIBs), similar to Colorado’s approach.  SIBs are set up to direct public funding to those institutions and programs that are clearly demonstrating their impact through rigorous results, thereby mitigating financial risk to the taxpayer and providing an effective means for state and local governments to scale up successful innovations.[20] Senators Hatch and Bennet recently introduced legislation to provide for social impact partnerships through the Treasury Department that could serve as a model for the ESEA.[21]

In a typical SIB, the government contracts with an entity to provide services. The government pays the service provider only based on the achievement of performance targets; if the service provider fails to achieve the performance target(s), the government does not pay. Payments can increase for performance that exceeds the minimum, up to a pre-determined maximum. The service provider gets operating funds by raising capital from private commercial or philanthropic investors. These investors have the opportunity to earn both social and financial returns while putting their capital to work in service of society. In short, this approach is a performance-based contract.[22]
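
The contract logic just described can be sketched as a simple payout rule. The graduation-rate threshold, base payment, and cap below are hypothetical values chosen for illustration, not terms of any actual SIB:

```python
def sib_payment(outcome, target, base_payment, max_payment):
    """Pay-for-success rule: no payment below the target, the base
    payment at the target, and a linear bonus for exceeding it,
    capped at a pre-determined maximum."""
    if outcome < target:
        return 0.0                      # target missed: government pays nothing
    bonus_fraction = min((outcome - target) / target, 1.0)
    return min(base_payment * (1 + bonus_fraction), max_payment)

# Hypothetical contract: 50% graduation-rate target, $1M base, $1.5M cap.
print(sib_payment(0.40, 0.50, 1_000_000, 1_500_000))   # 0.0 (missed target)
print(sib_payment(0.50, 0.50, 1_000_000, 1_500_000))   # 1000000.0
print(sib_payment(0.75, 0.50, 1_000_000, 1_500_000))   # 1500000.0 (capped)
```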

The federal government could become a strategic partner with state and local governments in establishing SIB projects.  There are a number of different ways the SIB model could be incorporated into the ESEA; for example, the federal government could match state and local funds in a combined funding pool that would only be paid out if a school (or district) met performance targets; the school or district would raise funds from investors to operate.  If the school or district met or exceeded its performance targets, the combined government funding pool would then be paid out to the investors.

As the understanding of SIBs develops, it is clear that any federal effort must allow for flexibility in the structure and funding of SIBs, with state and/or local governments having a key role in determining how the funds are used in projects. For example, tiered performance payments based on degrees of success may be needed for a project to be considered. In other cases, some credit enhancement may allow projects to attract investors that would not otherwise be willing to risk so much capital initially.[23]

Figure 1. Example of a Social Impact Bond Model

Or, in a more limited fashion, the federal government’s contribution to a SIB could be to fund the return on investment payment to the investors, including any performance bonuses for outstanding results.

Incorporating a SIB into the ESEA would truly create a “Race to the Top,” unlike the Obama Administration’s competitive grant program of the same name, which poured more money into the usual policy approaches on the strength of promises of improvement rather than actual evidence of it, without significantly changing the underlying fiscal policy.

CONCLUSION

The problem of misaligned incentives is a well-researched topic in numerous fields. It has not been a topic of deep research and reflection in education, where the misalignment between funding and performance is at best a drag on the system and student performance, and at worst a fundamental flaw that ensures our schools will never improve sufficiently for our nation to live up to its founding ideals of equality and opportunity.

PBF is a promising new approach that has the potential to improve results, overcome barriers to social innovation, and encourage investments in cost-effective approaches. It does this by ensuring that public funding goes to those schools that are clearly demonstrating their impact through rigorous outcome-based performance measures and providing an effective launching pad for scaling schools and innovations that produce results.

PBF can provide a new approach to improving academic outcomes outside the traditional reform approaches, while addressing systemic inefficiency.   Nowhere is this more needed than in the antiquated and convoluted Title I funding formulas in the ESEA.



[1] Eric Hanushek and Alfred Lindseth, Hoover Institution, Stanford University, excerpt from Schoolhouses, Courthouses, and Statehouses: Solving the Funding-Achievement Puzzle in America’s Public Schools (Princeton University Press, 2009); http://hanushek.stanford.edu/sites/default/files/publications/Hanushek%2BLindseth%202009%20DefiningIdeas.pdf (accessed 2015).

[2] U.S. Department of Education, National Center for Education Statistics, Digest of Education Statistics, Summary of expenditures for public elementary and secondary education and other related programs, by purpose: Selected years, 1919-20 through 2011-12; https://nces.ed.gov/programs/digest/d14/tables/dt14_236.10.asp  (accessed 2015); The World Bank, Government expenditure on education, total (% of GDP), http://data.worldbank.org/indicator/SE.XPD.TOTL.GD.ZS  (accessed 2015).

[3] Veronique de Rugy, K-12 Spending per Student in the OECD, George Mason University, Mercatus Center, http://mercatus.org/sites/default/files/k-12-education-spending-pdf.pdf (accessed 2015);  Organization for Economic Cooperation and Development, Program for International Student Assessment, Results from PISA 2012, United States; http://www.oecd.org/pisa/keyfindings/PISA-2012-results-US.pdf (accessed 2015).

[4] Governor Rick Snyder, State of Michigan, Executive Office of the Governor, A Special Message from Governor Rick Snyder: Education Reform; 4/27/2011; https://www.michigan.gov/documents/snyder/SpecialMessageonEducationReform_351586_7.pdf (accessed 2015).

[5] Lisa Irish, Arizona Education News Service, AZEDNews, New Funding Proposal would reward Schools for Students’ Success, 11/14/2013; http://azednews.com/2013/11/14/new-funding-proposal-would-reward-schools-for-students-success/ (accessed 2015).

[6] Governor Doug Ducey, State of Arizona, Office of the Governor, Governor Doug Ducey Calls For An End To Education Inequalities, 1/12/2015; http://azgovernor.gov/governor/news/governor-doug-ducey-calls-end-education-inequalities.

[8] Commonwealth of Pennsylvania, 2014-15 Executive Budget, 2/4/2014; http://www.portal.state.pa.us/portal/server.pt/document/1394302/2014-15_budget_slide_presentation_pdf

[9] Patricia Levesque, Foundation for Excellence in Education, ExcelinEd Commends Colorado Gov. Hickenlooper for Signing Performance-Based Education Funding; 5/21/2015; http://excelined.org/news/excelined-commends-colorado-gov-hickenlooper-for-signing-performance-based-education-funding/.

[10] Doug Mesecar and Don Soifer, Lexington Institute, Applying Performance-Based Funding To Public Education, 7/2013; http://lexingtoninstitute.org/wp-content/uploads/2013/11/Performanced-BasedFunding.pdf (accessed 2015).

[11] Ibid.

[13] Lisa Irish, Arizona Education News Service, AZEDNews, New Funding Proposal would reward Schools for Students’ Success, 11/14/13; http://azednews.com/2013/11/14/new-funding-proposal-would-reward-schools-for-students-success/ (accessed 2015).

[15] Natasha Ushomirsky and David Williams, The Education Trust, 2/2015; http://edtrust.org/wp-content/uploads/2013/10/Likely_Effects_of_Portability_on_Districts_Title_I_Allocations_020915.pdf

[16] Max Marchitello and Robert Hanna, Robin Hood in Reverse, Center for American Progress, 2/4/2015; https://cdn.americanprogress.org/wp-content/uploads/2015/02/ESEAportability-brief2.pdf.

[17] This data was selected for two reasons: 1.These organizations provided solid data analysis on the impact of providing the same amount of funding per low-income student regardless of the overall district or school poverty rate; and 2. Because these organizations do not support allocating funds using this methodology, but did not consider PBF, using their analyses underscores how making the Title I formula simpler, student-centered, and performance based changes the funding conversation in ways that overcome tired arguments about Title I formula status-quoism.

[19] U.S. Department of Education, Funds for State Formula-Allocated and Selected Student Aid Programs, by Program, 7/2015; http://www2.ed.gov/about/overview/budget/statetables/16stbyprogram.xls.

[20] Jeffrey Liebman and Alina Sellman, Harvard Kennedy School of Government, Social Impact Bond Technical Assistance Lab, Social Impact Bonds: A Guide for State and Local Governments, 6/2013;  http://www.hks-siblab.org (accessed 2015).

[23] Harvard Kennedy School Social Impact Bond Technical Assistance Lab, Response to the U.S. Department of the Treasury Request for Information, “Strategies to Accelerate the Testing and Adoption of Pay for Success Financing Models”; http://hks-siblab.org.


Executive Summary

We examine the employment impacts and anti-poverty implications of raising the federal minimum wage to $12 per hour and $15 per hour, respectively, by 2020. In particular, we focus on how raising the federal minimum wage would impact the very low-wage workers the policy is intended to help. Overall, we find significant tradeoffs to raising the federal minimum wage. While a minimum wage hike would benefit millions of workers with higher earnings, it would also hurt millions of others who would lose earnings because they cannot find or keep a job. Our estimates show that raising the federal minimum wage to $12 per hour by 2020 would affect 38.3 million low-wage workers. Using our central estimate, we find that raising the minimum wage would cost 3.8 million low-wage jobs. In total, income among low-wage workers would rise by at most $14.2 billion, of which only 5.8 percent would go to low-wage workers who are actually in poverty.

Summary Table 1: $12 Minimum Wage

The Labor Market Effects of Raising the Federal Minimum Wage to $12 per hour

Workers Impacted                                         38.3 million
Jobs Lost                                                3.8 million
Net Income Change                                        $14.2 billion
Percent of Income Gained Going to Workers in Poverty     5.8%

Similarly, we find that increasing the federal minimum wage to $15 per hour by 2020 would impact 55.1 million workers, and cost 6.6 million jobs. Aggregate income among low-wage workers would rise by $105.4 billion after accounting for income declines from job losses. However, only 6.7 percent of the increase in income would go to workers who are actually in poverty.

Summary Table 2: $15 Minimum Wage

The Labor Market Effects of Raising the Federal Minimum Wage to $15 per hour

Workers Impacted                                        55.1 million
Jobs Lost                                               6.6 million
Net Income Change                                       $105.4 billion
Percent of Income Gained Going to Workers in Poverty    6.7%

Since the exact impact of the minimum wage on employment remains unsettled, we check the robustness of our results by employing a range of estimates from the literature that imply modest, moderate, and severe employment consequences. In each case, we analyze how the change in earnings resulting from a minimum wage increase would be distributed across income levels.

Introduction

Over the past few years, policymakers and labor advocates have argued for – and in many cases successfully enacted – increases in the minimum wage at the federal, state, and local levels. At the federal level, President Obama initially proposed raising the minimum wage to $9 per hour in his February 2013 State of the Union address and later embraced a proposal in Congress to raise it to $10.10. Now, lawmakers are proposing to raise the federal minimum wage to $12 per hour by 2020. Meanwhile, a growing number of advocates are calling for more than doubling the federal minimum wage from $7.25 to $15 per hour. Indeed, several cities, such as Los Angeles, Seattle, and San Francisco, have approved raising their minimum wages to $15 per hour, and New York City is considering a $15 minimum wage.

Last year, the Congressional Budget Office (CBO) analyzed the employment and income effects of raising the federal minimum wage to $9 and to $10.10 per hour. In this paper, we estimate the employment and income effects of increasing the minimum wage to $12 and to $15 per hour, focusing on the low-wage workers these proposals are intended to assist. In doing so, we project the range of job losses that would occur if lawmakers were to raise the federal minimum wage to $12 or to $15 per hour, the net change in total income for all low-wage workers in the country, and how the net change in earnings for low-wage workers would be distributed across income levels.

Previous Research

In general, the central policy goal of raising the federal minimum wage is to increase incomes for less-affluent Americans. This inherently brings up two main questions: (1) how does raising the minimum wage impact low-wage worker income? And, (2) are those who would be impacted by an increase in the minimum wage the most in need of assistance?

A minimum wage increase’s impact on annual income depends on how it affects employment. For instance, if the minimum wage increased to $12 per hour, many of those earning $7.25 per hour today would benefit from a wage increase of at least $4.75. Other workers who earn below $12 per hour, however, could lose their jobs and see their wage fall to $0 per hour. Additionally, those looking for work might not get hired and would suffer the same fate. Thus, in order to estimate the total net impact of raising the minimum wage on low-wage worker income, one must project the total income gained by the workers who remain employed minus the total income lost by those who cannot attain or retain a job.
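This accounting can be illustrated with a small calculation. The figures below are hypothetical and are not drawn from the paper’s data; they only show the gains-minus-losses arithmetic:

```python
# Net income effect of a minimum wage hike: earnings gained by workers who
# keep their jobs minus earnings lost by workers left jobless.
# All numbers below are hypothetical, for illustration only.

def net_income_change(keepers, hourly_raise, annual_hours,
                      losers, lost_annual_earnings):
    """Total gain for retained workers minus total loss for jobless workers."""
    gained = keepers * hourly_raise * annual_hours
    lost = losers * lost_annual_earnings
    return gained - lost

# Ten workers get the $4.75/hour raise ($7.25 -> $12) over 2,000 annual hours,
# while one worker loses a job that paid $7.25/hour for 2,000 hours.
print(net_income_change(10, 4.75, 2000, 1, 7.25 * 2000))  # 80500.0
```

Whether the net figure is positive or negative depends entirely on how many workers end up in the second group, which is why the employment estimates below matter so much.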

To estimate the impact of a $12 and a $15 minimum wage on employment and income, we utilize three previous studies, which provide a range of estimates: CBO (2014)[1], Meer & West (2015)[2], and Clemens & Wither (2014)[3]. These studies examined different labor market aspects of the minimum wage, resulting in differing conclusions on the policy’s impact on employment and income. Across these three studies, we consider the effects of the minimum wage under modest, moderate, and severe negative employment scenarios.

CBO (2014)

In 2014, the CBO examined the impact of raising the federal minimum wage to either $9.00 or $10.10 per hour, which were two of the most popular proposals at the time. For the $10.10 proposal, CBO found that the policy would result in employment falling by 500,000 jobs relative to its projected 2016 baseline. CBO assumed that in addition to those earning between $7.25 and $10.10 getting a raise, those earning just above $10.10 would also see their wages increase. Specifically, workers earning up to the new minimum wage plus half the size of the increase would see their hourly earnings rise. As a result, anyone earning below $11.50 who stayed employed would benefit from a wage increase of some sort. CBO concluded that net earnings for low-wage workers would increase by $31 billion. Of those additional earnings, 19 percent would go to families below the poverty threshold, 52 percent would go to families with incomes one to three times the poverty threshold, and 29 percent would go to families with incomes over three times the poverty threshold. We employ these findings when assuming our lower-bound employment consequences of raising the federal minimum wage.

Meer & West (2015)

While there is an ongoing debate regarding the impact of the minimum wage on the level of employment, Meer & West (2015) suggest that the negative impact of the minimum wage is best isolated by focusing on employment dynamics. Specifically, they find that a 10 percent increase in the real minimum wage is associated with a 0.30 to 0.53 percentage point decrease in the net job growth rate. Previously, AAF applied their work to California’s recent law that raises the state’s minimum wage to $10 per hour (effective 2016). Using Meer & West’s result, AAF found that this wage increase in California would mean the loss of 191,000 jobs that would otherwise have been created.[4] In addition, AAF found that if every state followed suit, over 2.3 million new jobs would be forgone across the country. We employ the estimates found in Meer & West to characterize the most moderate employment consequences of raising the federal minimum wage.
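The mechanics of applying the Meer & West elasticity can be sketched as follows. This is our illustrative reading, not AAF’s actual calculation: for simplicity it treats the nominal hike as a real increase, and the function name is ours:

```python
# Meer & West (2015): a 10 percent increase in the real minimum wage is
# associated with a 0.30 to 0.53 percentage-point drop in the net job
# growth rate. Illustrative only; treats the nominal hike as real.

def growth_rate_drop_pp(pct_wage_increase, pp_per_10pct):
    """Percentage-point reduction in net job growth for a given wage hike."""
    return (pct_wage_increase / 10.0) * pp_per_10pct

hike_pct = (12.00 - 7.25) / 7.25 * 100  # roughly a 65.5 percent increase
low = growth_rate_drop_pp(hike_pct, 0.30)
high = growth_rate_drop_pp(hike_pct, 0.53)
print(f"net job growth rate falls by {low:.1f} to {high:.1f} percentage points")
```

Because the estimate operates on job *growth* rather than the employment level, the implied losses are jobs that would never be created, compounding over the years the higher wage is in effect.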

Clemens & Wither (2014)

At the end of 2014, economists Jeffrey Clemens and Michael Wither of the University of California, San Diego released research that examined what happened to low-wage workers the last time the federal government raised its minimum wage, which rose from $5.15 to $7.25 per hour at the end of the 2000s. Using data from the Survey of Income and Program Participation (SIPP), they focused on how the minimum wage hike impacted employment and income among those whom it affected most: low-wage workers earning below $7.50 per hour.

Clemens & Wither found significant, negative consequences for low-wage workers. From 2006 to 2012, employment in this group fell by 8 percent, which translates to about 1.7 million jobs.[5] The job loss in this low-wage group accounted for 14 percent of the national decline in employment during this time period.[6] The minimum wage hike also increased the probability of working without pay (e.g., unpaid internships) by 2 percentage points. Workers with at least some college education were 20 percent more likely to work without pay than before the minimum wage increased. As a result of the reduction in employment and paid work, on net, average monthly incomes for low-wage workers actually fell by $100 during the first year after the minimum wage increased and by an additional $50 over the following two years. We use the Clemens & Wither estimates as the upper bound of the employment consequences from raising the federal minimum wage.

Methodology for Identifying those Impacted by a Minimum Wage Hike

While there are a large variety of minimum wage proposals around the country, two jump out as receiving the most attention. In Washington, D.C., several Congressional policymakers have advocated for the Raise the Wage Act, which would increase the federal minimum wage to $12 per hour by 2020. In addition, all around the country, the “Fight for $15” movement has gained popularity in advocating for a $15 per hour minimum wage. In this paper, we analyze the labor market effects of raising the minimum wage to $12 or to $15 per hour by 2020.[7]

In estimating the number of workers the minimum wage hike would affect, we use methodology similar to that employed in the 2014 CBO report. We assume that those who would be most directly impacted by the minimum wage increase are the workers who we project would earn between $7.25 per hour and the new minimum wage level in 2020 under current law.[8] For the $12 minimum wage, this includes all hourly workers who would earn between $7.25 and $12 per hour, and for the $15 minimum wage, it includes everyone who would earn between $7.25 and $15 per hour. These workers stand to see the largest wage hikes. However, consistent across all minimum wage studies, this low-wage group also bears almost all of the job losses. Like CBO, we assume that all of the job losses occur only among those who under current law would earn between $7.25 and the new minimum wage level in 2020.

CBO anticipated that a minimum wage hike would also increase earnings for those who earn just above the new minimum wage level. In particular, CBO assumed that workers earning up to the new minimum plus half the size of the increase would see their hourly earnings rise. This means that for the $10.10 option, CBO projected that workers earning between $10.10 and $11.50 per hour would see an increase in their hourly earnings. CBO also assumed, however, that the minimum wage hike would not impact this group’s employment. To identify the workers who will earn just above the new minimum wage level but would still be impacted by the minimum wage hike, we use the same method as the CBO. For the minimum wage hike to $12 per hour, we assume that workers who under current law will earn between $12 and $14.40 would see earnings rise to $14.40 without any negative employment consequences. Likewise, for the wage hike to $15 per hour, we assume that those who will earn between $15 and $18.90 under current law would get a raise to $18.90 without losing their jobs. While it is possible that workers in this group could experience a wage increase, it is unlikely that everyone would see a raise all the way up to $14.40 or $18.90 per hour. Our results therefore likely overestimate the income gains resulting from each minimum wage increase.
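The cutoff arithmetic can be reproduced directly. The function name is ours, but the rule – new minimum plus half the size of the increase, with the paper reporting values rounded to the nearest dime – matches the $14.40 and $18.90 figures:

```python
# "Ripple" ceiling used to bound the just-above-minimum group: the new
# minimum plus half the size of the statutory increase from $7.25.

CURRENT_MINIMUM = 7.25

def ripple_ceiling(new_minimum, old_minimum=CURRENT_MINIMUM):
    """Highest current-law wage assumed to rise with the minimum wage hike."""
    return new_minimum + 0.5 * (new_minimum - old_minimum)

print(ripple_ceiling(12.00))  # 14.375, reported in the paper as $14.40
print(ripple_ceiling(15.00))  # 18.875, reported as $18.90
print(ripple_ceiling(10.10))  # ~11.525, matching CBO's $11.50 cutoff
```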

Finally, for identifying those who will be impacted by the minimum wage hike and the resulting net effect on annual income, CBO employed two approaches. In the first approach, CBO used monthly Current Population Survey (CPS) wage and hours data throughout 2012 to isolate those who would be impacted by the hike. In the second approach, CBO used the 2013 CPS March Annual Social and Economic Supplement, which surveyed a much larger sample of workers and collected detailed information on annual income and earnings data looking back on the previous year. In the latter approach, CBO estimated hourly earnings by dividing total earnings in 2012 by total hours worked. While the first approach (which we will refer to as the “wage approach”) has the benefit of directly recording hourly earnings, the second approach (which we will call the “annual earnings approach”) is based on a much larger sample of workers and more directly relates to annual income.

We present our estimates using the wage approach and use the regular monthly wage and hour data from the 2014 March CPS Annual Social and Economic Supplement.[9] The wage approach results in more positive benefits to raising the minimum wage than the annual earnings approach because it yields lower hourly earnings for each worker. As a result, the wage approach projects a much larger number of workers subject to the effects of raising the minimum wage. Thus, we view our net income figures as upper-bound estimates.

Our estimates using the annual earnings approach can be found in the appendix. In the annual earnings approach, we use the supplemental annual income and earnings information from the same 2014 March CPS supplement.
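The difference between the two approaches comes down to how the hourly wage is measured: read directly from monthly data (wage approach) versus inferred from annual totals (annual earnings approach). A minimal sketch of the inference, with hypothetical values:

```python
# Annual earnings approach: infer the hourly wage from annual earnings
# divided by annual hours worked. Example values are hypothetical.

def implied_hourly_wage(annual_earnings, hours_per_week, weeks_worked):
    """Hourly wage implied by annual totals, as in CBO's second approach."""
    return annual_earnings / (hours_per_week * weeks_worked)

# $18,000 earned over 30 hours/week for 50 weeks implies $12.00/hour, so
# this worker would sit at the edge of the $7.25-$12 group under a $12 hike.
print(implied_hourly_wage(18000, 30, 50))  # 12.0
```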

Workers Impacted by Minimum Wage Hikes

$12 Minimum Wage

We estimate that raising the federal minimum wage to $12 per hour would impact 38.3 million workers total. These are the hourly workers we project will earn between $7.25 and $14.40 in 2020 under current law. Table 1 illustrates these findings.

Table 1: Number of Workers Impacted by $12 Minimum Wage

Wage Range          Workers
$7.25 to $12        25.8 million
$12 to $14.40       12.5 million
Total               38.3 million

We project that under current law about 25.8 million hourly workers will earn between $7.25 and $12 per hour in 2020. An additional 12.5 million workers will earn between $12 and $14.40 per hour. Consequently, we project that a minimum wage hike to $12 per hour would in total impact 38.3 million hourly workers.

$15 Minimum Wage

We project that raising the federal minimum wage to $15 per hour would impact 55.1 million workers. These are the hourly workers we project will earn between $7.25 and $18.90 under current law. Table 2 illustrates these figures.

Table 2: Number of Workers Impacted by $15 Minimum Wage[10]

Wage Range          Workers
$7.25 to $15        40.6 million
$15 to $18.90       14.6 million
Total               55.1 million

We project that about 40.6 million hourly workers will earn between $7.25 and $15 per hour in 2020. An additional 14.6 million will earn between $15 and $18.90 per hour. In total, we project that a minimum wage hike to $15 per hour will impact 55.1 million hourly workers.

Employment Impact of the Minimum Wage

Many like the idea of increasing the minimum wage. The potential employment consequences of mandating a minimum wage hike, however, call the merits of this policy into question. When the federal government increases the minimum hourly pay for workers, it effectively increases the per-hour cost of low-wage labor. Employers have three main mechanisms to pay for this additional labor cost: lower profits, higher prices, and fewer workers. While many minimum wage advocates hope that employers pay for the additional cost out of their own profits, the evidence suggests that the vast majority of low-wage workers are in industries with razor-thin profit margins, like retail and restaurants. In these industries, businesses tend to pay for minimum wage hikes by increasing prices, reducing current and future employment, or both. While the exact impact of a minimum wage hike on employment is debated, volumes of literature spanning from the 1950s to today conclude that raising the minimum wage damages the labor market. Moreover, the literature shows that the workers who tend to become jobless are the low-skilled, low-wage workers the policy intends to help.

CBO (2014), Meer & West (2015), and Clemens & Wither (2014) demonstrate negative labor market consequences of raising the minimum wage, with varying degrees of severity. In this section, we apply their findings to the proposals to increase the federal minimum wage to $12 and to $15 per hour by 2020. As mentioned above, we follow the CBO’s methodology by assuming that all job losses occur within the group of workers who under current law will earn between $7.25 and the new minimum wage level in 2020. Specifically, we assume that no one projected to be earning above the new minimum wage level would suffer employment loss.  

To preview our estimates, we find that increasing the minimum wage to $12 per hour would cost 1.3 million to 11.4 million jobs. Raising the minimum wage to $15 per hour would cost 3.3 million to 16.8 million jobs.

$12 Minimum Wage

Overall, AAF estimates that low-wage employment would be 1.3 million to 11.4 million lower than under current law if the federal government were to raise the minimum wage to $12 per hour. Table 3 illustrates these findings for each of the three approaches.

Table 3: Jobs Lost from $12 Minimum Wage

Model               Jobs Lost
CBO                 1.3 million
Meer & West         3.8 million
Clemens & Wither    11.4 million

Using the CBO report, our lower-bound employment scenario, we find that raising the minimum wage to $12 per hour by 2020 would cost about 1.3 million jobs nationwide. This means that there would be 1.3 million fewer workers than the 25.8 million workers who we project will earn between $7.25 and $12 per hour absent the minimum wage increase.

In our middle-range negative employment scenario, which is derived from Meer & West (2015), this minimum wage increase would reduce the net job growth rate significantly, costing 3.8 million low-wage jobs. As a result, almost 4 million fewer low-wage jobs would be created than under current law.

Finally, the Clemens & Wither (2014) estimate indicates severe labor market consequences. With this model, we estimate that there would be 11.4 million fewer low-wage jobs than under current law. The Clemens & Wither (2014) estimate results in such a large decline in employment because they find that the last federal minimum wage hike actually caused low-wage employment to fall from its initial level, whereas CBO (2014) projected the reduction in employment relative to current law and Meer & West (2015) measured the minimum wage’s impact on net job growth. So under the Clemens & Wither (2014) estimate, one finds that low-wage employment in 2020 would be 4.1 million jobs smaller than today’s level. When one compares the resulting employment level to what it is projected to be under current law, which includes jobs created from economic growth, the minimum wage ends up costing 11.4 million jobs that would be lost or never created.
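The arithmetic connecting these two reference points is simple: the 11.4 million figure equals the 4.1 million decline from today’s level plus the low-wage jobs that current-law growth would otherwise add by 2020:

```python
# Reconciling the two Clemens & Wither reference points (millions of jobs).
decline_from_todays_level = 4.1   # employment falls below today's level
loss_vs_2020_baseline = 11.4      # shortfall relative to current-law 2020

# Low-wage job growth implied under current law between now and 2020:
implied_baseline_growth = round(loss_vs_2020_baseline - decline_from_todays_level, 1)
print(implied_baseline_growth)  # 7.3 (million jobs)
```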

$15 Minimum Wage

We estimate that 3.3 million to 16.8 million fewer low-wage jobs would exist in 2020 if policymakers increased the federal minimum wage to $15 per hour. These figures are shown in Table 4.

Table 4: Jobs Lost from $15 Minimum Wage

Model               Jobs Lost
CBO                 3.3 million
Meer & West         6.6 million
Clemens & Wither    16.8 million

Using the CBO (2014) estimate, we find that increasing the minimum wage to $15 per hour would cost 3.3 million low-wage jobs. The reduction in job creation captured by the Meer & West (2015) estimate reveals that in 2020, the nation would have 6.6 million fewer low-wage jobs than under current law. Finally, using the Clemens & Wither (2014) estimate leads to 16.8 million fewer low-wage jobs in 2020 than under current law.

Minimum Wage Increases and Income

In this section, we project how increasing the federal minimum wage to $12 and to $15 per hour by 2020 would impact total annual income earned by low-wage workers. This involves calculating the total earnings increase for those who are employed and the total earnings loss by those who are jobless. After subtracting total income lost from total income gained, we derive the net income change for all low-wage workers.

Methods and Assumptions

For everyone who keeps their job and will earn between $7.25 per hour and the new minimum wage level under current law, we assume their hourly pay rate would increase to the new minimum wage level. In the $12 minimum wage scenario, for every hourly worker we project will earn between $7.25 and $12 per hour in 2020 under current law, we assume their wages would rise to $12 per hour if they stay employed. For everyone who will earn between $12 and $14.40 per hour in 2020, we assume their wages would rise to $14.40. Likewise in the $15 minimum wage scenario, we assume that all hourly workers who will earn between $7.25 and $15 per hour in 2020 under current law would see their wages rise to $15 per hour if they stay employed. In addition, for everyone who will earn between $15 and $18.90 per hour, we assume their wages would rise to $18.90. Under both the $12 and $15 minimum wage scenarios, we assume that the minimum wage increase itself would have no impact on the hours worked per week and weeks worked per year for those who keep their jobs. Finally, we assume that anyone who is jobless as a result of the minimum wage increase would see their individual annual earnings fall to $0.
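A minimal sketch of these assumptions for the $12 scenario, applied to hypothetical worker records (wage, hours per week, weeks per year, and whether the worker loses a job):

```python
# Per-worker annual earnings under the paper's stated $12-scenario
# assumptions: sub-$12 workers who keep their jobs rise to $12; workers
# between $12 and $14.40 rise to $14.40 with no job loss; job losers fall
# to $0; hours and weeks are held fixed. Worker records are hypothetical.

NEW_MIN = 12.00
RIPPLE_CEILING = 14.40

def post_hike_earnings(wage, hours_per_week, weeks, loses_job=False):
    """Annual earnings for one worker after the hypothetical $12 hike."""
    if loses_job:
        return 0.0                       # jobless: annual earnings fall to $0
    if wage < NEW_MIN:
        wage = NEW_MIN                   # raised to the new minimum
    elif wage < RIPPLE_CEILING:
        wage = RIPPLE_CEILING            # ripple raise, no job loss assumed
    return wage * hours_per_week * weeks

workers = [(7.25, 40, 52, False),   # keeps job, raised to $12
           (13.00, 40, 52, False),  # ripple group, raised to $14.40
           (9.00, 40, 52, True)]    # becomes jobless
for w in workers:
    print(post_hike_earnings(*w))
```

Summing these per-worker changes against current-law earnings, over all affected workers, yields the net income figures reported below.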

$12 Minimum Wage

The impact of raising the federal minimum wage to $12 per hour on low-wage worker income depends largely on how many become jobless. Table 5 illustrates the net income effects of raising the minimum wage to $12 per hour under each employment scenario.

Table 5: Net Change in Total Income from $12 Minimum Wage

Model               $7.25 to $12       $7.25 to $14.40
CBO                 $30.2 billion      $49.6 billion
Meer & West         -$5.2 billion      $14.2 billion
Clemens & Wither    -$112.5 billion    -$93.1 billion

In the table, we illustrate the net income effect for both those who under current law will earn between $7.25 and $12 per hour in 2020 and everyone who will earn between $7.25 and $14.40 per hour.

First, consider those directly impacted by the law (those who will earn $7.25 to $12 per hour under current law). In the modest CBO (2014) employment scenario, the income gains for those who stay employed outweigh the income losses for those who lose their jobs. On net, income in this group would increase by $30.2 billion. However, in the two other employment scenarios, the earnings gained by those who keep their jobs would be outweighed by the earnings lost by those who become jobless. As a result, a $12 minimum wage would cause net income to fall in both of these employment scenarios. In the middle-range Meer & West (2015) employment scenario, total income would decline by $5.2 billion. The income lost in the severe Clemens & Wither (2014) scenario would be even worse, as total net income would decline by $112.5 billion. These results highlight the importance of labor market policies that do not harm employment.

When one assumes that everyone earning just above the new minimum wage would also get a significant wage bump (and suffer no employment loss), then the net income changes become more positive. So in the case of a $12 minimum wage, this means including the income increases for those who under current law will earn between $12 and $14.40 per hour in 2020. For this group we assume that worker wages increase without any negative employment consequences. In the modest CBO (2014) employment scenario, raising the minimum wage to $12 per hour would increase income for low-wage workers by $49.6 billion on net. In the moderate Meer & West (2015) scenario, the minimum wage increase would slightly increase low-wage income by $14.2 billion. However, in the severe Clemens & Wither (2014) scenario, raising the minimum wage still has a net negative impact on income for low-wage workers, as their income would fall by $93.1 billion.

$15 Minimum Wage

Table 6 illustrates how raising the minimum wage to $15 per hour on net would impact total income for low-wage workers.

Table 6: Net Change in Total Income from $15 Minimum Wage

Model               $7.25 to $15       $7.25 to $18.90
CBO                 $118.8 billion     $171.3 billion
Meer & West         $52.8 billion      $105.4 billion
Clemens & Wither    -$153.2 billion    -$100.6 billion

Parallel to the $12 minimum wage, for the $15 minimum wage proposal we project the impact on net income just for those who under current law will earn $7.25 to $15 per hour in 2020 and for all low-wage workers who will earn between $7.25 and $18.90 per hour. For those who will earn just above $15 per hour under current law ($15 to $18.90 per hour), we assume their wages would increase without any job losses.

Looking first at those who will earn between $7.25 and $15 per hour in 2020 under current law, the modest CBO (2014) and the moderate Meer & West (2015) employment scenarios both yield positive income changes. In the CBO (2014) scenario, increasing the minimum wage to $15 per hour would on net increase total incomes in this group by $118.8 billion. In the Meer & West (2015) scenario, we find that increasing the minimum wage would result in much smaller net income gains. In this case, the minimum wage hike would increase incomes by $52.8 billion total. In the severe Clemens & Wither (2014) employment scenario, however, the drastic employment losses suggest that raising the minimum wage to $15 per hour would cause a significant reduction in earnings for low-wage workers. Under this scenario, net earnings would fall by $153.2 billion for hourly workers who will earn less than $15 per hour under current law.

As Table 6 illustrates, including the income increases for those who would earn just above $15 per hour under current law significantly increases the net income gains under both the CBO (2014) and the Meer & West (2015) scenario. However, in the Clemens & Wither (2014) employment scenario, net income for low-wage workers would still decline by $100.6 billion.

Net Income Changes Across Income Levels

While many hope that raising the minimum wage will greatly assist those in poverty, we find little evidence that raising the federal minimum wage would substantially increase incomes for those with family incomes below the poverty threshold. Specifically, in the scenarios that yield net income gains from a minimum wage increase, we find that only about 10 percent or less of those gains would go to workers who are currently in poverty.

$12 Minimum Wage

Table 7 contains our estimates for how raising the federal minimum wage to $12 per hour would increase (or decrease) earnings for low-wage workers by income level.

Table 7: $12 Minimum Wage's Resulting Net Pay Change by Income Level[11]

Poverty Level    CBO              Meer & West     Clemens & Wither
In Poverty       $4.0 billion     $0.8 billion    -$8.8 billion
1x to 3x         $23.3 billion    $5.7 billion    -$47.4 billion
3x to 6x         $16.5 billion    $5.7 billion    -$27.2 billion
6x Plus          $5.8 billion     $1.9 billion    -$9.7 billion

On net, earnings would increase for low-wage workers at all income levels in the modest CBO (2014) and moderate Meer & West (2015) employment scenarios and decrease at all income levels in the severe Clemens & Wither (2014) scenario. For instance, in the moderate Meer & West (2015) scenario, we find that low-wage worker net incomes would increase by a total of $0.8 billion for those with family incomes below the poverty threshold, $5.7 billion for those with incomes one to three times the poverty threshold, $5.7 billion for those three to six times the poverty threshold, and $1.9 billion for workers with incomes over six times the poverty threshold.

As a result, in the Meer & West (2015) scenario only 5.8 percent of all the income gained from increasing the minimum wage to $12 per hour would go to families in poverty, 40.5 percent would go to families with incomes one to three times the poverty threshold, 40.1 percent would go to families with incomes three to six times the poverty threshold, and 13.7 percent would go to families with incomes over six times the poverty threshold. Table 8 illustrates the percent distribution of the income gained or lost in each employment scenario.

Table 8: Percent Distribution of Net Pay Change by Income Level from $12 Minimum Wage[12]

Poverty Level    CBO      Meer & West    Clemens & Wither
In Poverty       8.1%     5.8%           9.5%
1x to 3x         46.9%    40.5%          50.9%
3x to 6x         33.3%    40.1%          29.2%
6x Plus          11.7%    13.7%          10.4%

$15 Minimum Wage

Table 9 highlights our estimates for how raising the federal minimum wage to $15 per hour would change net earnings for low-wage workers by income level.

Table 9: $15 Minimum Wage's Resulting Net Pay Change by Income Level[13]

Poverty Level    CBO              Meer & West      Clemens & Wither
In Poverty       $11.9 billion    $7.0 billion     -$8.4 billion
1x to 3x         $77.2 billion    $44.9 billion    -$56.2 billion
3x to 6x         $59.9 billion    $38.0 billion    -$30.2 billion
6x Plus          $22.2 billion    $15.4 billion    -$5.8 billion

Similar to raising the minimum wage to $12 per hour, raising it to $15 would result in net earnings increasing at all income levels in the CBO (2014) and Meer & West (2015) employment scenarios and net income decreasing in the Clemens & Wither (2014) scenario. Using the moderate Meer & West (2015) employment scenario, we find that raising the minimum wage to $15 per hour would increase low-wage worker incomes by $7.0 billion for those in poverty, $44.9 billion for those with incomes one to three times the poverty threshold, $38.0 billion for those with incomes three to six times the poverty threshold, and $15.4 billion for those with incomes over six times the poverty threshold.

As a result, only 6.7 percent of the net income increase from raising the minimum wage to $15 per hour would go to families in poverty, 42.6 percent would go to families with incomes one to three times the poverty threshold, 36.1 percent would go to families with incomes three to six times the poverty threshold, and 14.7 percent would go to families with incomes over six times the poverty threshold.

Table 10: Percent Distribution of Net Pay Change by Income Level from $15 Minimum Wage[14]

Poverty Level    CBO      Meer & West    Clemens & Wither
In Poverty       7.0%     6.7%           8.4%
1x to 3x         45.1%    42.6%          55.9%
3x to 6x         35.0%    36.1%          30.0%
6x Plus          13.0%    14.7%          5.8%

As illustrated in Table 10, in every model we run, only a small minority of the income benefits (or costs) from increasing the minimum wage to $15 per hour would actually go to families in poverty.

Conclusion

Today, lawmakers continue to debate the merits of a $12 federal minimum wage, while local officials and national labor leaders keep advocating for $15 per hour. In this paper, however, we find that any potential benefits from raising the minimum wage would be greatly offset by labor market consequences of the policy.

For the $12 federal minimum wage, when assuming moderate negative employment consequences, we find that the policy would cost 3.8 million jobs. As a result, it would increase low-wage worker earnings by at most $14.2 billion total. Additionally, only a small portion of that income gain would benefit families in poverty: we find that only 5.8 percent of the increase in pay would go to workers in poverty.

For the $15 federal minimum wage, when assuming moderate negative employment consequences, we find that the policy would cost 6.6 million jobs. On net, it would raise low-wage worker earnings by at most $105.4 billion. However, again only a small minority of that additional income would benefit families in poverty. In particular, only 6.7 percent of the increase in earnings would go to workers in poverty.

Overall, the income gains from raising the minimum wage would come at a significant cost to the large number of workers who would become jobless. In effect, raising the minimum wage transfers income from the low-wage workers who are unfortunate enough to become jobless to the low-wage workers who remain employed. And it accomplishes this without effectively helping those who are most in need.

 

Appendix: Employment and Income Results from Using the Annual Earnings Approach

Workers Impacted by Minimum Wage Hikes

$12 Minimum Wage      

With the annual earnings approach, we estimate that raising the federal minimum wage to $12 per hour would impact 28.9 million workers total. These are the hourly workers we project will earn between $7.25 and $14.40 in 2020 under current law. Table A1 illustrates these findings.

Table A1: Number of Workers Impacted by $12 Minimum Wage

Wage Range          Workers
$7.25 to $12        17.8 million
$12 to $14.40       11.1 million
Total               28.9 million

We estimate that under current law about 17.8 million hourly workers will earn between $7.25 and $12 per hour in 2020. An additional 11.1 million will earn between $12 and $14.40 per hour. As a result, with the annual earnings approach, we estimate that about 28.9 million workers would be impacted by a minimum wage hike to $12 per hour.

$15 Minimum Wage

Using the annual earnings approach, we project that raising the federal minimum wage to $15 per hour would impact 42.7 million workers. These are the hourly workers we project will earn between $7.25 and $18.90 per hour in 2020 under current law. Table A2 illustrates these figures.

Table A2: Number of Workers Impacted by $15 Minimum Wage

Wage Range       Workers
$7.25 to $15     30.4 million
$15 to $18.90    12.3 million
Total            42.7 million

We estimate that about 30.4 million workers will earn between $7.25 and $15 per hour in 2020. An additional 12.3 million will earn between $15 and $18.90. In total, with the annual earnings approach, we estimate that a minimum wage hike to $15 per hour will impact about 42.7 million hourly workers.
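As a quick arithmetic check, the band totals in Tables A1 and A2 can be re-added directly. A minimal Python sketch, using only the rounded figures reported in the tables above (not the underlying CPS microdata):

```python
# Affected-worker counts (millions) from Tables A1 and A2 above.
# Each policy impacts workers from the current $7.25 minimum up to
# an upper "ripple" threshold above the proposed new minimum.
affected = {
    "$12 minimum": {"$7.25 to $12": 17.8, "$12 to $14.40": 11.1},
    "$15 minimum": {"$7.25 to $15": 30.4, "$15 to $18.90": 12.3},
}

for policy, bands in affected.items():
    total = sum(bands.values())
    print(f"{policy}: {total:.1f} million workers impacted")
```

The totals reproduce the 28.9 million and 42.7 million figures reported in the tables.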

Employment Impact of the Minimum Wage

Under the annual earnings approach, we find that increasing the minimum wage to $12 per hour would cost 1.4 million to 9.7 million jobs. Raising the minimum wage to $15 per hour would cost 3.9 million to 13.8 million jobs.

$12 Minimum Wage

AAF estimates that low-wage employment would be 1.4 million to 9.7 million lower than under current law if the federal government were to raise the minimum wage to $12 per hour. Table A3 illustrates these findings under the annual earnings approach.

Table A3: Jobs Lost from $12 Minimum Wage

Model               Jobs Lost
CBO                 1.4 million
Meer & West         3.8 million
Clemens & Wither    9.7 million

Using the CBO report, we find that raising the minimum wage to $12 per hour by 2020 would cost about 1.4 million jobs nationwide. In our middle-range negative employment scenario, which is derived from Meer & West (2015), the minimum wage increase would significantly reduce the net job growth rate, so that almost 4 million fewer low-wage jobs would be created than under current law, a cost of 3.8 million low-wage jobs.[15] Finally, with the Clemens & Wither (2014) model, we estimate that there would be 9.7 million fewer low-wage jobs than under current law.

$15 Minimum Wage

Under the annual earnings approach, we estimate that 3.9 million to 13.8 million fewer low-wage jobs would exist in 2020 if policymakers increased the federal minimum wage to $15 per hour. These figures are shown in Table A4.

Table A4: Jobs Lost from $15 Minimum Wage

Model               Jobs Lost
CBO                 3.9 million
Meer & West         6.6 million
Clemens & Wither    13.8 million

Using the CBO (2014) estimate, we find that increasing the minimum wage to $15 per hour would cost 3.9 million low-wage jobs. The reduction in job creation captured by the Meer & West (2015) estimate reveals that in 2020, the nation would have 6.6 million fewer low-wage jobs than under current law. Finally, using the Clemens & Wither (2014) estimate leads to 13.8 million fewer low-wage jobs in 2020 than under current law.

Minimum Wage Increases and Income

$12 Minimum Wage

The impact of raising the federal minimum wage to $12 per hour on low-wage worker income is far less favorable under the annual earnings approach than under the wage approach. Table A5 illustrates the net income effects of raising the minimum wage to $12 per hour for each employment scenario under the annual earnings approach.

Table A5: Net Change in Total Income from $12 Minimum Wage

Model               $7.25 to $12       $7.25 to $14.40
CBO                 $30.3 billion      $50.3 billion
Meer & West         -$16.4 billion     $3.6 billion
Clemens & Wither    -$133.3 billion    -$113.3 billion

In the table, we show the net income effect both for just those who under current law will earn between $7.25 and $12 per hour in 2020 and for everyone who will earn between $7.25 and $14.40 per hour.

First, we examine those directly impacted by the law (those who will earn $7.25 to $12 per hour under current law). In the modest CBO (2014) employment scenario, the income gains for those who stay employed would outweigh the income losses for those who lose their jobs; on net, income in this group would increase by $30.3 billion. In the two other employment scenarios, however, the earnings gained by those who keep their jobs would be outweighed by the earnings lost by those who become jobless, so a $12 minimum wage would cause net income to fall. In the middle-range Meer & West (2015) scenario, total income would decline by $16.4 billion. The loss in the severe Clemens & Wither (2014) scenario is even larger: total net income would decline by $133.3 billion.
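These net figures are the sum of two opposing effects: raises for workers who remain employed and earnings wiped out for workers who become jobless. A stylized sketch of that accounting follows; the $2,000 average raise and $12,000 average annual earnings loss below are purely illustrative assumptions, not the CPS-based values underlying Table A5.

```python
def net_income_change(affected_millions, jobs_lost_millions,
                      avg_annual_raise, avg_annual_earnings_lost):
    """Net change in total low-wage income, in $ billions.

    Worker counts are in millions; the dollar arguments are
    per-worker annual amounts (illustrative assumptions only).
    """
    retained = affected_millions - jobs_lost_millions
    gains = retained * 1e6 * avg_annual_raise                     # raises for those still employed
    losses = jobs_lost_millions * 1e6 * avg_annual_earnings_lost  # pay lost by the jobless
    return (gains - losses) / 1e9

# Illustrative inputs: 17.8 million directly affected, 3.8 million jobs
# lost, a $2,000 average raise, and $12,000 in lost annual earnings per
# displaced worker (hypothetical figures).
print(f"Net change: {net_income_change(17.8, 3.8, 2000, 12000):+.1f} $ billion")
```

With these inputs the losses dominate and the net change is negative; in general the sign turns on how large the job losses are relative to the raises, which is exactly why the three employment scenarios diverge.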

When we include the income increases for those who under current law will earn between $12 and $14.40 per hour in 2020, the net income changes become more positive. In the modest CBO (2014) employment scenario, raising the minimum wage to $12 per hour would increase income for low-wage workers by $50.3 billion on net. In the moderate Meer & West (2015) scenario, the increase would raise low-wage income by only $3.6 billion. In the severe Clemens & Wither (2014) scenario, raising the minimum wage would still have a net negative impact on income for low-wage workers, whose income would fall by $113.3 billion.

$15 Minimum Wage

Table A6 illustrates how raising the minimum wage to $15 per hour on net would impact total income for low-wage workers. Again, the income outcomes are far less positive under the annual earnings approach than under the wage approach.

Table A6: Net Change in Total Income from $15 Minimum Wage

Model               $7.25 to $15       $7.25 to $18.90
CBO                 $79.4 billion      $125.5 billion
Meer & West         $10.5 billion      $56.5 billion
Clemens & Wither    -$175.7 billion    -$129.6 billion

Looking first at those who will earn between $7.25 and $15 per hour in 2020 under current law, the modest CBO (2014) and the moderate Meer & West (2015) employment scenarios both yield positive income changes. In the CBO (2014) scenario, increasing the minimum wage to $15 per hour would on net increase total incomes in this group by $79.4 billion. In the Meer & West (2015) scenario, the minimum wage hike would increase incomes by $10.5 billion total. Under the Clemens & Wither (2014) scenario, net earnings would fall by $175.7 billion.

As Table A6 illustrates, including the income increases for those who would earn just above $15 per hour under current law significantly increases the net income gains under both the CBO (2014) and the Meer & West (2015) scenario. However, in the Clemens & Wither (2014) employment scenario, net income for low-wage workers would still decline by over $100 billion.

Income Change Disparities between the Wage Approach and Annual Earnings Approach

In general, the patterns in the net change in total income earned by low-wage workers are consistent across the wage and annual earnings approaches, but the magnitudes are substantially different. This is particularly apparent in the Meer & West (2015) and Clemens & Wither (2014) scenarios. The disparity stems largely from the fact that the annual earnings approach yields higher hourly pay for each worker than the wage approach. Consequently, the annual earnings approach projects far fewer workers at the lower end of the wage distribution. For instance, the annual earnings approach projects far fewer hourly workers who in 2020 under current law will earn between $7.25 and $12 per hour than the wage approach does (17.8 million versus 25.8 million). The same is true for the number who will earn between $7.25 and $15 (30.4 million versus 40.6 million). In addition, the annual earnings approach yields slightly fewer workers earning between $12 and $14.40 than the wage approach (11.1 million versus 12.5 million) and slightly fewer earning between $15 and $18.90 (12.3 million versus 14.6 million).

These differences matter because they influence our estimates of the net income effects of the minimum wage hikes. Under the annual earnings approach, the large job losses in the Meer & West (2015) and Clemens & Wither (2014) scenarios leave fewer workers who would keep their jobs and experience an increase in earnings. As a result, the net income gains tend to be smaller, or more negative, under the annual earnings approach than under the wage approach.

Net Income Changes Across Income Levels

$12 Minimum Wage

Table A7 contains our estimates for how raising the federal minimum wage to $12 per hour would increase (or decrease) income for low-wage workers by income level.

Table A7: $12 Minimum Wage's Resulting Net Pay Change by Income Level[16]

Poverty Level    CBO              Meer & West     Clemens & Wither
In Poverty       $6.4 billion     $1.2 billion    -$11.9 billion
1x to 3x         $25.8 billion    $0.5 billion    -$62.8 billion
3x to 6x         $14.4 billion    $1.0 billion    -$32.7 billion
6x Plus          $3.6 billion     $0.9 billion    -$5.9 billion

On net, earnings would increase for low-wage workers at all income levels in the modest CBO (2014) and moderate Meer & West (2015) employment scenarios and decrease at all income levels in the severe Clemens & Wither (2014) scenario. For instance, using the annual earnings approach in the moderate Meer & West (2015) scenario, we find that low-wage worker net incomes would increase by a total of $1.2 billion for those with incomes below the poverty threshold, $0.5 billion for those with incomes one to three times the poverty threshold, $1.0 billion for those three to six times the poverty threshold, and $0.9 billion for workers with incomes over six times the poverty threshold.

As a result, in the Meer & West (2015) scenario 33.8 percent of all the income gained from increasing the minimum wage to $12 per hour would go to families in poverty, 14.2 percent would go to families with incomes one to three times the poverty threshold, 27.2 percent to families with incomes three to six times the poverty threshold, and 24.9 percent to families with incomes over six times the poverty threshold. Table A8 illustrates the percent distribution of the income gained or lost in each employment scenario.

Table A8: Percent Distribution of Net Pay Change by Income Level from $12 Minimum Wage[17]

Poverty Level    CBO      Meer & West    Clemens & Wither
In Poverty       12.8%    33.8%          10.5%
1x to 3x         51.3%    14.2%          55.4%
3x to 6x         28.7%    27.2%          28.9%
6x Plus          7.2%     24.9%          5.2%

When we use the annual earnings approach, a $12 minimum wage appears to target the population in poverty more efficiently. This occurs, however, because the estimated overall net income gains under this approach are much smaller than under the wage approach. Again, the differences between the wage approach and the annual earnings approach stem from the fact that they yield different low-wage worker population sizes.
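The shares in Table A8 are simply each income band's slice of the total net change. A short re-derivation from the rounded Meer & West (2015) column of Table A7; because the table inputs are rounded, the shares come out slightly different from the published percentages, as footnote [16] anticipates.

```python
# Meer & West (2015) net pay changes by income level, in $ billions,
# as reported (rounded) in Table A7.
changes = {
    "In poverty": 1.2,
    "1x to 3x":   0.5,
    "3x to 6x":   1.0,
    "6x plus":    0.9,
}

total = sum(changes.values())  # about $3.6 billion in net gains
for band, amount in changes.items():
    # Each band's share of the total net income change.
    print(f"{band}: {100 * amount / total:.1f}% of the net gain")
```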

$15 Minimum Wage

Table A9 highlights our estimates for how raising the federal minimum wage to $15 per hour would change net earnings for low-wage workers by income level.

Table A9: $15 Minimum Wage's Resulting Net Pay Change by Income Level[18]

Poverty Level    CBO              Meer & West      Clemens & Wither
In Poverty       $12.3 billion    $6.8 billion     -$8.1 billion
1x to 3x         $62.9 billion    $26.9 billion    -$70.6 billion
3x to 6x         $40.9 billion    $18.6 billion    -$41.4 billion
6x Plus          $9.4 billion     $4.3 billion     -$9.4 billion

Similar to raising the minimum wage to $12 per hour, raising it to $15 would result in net earnings increasing at all income levels in the CBO (2014) and Meer & West (2015) employment scenarios and net income decreasing in the Clemens & Wither (2014) scenario. Using the annual earnings approach for the moderate Meer & West (2015) employment scenario, we find that raising the minimum wage to $15 per hour would increase low-wage worker incomes by $6.8 billion for those in poverty, $26.9 billion for those with incomes one to three times the poverty threshold, $18.6 billion for those with incomes three to six times the poverty threshold, and $4.3 billion for those with incomes over six times the poverty threshold.

As a result, only 12.0 percent of the net income increase from raising the minimum wage to $15 per hour would go to families in poverty, 47.5 percent would go to families with incomes one to three times the poverty threshold, 32.9 percent would go to families with incomes three to six times the poverty threshold, and 7.6 percent would go to families with incomes over six times the poverty threshold.

Table A10: Percent Distribution of Net Pay Change by Income Level from $15 Minimum Wage[19]

Poverty Level    CBO      Meer & West    Clemens & Wither
In Poverty       9.8%     12.0%          6.3%
1x to 3x         50.2%    47.5%          54.5%
3x to 6x         32.6%    32.9%          32.0%
6x Plus          7.5%     7.6%           7.3%

As illustrated in Table A10, in every model we run, only a small minority of the income benefits (or costs) from increasing the minimum wage to $15 per hour would actually go to families in poverty. As is the case for the $12 minimum wage, raising the minimum wage to $15 per hour appears most target efficient when we use the annual earnings approach. However, as in the $12 case, this result is driven by the annual earnings approach projecting much smaller overall income gains from a $15 minimum wage.



[1] “The Effects of a Minimum-Wage Increase on Employment and Family Income,” The Congressional Budget Office, February 2014, https://www.cbo.gov/publication/44995

[2] Jonathan Meer & Jeremy West, “Effects of the Minimum Wage on Employment Dynamics,” January 2015, http://econweb.tamu.edu/jmeer/Meer_West_Minimum_Wage.pdf

[3] Jeffrey Clemens & Michael Wither, “The Minimum Wage and the Great Recession: Evidence of Effects on the Employment and Income Trajectories of Low-Skilled Workers,” National Bureau of Economic Research, December 2014, http://papers.nber.org/tmp/87122-w19262.pdf

[4] Ben Gitis, “The Steep Cost of a $10 Minimum Wage,” American Action Forum, October 2013, http://americanactionforum.org/research/the-steep-cost-of-a-10-minimum-wage

[5] The 1.7 million jobs figure is based on authors’ analysis of Clemens & Wither (2014) estimates.

[6] Clemens & Wither (2014) accounted for the effects of the recession by using state, time, and individual effects and controlling for FHFA housing price index. For more information on their methodology see http://www.nber.org/papers/w20724

[7] While the implementation periods for a $15 per hour minimum wage differ by proposal, in this paper we assume it would be implemented by 2020.

[8] CBO’s baseline projection has low-wage worker hourly earnings rising at an average annual rate of 2.9 percent from 2013 to 2016. We assume the same and project wages to increase by 2.9 percent each year until 2020.

[9] Current Population Survey, 2014 Annual Social and Economic Supplement, retrieved from the National Bureau of Economic Research, http://www.nber.org/data/current-population-survey-data.html

[10] Numbers may not add to total due to rounding.

[11] Figures may not add to total reported in Table 5 due to rounding.

[12] Red indicates distribution of income lost.

[13] Figures may not add to totals reported in Table 6 due to rounding.

[14] Red indicates distribution of lost income.

[15] Just as in the other two estimates, we assume all job losses occur in the low-wage group. However, since Meer & West (2015) report how increasing the minimum wage impacts the net job growth rate for all workers (not just low-wage ones), the projected job losses do not differ between the wage approach and the annual earnings approach. While the number of low-wage workers changes with each approach, the total number of workers does not. As a result, the projected job losses are the same under both estimation approaches, even though the low-wage worker population sizes differ.

[16] Figures may not add to totals reported in Table A5 due to rounding.

[17] Red indicates distribution of income lost.

[18] Figures may not add to totals reported in Table A6 due to rounding.

[19] Red indicates distribution of income lost.

  • The top 5 costliest pending regulations would add $7.8 billion in regulatory costs and 1.7 million paperwork hours.

  • The number of firms with 10 to 19 workers and the number with 20 to 49 workers fell 0.3 percent and 1.0 percent, respectively.

  • Since Dodd-Frank, the number of jobs at federal financial regulatory agencies spiked 19.2 percent.

Congress passed the Dodd-Frank Act five years ago in an attempt to address the causes of the financial crisis. The law has imposed billions of dollars in costs with unclear benefits, with more regulations to be prescribed as regulators continue to slowly implement the law.

According to American Action Forum (AAF) research, Dodd-Frank has imposed more than $24 billion in final rule costs and 61 million paperwork burden hours. From a housing market still experiencing mediocre growth, to an uneven labor picture, it’s clear the law has fundamentally altered capital markets and added layers of complexity for consumers and financial institutions.

What’s Left for Dodd-Frank?

The law has imposed tremendous costs, but it appears the height of those impositions is in the past. The following chart examines rulemaking costs by year, from July 21, 2010, when Dodd-Frank was passed, to each later year.

Year 3 was clearly the high-water mark, and barring increased cost-benefit transparency from agencies, expect that year to continue claiming the highest costs. Given the pace of rulemaking, expect new regulatory burdens to easily extend into years six and seven. 

The next chart takes a broader view of the law, examining all final rule documents published in the Federal Register. It too reveals a steady decline of activity after years two and three. Dodd-Frank has already produced 456 final rule documents.

Although much of the law has already been implemented, there are still dozens of rulemakings in the proposed stage, and dozens more that have not yet been proposed. According to Davis Polk, a law firm that tracks the law, 60.3 percent of Dodd-Frank has been finalized, while another 21.5 percent remains to be proposed. This means that roughly 18 percent of the law is still in proposed form. In other words, after five years of implementation, roughly two-fifths of the law is pending. These remaining burdens will doubtless add to the $24 billion in existing final burdens.
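The split behind those percentages is simple arithmetic; a minimal check, assuming only the 60.3 percent finalized and 21.5 percent not-yet-proposed figures:

```python
finalized = 60.3         # percent of Dodd-Frank requirements finalized
not_yet_proposed = 21.5  # percent with no proposal issued yet

in_proposed_form = 100 - finalized - not_yet_proposed  # proposed but not final
pending = in_proposed_form + not_yet_proposed          # everything not final

print(f"Still in proposed form: {in_proposed_form:.1f}%")  # roughly 18 percent
print(f"Pending overall:        {pending:.1f}%")           # roughly two-fifths
```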

The table below highlights the largest rules still in proposed form. These are recent measures that could be finalized soon. Combined, they could add another $7.8 billion to the law’s tally and more than 1.7 million paperwork hours.

Regulation                                Cost (in millions)    Paperwork Hours
Capital Requirements for Swap Entities    $5,200                24,747
Home Mortgage Disclosure                  $2,161                90,000
Standards for Clearing Agencies           $225                  14,124
Pay Ratio Disclosure                      $218                  545,792
Conduct Standards for Swap Dealers        $13.2                 1,030,010
Totals                                    $7,817                1,704,673


The largest rulemaking, Capital Requirements for Swap Entities, is a joint proposal from five different agencies. Despite its modest paperwork imposition, the Regulatory Impact Analysis estimates that institutions would need capital margin requirements ranging from $280 billion to $3.6 trillion. For the purposes of the analysis, the agencies selected $644 billion, because it falls “roughly in the middle of these estimates.” Of course, the exact middle of those estimates is $1.9 trillion, or three times the figure the agencies chose. As a result of the higher capital, the opportunity costs of the proposal range from $3.1 billion to $5.9 billion, undiscounted. AAF selected the figure of $5.2 billion, which is discounted at a three percent rate.
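The midpoint claim is easy to verify; a quick check assuming only the RIA's $280 billion and $3.6 trillion endpoints and the $644 billion the agencies selected:

```python
low, high = 0.28, 3.6        # range endpoints from the RIA, in $ trillions
midpoint = (low + high) / 2  # the actual arithmetic middle of the range
selected = 0.644             # the $644 billion figure the agencies used

print(f"Midpoint of the range: ${midpoint:.2f} trillion")
print(f"Midpoint vs. selected: {midpoint / selected:.1f}x")
```

The midpoint works out to about $1.94 trillion, roughly three times the $644 billion the agencies described as “roughly in the middle.”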

Another notable proposal, Pay Ratio Disclosure, would force thousands of companies to list the ratio of their highest compensated officer’s pay to the median pay of all workers in the organization. Set to be finalized next April, the rule is estimated by the Securities and Exchange Commission (SEC) to cost only $72 million annually, with long-term burdens approaching $218 million. However, outside estimates peg the costs at more than $710 million annually with 3.6 million paperwork burden hours.

The pay ratio rule isn’t designed to increase capital, protect investors, or halt the spread of financial contagion. It, like several other provisions of Dodd-Frank, is designed to embarrass companies. As recent commenters have noted, “The pay-ratio rule is an attempt to shame companies and their boards to advance the ‘social justice’ goal of more equitable income distribution.” If finalized, the rule will only serve to needlessly impose costs on institutions, and eventually their shareholders and customers, while utterly failing to improve capital markets.

Looking forward, the Unified Agenda of federal rulemakings lists 29 measures in the prerule or proposed rule stage that are directly related to Dodd-Frank. Below are the rulemakings and their expected publication dates.

Rule                                              Expected Publication Date
CFTC’s Repeal of Commercial Market Exemption      May 2015
FCA’s Farmer Mac Stress Test                      May 2015
FCA’s Farmer Mac Investment Eligibility           May 2015
FDIC’s Fiduciary Powers                           June 2015
FHFA’s Stress Testing                             June 2015
Federal Reserve’s Regulation JJ                   June 2015
Federal Reserve’s Regulation LL                   June 2015
Federal Reserve’s Extension of Credit             June 2015
NCUA’s Incentive-Based Compensation               June 2015
SEC’s Rules on Incentive Compensation             June 2015
CFPB’s Arbitration Amendments                     September 2015
CFTC’s Annual Stress Test                         September 2015
NCUA’s Valuation Models                           October 2015
Federal Reserve’s Capital Guidelines              November 2015
CFTC’s Clearing Requirement                       November 2015
Treasury’s Sources of Strength                    December 2015
Treasury’s Automated Valuation Models             December 2015
CFTC’s Requirements for Contract Markets          December 2015
CFTC’s Mitigation of Conflict of Interest         December 2015
FDIC’s Removal of Transferred OTS Rules           December 2015
FHFA’s Valuation Models                           December 2015
Federal Reserve’s Regulation AA                   December 2015
CFPB’s Supervision of Vehicle Loans               January 2016
SEC’s Compensation Clawback                       April 2016
SEC’s Registration of Security-Based Swaps        April 2016
SEC’s Resource Extraction Disclosure              April 2016
SEC’s Stress Testing                              April 2016
SEC’s Conflicts of Interest in Securitization     April 2016
SEC’s Transactions with Non-U.S. Persons          April 2016


Employment Trends in Financial Services Since Dodd-Frank’s Passage

By imposing 61 million paperwork burden hours and costing more than $24 billion, Dodd-Frank is restricting growth in the financial services industry. However, while many of the rules enacted under Dodd-Frank are intended to limit risk among the largest financial companies, small firms seem to be paying the price with stagnant job growth. Figure 3 illustrates the growth in financial businesses since lawmakers passed Dodd-Frank.[1]

The financial industry as a whole has struggled since 2010, with the number of all financial firms growing only 2.0 percent from 2010 to 2014. The lack of growth appears primarily among small financial businesses. For instance, the number of firms with 10 to 19 workers and the number with 20 to 49 workers fell 0.3 percent and 1.0 percent, respectively. Meanwhile, the number of businesses with fewer than 5 workers grew only 0.7 percent and those with 5 to 9 workers increased only 1.7 percent. Even regional banks appeared to struggle, as the number of financial firms with 500 to 999 workers decreased 0.5 percent since Dodd-Frank’s passage. In contrast, the largest companies have grown rapidly: the number with 1,000 or more employees increased 11.9 percent since 2010. So while the largest financial companies seem to be unaffected by Dodd-Frank, small and regional firms appear to be absorbing most of the bill’s costs.

When one analyzes employment in the financial sector, a similar pattern emerges, as the lack of job growth since 2010 in the financial sector has been concentrated in small and regional firms.[2]

 

Mirroring the slow growth in the number of businesses in the financial industry, the industry’s employment increased only 3.7 percent from 2010 to 2014. Within the industry, which firms grew? Again, it was the largest financial companies, as employment in businesses with 1,000 or more workers advanced 13.2 percent. Meanwhile, employment in small and regional financial firms has either stagnated or contracted.

Figure 5 illustrates the continuing decline in many financial services industries through employment at savings institutions, which has kept falling since Dodd-Frank became law.[3]

One of the clearest illustrations of the increase in financial regulation since Dodd-Frank’s passage is the growth in employment at financial regulatory agencies, which stands in stark contrast to the lack of growth in banking and finance. Figure 6 illustrates the substantial increase in financial regulatory jobs in the federal government.

While jobs in the financial industry have increased only 3.7 percent since Dodd-Frank became law, the number of jobs at federal financial regulatory agencies spiked 19.2 percent. Even that understates the growth at some agencies. For instance, the number of workers at the Federal Housing Finance Agency (FHFA) and at the entire Federal Reserve System (all Federal Reserve Banks and the Board of Governors) increased 37.1 percent and 32.2 percent, respectively, since Dodd-Frank.[4]

Impact of Credit Availability and Housing

The purpose of the Dodd-Frank Act was partly to stem abuses and fix systemic weaknesses in the financial services sector made apparent when the housing bubble burst and helped bring about the 2008 financial crisis. Yet many of the regulations promulgated under Dodd-Frank, though intended to create a safer financial system, have adversely affected industry and consumers, as previously mentioned. Banks of all sizes have reported high Dodd-Frank compliance costs, invariably driving up the cost of banking and lending for consumers. In its effort to make the financial system safer, Dodd-Frank has also restricted the availability of financial products and credit, particularly for low-income borrowers, young people, and minorities.

Figure 7[5] shows how lending—whether commercial, real estate, or consumer—has not recovered as quickly in this recovery as in the average post-WWII recovery, owing to the severity of the financial crisis and the environment of tightened credit encouraged by the Dodd-Frank Act. Real estate lending has been the slowest to recover; it took over six years to return to its level at the beginning of the recession.

Source: Federal Reserve; Based on Author’s Calculations and NBER

Anecdotally, many market participants have expressed concerns over burdens stemming from Dodd-Frank, but the law also has very real economic costs. A 2012 AAF report attempted to show the economic impact of the Dodd-Frank and Basel III regulations, concluding that the rules, as proposed at that time, would reduce lending (as evidenced above) and result in fewer home sales. That decrease in loans and sales would in turn depress housing starts, employment, and GDP. While those rules have since been altered, regulators continue to struggle to balance protecting the safety and security of consumers and investors with encouraging a continued and sustainable housing and economic recovery.

Similarly, a more recent analysis by AAF President Douglas Holtz-Eakin looked at the economic growth implications of Dodd-Frank, specifically how Dodd-Frank burdens affected savings and investment, and their linkages to growth. This research found a significant impact—roughly $895 billion in reduced Gross Domestic Product (GDP) over the 2016-2025 period, or $3,346 per working-age person.

While numerous regulations issued through Dodd-Frank are changing the landscape of mortgage lending, housing finance reform remains the most important unfinished business of the recession. With legislation to fix the housing giants stalled in Congress, Fannie Mae and Freddie Mac remain a very real risk to taxpayers and the very definition of too-big-to-fail, matters unaddressed by the Dodd-Frank Act. The failure to address housing finance reform has contributed to the uncertainty surrounding the future housing market, which, when coupled with Dodd-Frank regulations, has worked to reduce the availability of mortgage credit, particularly for traditionally riskier borrowers.

 

Source: U.S. Census Bureau; Based on Author’s Calculations and NBER

With tightened mortgage credit, high rates of foreclosure, and weak job and wage growth, many in this economic recovery turned to renting. Robust multifamily starts (buildings with more than 4 units) have helped push total starts back to their level at the start of the recession in December 2007. Yet single-family housing starts stand at only 84 percent of that level, now almost 90 months since the recession began. Botched initiatives to help struggling homeowners, the Dodd-Frank Act, and its failure to tackle housing finance reform have all contributed to the slow recovery of the housing market compared with the average post-war recovery.

Conclusion 

Dodd-Frank is now five years old, having imposed more than $24 billion in costs and 61 million paperwork burden hours. As time passes, the law becomes more expensive as regulatory agencies like the CFPB and FHFA grow to implement its burdensome rules. Meanwhile, small financial services firms continue to struggle as the law restricts the availability of financial products. With roughly 21 percent of the law not yet even proposed, one can only expect the costs to continue to rise.



[1] Author’s calculations of change in number of establishments from 2010 to 2014 in Financial Activities. Establishment level data by number of employees come from the Bureau of Labor Statistics, http://data.bls.gov/cgi-bin/dsrv?en

[2] Author’s calculations of change in number of employees from 2010 to 2014 in Financial Activities. Employment level data by number of employees come from the Bureau of Labor Statistics, http://data.bls.gov/cgi-bin/dsrv?en

[3] Savings Institutions Employment, Bureau of Labor Statistics, http://data.bls.gov/cgi-bin/dsrv?en

[4] Figures are based on data provided by Susan Dudley and Melinda Warren in “Regulators’ Budget Increases Consistent with Growth in Fiscal Budget: An Analysis of the U.S. Budget for Fiscal Years 2015 and 2016,” Regulator’s Budget, http://regulatorystudies.columbian.gwu.edu/sites/regulatorystudies.columbian.gwu.edu/files/downloads/2016_Regulators_Budget.pdf

[5] Note: The Federal Reserve made a technical change to the way consumer loans are calculated. This figure corrects a noticeable jump due to that technical change, but maintains overall levels based on their historical increases.   

  • Mandating paid family leave would cost businesses $7.1 billion to $14.2 billion annually

  • From 2006 to 2008, 4.7 percent of pregnant working women were fired, a rate much higher than the national average

Over the past decade, we have witnessed significant shifts in the United States workforce. For instance, computers and the Internet have been largely responsible for the distinct rise of a nontraditional workforce, in which an independent worker can make a living from home by selling his or her unique services to individuals and companies. Meanwhile, for the traditional salaried white-collar worker, the line between work and leisure is becoming increasingly blurred, as many now have the ability to work from home and can receive late-night emails from their bosses. The modern workweek is far more flexible than the rigid time sheets of the past.

Lost in these changes, however, is ensuring that working families maintain the flexibility necessary to have children and raise a family. In particular, in a competitive marketplace, how can we ensure that those who decide to go on maternity or paternity leave do not get left behind? The Family and Medical Leave Act of 1993 (FMLA) allows for up to 12 weeks of unpaid, job-protected leave for the birth of a child or a serious medical condition. Some have argued for expanding the number of people covered under FMLA and mandating paid leave for working mothers with newborn babies. While requiring paid leave may benefit some families, it would be harmful to those who lose their jobs because of the cost to employers. Rather than imposing a costly burden on all employers, it is important that policy keeps pace with the evolving modern workforce by embracing work/life flexibility and expanding it to working mothers and fathers.

This paper discusses a range of policy options that aim to enhance workplace flexibility for working families. We find that there is a crucial need to ensure that pregnant women have certain workplace protections, and that there are several incremental options for policymakers that do not constitute a costly federal mandate on paid leave for all employers.


Pregnant Women in the Labor Force

While the labor force continues to transition into a more flexible system, it appears that pregnant women are being left behind. AAF previously examined labor market data and found significant discrepancies between pregnant and non-pregnant workers.

First, we found that pregnant women are more likely to be let go from their jobs than other workers. According to a Census Bureau report, from 2006 to 2008, 4.7 percent of working women who were pregnant with their first child were fired.[1] That is drastically higher than the economy-wide average: Job Openings and Labor Turnover Survey (JOLTS) data reveal that during the same period only 1.4 percent of all employees in the country were let go from work. Even more striking, while the firing rate of all workers remained flat from 1996 to 2008, for first-time pregnant workers it more than doubled.[2]

 

Second, those who desire to return to the labor force soon after giving birth have a more difficult time securing a job. A separate Census Bureau report found that in 2012, women who had just given birth in the last year had an unemployment rate of 14.1 percent.[3] That was significantly higher than the 8.1 percent unemployment rate for the entire labor market in 2012.[4]

These labor market trends may contribute to the fact that women who gave birth in 2012 were almost twice as likely to be in poverty as Americans overall. In 2012, 27.9 percent of all women who gave birth within the previous 12 months were in poverty.[5] That is almost double the percentage of all Americans (15 percent) who were in poverty in 2012.[6]

While mandating paid leave would increase the income for pregnant women who take time off and stay employed, it would reduce the income for the thousands of workers who would lose their jobs because of the mandate. Just like raising the minimum wage and expanding overtime pay coverage, mandating paid family leave would add a significant cost to the labor market. Table 1 shows the estimated annual costs of a range of mandated paid family leave options.

Table 1: Cost of Paid Leave[7]

| Leave Length      | Pay Replacement | Cost          | FTE     |
|-------------------|-----------------|---------------|---------|
| All Workers       |                 |               |         |
| 6 weeks           | 100%            | $7.1 billion  | 146,684 |
| 6 weeks           | 50%             | $3.6 billion  | 73,342  |
| 12 weeks          | 100%            | $14.2 billion | 293,368 |
| 12 weeks          | 50%             | $7.1 billion  | 146,684 |
| Full-Time Workers |                 |               |         |
| 6 weeks           | 100%            | $5.9 billion  | 121,505 |
| 6 weeks           | 50%             | $2.9 billion  | 60,753  |
| 12 weeks          | 100%            | $11.8 billion | 243,011 |
| 12 weeks          | 50%             | $5.9 billion  | 121,505 |

If the federal government mandated that all female workers who give birth receive six weeks of paid time off and that their pay while on leave equal their regular weekly pay, we estimate it would cost businesses $7.1 billion per year. That is equivalent to the annual pay for about 147,000 full-time workers. In other terms, mandating six weeks of paid family leave could cost almost 147,000 full-time equivalents.

If the federal government mandated 12 weeks of paid family leave for every pregnant worker, it would cost employers roughly $14.2 billion. That translates to a reduction of about 293,000 full-time equivalents to afford paid family leave.
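The arithmetic behind these estimates can be sketched in a few lines. The implied average full-time salary here is a back-of-the-envelope figure derived from the paper's own numbers ($7.1 billion over 146,684 FTEs), not a value stated in the text.

```python
# Back-of-the-envelope check of the paid-leave cost estimates above.

def fte_equivalents(total_cost, avg_annual_pay):
    """Number of full-time salaries a given mandate cost could fund."""
    return total_cost / avg_annual_pay

# Implied average full-time pay, derived from the 6-week/100% row:
implied_pay = 7.1e9 / 146_684      # roughly $48,400 per full-time worker

# Doubling leave from 6 to 12 weeks at full pay doubles the burden:
cost_12_weeks = 2 * 7.1e9          # $14.2 billion
print(round(fte_equivalents(cost_12_weeks, implied_pay)))  # 293368
```

The same relationship explains why halving pay replacement (the 50% rows) halves both the cost and the FTE figures in Table 1.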

Moreover, it is possible that mandating paid leave could make these firing and unemployment trends even worse. Many of those who would be subject to the negative labor market consequences would likely be the very women that the policy is intended to help.

A better way to address these issues is through a combination of policies that enhance workplace protections for pregnant workers, increase workplace flexibility and leave options for new mothers and fathers (without burdening employers), and incentivize merit-based workforce practices. Here are nine options for accomplishing these goals.

1.    Strengthen Workplace Protections for Pregnant Women

An important way to help working families is to ensure that pregnant and non-pregnant workers are treated equally. The statistics discussed above suggest that despite existing protections in the Pregnancy Discrimination Act, pregnant workers may continue to face discrimination in today’s workplace. In particular, significant discrepancies between pregnant and non-pregnant workers are found when examining firing and unemployment rates.

One approach to address concerns of discrimination in the labor market is to amend the Pregnancy Discrimination Act to state more clearly that pregnant women in the labor market must be treated the same as other workers in their temporary ability or inability to work. This type of clarification could help reduce the rate at which pregnant workers lose their jobs. The standard should apply to both employees and applicants of a business so that pregnant women are also protected from discrimination in the hiring process.


2.    Working overtime to accrue time off rather than earn additional pay.

Another way to help working families is to allow workers to accrue paid time off from working overtime in lieu of additional pay.

Proposed legislation would allow businesses to offer their employees the option to take additional paid time off instead of time-and-a-half wages for working overtime. This approach does not force employees to take paid time off instead of cash wages. Instead, it would provide them with the option. With this option, workers who plan to take leave for a newborn (or for any other reason) would be able to accrue paid time off at a rate of 1.5 hours for each hour of overtime. The bill would allow employees to accrue up to 160 hours, or four 40-hour weeks, of paid time off. As a result, working families would be able to take advantage of paid leave while their employer would not be burdened with additional costs. This bill could be particularly helpful for hourly employees, who likely have the least access to paid family leave.
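The accrual mechanics described above can be sketched as follows; the function name and the way the 160-hour cap is applied are illustrative assumptions, not language from the bill.

```python
# Comp-time accrual as described in the text: 1.5 hours of paid time
# off per overtime hour, capped at 160 hours (four 40-hour weeks).

ACCRUAL_RATE = 1.5
CAP_HOURS = 160

def accrue_comp_time(banked_hours, overtime_hours):
    """Return the new comp-time balance after working overtime_hours."""
    earned = overtime_hours * ACCRUAL_RATE
    return min(banked_hours + earned, CAP_HOURS)

# A worker with 150 banked hours who works 10 overtime hours would earn
# 15 more, but the balance stops at the cap:
print(accrue_comp_time(150, 10))   # 160
```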

3.    Tax credits for employers offering at least 4 weeks of paid family leave.

Another legislative proposal would provide a refundable federal tax credit to businesses that offer at least four weeks of paid leave, with the credit equal to 25 percent of wages paid for every hour of paid leave. This proposal would come at a significant cost to the federal government, but would encourage the provision of paid family leave without mandating it nationally.

4.    General Leave Bank: Employers encouraged to allow workers to bank all unused personal and sick days, to be redeemed for paid leave.

In today’s labor market, many working men and women obtain paid leave to care for newborns through a combination of sick days, vacation days, and paid maternity (or paternity) leave. From 2006 to 2008, 50.8 percent of working women who became pregnant with their first child received some form of paid time off through a combination of maternity, sick, and vacation leave. In particular, 40.7 percent received paid maternity leave, 9.8 percent used sick leave, and 10.8 percent used paid vacation days.[8]

An effective way to expand access to paid family leave could be to formalize this process. In particular, the federal government can encourage employers to allow workers to bank all unused personal, vacation, and sick days to be used at a later date for paid family leave. In fact, Oxford Economics has found that $52.4 billion worth of paid vacation days go unused every year. For some workers, perhaps the best way to utilize those unused vacation days would be to save them for paid family leave. This system could apply to both salaried and hourly workers.

5.    Paid leave savings fund

Similar to the General Leave Bank, the federal government could enable businesses to offer workers the option to divert a portion of their pretax earnings to a paid leave savings account. This would be similar to a standard 401(k) retirement savings account, in which employees are able to invest a portion of their pretax earnings in a retirement savings fund that they can access upon retirement. For the paid leave savings fund, instead of accessing the savings during retirement, workers would be able to draw from them whenever they decide to take leave.

The benefit of the paid leave savings fund is that it would cost an employer nothing, unless the employer offers a matching program. Those who contribute to the fund as personal insurance but never end up taking family leave could simply cash out after a certain age or roll the savings account into their retirement plan.
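A rough sketch of how such a savings fund might work, by analogy to a 401(k). The 5 percent deferral rate, the biweekly pay schedule, and the function name are all hypothetical choices for illustration.

```python
# Illustrative mechanics of a paid-leave savings fund: pretax payroll
# deferrals accumulate in an account the worker can draw down whenever
# leave is taken, or roll over later if never used.

def defer(gross_pay, deferral_rate):
    """Split a paycheck into a pretax fund contribution and taxable pay."""
    contribution = gross_pay * deferral_rate
    return contribution, gross_pay - contribution

balance = 0.0
for _ in range(26):                          # 26 biweekly paychecks a year
    contribution, _take_home = defer(2_000, 0.05)
    balance += contribution

print(balance)  # 2600.0 saved after one year, available for leave
```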

6.    Liability coverage in exchange for meeting national best practices

Due to federal protections under a number of laws (such as the Fair Labor Standards Act (FLSA), National Labor Relations Act (NLRA), Family and Medical Leave Act (FMLA), and Pregnancy Discrimination Act (PDA)), employers are at risk of being sued by their workers for countless reasons. Due to these liability risks, businesses face significant litigation, legal insurance costs, and employment law attorney fees.

The establishment of national best practice standards and implementation guidelines that incorporate paid family leave policies, parental time, and other personnel policies could be useful. If the employer embraces these standards, the federal government could award a certain degree of labor liability coverage. The standards would be completely voluntary and the decisions for which employers would be awarded coverage could be determined by a national trade association or an experienced board of professionals in the Department of Labor. The employers who embrace these standards would benefit from increased liability coverage (and hopefully lower litigation costs) and their workers would benefit from better family workplace policies, such as paid family leave with clear and consistent protocols.

7.    Recognize Family-Friendly Employers Nationally

To promote and encourage employers of all sizes and industries who have established consistent family-friendly policies, the Department of Labor could work with experienced ranking publications or consultants to recognize companies with exceptional leave policies. By recognizing companies in various sectors who provide helpful policies for all employees, competitors may have an incentive to increase their own standards and achieve similar recognition. Short of a mandate, tax credit, or liability waiver, this is a simple way to acknowledge the efforts of employers and raise the standard in a voluntary and competitive way.

8.    Federal workforce focus: pay for performance, promotion based on merit

The federal government could improve the workplace environment for working women and parents by promoting a performance-based system that determines pay and promotions based on merit, not seniority. And the government could do this by shaking up its own workplace policies. With 4.2 million workers, the federal government is the nation’s largest employer. However, the government does not often make hiring, promotion, and pay decisions based on skills, contributions, and experiences. Rather, it uses a rigid bureaucratic GS-level system, in which workers get promotions based on how long they have been working for the government, not the quality of their work. As a result, the federal government is rife with a “jobs for life” mentality that results in only 0.18 percent of federal workers being let go every year and makes it extraordinarily difficult for a young working professional to move up the organization, regardless of his or her talents.

Perhaps the best way to increase female pay within the federal government would be to instill a competitive system that promotes workers based on merit and applies equally to men and women. In this system, pay increases would not be automatic and based on time spent in service. Instead, pay hikes would be based on actual contributions to the government and public. Promoting based on performance will help even out opportunity for men and women in the government and bring accountability and integrity back to federal hiring and personnel practices. Policies that promote the professional advancement of qualified employees—be they for women or men—benefit families, promote value to the taxpayer, and keep quality employees in the federal system.

9.    Changes to House and Senate staff leave policies

Finally, if federal policymakers desire to increase workplace flexibility and access to paid family leave, then our elected members of Congress should lead by example. Congress has a long history of ad hoc leave policies for its own staffers, which has created a wide disparity among offices. Before enforcing further leave requirements on private sector employers, Members should consider applying the same standard to Capitol Hill employees who are not federal GS-level workers. Step one would be setting a paid leave “floor,” or an otherwise consistent vacation and sick leave transfer policy for Capitol Hill staff, so that all Legislative Branch staffers have access to a consistent, minimum length of paid family leave.



[1] “Maternity Leave and Employment Patterns of First-Time Mothers: 1961-2008,” Census Bureau, October 2011, https://www.census.gov/prod/2011pubs/p70-128.pdf

[2] JOLTS data obtained from the Bureau of Labor Statistics, http://www.bls.gov/data/

[3] Lindsay M. Monte & Renee R. Ellis, “Fertility of Women in the United States: 2012,” Census Bureau, July 2014, http://www.census.gov/content/dam/Census/library/publications/2014/demo/p20-575.pdf

[4] Bureau of Labor Statistics, http://www.bls.gov/data/

[5] Lindsay M. Monte & Renee R. Ellis, “Fertility of Women in the United States: 2012,” Census Bureau, July 2014, http://www.census.gov/content/dam/Census/library/publications/2014/demo/p20-575.pdf

[6] Carmen DeNavas-Walt, Bernadette D. Proctor, & Jessica C. Smith, “Income, Poverty, and Health Insurance Coverage in the United States: 2012,” Census Bureau, September 2013, https://www.census.gov/prod/2013pubs/p60-245.pdf

[7] Figures based on author’s analysis of Survey of Income and Program Participation (SIPP) data.

[8] “Maternity Leave and Employment Patterns of First-Time Mothers: 1961-2008,” Census Bureau, October 2011, https://www.census.gov/prod/2011pubs/p70-128.pdf

Summary

  • Ensuring low-income individuals and families adequate access to transaction services is a perennial policy goal, whether that access is traditional banking (checking accounts) or newer innovations like prepaid cards.

  • The Durbin Amendment to the Dodd-Frank Act raised the cost of checking accounts and associated debit cards, which led to the unintended consequence of dramatically limiting free checking in the United States.

  • Low-income individuals and families migrated away from now-expensive checking accounts toward prepaid cards and other services. Recently proposed Consumer Financial Protection Bureau (CFPB) rules would likely eliminate these options as well.

The Durbin Amendment and the Demise of Free Checking Accounts

New technology, like Apple Pay and other forms of contactless payment, is reigniting old arguments over the Durbin Amendment. For anyone who may have forgotten about this small yet vastly consequential regulation, the Durbin Amendment to the Dodd-Frank financial reform law set a limit on interchange fees charged to retailers when a customer makes a purchase using a debit card. These so-called “Durbin fees” are generally less than two percent of the total purchase price, and the fee revenue is split between the card-issuing bank, the card company (Visa, MasterCard, etc.), and the retailer’s merchant account provider. Durbin argued that limiting these fees would reduce retailers’ costs and that those savings would be passed on to consumers.

In its wake, among other things, retailers promised to lower the costs of essentials, and hotels promised to charge lower rates to customers using debit cards. Not surprisingly, those promises never came to fruition. In fact, Dodd-Frank author Barney Frank, an unexpected opponent of the Durbin Amendment, warned that “[he] believe[d] that a free market approach in this area will be better for the economy and all concerned parties…”

Frank called for the repeal of the Durbin Amendment in 2011, citing “unintended consequences for consumers.” The unintended consequences are costs being shifted from those customers who use bank cards to everyone with a bank account, thereby increasing costs for everyone and pricing many low- and middle-income bank customers out of maintaining a bank account.

To put it in practical terms, banks previously relied on revenues from interchange fees to cover the costs related to debit card operations. Since the Durbin Amendment took effect, banks have experienced losses between $6.6 billion and $8 billion annually and are no longer able to cover the costs of debit card operations through interchange fees. As a result, banks are increasing fees to consumers and drastically cutting down on services, like checking accounts, that were previously free. One study shows that in 2009, before Durbin, 76 percent of all bank accounts were free, charging no usage fee to the consumer. By 2013, however, a mere 38 percent of bank accounts were offered free of charge.

Banks also began requiring higher minimum balances on their free or low-fee accounts as well as charging higher card replacement and other administrative fees. JPMorgan Chase estimated that these increased minimum balances and ensuing fees would result in as many as five percent of its banking customers getting pushed out of the banking system. This five percent drop is reasonable considering that Durbin has mainly impacted individuals and families at the lowest end of the income spectrum.

 

Low-Income Families and Loss of Access to Free Checking

Even taking a conservative view of JPMorgan’s five percent estimate, a 2011 Federal Reserve consumer impact study said, “We would expect that a significant portion of the customers that would abandon checking accounts would be lower-income households since those are the ones most likely not to be able or willing to pay for the more expensive accounts. To get an understanding of the potential significance of these closures we note that a one percent decline in checking accounts would result in the loss of checking access for roughly 1 million households; an increase in the number of households by 1 million would increase the percent of unbanked individuals by 12 percent.”

 

The Growing Use of Alternative Financial Services

As a result, these unbanked and underbanked households are forced to turn to alternative financial services (AFS) to carry out their day-to-day transactions. AFS may be divided between transactional AFS and credit AFS, the former representing non-bank money orders, non-bank check cashing, and non-bank remittances, and the latter representing payday loans, pawn shops, rent-to-own stores, and refund anticipation loans. According to a 2012 Federal Deposit Insurance Corporation (FDIC) study on unbanked and underbanked households, nearly two-thirds (64.9 percent) of unbanked households have used at least one AFS product in the last year and close to half (45.5 percent) have used AFS in the last 30 days. Over 60 percent (62.1 percent) have used a transactional AFS product in the last year, while 16.8 percent have used an AFS credit product. Only 29.5 percent of unbanked and underbanked households have not used any AFS in the past year, which points to their reliance on cash usage and other informal transactions. Among unbanked and underbanked households that have used AFS, 61.2 percent have used it within the past 30 days, which suggests that these households rely on AFS regularly and often.

The average income of the more than 90 million underbanked Americans is $25,500, less than half of the median income in the United States. Of that $25,500, the underbanked population spends about 10 percent each year – approximately the same amount they spend on food – simply gaining access to their funds through AFS. In recent years in an effort to avoid both the fees and minimum balances from traditional bank accounts as well as the high fees and services charges related to AFS, unbanked and underbanked households have increasingly been turning to prepaid debit cards and payroll cards. In fact, the Federal Reserve found that they are the fastest growing form of non-cash payment.

Regulating – and Limiting Access to – Alternative Financial Services

Unfortunately for those households who rely on prepaid cards as an alternative to bank accounts, the Consumer Financial Protection Bureau (CFPB) is proposing new regulations that would have a significant impact on the availability of prepaid cards as well as the benefits they bring. In the 870-page proposal on prepaid cards, including 156 pages of actual proposed rules, the CFPB seeks to mandate new disclosures on behalf of the card companies, new error resolution procedures, consumer liability limits for unauthorized transactions, fee limits, and added requirements for cards with overdraft or credit features.

The proposed rule covers “prepaid accounts,” defined as “a card, code, or any other device that is capable of being loaded with funds, is not otherwise an account under [the proposed] Regulation E (such as a deposit account), and is redeemable upon presentation at multiple unaffiliated merchants for goods or services, or usable at either ATMs or for person to person transfers.” For those accounts, the proposed rule would require a short-form and long-form fee disclosure along with other information at the time a consumer acquires a prepaid account. Issuers also would be required to deliver statements and both electronic and paper account histories dating back at least 18 months. Additionally, the rule proposes to treat overdraft and credit extension features on prepaid cards as credit, thus subjecting these prepaid accounts to the Credit Card Accountability Responsibility and Disclosure (CARD) Act requirements, as well as to CFPB’s vast authority over any cards with credit features as it applies to their reporting, marketing, and disclosure rules. The rule also sets standards for error resolution and limits consumer liability.

Costs of Regulation and the Impact of Cost-Shifting

Buried 700 pages into the proposed rule, CFPB admits the hefty time and dollar burden it believes will come as a result of the new requirements. For Regulation E alone, CFPB estimates a one-time burden of 35,398 hours and an ongoing, annual burden of 10,376 hours for prepaid card providers. Even at a conservative $33 per hour, that’s a one-time compliance cost of $1,168,134 and an annual compliance cost of $342,408 to these prepaid card companies. And that’s just one regulation. The proposal also estimates costs of $17 million simply to dispose of and replace the prepaid cards currently in stores in order to comply with the pre-acquisition disclosure requirements. If we’ve learned anything about government-mandated costs to providers of financial services, it’s that they will, in one way or another, get passed on to consumers. It’s hard to imagine that these would be any different.
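The compliance-cost arithmetic above is easy to reproduce: burden hours from the proposal priced at the assumed $33-per-hour labor rate.

```python
# Pricing the Regulation E burden-hour estimates at the text's assumed
# conservative labor rate of $33 per hour.

HOURLY_RATE = 33          # loaded labor cost assumed in the text

one_time_hours = 35_398   # one-time burden, per the CFPB proposal
annual_hours = 10_376     # recurring annual burden

print(one_time_hours * HOURLY_RATE)  # 1168134 (one-time)
print(annual_hours * HOURLY_RATE)    # 342408 (per year thereafter)
```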

In its 46-page comment letter submitted in response to CFPB’s proposal, the American Bankers Association (ABA) raises several concerns, most of which center on the proposal’s treatment of overdrafts on prepaid cards. Among other problems, the proposal would treat overdrafts as an extension of credit, which categorically violates the Truth in Lending Act (TILA). In short, TILA conveys a right to defer payment to credit account holders, but anyone who overdrafts on a prepaid card is not afforded such a right even though CFPB seeks to equate those overdraft allowances with credit extensions. CFPB is attempting to paint prepaid cards with a broad brushstroke, and instead should be defining these prepaid accounts more narrowly. Without a narrower definition, the proposal would effectively ban overdraft services or credit extension on the cards because the card issuers would be faced with insurmountable compliance costs and risks.

ABA’s summary closing paragraph is a compelling argument – one supported by both numbers and precedent: “In short, the cumulative operation and compliance costs and risks of the proposal will significantly hinder banks’ ability to offer prepaid cards. The result will be the suppression of a promising option to move people without bank accounts into financial products offered by insured depository institutions. Thus, rather than creating guardrails, the proposed overdraft services treatment, coupled with other onerous provisions, will create regulatory potholes and barriers to needed and valuable financial products.”

Conclusion

It’s bad enough that millions of Americans are left unbanked or underbanked as a direct result of government action through the Durbin Amendment, but now CFPB wants to pull the last rug out from under their feet and impose high regulatory costs that will result in higher fees for consumers or constrict their options. Prepaid card companies and their users were doing just fine (apart from being forced to use prepaid cards as a bank alternative) without CFPB’s intervention. CFPB should be reminded that if it ain’t broke, don’t fix it.

  • At $25 billion, greenhouse gas regulations' costs surpass even cap-and-trade!

  • The Obama admin's greenhouse gas regulations have topped $25 billion in annual costs

Recently, the Environmental Protection Agency (EPA) released its second round of greenhouse gas (GHG) standards for heavy-duty engines and trucks. The $30 billion regulation comes on the heels of a 2011 regulation that also addressed fuel efficiency and imposed $8.1 billion in long-term costs. EPA’s patchwork of GHG rules under the Clean Air Act is quickly growing into a regulatory behemoth. 

Waxman-Markey, the cap-and-trade bill that narrowly passed the House in 2009 but stalled in the Senate, would impose tremendous burdens on states and private entities. However, there was a real concern at the time that even that flawed bill was preferable to EPA’s ad hoc, point-source by point-source approach to GHG regulation. With several years of EPA regulation as a baseline, it’s likely that the regulatory approach imposes more economic burdens than even Waxman-Markey. EPA’s GHG rulemakings could impose at least $25.1 billion in annual economic costs, omitting Department of Energy rulemakings and the most recent proposal for trucks. Waxman-Markey would have imposed roughly $22 billion in unfunded private sector and local government burdens. 

Waxman-Markey

H.R. 2454 was a massive regulatory overhaul of the energy and manufacturing sectors. The 1,428-page bill would have added roughly $845 billion in additional taxes during a ten-year period and more than $820 billion in spending during the same time. To date, EPA’s GHG regulation has consumed more than 2,100 pages of regulatory text. The Congressional Budget Office (CBO) analyzed the regulatory implications of Waxman-Markey as part of its obligations under the Unfunded Mandates Reform Act in 2009.

This CBO analysis was able to quantify and monetize some of the unfunded mandates placed on the private-sector and local government from Waxman-Markey:

  • Cap-and-Trade Program for Greenhouse Gases: CBO estimated that this program, where facilities submitted allowances for carbon dioxide emissions, would cost “tens of billions annually” for private-sector entities and roughly $1 billion annually for local governments. Estimated total cost: $21 billion.
  • Reporting Requirements: this would have required entities to report information on greenhouse gas emissions. CBO estimated the cost at $50 million annually, but EPA already required some reporting. Estimated total cost: $50 million.
  • Carbon Capture and Sequestration: Estimates here ranged up to $175 million for public entities and $925 million for private companies. Estimated total cost: $1.1 billion.
  • Performance Standards for Coal Plants: There was a lingering question about whether coal plants would opt for carbon capture and storage technology, so CBO wrote, “the cost of the mandate is uncertain.”
  • Emission Reduction Standards: Since these measures depended on further action from EPA, the cost here was uncertain.
  • Limitations on Transactions in Commodities: The bill imposed mandates on “position limits” for energy commodities. Again, CBO had no basis for quantifying this impact.
  • Combined Energy Efficiency and Renewable Electricity Standard: CBO actually estimated the cost for this would be relatively small and did not monetize the figure.
  • Hydrofluorocarbon (HFCs) Restrictions: This would have applied to private entities only, establishing a cap-and-trade system for HFCs, a potent greenhouse gas. CBO estimated a cost of $600 million in the first year. Estimated total cost: $600 million.
  • The bill also called for lighting and appliance efficiency standards, in addition to heightened motor vehicle standards, but because they were dependent on future regulation action, CBO did not quantify them.

Total: At least $22.7 billion 
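Summing the CBO estimates that could be monetized confirms this floor; the items CBO left unquantified (coal performance standards, position limits, and so on) are omitted, so this is a lower bound rather than a full total.

```python
# Summing the monetized CBO mandate estimates from the list above,
# in billions of dollars per year.

monetized = {
    "cap-and-trade (private + local govt)": 21.0,
    "reporting requirements":               0.05,
    "carbon capture and sequestration":     1.1,
    "HFC cap-and-trade (first year)":       0.6,
}

floor = sum(monetized.values())
print(f"At least ${floor:.2f} billion")  # At least $22.75 billion
```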

As noted, there were several provisions that would have imposed additional costs, but CBO was unable to monetize these estimates. Regardless of the total amount, consider that in 2013 the total net present value of all final regulation was $29.4 billion, or approximately $9.4 billion annually. Waxman-Markey would have easily imposed two years’ worth of regulation from just one bill. Despite its scope and cost, recent EPA regulations are actually more expensive and never received a single “yes” vote in the House or Senate.

EPA Regulation

It’s easy to forget that during Senate debate over Waxman-Markey, many were eager for a legislative solution as opposed to an “impractical” regulatory approach from EPA. Although EPA was essentially ordered to regulate GHGs, there were plenty of glaring legal and implementation hurdles. Below is just a sample of commenters’ unease about using the Clean Air Act, an invention of the 1970s, to regulate carbon.

  • Mother Jones called EPA regulation “an imperfect” tool to address GHGs, compared to a legislative approach.
  • Resources for the Future, a consortium of center-left environmental economists, wrote, “EPA action under the act is a clear second-best option to new legislation from Congress, especially over the long term.”
  • Columbia Law School’s Center for Climate Change Law argued EPA’s regulatory path suffered from “inconclusive legal precedent and practical limitations.” 

After Democrats and Republicans rejected the administration’s legislative approach, the executive branch has imposed its own “top-down” approach ever since. In many ways, it resembles the broad approach of Waxman-Markey, without the legal certainty.

EPA has finalized at least 11 rules covering GHGs, including measures for reporting, two major rounds of efficiency standards for cars, and a rule on fuel efficiency for heavy-duty trucks and engines. The 2017-2025 standards for cars and light trucks are among the most expensive rulemakings in U.S. history, with $10.8 billion in annual costs and more than $150 billion in long-term burdens. Below is a snapshot of the notable GHG final rules from EPA to date.

| Rule                                   | Annual Cost   | Paperwork Hours    |
|----------------------------------------|---------------|--------------------|
| 2017-2025 Vehicle Efficiency Standards | $10.8 billion | 5,667              |
| 2012-2017 Vehicle Efficiency Standards | $4.9 billion  | 39,940             |
| Heavy-Duty Truck Efficiency Standards  | $600 million  | 58,064             |
| Reporting of GHGs for Natural Gas      | $22 million   | 396,474            |
| Reporting of GHGs from Manufacturing   | $5.5 million  | 981,032            |
| Totals                                 | $16.3 billion | 1.48 million hours |

These are only the current final measures, however, and looming on the regulatory horizon are some of the most expensive measures yet. No one expects the administration to scuttle its GHG standards for new and existing power plants, for example. At $8.8 billion in annual costs, those standards would push the administration’s GHG agenda to roughly $25.1 billion in annual costs and two million paperwork burden hours, arguably more than Waxman-Markey.
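The tally behind these figures can be reproduced with simple arithmetic. The numbers below come from the table and text above; this is a back-of-the-envelope check, not an official estimate:

```python
# Annual costs (billions of dollars) of final EPA GHG rules, from the table above
final_rules = {
    "2017-2025 Vehicle Efficiency Standards": 10.8,
    "2012-2017 Vehicle Efficiency Standards": 4.9,
    "Heavy-Duty Truck Efficiency Standards": 0.6,
    "Reporting of GHGs for Natural Gas": 0.022,
    "Reporting of GHGs from Manufacturing": 0.0055,
}

finalized = sum(final_rules.values())      # total for finalized rules
with_power_plants = finalized + 8.8        # add the pending power plant standards

print(f"Finalized rules: ${finalized:.1f} billion")              # $16.3 billion
print(f"With power plant standards: ${with_power_plants:.1f} billion")  # $25.1 billion
```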

The above figure also excludes new measures or those without public cost estimates. The administration’s second round of efficiency standards for heavy-duty trucks will cost $30 billion. In addition, EPA has signaled a willingness to go after methane emissions from fracking, even though fracking emissions are already regulated by EPA and the Department of the Interior regulates possible groundwater concerns. Not to rest on its laurels, EPA will soon begin regulating aircraft emissions, despite annual efficiency improvements of one to two percent and the incredible incentives that airlines already have to reduce fuel consumption.

Don’t Forget about DOE

Buried in the morass of Waxman-Markey was a call for greater energy efficiency measures. Although the Department of Energy (DOE) was already active on this policy front, the pace has accelerated since the demise of a legislative solution to climate change.

The chart below details the number of “major” DOE rules that the Office of Information and Regulatory Affairs (OIRA) has approved from 2007 to 2014, with the corresponding net present value (NPV, unadjusted for inflation) of the published cost of all DOE measures.

As the chart displays, DOE has imposed substantial burdens on the manufacturing sector, and ultimately on consumers, who must eventually pay higher prices. The above figure even excludes two significant final rules from 2015. The agency is now averaging 3.25 major regulations annually since 2007 (compared to five a year from EPA). The eight major DOE rules approved in 2014 set a record, according to OIRA, and no slowdown appears to be pending. The latest Unified Agenda outlined 11 new major rules from DOE that could be completed before 2016. For comparison, the Clinton Administration approved just six major DOE measures during its eight years in office. Adding the annualized cost of DOE measures since 2010 ($8.2 billion) to EPA’s tally of $25.1 billion yields $33.3 billion in annual burdens.
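The averages and totals in this paragraph follow from the counts in the text. (The 26-rule figure over 2007-2014 is implied by the stated 3.25-per-year average; this is a sketch, not OIRA’s own tabulation.)

```python
doe_major_rules_2007_2014 = 26   # implied by the 3.25/year average over 8 years
years = 2014 - 2007 + 1          # 8 years, inclusive

avg_per_year = doe_major_rules_2007_2014 / years   # major DOE rules per year

# Combined annualized burden ($ billions): DOE since 2010 plus EPA's GHG tally
combined = round(8.2 + 25.1, 1)

print(avg_per_year, combined)  # 3.25 33.3
```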

Conclusion

Regardless of what Waxman-Markey would have looked like in practice (and it probably would not have been pretty), after five years of regulating it is clear that the piecemeal regulatory approach to GHGs is even uglier. At least $25 billion in annual burdens is essentially an entire year’s worth of regulation, and EPA has devoted these burdens to a single policy problem, with the promise of more regulation in the future. 

  • The new overtime pay regulations will only impact 3 million workers

  • 69 percent of the workers impacted by the rule change are in families with incomes at least three times the poverty threshold

  • 67.3 percent of those impacted by the expansion in overtime pay are in families with two or more workers

Executive Summary

American Action Forum (AAF) examines the impact of the recently announced Department of Labor (DOL) rule that will expand the number of workers eligible for overtime pay. We find that expanding overtime pay requirements would impact very few workers and would minimally affect those at the lower end of the income distribution:

  • The new overtime pay regulations will only impact 3 million workers.
  • 69 percent of workers impacted by the rule change are in families with incomes at least three times the poverty threshold.
  • 67.3 percent of those affected by the expansion in overtime pay are in families with two or more workers.

And this quick analysis is the good news: companies with fixed payrolls will have to pay these new overtime costs with money that would otherwise be spent on other employees and on growing their business.

Introduction

Yesterday evening, the White House announced important details for a long-awaited Department of Labor (DOL) rule change that will expand who is eligible for overtime pay. In particular, we now know that DOL will more than double the threshold for exempting salaried workers from overtime pay, from the current $455 per week to $970 per week. This means that the salaried workers who are entitled to receive time-and-a-half pay for working over 40 hours per week will expand from those earning below $23,660 per year to those earning below $50,440 per year. AAF previously examined the implications of changes in the overtime rule for a range of options. Now that we know the exact change in the salary threshold, we can more precisely examine the workers who will be impacted by the rule change and how successful it will be as a tool to increase incomes and fight poverty.
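The threshold arithmetic can be verified, and the time-and-a-half computation sketched, in a few lines. (The $800 salary and 45-hour week below are hypothetical, and treating salary divided by 40 as the implied hourly rate is a simplification of actual FLSA regular-rate rules.)

```python
OLD_WEEKLY, NEW_WEEKLY = 455, 970

# Annualized thresholds over 52 weeks
assert OLD_WEEKLY * 52 == 23_660   # old exemption cutoff per year
assert NEW_WEEKLY * 52 == 50_440   # new exemption cutoff per year

def weekly_pay(salary: float, hours: float) -> float:
    """Weekly pay for a covered salaried worker, assuming the implied
    hourly rate is salary / 40 and hours beyond 40 earn 1.5x."""
    overtime_hours = max(0.0, hours - 40)
    return salary + 1.5 * (salary / 40) * overtime_hours

# A hypothetical worker earning $800/week (exempt under the old $455
# threshold, newly covered under the $970 threshold) who works 45 hours:
print(weekly_pay(800, 45))  # 950.0
```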

The Workers Impacted by the Overtime Pay Expansion

By raising the salary threshold from $455 to $970 per week, the DOL is effectively expanding overtime pay eligibility to all of the salaried workers whose pay falls within that range. How many salaried workers earn between $455 and $970 per week? And, how many actually work more than 40 hours per week and could benefit from the rule change?

First, let’s examine the number of salaried workers who earn between $455 and $970 per week, which is illustrated in Table 1.

Table 1: Salaried Workers who Earn Between $455 and $970 per Week

Earnings        Percent    Number
$455 to $970    33.6%      17.9 million

Using 2013 data from the Survey of Income and Program Participation, AAF estimates that 33.6 percent of all salaried workers earn between $455 and $970 per week. This means that about 17.9 million workers earn salaries and are paid between $455 and $970 per week.

While 17.9 million people may seem like a lot, those salaried employees will only benefit from the rule change if they work more than 40 hours per week. We find that only 17 percent of the 17.9 million workers actually work overtime and could earn time-and-a-half pay. As shown in Table 2, this means that the total impact of the DOL rule change will be quite limited.

Table 2: Total Workers Impacted by DOL Rule Change

Earnings        Percent    Number
$455 to $970    5.7%       3.0 million

In total, only 5.7 percent of all salaried workers earn between $455 and $970 per week and work overtime. As a result, the overtime pay rule change will only impact about 3 million workers.
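A quick multiplication confirms how the 33.6 percent and 17 percent figures combine into the totals above:

```python
salaried_share = 0.336     # salaried workers earning $455-$970 per week
overtime_share = 0.17      # of those, the fraction working over 40 hours
workers_in_band = 17.9e6   # ~17.9 million workers in the earnings band

impacted_share = salaried_share * overtime_share   # share of all salaried workers
impacted_workers = workers_in_band * overtime_share

print(f"{impacted_share:.1%}")                   # 5.7%
print(f"{impacted_workers / 1e6:.1f} million")   # 3.0 million
```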

Income and Family Characteristics of Workers Impacted by Expanded Overtime Pay Coverage

One of the main arguments for raising the salary threshold is that it would be an effective way to boost incomes for lower- and middle-income families. An examination of the data, however, reveals that those impacted by the rule change are not necessarily those in need of the most assistance.

Table 3: Family Income Levels of Workers Impacted by Expanded Overtime Pay Coverage

Poverty Level    Percent
1x or less       0.6%
1 to 3x          30.4%
3 to 6x          53.3%
6x or greater    15.7%

As shown in Table 3, only 0.6 percent of the workers impacted by the rule change are in poverty. Moreover, 69 percent are in families with incomes over triple the poverty threshold (at least $72,750 for a family of four). Clearly, raising the salary threshold is not an efficient way to assist low- and middle-income families, as a large portion of the impacted workers are in families with fairly high incomes.
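The 69 percent figure is the sum of Table 3’s top two income bands, and the dollar cutoff is triple the poverty threshold. (The $24,250 family-of-four guideline below is inferred from the text’s $72,750 figure, not stated directly in this piece.)

```python
# Shares of impacted workers by family income relative to poverty (Table 3)
shares = {"1x or less": 0.6, "1 to 3x": 30.4, "3 to 6x": 53.3, "6x or greater": 15.7}

above_3x = round(shares["3 to 6x"] + shares["6x or greater"], 1)  # percent at 3x+
family_of_four_guideline = 24_250   # assumed poverty guideline, family of four
cutoff = 3 * family_of_four_guideline

print(above_3x, cutoff)  # 69.0 72750
```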

It is important to understand that a family’s well-being depends not only on the earnings of a single person, but also on the earnings of all family members. As a result, the working families most in need tend to be those with only one worker. When it comes to the expanded overtime pay rule, however, in most cases the workers impacted are not the sole earners in their families. Table 4 reveals the number of workers in families with an impacted worker.

Table 4: Number of Workers in Families Impacted by Expanded Overtime Pay Coverage

Workers     Percent
1           32.7%
2 and up    67.3%
3 and up    16.2%
4 and up    4.1%
5 and up    1.1%

In the majority of cases, those who earn between $455 and $970 per week and work overtime are the second or third earner in the family. In particular, 67.3 percent of impacted employees are in families with two or more workers. Meanwhile, only 32.7 percent of the workers impacted by the rule change are the only workers in their respective families.

Finally, the majority of workers impacted by the rule change do not have dependent children.

Table 5: Number of Children in Families with Workers Impacted by Expanded Overtime Pay Coverage

Children    Percent
0           63.6%
1           14.5%
2           14.8%
3           4.8%
4           1.9%
5           0.2%
6           0.2%

Table 5 reveals that 63.6 percent of the salaried workers who will be impacted by the salary threshold increase have no children. Meanwhile only 14.5 percent care for one child, 14.8 percent care for two, and 4.8 percent care for three. This indicates that the DOL’s planned overtime pay expansion will help very few children.

Conclusion

It is clear that the DOL’s impending rule to expand overtime pay coverage will do little to assist lower- and middle-income workers and families. In particular, the rule will only affect about 3 million workers. Moreover, the rule will do little to help those who are actually struggling. Among the 3 million workers impacted, 69.0 percent are in families with incomes at least three times the poverty threshold. Meanwhile, 67.3 percent are in families with two or more workers, and 63.6 percent have no dependent children. Instead of imposing regulatory burdens to help only a few people, policymakers need to address the root causes of stagnant wages: lackluster economic growth and a troubled labor market. Expanding overtime pay coverage addresses neither.


Summary

  • Lifting the ban on U.S. crude oil exports could result in the displacement of over one-third of Russia’s exports to Eastern Europe alone.
  • A bill was recently introduced in the U.S. Senate that would lift the crude oil export control ban. This could account for more than $21 billion per year in gross export revenue.

Introduction

Crude oil can easily be labeled one of the most prized commodities, and it has the potential to give the United States more global leverage than almost any other asset in its possession. However, since 1975 the U.S. has banned crude oil exports, a policy adopted in response to the 1973 Organization of the Petroleum Exporting Countries (OPEC) oil embargo. Since then, the United States has been hesitant to utilize this prized resource and is to date the only country in the world that does not allow the export of crude oil.

In the past decade, lackluster domestic demand, unrestricted petroleum product exports, and rapidly rising crude oil production have driven oil imports down and product exports up. These trends, coupled with national security concerns, have heightened the attention paid to the oil export ban debate.

The following data show that the oil export ban has hindered domestic growth and has created an international market barrier that is in desperate need of being removed.

From Russia with No Love

If the crude oil export ban were lifted tomorrow, U.S. supplies headed to domestic refineries could be rerouted and placed on ships almost immediately for export overseas. This quick reaction would have immediate consequences for the Russian crude market, which supplies most of Eastern Europe with its crude needs. As evidenced in the chart below, Eastern Europe dominated Russia’s crude and condensate exports in 2012, accounting for over 3 million barrels per day.

In 2013, Russia received almost four times as much revenue from exports of crude oil and petroleum products as from natural gas. That same year, roughly 33 percent of Russia’s gross export sales were to Europe. According to EIA, crude oil exports alone were greater in value than all of Russia’s non-oil and natural gas exports.[1]

 

Methodology

U.S. crude exports alone will not bring the Russian crude market to its knees. However, bringing a large amount of U.S. crude exports on line will have a significant impact on the market.

A country like Poland, which gets 96 percent of its oil imports from Russia, can narrow the margins on those imports, not entirely, but enough to loosen the stranglehold that is currently in place [2].

The U.S. has an estimated capacity to export between 500,000 and 1 million barrels of crude per day. This could reduce Eastern Europe’s dependence by nearly one-third.

The chart below examines the price loss that Russia would see based on the amount of crude oil the U.S. would be able to supply (in barrels per day). Averaging EIA’s predicted 2015 Brent crude price of $61.00 with its predicted 2015 WTI price of $55.35[3] yields $58.17 per barrel in today’s market. At 1,000,000 bbl/d, this translates to a loss of almost $58.2 million per day, or over $21 billion (gross) per year. At 750,000 bbl/d, the loss would be around $44 million per day, or $15.9 billion per year. At the low end of the spectrum, 500,000 bbl/d would mean a loss in the arena of $29 million per day, or $10.6 billion per year. These numbers do not reflect the positive U.S. domestic implications that would come along with these hefty price tags. (These estimates are, of course, subject to change, as oil prices are highly volatile.)
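The revenue arithmetic can be sketched as follows, using the EIA price forecasts cited above. This is a rough estimate; realized prices and export volumes would of course vary.

```python
BRENT_2015 = 61.00   # EIA predicted 2015 Brent price, $/bbl
WTI_2015 = 55.35     # EIA predicted 2015 WTI price, $/bbl

blended = (BRENT_2015 + WTI_2015) / 2   # ~$58.17 per barrel

def annual_gross(bbl_per_day: float) -> float:
    """Gross annual export revenue in billions of dollars."""
    return blended * bbl_per_day * 365 / 1e9

for volume in (500_000, 750_000, 1_000_000):
    print(f"{volume:>9,} bbl/d -> ${annual_gross(volume):.1f} billion/year")
```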

Economic Benefits of Exports

The Energy Information Administration (EIA) is piecing together a larger analysis that includes an ongoing economic analysis covering chemical characteristics, the causes of gasoline prices, the cost of refinery upgrades to process light crude, and other factors. The full analysis is set to be released in late June or early July, and its findings will allow EIA to take a position on lifting the ban. EIA has not forecasted what the production increase would be, and it is unclear whether those figures will be released with the larger analysis this summer.

One study released by the Brookings Institution stated that lifting the ban could contribute between $600 billion and $1.8 trillion to the U.S. economy and save drivers up to 12 cents a gallon.

Why is the U.S. Importing?

According to the EIA, the United States imported approximately 9 million barrels per day of petroleum, of which about 80 percent was crude oil. The United States is still importing crude today due to a “misalignment” between what domestic refineries are able to process and what producers are supplying. The United States is producing light sweet crude; domestic refineries, however, are set up to handle heavy crude, hence the imports. Some refineries are adjusting how they refine their product, but retrofitting to process light crude is complicated and expensive, as evidenced by the fact that a new refinery has not been built in the United States in the last thirty years.

Lifting the crude export control ban would allow for the inclusion of additional markets for U.S. light sweet crude.

Legislative Efforts

On May 12, 2015, Senators Murkowski and Heitkamp introduced the “Energy Supply and Distribution Act of 2015” to repeal the oil export restriction and promote the efficient exploration, production, storage, supply, and distribution of energy resources, allowing exports without a federal license to countries not subject to U.S. sanctions.

Section 13 of the bill specifically states:

“Notwithstanding any other provision of law, to promote the efficient exploration, production, storage, supply, and distribution of energy resources, any domestic crude oil or condensate (other than crude oil stored in the Strategic Petroleum Reserve) may be exported without a Federal license to countries not subject to sanctions by the United States.”

Conclusion

In an effort to increase domestic production and support our allies abroad, the economic data argue for an end to the crude export control ban. Similarly, global power balance concerns support lifting the ban and loosening the stranglehold that hostile countries such as Russia have on large portions of Eastern Europe and other regions around the globe. Lifting the crude oil export ban and allowing market access in places that have been off limits for decades will create a flurry of production that will benefit both American consumers and the global oil market.



[1] http://www.eia.gov/todayinenergy/detail.cfm?id=17231

[2] http://www.iea.org/publications/freepublications/publication/ENERGYSUPPLYSECURITY2014.pdf

[3] http://www.eia.gov/forecasts/steo/report/prices.cfm


Executive Summary

After being designated a systemically important non-bank financial institution (SIFI), MetLife sued the Financial Stability Oversight Council (FSOC) on the grounds that the process leading to its designation was “arbitrary and capricious.” Both FSOC and MetLife have filed arguments in the court case. A review of these documents indicates that MetLife has solid arguments, merits its day in court, and may well prevail.

---

On December 18, 2014, the Financial Stability Oversight Council (FSOC) voted to designate MetLife as a systemically important non-bank financial company. Unfortunately, being “systemically important” in the government’s eyes is no honor. When FSOC decides to designate an entity like MetLife as systemically important, that entity is immediately subject to myriad regulations that impose high costs of compliance, costs which are, more often than not, passed on to the consumer.

Those regulations are often summed up as “enhanced prudential standards” and are the result of regulatory rulemakings by the Federal Reserve. Such standards include increased supervision by the Federal Reserve Board, heightened capital requirements, the requirement that the company produce a “living will” each year, increased reporting, stress testing, credit exposure limits, debt-to-equity ratio requirements, and early remediation requirements. In its primer on FSOC’s designation process, AAF estimated that these requirements will force designated entities to hire additional staff, expand data and technology infrastructure, and set aside capital. AAF goes even further to argue that subjecting only certain non-bank financial companies to such regulation could affect the structure of the market and economy. Perhaps even more troublesome are the level of uncertainty surrounding FSOC’s firm-specific designation process and the lack of attention by regulators to the costs inflicted as a result of a systemically important designation. The greatest uncertainty is that none of the enhanced prudential standards have been written for non-banks; how can FSOC determine that enhanced supervision for an entity is necessary when it does not know what enhanced supervision will actually mean?

This uncertainty is evident in FSOC’s designation of MetLife. For example, according to its “Basis for the Financial Stability Oversight Council’s Final Determination Regarding MetLife, Inc.” (“the Basis”), FSOC voted to designate MetLife as systemically important solely based on its conclusion that “MetLife’s material financial distress could lead to an impairment of financial intermediation or of financial market functioning that would be sufficiently severe to inflict significant damage on the broader economy.” Its decision failed to take into account the Second Determination Standard as required by Section 113 of Dodd-Frank, which includes a study of whether the “nature, scope, size, scale, concentration, interconnectedness, or mix of the activities of the nonbank financial company could pose a threat to the financial stability of the United States” regardless of whether the company were experiencing material financial distress. Roy Woodall, FSOC’s own Independent Member with insurance expertise, said, “By not considering the Second Determination Standard, the Council has continued its practice of not informing a company of those aspects of its business that were the primary factors associated with a designation.”

Mr. Woodall went on to advocate for an activities-based approach to designation – one that considers the overall mix of services, offerings, and dealings of MetLife as the precursor that could affect the potential for material financial distress – as opposed to the arbitrary size-based approach, which considers little more than the financial size of a company in concluding “that the origin of the company’s systemic risk would stem from a sudden and unforeseen insolvency of unprecedented scale, of unexplained causation, and without effective regulatory responses or safeguards.” Coincidentally, FSOC is, in fact, considering this activities-based approach for asset managers, but has never done so for insurers or any other non-banks.

Further complicating matters, in an effort to expose the lack of reason and understanding among the Council, Adam Hamm, FSOC’s State Insurance Commissioner Representative said that “[his] staff sought to correct basic factual errors regarding the operation of the state regulatory system just days before the vote on the final designation of the company. Even though some errors were corrected, it is unclear whether the Council fully considered the nature and scope of the state insurance regulatory system.” In addition to pointing out factual errors on the Council’s Basis, Mr. Hamm argues that “the Council should have sought to match the areas of concern to the authorities of existing regulators to address those concerns. The Basis fails to do this. As a result, the Basis fails to acknowledge that most, if not all, of the concerns it identifies (several of which have questionable merit) are addressed by the existing regulatory structure. This omission makes the Council’s rationale for its decision fundamentally flawed.” Indeed, the Basis submits no proposal for additional regulatory tools beyond those already in place as it fails to consider the fact that the risks the Council identifies are already overseen by state insurance regulators that were specifically designed to address those concerns. In closing, Mr. Hamm points out that through this Basis, FSOC has created an impossible burden of proof for companies being considered for a designation as a company would have to prove that there are absolutely no circumstances under which the material financial distress of the company could possibly pose a threat to the financial stability of the country. Mr. Hamm states that “it remains to be seen whether this approach is legally tenable,” and that is just what MetLife is seeking to clarify in its lawsuit.

MetLife isn’t the first insurance company to end up with this tenuous designation. In fact, of the four total non-bank financial company designations that FSOC has made in its four-and-a-half-year existence, three have been insurance companies. In addition to MetLife’s designation, FSOC voted to designate AIG on July 8, 2013, and Prudential on September 19, 2013. This supposed concentration of systemic risk in the insurance industry has led many to question whether FSOC is treating other non-bank financial companies differently both in its analysis and process. Under Dodd-Frank, within 30 days of a designation, companies may file an action in a U.S. federal court asking that the determination be overturned. Prudential appealed its designation to FSOC and lost, whereas AIG accepted the designation without challenge. It is also interesting to note that these three companies, AIG, Prudential, and MetLife, are already designated as Global Systemically Important Insurers (“G-SIIs”) by the international regulatory body, the Financial Stability Board (FSB), recently in the news for its proposed designation of certain asset managers. Many experts argue that a company cannot be designated by the FSB and avoid being designated by FSOC, although the two groups have different standards and processes for determining systemic importance.

After FSOC rejected MetLife’s initial challenge of its designation, MetLife filed suit in federal court challenging FSOC’s designation, calling it “arbitrary and capricious” – the standard that a plaintiff must meet in order to overturn an agency action. This standard is based on, among other things, a showing of sheer speculation instead of logic and evidence in a decision-making process. The complaint details the bureaucratic hassle MetLife went through in attempts to convince FSOC that it should not be designated as systemically important. It also underlines the importance of the arguments made in Mr. Woodall’s and Mr. Hamm’s dissenting and minority opinions discussed above and argues that the designation process violates both Dodd-Frank and the Administrative Procedure Act.

MetLife makes ten claims against FSOC, each of which is merited on its own, but together form a cohesive and convincing case against the Council. The first claim argues that the designation is arbitrary and capricious because MetLife is technically not a non-bank financial company under Dodd-Frank standards because less than 85% of its revenues and assets relate to “financial activities” and therefore it is not “predominately engaged in financial activities.” Second, MetLife argues that the designation is fatally premature because FSOC has not yet promulgated a handful of standards and processes that were required under Dodd-Frank. Third, MetLife argues that FSOC failed to consider alternatives to the designation and provide a reasoned explanation for rejecting those alternatives. To that end, MetLife cites to numerous Supreme Court decisions on the matter as well as letters and statements from Members of Congress – including those involved in the writing of Dodd-Frank. Next, MetLife argues that FSOC failed to assess the company’s vulnerability to material financial distress and goes on to explain that the designation was inconsistent with the statutory criteria set forth in section 113 of Dodd-Frank. Finally, the last four claims argue that the designation was based on “unsubstantiated, indefinite assumptions and speculation that failed to satisfy the statutory standards for designation and FSOC’s own interpretive guidance;” failed to consider the economic effects of designation; violated MetLife’s due process; and violated the separation of powers by assigning legislative, prosecutorial, and adjudicative powers to the same individuals.

FSOC responded, as expected, with a motion to dismiss that would terminate the lawsuit before even going to trial. The motion, which was initially filed under seal and later released to the public as a redacted version, counters each of MetLife’s arguments in turn and defends its process and ultimate designation. One might question the strength of arguments that have the tone of  “We’re right; you’re wrong.” Perhaps the most interesting aspect of the back-and-forth between MetLife and FSOC is the due process issue. MetLife, in its complaint, argues that its due process was violated by the Council never identifying the thresholds that result in designation or how the various statutory and regulatory factors are balanced against one another; by being denied access to the full record on which the designation was based even after sending ten Freedom of Information Act (FOIA) requests to FSOC; and by FSOC’s reliance on new evidence and analysis in the Final Designation that was not included in the Notice of Proposed Determination. In response, the government summarily says that it believes MetLife “received ample opportunity to be heard” as it cites to the hundreds of pages of material that it sent to the company.

On June 16, 2015, MetLife filed a cross-motion for summary judgment asking the court to enter summary judgment for MetLife on all claims, stating that “FSOC’s errors are many, and grave. And because FSOC made clear that ‘[n]o single consideration [was] dispositive’ in its decision to designate MetLife…each one of those errors requires rescission of the Final Determination in its entirety.” As is usually the case in these types of filings, MetLife restated many of the arguments contained in the original complaint and beefed up others. In particular, MetLife focused on its argument that it technically is not a U.S. non-bank financial company eligible for designation, on the premise that its non-U.S. insurance activities are not financial activities according to the Bank Holding Company Act (“BHCA”). MetLife further argued that FSOC’s attempts to circumvent the requirements of the BHCA were “unavailing.” In support of its argument that the final designation was in fact arbitrary and capricious, MetLife submitted evidence that FSOC disregarded accepted risk analysis methodologies and failed to define criteria to guide its analysis; predicated its assumptions on illusory risks; assumed the “utter ineffectiveness” of state regulators; and, among other things, based its “predictive judgments” on “pure speculation, rather than on evidence and reasoned analysis.” Convincing arguments, to say the least.

So what happens next? If the District Court grants FSOC’s motion to dismiss, this case is over, although MetLife has the right to appeal, and the precedent is essentially set that companies have almost no opportunity for relief after a designation of systemic importance. In that situation, a dismissal in favor of FSOC means that the court believes that, even if MetLife’s claims are true, there is no law that provides a legal remedy, essentially saying that MetLife’s case has no merit. That is simply not the case. Dodd-Frank and FSOC’s own rules provide companies the ability to challenge a designation. A ruling in favor of FSOC would give the Council nearly free rein to make designations as it pleases, even when the numbers don’t add up and the process is opaque.

On the other hand, if the court decides in favor of MetLife’s latest cross-motion for summary judgment, MetLife’s systemically important designation will be rescinded, and FSOC will face a higher burden of proof along with a ruling that favors more transparency throughout the designation process. At the very least, the evidence indicates that MetLife should have its day in court, and, based on the arguments in the filings thus far, it has good grounds to prevail.


The Need for a New Approach to School Funding

U.S. taxpayers spend at least 5.4 percent of the nation’s Gross Domestic Product (GDP) funding elementary and secondary education, and the current Federal practice for funding schools is based almost exclusively on attendance. This funding method is a fundamentally flawed model that misaligns incentives, rewards sub-par performance, and diminishes the imperative for significant and sustained educational outcomes. School funding, as Michigan Governor Rick Snyder wrote in 2011, “should be based upon academic growth and not just whether a student enrolls and sits at a desk.”

In this paper we examine a different approach to fund schools: one that rewards schools for both achievement and improvement to promote classroom innovation, competition, and student performance.

Performance Based Funding

Performance Based Funding (PBF) is a funding policy that allocates money to schools based on the improvement of student achievement. PBF provides an opportunity to make strategic investments in schools by focusing school funding on desired results.

Misaligned incentives can have negative, and at times devastating, impacts. In education, the misalignment between funding and performance is, at best, a drag on the system and student performance, and at worst, a fundamental flaw that prevents our schools from improving as widely and deeply as necessary for this country to be competitive internationally and live up to our founding ideals of equality and opportunity. PBF is a first step in aligning the incentives in the educational system and breaking the current funding structure that pumps money to all schools regardless of performance.

In recent years, more than thirty states have transitioned at least part of their higher education funding from attendance based policies to PBF, and the approach is also prevalent in vocational education.  Policymakers have turned to PBF as a way to push educational institutions to improve outcomes.  In our nation’s elementary and secondary schools, however, attendance based funding remains the most prevalent model.

PBF Efforts For Elementary and Secondary Schools At The State Level

States are leading the effort to implement PBF for public schools. Arizona began a statewide PBF program in 2013. Likewise, Michigan has been implementing a limited PBF model since 2012. Pennsylvania took a slightly different approach, providing funding flexibility in exchange for performance based outcomes. In addition, Florida, Wisconsin, and Oregon have all recently been exploring PBF. In each case, the amounts of funding are modest, but the potential impact promises to be significant over time.

Arizona’s Performance Funding Efforts

Governor Doug Ducey is embarking on a bold initiative to support the best public schools by providing increased per-student funding for “A” grade performance on the statewide school ratings system.  His proposal seeks to ensure that student funding is based on a per-pupil formula weighted for specific needs; increased for “A”-grade performance; and directly available to the student’s public school of choice. This approach builds on past efforts to provide performance based funding through traditional district funding channels. For example, in the last fiscal year, Arizona distributed $21.5 million through an initiative called Student Success Funding, which was based on a district or charter school’s achievement profile, improvement category, and high school graduation numbers.  For student achievement, there were five categories of achievement tied to different funding amounts per student.

Michigan’s Performance Based Bonus

Since 2012, Michigan has provided performance based funding as an extra incentive for elementary and secondary schools. Using a student academic performance change metric, a school district can earn up to $30 per pupil for both mathematics and reading in elementary and middle school and $40 per pupil for all tested subjects in high school.

PBF at the Federal Level

Allocating dollars based on educational results is gaining traction because of its potential to drive student performance higher at scale, with system-level implications. Rewarding schools for both achievement and improvement (i.e., longitudinal growth) can promote innovation and achievement.

While the non-financial policy of the Elementary and Secondary Education Act (ESEA) has been radically overhauled since the law’s inception, there has not been a focus on reforming Title I funding for schools.  There have been changes on the margins of federal education funding policy, but there has not been a wholesale rethinking of the formulas that drive Title I dollars to states, districts, and schools. With serious efforts to reauthorize the ESEA underway, policymakers should take this opportunity to consider alternatives to how Title I dollars are allocated.

Currently, funds go to states and districts through ESEA Title I based on four separate formulas: the Basic, Concentration, Targeted, and Education Finance Incentive Grant formulas. Once these funds reach districts, they are combined into one funding allocation to be used for the same Title I program purposes. The four Title I formulas are overly complicated and fundamentally flawed.  They are the result of political compromise and outdated policy. 

As policymakers consider reauthorization of ESEA, incorporating performance based funding into Title I could deliver long-term beneficial results.  One way to achieve performance based funding in ESEA would be to consolidate the four Title I formulas into two funding streams:  one that provides the majority of funding based on students in poverty and one that rewards performance.  

Conclusion

PBF is a policy innovation that is deserving of more attention and analysis, and which can provide a new approach to improving academic outcomes outside the traditional reform approaches, while addressing systemic inefficiency.   Nowhere is this more needed than in the antiquated and convoluted Title I funding in the ESEA.


In what has become an unfortunate semiannual tradition in failed transparency, the administration released its regulatory agenda on the eve of a holiday weekend, this time the Thursday evening before Memorial Day. An American Action Forum (AAF) review of the agenda found more than $110 billion in potential costs, with billions more in unknown burdens.

Once again, there are few surprises in the regulatory agenda. The administration listed 18 new “economically significant” regulations, down from 23 in the previous agenda. It appears that August and October will be busy this year. The administration plans to finalize its greenhouse gas standards for new and existing sources, protections for agricultural workers, and food safety measures. By October, the schedule calls for three final energy efficiency standards, produce safety regulation, and a final ozone rule.

The two largest measures are EPA rules for greenhouse gas standards at existing power plants and revised ozone regulations. Final silica standards and fiduciary rules for investment advisors currently have no scheduled publication dates. Below are approximately 40 notable rulemakings and their scheduled publication dates.


May 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
EPA | Final | Definition of Waters of the U.S. | 2040-AF30 | $166
EPA | Final | Underground Storage Tank Rules | 2050-AG46 | $210

June-July 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
Treasury | Final | Margin and Capital Requirements | 1557-AD43 | $5,200
EPA | Proposed | *Phase 2: GHG Standards for Heavy-Duty Engines, Vehicles | 2060-AS16 | n/a
HHS | Final | Regulation on the Sale of Tobacco Products | 0910-AG38 | $1,010
Energy | Proposed | Conservation Standards: Central Air Conditioners | 1904-AD37 | n/a

August 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
EPA | Final | GHG Guidelines for Existing Sources | 2060-AR33 | $21,700
EPA | Final | GHG Guidelines for New Sources | 2060-AQ91 | n/a
CFPB | Final | Home Mortgage Disclosure | 3170-AA10 | $2,161
HHS | Final | Updated Standards for Labeling of Pet Food | 0910-AG10 | $690
DOL | Final | Personal Fall Protection Systems | 1218-AB80 | $173.2
EPA | Final | Agricultural Worker Protection Standards | 2070-AJ22 | $640
HHS | Final | Covered Outpatient Drugs | 0938-AQ41 | $104
FDA | Final | Controls for Human Food | 0910-AG36 | $2,579

September 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
EPA | Final | Effluent Guidelines for Power Plants | 2040-AF14 | $954.1
DOJ | Final | Implementation of ADA Amendments | 1190-AA59 | $451

October-November 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
HHS | Final | Produce Safety Regulation | 0910-AG35 | $2,703
DHS | Final | Worker Identification Credential | 1625-AB21 | $186.1
EPA | Final | Review of NAAQS for Ozone | 2060-AP38 | $15,000
Energy | Final | Conservation Standards: Warm Air Furnaces | 1904-AD11 | $62
Energy | Final | Conservation Standards: Dishwashers | 1904-AD24 | $7,100
Energy | Final | Conservation Standards: Air Conditioners | 1904-AC85 | $790
FDA | Final | Verification for Imported Human Food | 0910-AG64 | $3,974
EPA | Final | Formaldehyde Emissions Standards for Composite Wood Products | 2070-AJ92 | $512

December 2015

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
Energy | Final | Conservation Standards: Large Commercial A/C | 1904-AC95 | $8,800
DOL | Final | Persuader Agreements | 1245-AA03 | n/a
Treasury | Final | Assessment of Fees for Large Banks | 1505-AC42 | n/a
Energy | Final | Conservation Standards: Hearth Products | 1904-AD35 | $1,004
Energy | Final | Conservation Standards: Gas Furnaces | 1904-AD20 | $12,270

January 2016

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
DOT | Final | Commercial Driver’s License Clearinghouse | 2126-AB18 | $1,634
CFPB | Final | Requirements for Prepaid Cards | 3170-AA22 | $17
DOL | Final | Unified and Combined State Plans | 1205-AB74 | $1,471
Interior | Final | Arctic Drilling Regulations | 1082-AA00 | $1,324

March-April 2016

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
HHS | Final | Revision of Nutrition Labels | 0910-AF22 | $2,008
HHS | Final | Sanitary Transportation of Human and Animal Food | 0910-AG98 | $179
HHS | Final | Serving Sizes of Foods | 0910-AF23 | $2,008
SEC | Final | Standards for Covered Clearing Agencies | 3235-AL48 | $225
SEC | Final | Rules for Security Swap Dealers | 3235-AL12 | $210

Long-Term

Agency | Proposed/Final | Rule | RIN | Cost (in millions)
DOJ | Final | ADA Guidelines for Passenger Vessels | 3014-AA11 | $1,088
DOL | Final | Exposure to Crystalline Silica | 1218-AB70 | $6,524
DOL | Final | Conflict of Interest-Investment Advice | 1210-AB32 | $5,700

Possible Cost: $110.8 Billion 

The $110 billion estimate contains just 37 monetized figures and an enormous amount of uncertainty. The public does not yet know the cost of the proposed efficiency standards for heavy-duty trucks and engines, or of the dozens of other major rules without a public cost-benefit analysis. The previous heavy-duty rule cost more than $8 billion, and the new rule is slated for final publication in January 2017, the “midnight” period at the end of an administration when there has historically been a rush of regulatory activity.
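As a rough check, the headline figure can be reproduced by summing the monetized cost column in the tables above (a quick sketch; figures are in millions of dollars as listed, and rules without a published estimate are omitted):

```python
# Monetized cost estimates (in millions of dollars) from the agenda tables above.
costs = [
    166, 210,                                      # May 2015
    5200, 1010,                                    # June-July 2015
    21700, 2161, 690, 173.2, 640, 104, 2579,       # August 2015
    954.1, 451,                                    # September 2015
    2703, 186.1, 15000, 62, 7100, 790, 3974, 512,  # October-November 2015
    8800, 1004, 12270,                             # December 2015
    1634, 17, 1471, 1324,                          # January 2016
    2008, 179, 2008, 225, 210,                     # March-April 2016
    1088, 6524, 5700,                              # Long-term
]

total_billions = sum(costs) / 1000
print(f"${total_billions:.1f} billion")  # -> $110.8 billion
```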

Final costs could balloon or diminish significantly compared to earlier versions. With the administration’s time in office coming to a close, regulators will surely rush to finish the greenhouse gas and ozone standards before the next administration takes power.

  • Since 2009, states have incurred nearly $35 billion in regulatory costs and 75 million paperwork hours from unfunded mandates

  • In 2010 alone, the federal government imposed 86 unfunded mandates on the states

  • The ACA alone accounts for 27.1 million paperwork burden hours on the states

Since President Obama took office, his regulators have added $35 billion in unfunded regulatory costs and at least 75 million paperwork burden hours on state and local governments. Regulation of industries or businesses tends to receive a majority of the coverage, but the mandates placed on cash-strapped states and local governments can result in far more profound impacts.

The genesis of recent burdens is easy to pinpoint. In 2010, when the president signed the Affordable Care Act (ACA) and the Dodd-Frank financial reform law, the federal government imposed 86 unfunded mandates on state and local governments, easily a modern record. More impactful, however, might have been the seven mandates that exceeded the statutory threshold, which currently stands at $77 million annually. For comparison, from 2002 to 2006 there were only seven unfunded mandates that exceeded the threshold; Congress and the president matched that total in a single year. The graph below displays the history of unfunded state mandates according to the Congressional Budget Office.

Once the majority party in the U.S. House changed at the beginning of 2011, new mandates exceeding the statutory threshold all but disappeared, but regulators have slowly been translating the busy 2010 legislative session into expensive regulations for states and local governments. According to the administration’s math, only 12 regulations have contained substantial burdens on local governments. However, earlier work from the American Action Forum (AAF) found that the White House’s search function is far from reliable at identifying costly mandates. In one instance, the administration claimed, and still claims, that a $3.2 billion school lunch regulation does not contain unfunded mandates.

Judging from the returns that the White House’s website displays, AAF found $1.7 billion in unfunded state regulatory burdens, but this is just the tip of the iceberg. RegRodeo.com, AAF’s database tracking every regulation with a cost or paperwork burden, supplements that initial figure: through it, AAF found an additional $33.2 billion in regulatory mandates on states, for a total burden of $34.9 billion. This figure covers only the largest regulations that impose unfunded state mandates; there are doubtless other rules imposing costs on local governments that are not counted in these data.

The Paperwork

State and local governments cooperating with federal directives employ a small army’s worth of compliance personnel just to handle the paperwork, to say nothing of other capital costs. According to the Bureau of Labor Statistics (BLS), there are 33,600 state compliance officers, whose duties are to ensure conformity with laws and regulations. An examination of federal paperwork requirements on states reveals a motivation for this workforce.

The Office of Information and Regulatory Affairs (OIRA) allows users to search federal paperwork by “affected public” entity; those complying with federal forms include private industry, individuals, and local governments. Of the 9,200-plus federal paperwork requirements, more than 1,900 directly affect states. For this study, AAF examined only paperwork affecting local governments that imposed more than one million hours.

This query returns a more manageable 78 requirements. Their total burden is more than 419 million hours, but some of the forms affect private industries as well as local governments. Fortunately, OIRA allows users to determine which hours are apportioned to states and which are borne by individuals and industry.

More than half of all hours, 223 million, are imposed directly on state and local governments annually. Spread across the states (including Washington, D.C.), that is a burden of roughly 4.4 million hours each. Assuming a 2,000-hour work year, the average state will have to employ 2,187 workers to complete all of the required forms.

The figures above exclude paperwork imposing less than one million hours, so they are conservative. For perspective on the price of this time, the federal government reports that this paperwork costs state and local governments $2.2 billion annually, but several paperwork collections were never monetized. Multiplying the average compliance-officer wage of $32.69 per hour by 223 million hours yields an annual compliance burden on states of roughly $7.3 billion. To put this in perspective, all executive agencies reportedly imposed $3 billion in total regulatory burdens in FY 2013. The burden of 223 million hours annually is a tremendous hurdle that states must navigate every year, and unfortunately, the Obama Administration continues to add new regulatory obstacles.
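The per-state arithmetic behind these figures can be checked directly (a back-of-the-envelope sketch using the hour total and wage cited above; small differences from the rounded numbers in the text are due to rounding):

```python
# Back-of-the-envelope check of the state paperwork arithmetic above.
total_hours = 223_000_000   # hours imposed on state and local governments annually
jurisdictions = 51          # 50 states plus Washington, D.C.
work_year = 2_000           # assumed hours per compliance worker per year
wage = 32.69                # average compliance-officer wage, dollars per hour

hours_per_state = total_hours / jurisdictions
workers_per_state = hours_per_state / work_year
monetized = total_hours * wage

print(f"{hours_per_state:,.0f} hours per state")      # about 4.4 million
print(f"{workers_per_state:,.0f} workers per state")  # about 2,186; rounding up gives 2,187
print(f"${monetized / 1e9:.1f} billion annually")     # about $7.3 billion
```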

From the sample of 78 paperwork requirements on states, it is easy to determine which were implemented during this administration. The 17 new requirements imposed under the Obama Administration account for 75.5 million hours of paperwork for state and local governments; by comparison, the entire Department of Housing and Urban Development imposes just 57 million hours. This should come as little surprise, considering that the administration recently reported a 362 million-hour paperwork increase from fiscal year 2011 to 2012. Monetizing these 75.5 million hours yields roughly $2.4 billion in new burdens on states since the start of the Obama Administration. With parts of the ACA left to implement and EPA’s pricey ozone and “Waters of the United States” rules on schedule, these burdens will only escalate.

The Affordable Care Act

The ACA has played an outsized role in driving up burdens for the states. Even though several states have declined to expand Medicaid or create exchanges, the administration’s own figures show tremendous new burdens on local governments. Six ACA-related paperwork requirements each impose more than one million paperwork hours on local governments; combined, these six collections impose 27.1 million paperwork burden hours. In dollars, this translates to about $880 million.

One of the largest culprits for these escalating burdens is the “Essential Health Benefits; Exchanges: Eligibility and Enrollment” requirement. According to the administration, the measure will add more than 12.8 million hours of paperwork and cost $336.9 million annually. Per state (including D.C.), this amounts to 251,000 hours and $6.6 million, for one ACA paperwork requirement.
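Dividing this single requirement across the 51 jurisdictions reproduces the per-state figures cited above:

```python
# Per-jurisdiction share of one ACA paperwork requirement
# ("Essential Health Benefits; Exchanges: Eligibility and Enrollment").
hours = 12_800_000   # annual paperwork hours, per the administration
cost = 336_900_000   # annual cost in dollars
jurisdictions = 51   # 50 states plus Washington, D.C.

print(f"{hours / jurisdictions:,.0f} hours per state")         # about 251,000
print(f"${cost / jurisdictions / 1e6:.1f} million per state")  # -> $6.6 million per state
```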

A similar burden is added through the Medicaid eligibility changes requirement. Administration data reveal that it could cost states more than 12.7 million hours. Although the administration never monetized this time commitment, each state could easily spend 250,000 hours and $6 million complying with the measure.

Conclusion

Burdens on large corporations and small businesses receive plenty of attention in the regulatory world, but it is states that often have to carry the heaviest load in dealing with federal requirements. At $35 billion and more than 75 million hours from the Obama Administration alone, there is a reason that many states employ hundreds of compliance personnel dedicated to navigating red tape. Given recent history, it is unlikely that these burdens will dissipate soon.

There has been a great deal of attention of late on the state of federal infrastructure spending, in part due to the recurring reauthorizations of the Highway Trust Fund and its pending insolvency. This report assesses federal infrastructure spending over time and by function, and finds that while such spending is down of late, current expenditures are roughly on par with historical levels and continue to outpace depreciation.

Time trends

According to the Congressional Budget Office (CBO), public spending on transportation and water infrastructure by federal, state, and local governments amounted to $416 billion in 2014, of which $96 billion came from the federal government.[1] In real terms, the federal figure is above its historical average of $83 billion.

Figure 1

Roughly three quarters of total public spending on infrastructure is expended by state and local governments, net of federal grants – a ratio that has held constant for the last 30 years.
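The “roughly three quarters” share follows directly from the CBO totals cited above:

```python
# State and local share of public infrastructure spending,
# from the 2014 CBO figures cited earlier (billions of dollars).
total_public = 416   # all levels of government combined
federal = 96         # federal portion

state_local_share = (total_public - federal) / total_public
print(f"{state_local_share:.0%}")  # -> 77%
```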

Figure 2

The more recent trend, however, reflects a decline in real federal expenditures. Between 2003 and 2014, nominal federal spending increased 38 percent, largely due to increases in material prices; in real terms, federal infrastructure spending decreased by 19 percent.
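As a sketch of how a nominal increase can coincide with a real decrease: if real spending is nominal spending deflated by a price index, the two figures jointly imply that prices rose roughly 70 percent over the period (an inference from the numbers above, not a figure reported by CBO):

```python
# Implied price-level change consistent with the nominal and real figures above.
nominal_change = 0.38   # nominal federal infrastructure spending, 2003-2014
real_change = -0.19     # real federal infrastructure spending, same period

# real = nominal / price_index  =>  price_index = (1 + nominal) / (1 + real)
implied_price_increase = (1 + nominal_change) / (1 + real_change) - 1
print(f"{implied_price_increase:.0%}")  # -> 70%
```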

Figure 3

Another measure of federal investment in infrastructure includes the effects of depreciation. By this measure, net federal domestic investment amounted to about $10 billion in 2013. This is roughly half of the 20 year average, though it remains positive, outpacing depreciation.[2]

Figure 4

Federal expenditures on transportation and water infrastructure amounted to 2.74 percent of federal spending in 2014, a share below both long- and near-term historical averages. This dynamic reflects changing features of public policy, such as the post-war boom and the creation of the Interstate Highway System beginning in 1956, as well as the increasing share of federal expenditures devoted to transfer payments.

Distribution across activities

Federal spending on infrastructure can be broadly categorized as either capital expenditures or operation and maintenance expenditures.  Capital spending includes the purchase of new structures, such as roads and sewer systems, and equipment, such as trains and other public vehicles, as well as expenditures for the improvement and modernization of existing assets.

Operation and maintenance includes the cost of maintenance and upkeep as well as the administration of public infrastructure, such as air traffic control. Education and research and development devoted to infrastructure are also included in this category of expenditure.

Figure 5

Since 1956, capital purchases have comprised an average of 78 percent of federal infrastructure spending. In the last 10 years, this has declined slightly to 75 percent. As of 2015, capital purchases comprised 71 percent of federal infrastructure spending. 

Federal infrastructure spending can be broadly grouped into seven types of projects: highways, mass transit, rail, aviation, water transportation, water resources, and water utilities.

Figure 6

 

Spending on highways comprised 48 percent of the federal share in 2014, broadly consistent with historical averages. Other types of infrastructure have received varied levels of funding. For example, water resources spending, such as dams and levees, has declined as a share of the budget steadily since 1956, when it was the largest form of federal infrastructure expenditure. The federal government did not invest any funding in mass transit until 1962.

Conclusion

The current disposition of federal infrastructure spending by project type is broadly consistent with the pattern of spending over the last two decades. While state and local governments bear the bulk of public infrastructure costs today, their proportional share of total costs has changed little over the past 30 years.

In a rare showing of bipartisanship, policymakers are working together to achieve a universal goal: providing for more rapid development of and increased access to life-saving health care treatments. The House Energy and Commerce Committee passed H.R. 6, the 21st Century Cures Act, 51-0 on May 21, 2015. This legislation, introduced by Chairman Fred Upton (R-MI) and Rep. Diana DeGette (D-CO), combines a number of National Institutes of Health (NIH) and Food and Drug Administration (FDA) reforms and grants new authority to both in order to assist in more timely “discovery, development, and delivery” of new medical cures and treatments. While the bill is expected to be considered on the floor of the House in June, and the Senate is working on similar legislation, competing agenda items and limited time on the congressional schedule may keep it from being considered by the Senate until the fall at the earliest. President Obama voiced support for such work during his State of the Union address in January and requested $215 million for investment in a “Precision Medicine Initiative” in his FY2016 budget released in February.

FUNDING AND OFFSETS

The legislation is estimated to cost $13.2 billion over the next decade, most of which is increased funding of $10 billion over the next five years for the NIH. The FDA will receive $550 million in additional funding through a transfer from the Treasury’s General Fund into a newly created Cures Innovation Fund between 2016 and 2020; however, this is shy of the $880 million the FDA estimates it will need to implement all of the bill’s provisions.

The cost of the bill will be offset through various mechanisms. Much of the bill will be paid for by gradually delaying monthly Medicare reinsurance payments to Part D plan sponsors beginning in 2020. Currently, Part D reinsurance payments are paid in advance at the beginning of each month; transitioning the timing of such payments to the first of the following month is estimated to save $5 to $7 billion over the next ten years. Limiting federal Medicaid reimbursement for durable medical equipment to Medicare rates is estimated to save $2.8 billion, and reducing Medicare payments for x-rays using old equipment by up to 20 percent will save $200 million while encouraging the transition to more modern imaging technology. Finally, sales of oil from the Strategic Petroleum Reserve—8 million barrels per year between 2018 and 2025—will generate an estimated $5.2 billion.
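Taking the estimates above at face value, the offsets roughly cover the bill’s $13.2 billion estimated cost; a quick tally (figures in billions of dollars over ten years):

```python
# Rough tally of the offsets described above (billions of dollars over ten years).
# Each entry is a (low, high) estimate; single-point estimates repeat the value.
offsets = {
    "Part D reinsurance payment timing shift": (5.0, 7.0),
    "Medicaid DME reimbursement capped at Medicare rates": (2.8, 2.8),
    "Reduced Medicare payments for old x-ray equipment": (0.2, 0.2),
    "Strategic Petroleum Reserve sales": (5.2, 5.2),
}

low = sum(lo for lo, hi in offsets.values())
high = sum(hi for lo, hi in offsets.values())
print(f"${low:.1f} to ${high:.1f} billion")  # -> $13.2 to $15.2 billion
```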

TITLE I- DISCOVERY

The biggest investment in the bill is through the NIH—$10 billion over the 2016-2020 period—for basic, translational, and clinical biomedical research. This appropriation will support a newly created NIH Innovation Fund and stipulates that at least some of the money should be spent on: the newly-created Accelerating Advancement Program (which requires a dollar-for-dollar funding match by a participating institution); “early stage investigators”; “high-risk, high-reward research”; and no more than 10 percent for intramural research.[1] The legislation specifies that such research should focus on addressing unmet medical needs by expanding knowledge regarding biomarkers, precision medicine, infectious diseases, and antibiotics. The NIH is also directed to establish and continuously update a five-year biomedical research strategic plan that ensures rare and pediatric diseases and conditions remain a priority. A loan repayment program and Capstone Award grant program are intended to provide support to scientists and encourage work in the biomedical field.

While the bill is intended to assist in the development and delivery of new medical cures and treatments for all, there is particular attention given to deadly and debilitating diseases disproportionately impacting children. The NIH is directed to establish a National Pediatric Research Network and is encouraged to establish a global pediatric clinical study network. Further, appropriate age groupings are to be determined for conducting clinical research studies and presenting findings.

Several reforms are intended to ease regulatory burdens and allow for improved data collection and information sharing. Scientific findings resulting from NIH-funded research will be required to be publicly shared (with exceptions so as not to violate laws protecting privacy, confidentiality, proprietary information, and intellectual property rights). A Clinical Trial Data System will be created, and all information contained in clinical trial data registries must be standardized and easily usable by the public. Health privacy regulations will be amended to allow covered entities to use protected health information for research purposes, provided that such information is treated in the same manner as if it were used for public health activities.

The Council for 21st Century Cures, a nonprofit public-private partnership, would be created to coordinate the efforts of all the stakeholders working on this mission and to disseminate information relating to such activities. The Council would also be responsible for identifying gaps and recommending further collaborative and developmental opportunities.

TITLE II- DEVELOPMENT

The provisions relating to the development of new drugs and medical devices mostly impact the FDA. One interesting provision is a new requirement for the FDA to establish a framework for utilizing patient experience data in order to assess the risks and benefits of new treatments and incorporate such preferences into the approval decision-making process; essentially, this would allow for consideration of whether or not patients feel the risks and unwanted side effects of a drug are worth the potential benefits.

There is a Sense of Congress that the FDA should approve “breakthrough therapies” as quickly as possible. A provision amending current rules regarding patient access to investigational drugs would attempt to provide terminally-ill patients with a more transparent process and expanded access to therapies still awaiting approval. In order to accelerate the approval of new drugs, FDA would be required to issue guidance, based on input from the industry, for developing and qualifying biomarkers and other drug development tools which may be used as evidence demonstrating a drug’s effectiveness, as opposed to waiting for results from a lengthy clinical trial, as must be done now. FDA is instructed to issue periodic guidance to assist in the development of precision drugs and biological products. An FDA-certified third-party quality assessment program should be established to allow device companies to have their products reviewed in a more efficient manner and more quickly get products to patients. Registry data, peer-reviewed studies, and data collected in other countries would all be allowed to serve as valid scientific evidence to be considered in the FDA’s approval process for medical devices. Reporting requirements for medical devices will be eased if the Secretary determines the reports are no longer necessary, allowing for more targeted oversight of devices where it is most needed, and the humanitarian device exemption will be increased from 4,000 to 8,000 people affected.

Drugs being developed for a rare medical condition affecting 200,000 people or fewer, known as orphan drugs, are already eligible for grant funding to assist in their development and deployment, and enjoy extra years of market exclusivity compared with non-orphan drugs. This legislation would allow the use of evidence garnered from other drugs or clinical trial data for accelerated approval of new indications for a drug. Market exclusivity will be extended for six additional months for any drug approved for a new indication for the prevention, diagnosis, or treatment of a rare disease or condition. The Rare Pediatric Disease Priority Review Voucher (Pediatric PRV) Program, which allows any one of a manufacturer’s drugs being developed for a limited set of conditions to be reviewed for approval in just six months, rather than the typical ten months, is reauthorized. While this program, currently set to expire next year, can speed the drug approval process for certain types of conditions, vouchers are not cheap, and their cost may limit the program’s effectiveness. The priority voucher fee has ranged between $2.5 and $5.5 million and must be paid in addition to the normal new drug user fee.[2] Only three Pediatric PRVs have been granted since the program was created in 2012, and two have subsequently been sold from the grantee to another pharmaceutical company.[3],[4]

Other provisions aim to help in the development of new antibacterial or antifungal drugs, particularly for treating infectious diseases with high mortality rates, and to improve access to and utilization of vaccines. Recent news stories regarding increased antibiotic resistance and the spread of measles throughout the country earlier this year highlight the importance of such provisions.

Information about drugs and devices that may be provided by manufacturers to the public is highly regulated and limited, often making it difficult for patients to fully grasp the intended uses of a drug and its potential risks and benefits. This legislation calls on the FDA to issue guidance to facilitate the dissemination of responsible, truthful, and non-misleading scientific and medical information not included on the approved label for a drug or device. Regulatory burdens relating to software used by health care companies for administrative and financial purposes—not relating to the actual provision of care—should be eased so as to allow for continued innovation and to provide clarity to software developers; such administrative software should not be regulated in the same manner as medical devices. Additional provisions relating to the meaningful use of electronic health records are included in the “Delivery” title, discussed below.

Finally, the limitation on the number of employees FDA may hire for biomedical research, currently capped at 500, will be lifted to allow for increased staffing as necessary to implement the new requirements imposed on the agency under this legislation. Further, the ban on FDA employees attending scientific conferences will be lifted, with the intent of keeping them informed of the most up-to-date developments in the industry.

TITLE III- DELIVERY

Under this legislation, in order for health information technology to be deemed “interoperable,” it must: have the ability to transfer and allow access to all of a patient’s medical record to any and all other authorized health information technologies and authorized users, including the patient; and not block information in any way, including by charging excessive fees for the transfer of information. Standards will be established by the Office of the National Coordinator for Health Information Technology (ONCHIT) through contracts with standards development organizations by 2017 regarding “vocabulary and terminology”, “content and structure”, the “transport of information”, “security”, and “service”. A Health IT Policy Committee will be created to make recommendations to the Secretary of Health and Human Services (HHS) regarding such standards, but will not have the authority to actually make changes. Any electronic health records (EHR) that do not meet all certification standards by 2019 will be decertified, which would result in payment penalties for providers using such records; however, the Secretary will have the authority to exempt providers from such penalties for up to one year in order to allow the provider time to switch to a new vendor. By 2017, guidance will be published clarifying the relationship between Health Insurance Portability and Accountability Act (HIPAA) privacy and security protections and the interoperability information sharing requirements. Hardship exemptions will be permitted on a case-by-case basis for providers unable to meet EHR meaningful use requirements. A demonstration program will allow for studying the effects of implementing similar EHR incentive payments for Medicaid providers.

One year after enactment, the Centers for Medicare and Medicaid Services (CMS) must provide Congress with information relating to which Medicare patients, particularly dual-eligibles and those with chronic conditions, may benefit most from improvements in quality care by the expansion of telehealth services; which services are best suited for delivery through telehealth; and any barriers that might prevent or impede the expansion of telehealth services. The Medicare Payment Advisory Commission (MedPAC) is instructed to make recommendations on providing payment for telehealth services under Medicare Parts A and B, based on what Medicare Advantage (MA) insurers are doing. It is recommended that states collaborate and create licensure compacts with one another in order to facilitate the delivery of telehealth services across state lines. (One such type of compact has already been signed by seven states.[5])

A new pharmaceutical and technology ombudsman will be created at CMS in order to respond to complaints and grievances of drug and device manufacturers regarding coverage of new technologies in order to help ensure timely access to life-saving treatments.

A public website will be created which will allow Medicare beneficiaries to easily search and review pricing information, including the estimated beneficiary liability, for covered services at various facilities in order to shop around and find the best price.

CONCLUSION

As this bill continues to move through the legislative process, several provisions (or the lack thereof) will continue to garner attention, and efforts will be made to revise the legislation further. Regulators continue to insist that the FDA will not be adequately funded to meet all of its new demands, thus limiting the effectiveness of the legislation. While some are thrilled at the possibility of getting new therapies to patients more quickly through the allowance of new surrogate endpoints and expedited review pathways, some patient advocates are concerned that, rather than helping patients, the legislation will allow for “drugs and devices [to be approved] faster based on weaker evidence”, putting patients’ health at risk.  Pharmaceutical and device manufacturers are hoping to restore a provision which would update the antiquated regulations limiting a manufacturer’s authority to communicate information about a product on social media beyond what is specifically approved by the FDA for its product label. While vast revisions to the 340B discount drug program were floated, the most controversial amendments to the program were ultimately not included in the Committee draft; though, some are concerned those proposals may be included in future legislation.

While some stakeholders will undoubtedly be opposed to specific provisions in the legislation and more could always be done, overall this legislation marks a reasonable first step towards an important goal. Easing regulatory burdens, accelerating access to new medical treatments, facilitating greater access to patient data in order to allow for greater coordination of care and new research opportunities for the development of new treatments, and providing for increased price transparency to spur competition and drive down costs are all much-needed changes.



[1] Intramural research is research completely funded by and conducted by the federal government in federal labs by federal employees, as opposed to research done by an outside organization funded by a federal grant.

[5] http://www.modernhealthcare.com/article/20150520/NEWS/150519873


Introduction

For years, policymakers and health insurers have looked for ways to simultaneously reduce federal health care expenditures and ensure better quality care for patients. For both hospital services (Part A) and physician services (Part B), the Centers for Medicare and Medicaid Services (CMS) has implemented multiple programs to track providers’ performance on various metrics and adjust payments accordingly—similar to efforts underway among private insurers. For Medicare Advantage (MA or Part C), CMS operates the Star Rating System. This system provides a relative quality score to Medicare Advantage Organizations (MAOs) on a 5-star scale based on their plans’ performance on selected criteria, and is now used to determine whether or not an MAO will receive bonus payments and/or rebates for their enrollees.

How Stars are Calculated

The 5-star rating system was first implemented by CMS for MA plans in 2008, serving as a tool to inform beneficiaries of the quality of the various plan options and to assist them in the plan selection process. Ratings are set at the MAO contract level—not the plan level—meaning all plans under the same contract receive the same score. Stars are assigned to each contract for each individual measure being evaluated, based on relative performance compared to the other contracts. The overall summary score for each contract is then calculated by averaging the star ratings across each individual measure for that contract.

Performance is not weighted by plan enrollment; a contract performing well with many enrollees does not receive any extra credit for providing high-quality care to more people than a contract with lower enrollment. Further, for the majority of measures in the Star Rating program, performance is not adjusted for patient characteristics or socioeconomic status. A few lower-weighted Consumer Assessment of Healthcare Providers and Systems (CAHPS) measures, which gauge patient satisfaction with the care received, do include some adjustments for age, education, mental and physical health, income, and state of residence.[1] However, adjustments are not made for the higher-weighted Healthcare Effectiveness Data and Information Set (HEDIS) or Health Outcomes Survey (HOS) clinical measures, which more closely and objectively measure the quality of health care provided through reviews of patient medical charts and insurance claims, and which are more likely to be affected by those adjustment factors.

Since 2011, CMS has set thresholds (based on historical trends) which must be attained to achieve 4-star status for roughly half of the measures. However, CMS is eliminating the thresholds beginning in 2016, as it no longer believes the target indicators are needed and has concluded that the thresholds increase the risk of rating misclassification. Analysis by CMS has shown that greater improvement is typically achieved for measures without predetermined thresholds than for those with them. While this may be because the incentive to improve further is significantly diminished once the threshold for receiving the bonus payment is achieved, it may also result from underlying differences between the measures that have been given thresholds and those that have not, as they are not randomly selected.[2]

In 2014 and 2015, measures were based on five broad categories, with weights varying based on the category’s level of importance as determined by CMS[3]:

Metric Category          Weight
Improvement              5[4]
Outcomes                 3
Intermediate Outcomes    3
Patient Experience       1.5
Access                   1.5
Process                  1

 

Compared to the first year bonuses were given—when clinical quality metrics accounted for only 49 percent of the total rating—such metrics now account for 63 percent of the rating.[5] Additionally, the “Reward Factor” (previously the Integration Factor or “i-Factor”), which measures a contract’s quality rating consistency across all measures relative to other plans, will continue to be used.[6]
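As a rough sketch of the weighted averaging described above, the snippet below computes a contract’s summary score as a weighted mean of its per-measure stars. The category names and weights follow the 2015 table; the function and sample measures are illustrative only and omit CMS’s additional adjustments (such as the Reward Factor):

```python
# Illustrative only: a simplified weighted average of per-measure star
# ratings, using the 2015 category weights from the table above.
WEIGHTS = {
    "improvement": 5.0,
    "outcome": 3.0,
    "intermediate_outcome": 3.0,
    "patient_experience": 1.5,
    "access": 1.5,
    "process": 1.0,
}

def summary_score(measures):
    """measures: list of (category, stars) pairs for one contract."""
    total_weight = sum(WEIGHTS[cat] for cat, _ in measures)
    weighted_sum = sum(WEIGHTS[cat] * stars for cat, stars in measures)
    return weighted_sum / total_weight

# A contract strong on outcomes and patient experience, average on process:
score = summary_score([("outcome", 4), ("process", 3), ("patient_experience", 5)])
```

Because outcome measures carry triple the weight of process measures, the 4-star outcome pulls the summary score up more than the 3-star process measure pulls it down.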

How Rewards are Calculated

Under a provision of the Affordable Care Act (ACA), these star ratings began to be used to adjust payments to MAOs beginning in 2012. Bonuses were to be awarded for contracts receiving 4 or more stars. However, at the same time base payments to MA plans were scheduled to be reduced as part of the Medicare cuts provided in the ACA, CMS also launched a three-year demonstration project from 2012-2014 providing bonuses to plans achieving 3 or 3.5 stars in order to determine if providing bonuses at this level would lead to “more rapid and larger year-to-year improvements”.[7] That demonstration project has now ended.

Rewards are two-part: direct bonus payments to the plan operator and rebates which must be returned to the beneficiary in the form of additional or enhanced benefits, such as reduced premiums or co-payments, expanded coverage, etc.

Bonus payments—like base MA plan payments—are paid per enrollee and are calculated as a share of the MA benchmarks, which vary by county[8]; bonus payments therefore vary by county as well. In 2014 and subsequent years, bonuses for plans rated 4 stars or higher are 5 percent of the area’s benchmark.[9] New plans (offered by an organization which has not had an MA contract in the three preceding years, and which thus does not have sufficient data upon which to qualify) are awarded a 3.5 percent bonus. Contracts in counties with certain demographic factors receive double bonuses.[10] Plans that fail to report are treated as having less than 3.5 stars and thus do not receive any bonus payment.
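The bonus arithmetic just described can be sketched as follows; the function name and benchmark figure are hypothetical, and actual payments involve further adjustments not shown here:

```python
def bonus_payment(benchmark, stars, new_plan=False, double_bonus_county=False):
    """Per-enrollee bonus as a share of the county benchmark, per the
    2014-and-later rules described above (simplified, illustrative)."""
    if new_plan:
        pct = 0.035        # new plans receive a 3.5 percent bonus
    elif stars >= 4.0:
        pct = 0.05         # 4 stars or higher: 5 percent of the benchmark
    else:
        return 0.0         # below 4 stars, or not reporting: no bonus
    if double_bonus_county:
        pct *= 2           # qualifying counties receive double bonuses
    return benchmark * pct

# A 4.5-star contract against a hypothetical $800 monthly benchmark:
regular = bonus_payment(800.0, 4.5)                            # 5 percent
doubled = bonus_payment(800.0, 4.5, double_bonus_county=True)  # 10 percent
```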

Rebates in the MA program existed prior to the Star Rating System, and operate in virtually the same way under this new system, though not at the same percentage as before. Traditionally, MA plans have received a rebate equal to a percentage (previously 75 percent) of the difference between the plan’s bid and the benchmark for that area if the bid is below the benchmark; plans bidding above the benchmark receive no rebate and are only paid the benchmark amount per beneficiary by CMS—beneficiaries selecting such a plan have to pay an additional premium to make up the difference. Under the Star Rating System, pursuant to the ACA, plans bidding below the benchmark now receive their rebate based on a percentage of the difference between the bid and the benchmark, adjusted to include the amount of any bonus payment received, as follows:[11]

Plan Rating          Bonus Payment    New Benchmark          Rebate Payment
4.5 & 5 Stars        5%               105% of Benchmark      70%
4 Stars              5%               105% of Benchmark      65%
New Plans            3.5%             103.5% of Benchmark    65%
3.5 Stars            None             Benchmark              65%
3 or Fewer Stars     None             Benchmark              50%
Plans Not Reporting  None             Benchmark              50%

 

Including the bonus payment amount in the benchmark against which rebates are calculated allows for higher rated plans to increase their bids (and get a higher payment from CMS) while still receiving a rebate for their enrollees. Rebates must be returned to enrollees in the form of reduced premiums or increased benefits.
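A minimal sketch of the schedule in the table above, assuming the simplified arithmetic described in the text (real CMS payments include risk adjustment and other details omitted here):

```python
def rebate(bid, benchmark, bonus_pct, rebate_share):
    """Rebate is a share of the gap between the plan's bid and the
    bonus-adjusted benchmark; bids at or above it earn no rebate.
    All inputs are illustrative, per the schedule above."""
    adjusted_benchmark = benchmark * (1 + bonus_pct)
    if bid >= adjusted_benchmark:
        return 0.0
    return rebate_share * (adjusted_benchmark - bid)

# A 4.5-star plan bidding $750 against a hypothetical $800 benchmark:
# adjusted benchmark = 105% of 800 = 840; rebate = 70% of (840 - 750)
r = rebate(750.0, 800.0, 0.05, 0.70)
```

Note how the bonus enters twice: once as a direct payment and once by raising the benchmark against which the rebate is computed, which is why higher-rated plans can bid higher and still fund extra benefits.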

The MA Stars system is not a typical pay for performance program. Since CMS does not directly pay care providers in MA, but rather pays insurers offering private coverage to Medicare beneficiaries, the reward is actually being paid to an intermediary in the provision of care. Thus, MAOs, relying on the care providers who see their patients in order to earn a reward, must educate the providers in their networks as to which metrics are being evaluated; though, as discussed later in this report, the best they can do is inform providers as to which measures were evaluated in the year prior.

Additionally, regulators and health care providers should pay attention to quality metrics that develop under the new payment system which will result from the recently passed Medicare Access and CHIP Reauthorization Act of 2015. Hopefully the various systems will evaluate similar metrics so doctors are not given conflicting indicators as to how they should be treating their patients.

Results Thus Far

In 2012, 91 percent of MA contracts received a bonus payment, but only 4 percent of the total bonus payments came from funds designated for these bonuses by the ACA—the rest of the bonuses were paid through the demonstration project which allowed for bonuses to be paid to 3-star plans.[12] Two thirds of total payments went to plans with less than 4-star ratings.[13]

On average, higher ratings are correlated with a longer length of time operating an MA contract,[14] possibly suggesting that over time MAOs learn how best to achieve the results desired by CMS. Generally, average scores and the number of plans with higher ratings have been increasing. Not all plans will be able to achieve top ratings, however, because the system uses relative scoring, essentially ranking plans in order of achievement—not everyone can be the best.

Potential Problems with Current System

Effective Tool for Patients?

While it is likely that the star ratings have been a somewhat useful tool for beneficiaries in differentiating between otherwise similar plans, individual preferences do not appear to line up exactly with the criteria CMS has decided to use in evaluating MA plans under the Star Rating System. In 2012, 51 percent of MA-eligible beneficiaries had the option of choosing a plan with a 4-star rating or better, but only 29 percent chose such a plan.[15] However, one study found a 1-star-higher rating is associated with a 9.5 percent increased likelihood that a new beneficiary will enroll in that plan and a 4.4 percent greater likelihood of enrollment among current enrollees switching plans.[16] (Enrollees may use a Special Enrollment Period once a year, at any time except during the short window from December 1 to December 7, to switch into a 5-star plan.[17]) Further, ratings were found to be inversely correlated with voluntary attrition rates (22 percent for 2-star plans, on average, and only 2 percent for 5-star plans).[18]

Adverse Impact on the Poor

Many have expressed concern that the Star Rating System—because of how measures are evaluated and rewards are paid—unfairly punishes both low-income enrollees and the plan sponsors primarily serving such enrollees. It is argued that a significant portion of the measures evaluated are influenced by a patient’s socioeconomic conditions, yet very few of the measures are risk-adjusted to neutralize the impact of such differences between patients, thus not allowing for a fair comparison between plans with high versus low enrollment of low-income individuals. This concern has led to calls for either establishing a separate rating system for Special Needs Plans (SNPs) or any MA plan in which enrollees are predominantly low-income, or providing a score adjustment for such plans in order to compensate for those patient differences.[19] The National Quality Forum, in its report released in August 2014, notes the well-documented link between patients’ sociodemographic conditions and health outcomes, and recommends that such factors be included in risk adjustments for performance scores.[20]

An association has been found in various studies between dual-eligible status and performance on specific MA and Part D measure ratings, and there exists a “significant and growing performance gap” between dual-eligibles and non-dual-eligibles in MA plans.[21] Because duals use services at least as much as non-duals, some believe this performance gap results from a lack of compliance with treatment plans (which may be due to a lack of resources or understanding) rather than a lack of access to care.[22] Where a beneficiary lives may also be a key factor in what plans are available to them and, conversely, how well the plans in their area score. Geographic variation in fee-for-service (FFS) costs is associated with geographic variation in plan ratings, which will result in lower benefits in areas that disproportionately have higher poverty rates; thus, benefits will be lower where patients are poorest.[23]

Contrary to the norm for the MA population overall—where enrollment tends to be highest among higher rated plans—enrollment was not as strongly associated with star ratings for African American, rural, low-income, or the youngest beneficiaries.[24] It is possible that this is an example of CMS choosing criteria that do not properly align with beneficiary needs, or it may be the result of a lack of access to higher-rated plans. In 2012, based on CMS data, the American Action Forum (AAF) found that higher-rated plans are less likely to be available in counties with higher poverty rates; a non-poor county is 2.6 times more likely to have bonus-eligible plans than a poor county (one with a 25 percent or higher poverty rate).[25]

In 2013, one analysis by Inovalon found that “contracts with a high percentage of SNP members performed worse [than plans without a high percentage of SNP members] 86 percent of the time”.[26] While SNP members are not necessarily low-income or dual eligible, SNP membership is limited to people who live in certain institutions (such as a nursing home or intermediate care facility) or require home health care, dual-eligibles, or people who have specific chronic or disabling conditions.[27] As low-income individuals are more likely to be dual-eligibles and to have multiple chronic conditions, SNP members are often low-income.[28]  Further, in a follow-up analysis in 2015, the same organization analyzed seven Star measures and found sociodemographic characteristics contributed to at least 30 percent of the performance gap between dual and non-dual eligible MA plan members.[29] Community resource characteristics, which are often linked to an area’s economic wellbeing, also accounted for a large share of the performance gap.[30] More specifically, another analysis found that while “results show continued improvement among Chronic-SNPs and Institutional-SNPs, that [improvement] has not been mirrored by D[ual]-SNP focused contracts”.[31] However, seven plans in which duals account for 85 percent or more of their enrollees achieved 4 or more stars, indicating that it is not impossible for such plans to achieve a bonus under the current system.[32]

In response to requests to address the discrepancies that many have found, CMS admits in its 2016 Call Letter that there are differences in performance for dual-eligible beneficiaries; however, it does not believe the differences or evidence are robust enough to warrant adopting a separate measurement system at this time, and it calls for continued research on the issue. It is worth pointing out that while CMS notes it controlled for characteristics such as age, sex, and race/ethnicity in its analysis, it does not claim to have controlled for income, language, or education, all of which are more strongly correlated with the likelihood of being dual-eligible, and which thus may have muted the magnitude of the impact attributable to dual status.[33]

Poor Program Structure Creates Misaligned Incentives and Unintended Consequences

The Star Rating System has had other unintended consequences resulting from poor program structure and misaligned incentives. Some of the biggest structural problems relate to timing. The measures that will be evaluated each year are determined and announced both after the period in which the measurements are taken and after contract submissions for the following year are due. Plans are thus unaware of what they are being evaluated on, making it difficult to know what they should be doing or to make appropriate changes for the next year; at best, adjustments by plans and their providers lag by two years. Another concern is that retrofitting the evaluation criteria could allow CMS to pick winners and losers by selecting criteria that specific companies perform particularly well (or poorly) on. Further, bonus payments are based on the benchmark price and enrollment in the year following the one in which the measures were taken, which means plans are rewarded for patients they were not necessarily covering at the time the reward was earned. Finally, not making the evaluation criteria known ahead of time and delaying the reward is inconsistent with established theories on how to make reward incentive programs effective.

The rebate structure is also poorly designed and may reduce plan choice. It leads to benefits increasing for plans with higher ratings, rather than high ratings going to plans with more benefits. Beneficiaries will thus be incentivized to move into a subset of plans (even if those plans are not truly the best option for the beneficiary), and competition and the range of plan options available may be reduced.[34] The increased rebate rewards the beneficiary for enrolling in a high-quality plan, rather than rewarding the operator of the plan for achieving high-quality standards. The rebate can be viewed as an indirect reward to the plan operator if the better benefits it funds increase enrollment, since bonus payments are based on enrollment. But if beneficiaries prefer a plan that does not have a high rating, for reasons not captured by CMS’s rating criteria, the beneficiary is essentially penalized by not receiving the enhanced benefits afforded to beneficiaries in highly rated plans.

As happens with most reward programs, plan sponsors are focusing on the metrics they can control.[35] Given that the thing they are least able to control is patient outcomes, this may not be the desired result. As plan sponsors become more familiar with how the Star Rating System works, they may be able to unfairly take advantage of it and manipulate their scores. For example, only a small sample of patients is taken to assess for treatment of mental health issues, and at least one company can predict which patients will be sampled, allowing it to remind doctors to assess those specific patients and game the system without properly evaluating all of its patients.[36] Additionally, conflicting incentives may arise. One challenge that has emerged for plans is ensuring they network only with high-quality providers while simultaneously not limiting access to care.

Conclusion

The Star Rating System appears to be increasing the quality of the plans available and the care provided to Medicare Advantage beneficiaries. However, it is not clear that the criteria being evaluated by CMS are the criteria of most importance to MA beneficiaries, and the ratings thus may not accurately reflect enrollee preferences. This mismatch of preferences and criteria may be causing more problems than just weakening the effectiveness of the star ratings as an informational tool for patients. Inadequate risk adjustment and insufficient consideration of patients’ socioeconomic status may be producing ratings that do not accurately reflect the quality of care and service provided, particularly for plans enrolling high proportions of low-income beneficiaries. The corresponding bonus and rebate payment structure may actually be harming the most vulnerable beneficiaries as a result.


[4] In 2014, the improvement metric had a weight of 3; in 2015, it was 5.

[8] MA benchmarks are based on average fee-for-service (FFS) expenditures per beneficiary in a given rating area. Because FFS costs vary by geographic area, MA benchmarks will also vary by geographic area.

[10] Double bonus counties: the metropolitan statistical area has a population of more than 250,000; at least 25 percent of eligible beneficiaries are enrolled in an MA plan; and Medicare fee-for-service (FFS) costs in that area are lower than the national average. http://americanactionforum.org/research/medicare-advantage-star-ratings-detaching-pay-from-performance


  • Retail sales declined from the end of 2014 through the beginning of 2015

  • Multiple measures show that the growth in manufacturing began slowing well before 2015

  • The March spike in imports puts downward pressure on the first quarter's real GDP growth rate

Last month the Bureau of Economic Analysis (BEA) released its first estimate of the growth in real Gross Domestic Product (GDP) during the first quarter of 2015. According to the report, real GDP’s annualized growth rate was only a dismal 0.2 percent. So what exactly caused this slow growth in the first three months of the year? Some argue that this low estimate is due to an underlying methodological issue that results in a significant underestimate for the first quarter. However, several economic indicators have decelerated or declined over the last few months, indicating tepid growth in the first quarter. Given the questions surrounding the validity of the last GDP estimate, it is particularly important to examine economic metrics from a variety of sources and analyze their implications for growth.
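For readers unfamiliar with the convention, the headline number compounds the quarter-over-quarter change to an annual rate. A small illustrative calculation (the GDP levels are made up):

```python
def annualized_rate(prev_level, current_level):
    """Compound a one-quarter change to an annual growth rate, in percent,
    following the convention used for headline real GDP figures."""
    return ((current_level / prev_level) ** 4 - 1) * 100

# A quarterly gain of only 0.05 percent annualizes to roughly 0.2 percent,
# in the neighborhood of the first quarter 2015 estimate discussed above.
rate = annualized_rate(100.0, 100.05)
```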

What exactly is the debate over the recent GDP report? At issue are the BEA’s seasonal adjustment methods. Almost every raw economic estimate features significant variation throughout the year due to typical changes in weather and holiday breaks. As a result, the raw estimate tells us very little about the underlying trend affecting people’s well-being. So, in order to uncover those trends, officials publish seasonally adjusted estimates, which account for these variations and generally serve as the headline figures in any report. The real GDP growth rate estimates are no different. However, economists have noted a puzzle: even after seasonal adjustment, the BEA’s first quarter GDP growth estimates have, over several years, been consistently lower than those for the rest of the year. This might suggest that the BEA does not adequately adjust for seasonal factors in the first quarter, which could result in “residual seasonality” and an underestimate of first quarter growth. To almost ensure confusion for the casual follower, within a few days of each other, researchers at the Federal Reserve Board in Washington reported finding no evidence that the latest GDP estimate was affected by this data problem, while researchers at the Federal Reserve Bank of San Francisco found that this statistical issue did indeed affect the estimated figure. This debate makes it all the more important to understand what other economic indicators have been telling us.
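The residual seasonality claim can be illustrated with a simple check: compare average first-quarter growth to the average for the other quarters across several years. The growth figures below are synthetic, purely to show the shape of the test:

```python
# Synthetic seasonally adjusted annualized growth rates, year -> [Q1..Q4].
# If Q1 persistently undershoots even after adjustment, that gap is the
# "residual seasonality" economists have flagged.
growth = {
    2012: [0.5, 2.0, 2.5, 1.8],
    2013: [0.8, 2.4, 3.0, 2.6],
    2014: [-0.9, 4.6, 4.3, 2.1],
}
q1_mean = sum(g[0] for g in growth.values()) / len(growth)
other_quarters = [rate for g in growth.values() for rate in g[1:]]
other_mean = sum(other_quarters) / len(other_quarters)
gap = other_mean - q1_mean  # a persistently positive gap suggests the problem
```

A one-off gap proves nothing; the debate is precisely over whether the pattern is systematic enough, across many years, to indicate a flaw in the adjustment procedure.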

Retail Sales

The Census Bureau’s report on retail sales reveals significant trends in consumer spending, which makes up roughly two-thirds of GDP.

The shaded region in Graph 1 represents the first quarter of 2015. After rising throughout most of 2014, total retail sales began to fall at the end of the year. The decline continued into 2015 and hit bottom in February, right in the middle of the first quarter.

 

While this is not a good sign, this chart alone does not demonstrate a broad weakening in spending, because the decline could be due to drops in vehicle and gasoline sales, which are highly volatile and do not represent the underlying trends in retail sales.

Graph 2 illustrates the trend in total retail sales after stripping out automobile and gasoline station sales, which we can call “core retail sales.”

 

While not as exaggerated as in Graph 1, there was significant growth in core retail sales throughout most of last year and a noticeable deceleration at the end of the year. This weakening was then followed by a slight decline during the first quarter. So even when excluding volatile portions of retail sales, the measure indicates that the first quarter featured a significant slowdown in spending that likely hampered economic growth.

Consumer Confidence

Paralleling the weakening in retail sales was a decline in consumer confidence during the beginning of the year.

 

After rising throughout all of 2014, consumer confidence began declining during the first quarter of 2015 and continued declining afterward. Changes in consumer confidence reflect households’ willingness to buy goods or make investments. As a result, the decline in confidence during the first quarter could be another indication that consumers were less willing to spend, and a signal that they were also less willing to invest.

Manufacturing

Perhaps one of the most significant drags on the U.S. economy in recent months has been the stark decline in manufacturing. One common way to measure performance in manufacturing is to examine orders of durable goods, which are long-lasting manufactured products like machinery. After excluding defense equipment and aircraft orders, volatile categories that do not represent underlying trends, it is clear that durable goods orders have been suffering for quite some time.

 

Durable goods orders excluding defense and aircraft have been falling since last summer and continued to decline into the first quarter of 2015. While some may argue the decline in durable goods is due to a harsh winter, it is clear that weather cannot be the main cause because the decline started during the summer.

Mirroring the decline in durable goods orders is a noticeable deceleration in another common metric of the manufacturing sector.

Every month, the Institute for Supply Management (ISM) estimates a series of indices that measure growth in both manufacturing and non-manufacturing sectors. ISM’s composite index of manufacturing growth, the PMI, has fallen significantly since last summer and continued to fall through the first quarter of 2015. While the measure remained above 50 and indicates that manufacturing still grew during this period, the sharp decline in the index reveals a significant deceleration in manufacturing growth. This likely contributed to the slowdown in overall economic growth during the first quarter.
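ISM’s indices are diffusion indices. As an illustrative sketch (the survey shares below are made up), the standard formulation adds the share of respondents reporting improvement to half the share reporting no change, which is why 50 marks the line between expansion and contraction:

```python
def diffusion_index(pct_higher, pct_same):
    """Share of respondents reporting an increase plus half the share
    reporting no change; above 50 implies expansion, below 50 contraction."""
    return pct_higher + 0.5 * pct_same

# 30 percent report growth, 45 percent no change, 25 percent contraction:
pmi_like = diffusion_index(30.0, 45.0)  # just above 50: still expanding
```

This is why a falling index that remains above 50, like the PMI described above, signals decelerating growth rather than outright contraction.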

When manufacturing began to stall last year, growth in manufacturing employment decelerated as well.

 

ISM’s index for manufacturing employment coincides directly with the fall in the PMI. However, while the PMI has remained above 50, the employment index hit 50 in March and fell below 50 in April. This indicates that manufacturing employment did not grow in March and it actually decreased in April.

Non-Manufacturing

Even though ISM indicators suggest non-manufacturing is in better shape than manufacturing, non-manufacturing was still growing at a decelerated rate at the beginning of 2015.

 

ISM’s composite index for non-manufacturing growth, the NMI, fell slightly at the end of 2014 and remained at that lower level through the first quarter of 2015. While the index did not continue to fall and remained only a few points below its 2014 peak, it does indicate that the slight deceleration persisted into 2015’s first quarter and is yet another sign that growth slowed last quarter. Fortunately, there was a slight increase in the index in April (the beginning of the second quarter), which suggests the deceleration may have been temporary and non-manufacturing may bounce back over the rest of the year.

International Trade Deficit

Finally, the latest data on international trade is a further indication that economic growth during the first quarter was fairly weak.

 

After the BEA released its advance first quarter GDP estimate, the Census Bureau reported that in March the trade deficit spiked to $51.4 billion, as imports surged by the largest margin on record ($17.1 billion) and exports barely rose. Illustrated in Graph 8, this was well above any estimated trade deficit over the past year. The surge in imports came after the West Coast ports settled their months-long labor dispute and began processing their backlog of imported items. This abrupt expansion in the trade deficit provides further evidence that first quarter economic growth was weak. In fact, the high trade deficit in March has led many analysts to project that the BEA’s first quarter estimate will be revised down to show that the economy actually contracted at the beginning of the year.

Conclusion

While questions remain regarding the validity of the latest GDP report on the first quarter of 2015, declines in retail sales, consumer confidence, durable goods orders, manufacturing, non-manufacturing, and relative exports all point to weak growth. Harsh weather during the winter likely played a role in slowing the first quarter’s growth, but many of the slowing trends began months before the cold weather arrived. The fact that cold weather could be all it takes to completely derail economic growth indicates that the U.S. economy still has a long way to go to recover from the Great Recession.


Recently the Federal Communications Commission (FCC) issued a new set of regulations for the Internet, reclassifying broadband as a utility under Title II in order to achieve “network neutrality.” In part, the change was supported by the view that it would “not have a negative impact on investment and innovation in the Internet marketplace as a whole.”[1] But the logic and the record that the FCC lays out are filled with egregious errors.

Even though no economic analysis was conducted, the agency asserts telecommunications companies would not suffer because investment increased from 1996 to 2005, when Title II was applied. Yet the agency doesn’t even mention the Dotcom bubble era, let alone control for the capital buildup from other Internet players leading up to the 2001 bubble. AAF’s economic analysis finds that $7.1 billion of investment is missing from this industry, due in part to regulation.

Because the agency falls short on financial theory and skips over the history, it misunderstands the broader economic trends. Most importantly, the real benefit to consumers occurred in the post-Title II world, when investment hit its pre-boom levels.

Why Finance Is Important for the Network Neutrality Order

Understanding how investment and regulation interact in the broadband industry is key to detailing the impact of the FCC’s action. Near the beginning of the section on investment, the agency claims that "the key drivers of investment are demand and competition.” Although they are related concepts, neither demand nor competition drives investment. Investment is actually driven by the expectation of a return, a nuance that causes confusion later in the Report and Order.

Every financial manager fundamentally faces two separate questions: where to invest, and how the funds should be raised. The first question is called the investment decision, or capital budgeting decision, while the second is the financing decision. When a business decides to invest for future benefits, it is marked as a capital expenditure (capex for short) and can be directed to either maintain the current business (maintenance capex) or build out new projects (growth capex). Wholly separate from this decision is the financing decision, which is driven by investors and other market actors looking for returns.

Thus the FCC muddles core concepts when they claim that, “Major infrastructure providers have indicated that they will in fact continue to invest under the [Title II utility-style] framework we adopt, despite suggesting otherwise in their filed comments in this proceeding.” As many have estimated, these new rules are likely to have a destructive effect on returns, which is different than an internal company level decision to invest. While the major infrastructure providers will continue to invest to maintain current networks, new and potentially disruptive projects with thinner margins both within the company and within the same industry will find it harder to get off the ground. But more importantly, the small infrastructure players will be hit the hardest since their already thin returns will make it harder for them to expand to take on the big guys. As AAF recently concluded, at least 90 percent of the businesses that will be burdened by the new utility-style network neutrality regulations will be small businesses.[2]      

In his roadshow to drum up support for the new rules, FCC Chairman Tom Wheeler said that “AT&T, Verizon, and Qwest actually increased their capital investments as a percentage of revenue immediately after the Commission expanded Title II requirements pursuant to the ‘96 Telecom Act.” Wheeler is right; capital expenditures as a percentage of revenue increased from 21 percent in 1992 to 27 percent in 2001 only to fall back down after 2005.

However, Wheeler’s statements, which are echoed in the Report and Order, actually hint at a serious problem. Even though hardly a word about it appears in either the report or his speeches, the Dotcom Bubble sits squarely in the middle of Wheeler’s Title II timeline. Leading up to the burst in 2001, investment far outpaced actual consumer demand, which is why capex as a percentage of revenue increased so dramatically. In other words, if Wheeler wants to claim that Title II caused the rise in investment, then he is actually suggesting that Title II caused the Dotcom Bubble, which threw 200,000 people out of jobs and wiped $2 trillion of wealth off the books.

As actual data from the period attest, the Dotcom Bubble seriously complicates the simple story laid out by supporters of Title II, including the FCC. As basic economic analysis will show, the positive case for Title II ultimately doesn’t rest on empirical ground.

The History of the Dotcom Bubble

Most think of the 2001 bubble as driven by the meteoric rise in the stock valuations of companies like Pets.com and Startups.com. But the Dotcom Bubble actually comprised overvaluations in two groups of companies.

The first group included all the upstream firms like Google, Yahoo!, eBay, and Amazon that ultimately survived this crucible. The first .com was registered in 1985, but it took the privatization of the Internet backbone ten years later to actually spark the Dotcom rush. Coincidentally, just a year after the commercialization of the Internet, Congress passed the Telecommunications Act of 1996, overhauling the legal regime for telephone service while leaving the Internet lightly regulated. The primary focus of the act was to create a more stable telecommunications regime, but it came just as the Internet was developing, complicating many of the current folk theories about Title II and investment.

The second group of firms affected by the Dotcom Bubble was the downstream infrastructure players like WorldCom and Global Crossing. Even though many of these companies were racked by scandal afterwards, far less attention has been paid to the real investments that went to build out the infrastructure for the new economy. Far and away the biggest recipients of this cash were the variety of companies that built fiber networks, including telecommunication firms regulated under Title II, cable companies not regulated under this law, and a whole range of other providers.[3]

Optimism marked this period of the Internet’s development, typified by Bill Gates’ claim in PC Mag that, "We’ll have infinite bandwidth in a decade’s time."[4] Everyone knew consumer behavior was fundamentally changing. At the time it was apocryphally said that Internet traffic was doubling every 100 days, a trend that seemed to follow a hyped Moore’s law.[5] Throughout the late 90s, then-FCC Chair Reed Hundt repeated this claim, making an impassioned case for its implications in his 2000 book.[6] In March of that same year, the next FCC chairman, Bill Kennard, reiterated the notion, saying that, “Internet traffic is doubling every 100 days. The FCC’s ‘hands–off’ policy towards the Internet has helped fuel this tremendous growth.” At the height of the bubble in 2000, the New York Times cited the stat approvingly five times; the heads of AT&T, Global Crossing, and Level 3 all made similar growth projections; and traders were investing according to the estimate.[7] Moreover, countless business plans in Silicon Valley based their business models on the 1998 Department of Commerce “The Emerging Digital Economy” report, which teased out the implications of the rising demand for broadband.[8]

There was logic to this new, bolder investment cycle. Upstream firms, like the ill-fated Broadcast.com, would provide video and online products while network providers would build out the Internet connections to serve that content to consumers. The FCC has long called this the virtuous cycle, but the term that has been used in economics for over a hundred years is complementary goods. Goods that complement each other, like razor blades and razor handles, are worth more in total. Similarly, as more content flows over Internet infrastructure, Internet access itself becomes more valuable. On the investment side, the firm thus receives a higher return due to this complementarity, which is then partially reinvested to build out more robust pipes.

In expectation of these higher returns, at least $100 billion was dumped into the construction of new fiber to satiate the perceived need.[9] Nearly overnight, massive companies like Global Crossing and Level 3 sprang up to provide network services. Soon fiber was being strung up in cross country networks. The pace of development was so feverish that the amount of fiber sold would not recover from its 2001 high until 2012. The enthusiasm also extended into undersea cables; nearly $12 billion was invested in 2001 at the height of the market, compared to just over $1 billion in 2013 and 2014.[10]

According to the last fiber report released by the FCC, the Regional Bell Operating Companies increased their deployments of fiber by nearly 47 percent between 1995 and 1998.[11] While there are no official reports on the total fiber deployed after 1998 by the FCC, estimates place the industry wide increase at 46 percent from 1998 to 2001.[12]

As everyone learned, expectations were out of line with the fundamentals, especially consumer demand, which proved far lower than projected.[13] Level 3 lost significantly, and only survived via a $500 million cash infusion from Warren Buffett. Global Crossing, which managed to build one of the largest fiber backbones and was once worth billions, went bankrupt.[14] WorldCom became mired in an accounting scandal after it tried to cover up losses. E.spire Communications, XO Communications, Velocita, and McLeod all filed for Chapter 11 as well.

The downturn was abrupt, but the fiber was still there. A year after the crash, just 2.7 percent of the fiber capacity was being used.[15] As analyst Jonathan Lee points out, the market was still so depressed in 2006 that it was cheaper for Level 3 to buy capacity from other companies than to use its own dormant capacity, because operational and replacement costs were that much higher.[16] Over the next decade, ISPs would buy up these systems to extend their footprint.

An Economic Survey of the Damage

Only with the 2001 Dotcom Bubble as background can investment changes be properly understood.

The table below displays investment data from a number of communication companies from 1996 to 2005, including cable companies and Local Exchange Carriers (LECs), which were regulated most heavily by provisions in Title II. Data for 1996 through 2001 come from official FCC ARMIS filings and are designated A, while figures from 2002 onward are estimates and are designated E. The last column shows the percentage change over the 1996 to 2005 period.

Table 1: Investment in Millions

| In millions | 1996A | 1997A | 1998A | 1999A | 2000A | 2001A | 2002E | 2003E | 2004E | 2005E | 1996-2005 Change |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Local Exchange Carriers | $18,138 | $20,125 | $21,592 | $27,446 | $30,972 | $29,392 | $18,500 | $15,000 | $15,501 | $16,516 | -9% |
| CLECs | 862 | 1,471 | 2,752 | 5,064 | 8,528 | 4,458 | 1,500 | 600 | 500 | 400 | -54% |
| IXCs | 16,634 | 21,620 | 26,447 | 35,097 | 50,956 | 39,105 | 12,800 | 11,500 | 11,842 | 12,134 | -27% |
| ISPs | 147 | 391 | 1,016 | 2,135 | 4,739 | 2,290 | 1,000 | 600 | 600 | 500 | 240% |
| Cable Companies | 6,681 | 6,484 | 9,046 | 12,595 | 17,920 | 17,338 | 14,800 | 12,500 | 11,875 | 12,172 | 82% |
| U.S. Total | 42,462 | 50,091 | 60,852 | 82,337 | 113,115 | 92,583 | 48,600 | 40,200 | 39,958 | 41,340 | -3% |

The 2001 Dotcom Bubble is clearly evident in the numbers. In the period after 2001 until 2005, when the FCC officially moved DSL out of Title II and into a lightly regulated regime, the largest telecom firms sharply reduced their investment from a high of $30 billion in 2000 to $16 billion in 2005. As for the competitive local exchange carriers (CLECs), who were supposed to benefit from Title II, they too reduced their investments, from a high of $8 billion in 2000 to $500 million by 2005. Even after cable broadband was officially recognized as a Title I service in 2002, the capital expenditures of cable companies decreased by 18 percent over the next couple of years. Under a modicum of scrutiny, the FCC’s narrative of investment falls apart.

Taken over the entire period, LECs, which bore the brunt of Title II classification, saw a 9 percent decrease in investment, which seriously undermines the FCC’s positive story about Title II. On the other hand, cable companies, which have never been subject to Title II regulation, saw an 82 percent increase in capital expenditures over the same time period. If the LECs had grown at the same rate as the cable companies, they would have ended up with nearly $33 billion by 2005. In other words, the industry left nearly $1.6 billion in investment on the table every year. It is worth noting that during the 11 years before Title II was applied, the telephone industry’s investment grew an average of 5 percent per year.[17]
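The counterfactual behind those figures can be reproduced directly from the Table 1 endpoints. This is a back-of-the-envelope sketch of that arithmetic, not AAF’s underlying model:

```python
# Table 1 endpoints, in $ millions
lec_1996, lec_2005 = 18_138, 16_516      # Local Exchange Carrier capex
cable_1996, cable_2005 = 6_681, 12_172   # cable company capex

cable_growth = cable_2005 / cable_1996          # ~1.82x over the period (+82%)
lec_counterfactual = lec_1996 * cable_growth    # what LECs "should" have reached
shortfall = lec_counterfactual - lec_2005       # total gap by 2005
per_year = shortfall / 10                       # spread across 1996-2005

print(round(lec_counterfactual))  # 33045 -> "nearly $33 billion by 2005"
print(round(per_year))            # 1653  -> "nearly $1.6 billion every year"
```

The counterfactual simply applies the unregulated cable industry’s growth rate to the LECs’ 1996 starting point.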

One method economists use to quantify the effect of a policy change is a difference-in-difference analysis. It estimates the effect of a treatment, such as a policy change, on an outcome by comparing the average change over time in the treatment group to the average change in the control group.

Comparing the beginning and the end of the regulatory period using a simple difference-in-difference model, nearly $7.1 billion is missing from the bottom line of the major telecommunications firms by 2005.[18] While not the only cause, regulation likely helped deter billions in investment. More granular data could help tease out the various causes, but the FCC never conducted such a study, even though Commissioner Pai, and Commissioner McDowell before him, have both called for rigorous empirical studies.
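Applied to the Table 1 endpoints, with the Title II-regulated LECs as the treatment group and the unregulated cable companies as the control, the formula from footnote [18] can be sketched as follows (a minimal illustration using the table’s 1996 and 2005 figures, not the actual model code):

```python
def diff_in_diff(treat_before, treat_after, ctrl_before, ctrl_after):
    """DiD = (Y_treat,after - Y_ctrl,after) - (Y_treat,before - Y_ctrl,before)."""
    return (treat_after - ctrl_after) - (treat_before - ctrl_before)

# Table 1, $ millions: LECs (Title II "treatment") vs. cable companies (control)
did = diff_in_diff(treat_before=18_138, treat_after=16_516,
                   ctrl_before=6_681, ctrl_after=12_172)
print(did)  # -7113: roughly $7.1 billion of "missing" investment
```

A negative result means the treatment group fell behind the control group over the period, which is the sense in which investment is "missing."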

What About Consumers?

Ultimately, consumers are the most important part of this equation, and they benefited handily after Title II was relegated to the trash heap. YouTube, Facebook, and Netflix all became household names in the lightly regulated world we are now leaving behind. More than any other development, the rise of Content Delivery Networks has been a boon to online video and to broadband speeds. Since 2007, average U.S. speeds have increased by 435 percent.[19] Even though video over the Internet was first dreamed about in the mid-1990s, it is consumer-led demand during the lightly regulated period that is making it an actual replacement for TV.

Advocates for Title II reclassification have sold their plan as real network neutrality, even though there are countless ways to ensure an Open Internet. As expressed in the recent Order, one of the positive arguments for reclassification is that it won’t be harmful to investment. Sadly, history does not support this rosy view. While the FCC’s net neutrality order paints this time period in a good light, it does a very poor job of separating the fever from the facts. Consumers will suffer in the end when the legal dust settles, and many of these problems could have been solved had the FCC done its due diligence.


[1] Federal Communications Commission, In the Matter of Protecting and Promoting the Open Internet, Report and Order On Remand, Declaratory Ruling, and Order, http://transition.fcc.gov/Daily_Releases/Daily_Business/2015/db0403/FCC-15-24A1.pdf, at 191

[2] Will Rinehart, Small Businesses Bear the Brunt of Network Neutrality Rules, http://americanactionforum.org/insights/small-businesses-bear-the-brunt-of-network-neutrality-rules

[3] Elise A. Couper, John P. Hejkal, and Alexander L. Wolman, Boom and Bust in Telecommunications, https://www.richmondfed.org/publications/research/economic_quarterly/2003/fall/pdf/wolman.pdf

[4] George Gilder, The Bandwidth Tidal Wave, http://www.seas.upenn.edu/~gaj1/bandgg.html

[5] Andrew M. Odlyzko, Internet traffic growth: Sources and implications, http://www.dtc.umn.edu/~odlyzko/doc/itcom.internet.growth.pdf

[6] Reed Hundt, You Say You Want a Revolution: A Story of Information Age Politics.

[7] Robert D. Hershey Jr., MARKET INSIGHT; A Nasdaq Correction; Now Back To Business, http://www.nytimes.com/2000/03/19/business/market-insight-a-nasdaq-correction-now-back-to-business.html

[8] Department of Commerce, The Emerging Digital Economy, http://govinfo.library.unt.edu/ecommerce/EDEreprt.pdf

[9] See Andrew M. Odlyzko, Bubbles, gullibility, and other challenges for economics, psychology, sociology, and information sciences, http://firstmonday.org/ojs/index.php/fm/article/viewArticle/3142/2603#2a & Rebecca Blumenstein, How the Fiber Barons Plunged The U.S. Into a Telecom Glut, http://www.wsj.com/articles/SB992810125428317389

[11] Federal Communications Commission, FCC Releases Fiber Deployment Update, http://transition.fcc.gov/Bureaus/Common_Carrier/Reports/FCC-State_Link/Fiber/fiber98.pdf

[12] Christiaan Hogendorn, Excessive(?) entry of national telecom networks, 1990–2001, http://www.stern.nyu.edu/networks/03-07_Hogendorn_Excessive_Entry.pdf

[13] Rintaro Kurebayashi, Nathaniel Osgood, and Sharon Gillett, Dynamic Analysis of the Long-Distance Telecom Bubble, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.408.6214&rep=rep1&type=pdf

[14] Liana B. Baker, One Of The Industries Crushed By The Dotcom Crash Is Making A Big Comeback, http://www.businessinsider.com/r-business-bandwidth-demand-lights-up-once-dark-fiber-sector-2014-25

[15] Anton Troianovski, Optical Delusion? Fiber Booms Again, Despite Bust, http://www.wsj.com/articles/SB10001424052702303863404577285260615058538

[16] TIA, Investment, Capital Spending and Service Quality in U.S. Telecommunications Networks: A Symbiotic Relationship http://www.tiaonline.org/policy_/publications/filings/documents/Nov13-2002_CapEx_QoS_Final.pdf

[17] Bureau of Economic Analysis, Fixed Asset Table Table 3.7S. Historical-Cost Investment in Private Structures by Industry, http://1.usa.gov/1HUqbyk

[18] The model used here is ΔDiD = (Y_treatment,after − Y_control,after) − (Y_treatment,before − Y_control,before), where the treatment is the Title II regulation. Investment data from 1996 represent the period before the treatment, while investment data from 2005 represent the period after the treatment.

As part of the Peterson Foundation’s Solution Initiative, the American Action Forum developed a “scoreable” policy proposal to set the federal budget on a sustainable, long-term path. The AAF solution, “Balanced: 2028,” focuses on three key policy areas: tax reform, entitlement reform, and immigration reform, while reforming every major area of federal spending.

View and download the documents below to learn more about the proposal. 

MEMORANDUM TO: The 45th President and 115th Congress

 

Balanced: 2028

 

Balanced: 2028 - AAF's Budget Proposal To Address the Debt & Secure A Brighter Economic Future

  • A Supreme Court ruling in favor of King could free 11.1 million from individual mandate penalties.

  • Eliminating the 30-hour work week could open up jobs to 3.3 million part time workers.

Executive Summary

The Supreme Court’s pending decision in King v. Burwell could upend the way premium subsidies are distributed through the federal health insurance exchanges in as many as 37 states. The impacted states are those that declined or failed to establish their own exchanges under the Affordable Care Act (ACA). Examining the insurance market effects we find that:

In addition, such a decision would tend to reverse the damaging labor market impacts of the ACA. Our analysis indicates that these impacts would be:

Introduction

King v. Burwell, which was argued before the Supreme Court in March 2015 and will likely be decided in June 2015, poses a question of statutory interpretation. The plaintiffs argue that the text of the ACA should be read literally: it only authorizes premium subsidies for people in “exchanges established by the State under §1311.” A separate section, §1321, describes the establishment of a federal exchange by the Secretary of Health and Human Services (HHS) if the state does not create its own exchange. A rule issued in 2012 by the Internal Revenue Service (IRS) allowed premium subsidies to be paid through exchanges established by the Secretary, but plaintiffs argue these subsidies are illegal since there is no congressional authorization for that spending.

If the plaintiffs are successful, the immediate effect will be to stop the flow of premium subsidies to states that have not established their own exchange. A secondary effect will be that many individuals in those states will be exempt from the individual mandate penalty because without the subsidies, they will qualify for an affordability exemption. Likewise, all employers in states without state-established exchanges will be exempt from the employer mandate penalties, which are only triggered if an employee receives a federal tax subsidy for health insurance through an exchange. This will cause a fundamental shift in the economies of the “King states” by altering the way insurance is bought and sold, and by redefining the relationships between employers and employees.

Methodology

In determining how to address the questions presented by a possible ruling in favor of the plaintiffs in King, we had to make some simplifying assumptions. First, we assumed that no federal congressional action would be taken between the announcement of a ruling in King and the beginning of the 2016 open enrollment season; in reality there is every indication Congress would take some action, but it is impossible at this stage to predict what it would be. Second, when considering the effects on employers and employees, we assumed that all employees are residents of the state in which they are employed. We rely loosely on the assumption that the Supreme Court would stay a ruling for 3-6 months to give states and the administration time to react to the consequences of its ruling. Finally, we assumed that the language limiting the permissible establishment of §1311 exchanges to dates before January 1, 2014 would not be enforced and that the states would be afforded an opportunity to establish exchanges in the future, particularly if the ruling in King is favorable to the plaintiffs; the legality of this non-enforcement may ultimately be settled by subsequent litigation.

Individual Market Effects

A ruling for King would bring about changes in the individual market with myriad impacts on various stakeholders, including newly-insured individuals, previously-insured individuals, and individuals facing tax penalties as a result of the ACA.

Should the plaintiffs in King win, around 6.6 million people would lose their health insurance subsidies.[1] This is about 87 percent of the total enrolled population of the federal exchanges as of February 22, 2015. Among the states with federal exchanges, however, there will be significant geographical differences in the impact on individuals. For instance, about 92 percent of enrollees in North Carolina’s exchange are subsidized, while 71 percent of New Hampshire’s exchange enrollees are subsidized.

There are also significant differences in the value of the subsidies in different states. In the 2015 enrollment year, the average annual tax subsidy received in the federal exchanges will be approximately $3,156, or $263 per month, yet there is significant geographical variation among states.[2] For example, in Arizona the average annual subsidy is only $1,860, while it is $6,408 in Alaska. Alaska, however, is an outlier with unique geographical and demographic characteristics that drive up costs in that state; the next highest average subsidies are $5,040 in Wyoming and $4,236 in Mississippi.

Many individuals that lose access to a premium subsidy will likely continue to purchase insurance, either by paying a higher price for their current plan or by switching to cheaper insurance. In 2014, the McKinsey Institute estimated that about 74 percent of exchange enrollees were insured before passage of the ACA, which indicates that these individuals would be able to purchase insurance post-King as well, if they were so inclined.[3] It is important to keep in mind, however, that the Essential Health Benefits and mandatory community rating attendant on the ACA’s Qualified Health Plan requirements have made previously-available insurance plans unavailable or unaffordable to many, which may reduce re-uptake.

Despite obstacles to gaining insurance imposed by the ACA, most subsidized individuals would still be able to access insurance post-King. Exemption from the individual mandate would allow individuals over 30 years old to enroll in catastrophic plans, an option currently foreclosed to them under the ACA. These plans have lower actuarial value than the “metal-tiered” plans of the ACA, but actuarial value does not measure quality of coverage; rather, it reflects the expected split between dollars paid out-of-pocket and dollars paid by the insurer to cover the cost of care.

Many individuals living in states that will be impacted by the King decision will be exempted from the individual mandate and its tax penalty. Under the ACA, a household is not required to pay the penalty if the lowest cost health insurance plan available is more than 8 percent of household income. Without subsidies, many households will newly fall into this exemption category. These households will be freed from the burden or threat of the tax, which will average about $1,200 for this population in 2015.[4] In total, 11.1 million individuals will no longer face the threat of a tax penalty. Absent the subsidies, most exchange enrollees would be exempted from the threat of this penalty, and those not exempted would have incomes approaching 400 percent of the Federal Poverty Level.

The individual mandate penalty is not the only tax headache that could be avoided, however. It is estimated that in 2014 about 50 percent of individuals who purchased insurance through the exchanges and received subsidies will have to repay some portion of those subsidies through their taxes. On average, these 3.85 million or so individuals will owe the IRS around $794.[5] Should the plaintiffs prevail, however, these people will not face this problematic tax-season surprise.

Individuals will also face new incentives in King states. Because premium subsidies available through the ACA increase the value of not working, the Congressional Budget Office (CBO) estimates that over 2 million workers will be drawn out of the national labor force. If the Court rules in favor of King, the American Action Forum (AAF) estimates that 1.27 million workers will be added to the labor force by 2017.[6]

Employer Market Effects

A ruling in favor of the plaintiffs in King would have implications for the employer market as well. Since the employer mandate will be unenforceable, some employers may drop insurance coverage for their employees.

In the 37 states that are likely to be impacted by King, about 95 million people are covered by employer-sponsored insurance. However, 96.1 percent of large employers and 60 percent of all employers offered insurance before the employer mandate went into effect.[7] These employers are unlikely to drop coverage for their employees regardless of the outcome in King. Some reports suggest that the share of firms offering employer-sponsored insurance remained statistically unchanged even after the mandate went into effect.

Because they will not be subject to the mandate, small to medium-sized employers may expand employment to more people, or allow their employees to work more than 30 hours per week. AAF estimates that there are currently about 3.3 million part-time workers in the states affected by the ruling who are seeking but unable to find full-time employment.[8] Others have estimated that at least 20 percent of businesses cut hours for workers in 2013, in part to remain below the 50-employee threshold that triggers the employer mandate.[9]

Currently there are about 261,844 employers in the 37 King states that are subject to the mandate penalty, 146,407 of which are medium-sized employers, the group most impacted by the employer mandate. Absent the administrative and financial burdens imposed by the ACA mandates, the recent trend away from full-time hiring that has cost Americans more than 350,000 jobs, 237,000 of which are in King states, may be reversed, and these and countless other small employers may begin hiring more full-time workers.

How States May React

It is not completely clear which states the ruling in King would apply to. It is possible that it will be enforced in all states with any measure of reliance on the federal exchange platform (currently 37 states). The ruling may only be applied to states that have never had their own platform, exempting states that established exchanges and then switched, such as New Mexico and Nevada, from the effects of the ruling. It is also possible that King will only apply in states that have not passed legislation or executive orders attempting to create exchanges, which means that the 8 hybrid exchanges will be exempted. The Secretary of HHS has the authority to deem which states have exchanges “established by the State under §1311,” and her decision will likely reflect what the states want. The inevitable objections to her determinations will eventually have to be settled in the courts if no legislative fix is offered.

Assuming that all 37 possible states are subject to the ruling, there will likely be four main categories of responses to the ruling.

Some states that were politically, financially, or technologically unable to establish their own exchanges in 2013 may attempt to do so before the next open enrollment period. In Delaware, for instance, Democrats have unified control of both the Governorship and the legislature, so the state could be among the first to establish its own exchange. Iowa, Maine, Montana, New Hampshire, New Jersey, Pennsylvania, and West Virginia all have Democrats in control of either the Governor’s mansion or the legislature, either of which could initiate the establishment of an exchange if that appeared to be the course of action that keeps subsidies flowing. Despite cautionary tales from states with failed or unstable state-based exchanges, this endeavor could be facilitated by innovative private companies that are already offering to sell or rent states the technology needed to quickly take over control of their own “pre-fab” state exchanges.

On the other hand, some states have passed legislation that precludes the establishment of an exchange. Subsequent legislation can always repeal previous laws, but these laws, known as Health Care Freedom Acts, prevent governors from establishing an exchange by executive order. Governors in Georgia, Louisiana, Missouri, Utah, and Virginia will all need legislation passed by their legislatures before an exchange may be established.

Taking this concept a step further, Alabama, Arizona, Ohio, Oklahoma, and Wyoming have passed Health Care Freedom Amendments to their state constitutions. For these states to establish exchanges, they must meet the enhanced constitutional threshold for an amendment rather than the simple majority required to pass a typical bill. Because of this rather large obstacle, it is less likely that these states will succeed in any attempt to establish exchanges, especially before the 2016 open enrollment period.

Obstacles to establishment aside, some states are unlikely even to try to establish exchanges, instead taking advantage of the tax exemptions the King ruling would apply to their citizens and businesses. For example, during the 2012 debate over whether to establish an exchange, Indiana Governor-elect (now Governor) Mike Pence cited the employer mandate exemption and its benefits to Indiana’s labor market as a specific reason the state should not establish one. Other governors and state legislatures will again face the decision of whether to establish an exchange, and this time they will know that by declining to do so, they can help create job growth in their states.

Long-Term Effects

Many of the effects of the decision in King will not be immediately apparent. That said, there will be longer term changes in the individual insurance market, labor market, as well as the way insurance plans are designed.

In a post-King world, individuals will be better situated to make the most economically appropriate decisions about purchasing health coverage. Without the mandate penalty, many individuals may decide to purchase less generous insurance (regardless of actuarial value) if it is better suited to their needs and lifestyle. This change will allow individuals to purchase less expensive insurance and retain more personal control over their resources.

We might expect to see higher wages in the states impacted by King, even among employers that do offer insurance to their employees, because they will be able to make employment decisions and run their businesses more efficiently. AAF estimates that the ACA’s regulations have reduced annual wages for employees of small- and medium-sized businesses by $830 to $940 each, totaling $13.6 billion in the King states. There will be fewer administrative burdens when employers offer full-time employment to qualified employees, who will have the opportunity to become specialized and therefore more efficient in their work. We might also expect more job growth in these states as employers expand their businesses and offer more hours without the cost of complying with the ACA.

Insurers may also begin adapting their plan designs to better meet the physical and financial needs of citizens of these states. These plans will still be subject to the many restrictions on plan design imposed by the ACA, but the restrictions on what types of plans individuals may purchase will be lifted. For example, an individual over the age of 30 would be able to purchase catastrophic health insurance without being subject to the individual mandate penalty if standard policies are not affordable to them. Community rating, Essential Health Benefit requirements, and other plan-design rules would, however, remain in effect absent further legislative action.
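The catastrophic-coverage rule described above is essentially a two-branch eligibility test. The sketch below is illustrative only: the function name is hypothetical, and the 8 percent affordability share is a stand-in for the statutory threshold, which is adjusted annually.

```python
def catastrophic_eligible(age, annual_premium_cheapest_plan, household_income,
                          affordability_share=0.08):
    """Sketch of the catastrophic-plan eligibility rule discussed above.

    Catastrophic plans are generally limited to people under 30, or to
    those for whom standard coverage is unaffordable. The 8 percent
    affordability_share is illustrative, not the statutory figure.
    """
    if age < 30:
        return True
    # Over-30 purchasers qualify only if the cheapest standard policy would
    # cost more than the affordability share of household income.
    return annual_premium_cheapest_plan > affordability_share * household_income
```

For example, a 45-year-old with $40,000 in household income facing a $5,000 cheapest standard premium (12.5 percent of income) would pass the affordability branch, while the same person facing a $2,000 premium would not.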

Conclusion

While a ruling in favor of the plaintiff in the King v. Burwell case would prohibit subsidies from flowing to states without their own exchanges, there are also economic benefits tied to the elimination of the individual mandate for some and the employer mandate for all affected citizens. Almost 11.1 million individuals would become eligible for an exemption from the individual mandate penalty, and 1.27 million would be incentivized to re-enter the labor market. Millions of employers would also be released from the employer mandate penalty, which could lead to wage increases of up to $940 per employee and would eliminate employment restrictions that have left 3.3 million part-time workers unable to work more hours.


[1] 6.6 million individuals were enrolled in the 37 King state exchanges as of March 31, 2015 (an earlier version of this paper estimated 7.7 million based on reports from February, 2015). This is not a static number as some individuals may not have paid their first premiums, or have left the exchange due to a new job or otherwise attaining access to employer sponsored coverage. Likewise, more individuals may enroll in the exchanges through the administration’s enrollment period extension, or as a result of a qualifying life event, such as a move or change in family size.



[1] http://aspe.hhs.gov/health/reports/2015/MarketPlaceEnrollment/Mar2015/ib_2015mar_enrollment.pdf; February 15, 2015; http://www.cms.gov/Newsroom/MediaReleaseDatabase/Fact-sheets/2015-Fact-sheets-items/2015-06-02.html.

[2] Id.

[3] http://www.forbes.com/fdc/welcome_mjx.shtml

[4] AAF estimates of the number of people that are subject to the individual mandate and the average applicable penalty rely on demographic information from the 2011-2013 American Community Survey and health insurance premium data published on data.healthcare.gov.

[5] http://kff.org/health-reform/press-release/new-analysis-half-of-u-s-households-eligible-for-a-tax-subsidy-under-the-health-law-would-owe-a-repayment-while-45-percent-would-receive-a-refund/

[6] Using CBO’s figures and estimated total premium subsidies spent in each state, we project the increase in workers relative to the CBO baseline that would occur in the 37 states that King would impact. In total, we estimate that King would add 1.2 million workers to the labor force by 2017 and 1.6 million by 2024.

[7] http://kff.org/report-section/ehbs-2014-summary-of-findings/; http://www.rwjf.org/content/dam/farm/reports/reports/2013/rwjf405434

[8] AAF estimates of part-time workers seeking more hours use the February 2015 Current Population Survey, and reflect the number of employed workers that work for fewer than 30 hours a week but would prefer a full-time job.

[9] http://www.rwjf.org/content/dam/farm/reports/reports/2013/rwjf405434


Since the financial crisis, both domestic and international regulators have advanced efforts to revise capital standards and streamline the fragmented international regulatory environment facing large and globally active insurance companies, so-called internationally active insurance groups (IAIGs) and a smaller set of global systemically important insurers (G-SIIs). Given that these “global insurers” and their many subsidiaries serve customers in countries around the world, spanning multiple regulatory jurisdictions, some see value to financial stability in restructuring their supervision; others say that these efforts have intrinsic merit in an increasingly globalized marketplace irrespective of the financial crisis and its implications. This paper briefly highlights the policy issues raised by these regulatory initiatives and explores the potential impacts on the insurance industry and American consumers.

Background                                                                

The U.S. is the largest insurance market in the world with $1.3 trillion in premiums in 2013 or 27 percent of the world market.[1] Insurance companies operating in the U.S. are primarily regulated at the state-level. Global insurance groups, however, are active internationally and therefore must comply with the regulatory regimes of every jurisdiction in which they operate. For example, MetLife operates in 50 countries through 359 subsidiaries and AIG operates in 95 countries and jurisdictions.[ii] Despite state-level supervision in the U.S., various regulatory and advisory bodies play a part on the federal level and international stage, and increasingly so since the financial crisis. Their roles are summarized in Table 1.

TABLE 1. THE ROLE OF VARIOUS REGULATORY & ADVISORY BODIES

 

GLOBAL

INTERNATIONAL ASSOCIATION OF INSURANCE SUPERVISORS (IAIS): IAIS is a standard setting body whose members represent insurance regulators in nearly 140 countries. In coordination with the Financial Stability Board (FSB) of which it is a member, IAIS is charged with developing the streamlined global regulatory framework for IAIGs and identifying G-SIIs. FIO, FRB, NAIC and state insurance regulators all participate in IAIS and its initiatives.   

FEDERAL

FINANCIAL STABILITY OVERSIGHT COUNCIL (FSOC): FSOC is a council of America’s financial regulators given the task of identifying activities that pose risks to America’s financial stability. It has designated 3 insurance companies as systemically important, which then subjects those companies to increased regulation and supervision by the FRB.

FEDERAL INSURANCE OFFICE (FIO): An entity within the Treasury Department, the purpose of FIO is to be a federal level monitor of the insurance industry. Though it has no regulatory oversight powers, it has the potential to effectively represent insurance interests abroad and add insurance expertise to the federal government. 

NATIONAL ASSOCIATION OF INSURANCE COMMISSIONERS (NAIC): Serving as a standard-setting body, NAIC’s members are the chief insurance regulators of all the states and U.S. territories. Through NAIC, insurance regulators establish best practices, produce model laws, coordinate oversight, and are represented abroad.

FEDERAL RESERVE BOARD (FRB): The FRB is responsible for the regulation of insurance companies designated by FSOC as well as companies with a savings and loan company within the insurance group. These regulations may include higher capital standards, supervision, resolution plans, and other measures.

STATE

CHIEF INSURANCE REGULATORS OF ALL 50 STATES, DC & 5 TERRITORIES: Throughout U.S. history, state governments have been charged with the regulation of insurance companies, a principle enshrined in the McCarran-Ferguson Act passed in 1945. While NAIC coordinates insurance regulation and promotes uniformity, each state has its own insurance commissioner or other regulator responsible for supervising the companies within its borders.

           

 

The Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) was passed in 2010 to address systemic weaknesses made apparent by the financial crisis, end “too big to fail,” and protect consumers from abusive practices by the financial services industry. Insurance companies are widely acknowledged to have weathered the financial crisis better than banks and other financial companies.[iii] In fact, many doubt that most insurance and reinsurance companies pose any systemic risk.[iv][v][vi][vii][viii][ix] Nonetheless, insurance companies have not been immune from efforts to increase capital and regulatory supervision on the federal level.

Dodd-Frank created the Financial Stability Oversight Council (FSOC) to coordinate macroprudential oversight amongst America’s financial regulators, and identify and address threats to overall financial stability. Importantly, FSOC was given the power to designate nonbank financial firms (including insurance companies) whose failure would trigger a crisis, label them “systemically important financial institutions” (SIFIs), and subject them to increased regulation by the Federal Reserve Board (FRB).[x] Dodd-Frank also created the Federal Insurance Office (FIO) within the U.S. Treasury Department to monitor the U.S. insurance industry and address the lack of insurance expertise on the federal level. Its director is a nonvoting member of FSOC, serving in an advisory capacity. Since its creation, FSOC has designated three insurance companies as SIFIs—AIG, Prudential Financial and MetLife (which is contesting the designation in court[xi]). The process by which FSOC designated these firms as SIFIs has, in and of itself, been greeted by criticism.[xii] But more pressing for insurers, it is still unclear how FSOC’s process will dovetail with regulatory developments on the global stage and how the FRB will apply capital standards for SIFIs.

The New Regulatory Frontier for Global Insurers

At the urging of the Group of 20 (G-20), the Financial Stability Board (FSB) and the affiliated International Association of Insurance Supervisors (IAIS) were tasked with facilitating coordination and cooperation among insurance supervisors. FSB and IAIS have sought to revise capital standards, identify broad risks to financial markets and their stability, and generally make insurance regulation more efficient and streamlined. These regulatory initiatives, some akin to domestic efforts in Dodd-Frank, are summarized in Table 2 and broadly aimed at three goals: enhanced financial stability, more effective and efficient jurisdictional coordination, and consistent best practices. The initiatives target only global insurers: IAIGs and G-SIIs.

TABLE 2. IAIS REGULATORY INITIATIVES

INITIATIVE

DESCRIPTION

AFFECTED ENTITIES

BCR

IAIS completed the Basic Capital Requirement (BCR) in 2014, which is intended to be a uniform capital baseline for applying higher-loss absorbency requirements to G-SIIs on a group basis.

G-SIIs

HLA

IAIS is in the process of developing higher loss absorbency (HLA) requirements that would apply to G-SIIs. G-SIIs will be required to maintain the base capital level mandated by the BCR plus the additional HLA requirements. Higher levels of capital may be required for non-insurance business, or a broader set of activities may be considered. IAIS expects to release the HLA requirements for consultation in late June and to complete its work by the end of 2016.

G-SIIs

ICS

The International Capital Standard (ICS) will supplant the BCR as the foundation for the HLA standard for G-SIIs, but will be applicable to all IAIGs as part of ComFrame. IAIS plans to develop the ICS by the end of 2016, with members beginning implementation in 2019. Numerous policy decisions, such as the approach to valuation, remain, and field testing must be completed before members begin implementation. IAIS has also noted a transitional period may be needed as individual jurisdictions gradually phase in requirements.

G-SIIs & IAIGs

ComFrame

The Common Framework for the Supervision of IAIGs or ComFrame, which includes the ICS, is meant to be a comprehensive framework for regulatory supervisors to address group-wide activities and risks. Designed to be integrated and multilateral, the ultimate aim is to make international group supervision more effective and efficient.

G-SIIs & IAIGs

IAIGs are predominantly defined by their size (at least $50 billion in assets, or gross written premiums of not less than $10 billion on a rolling three-year average) and global reach (premiums written in at least three jurisdictions, with not less than 10 percent of gross premiums written outside the home jurisdiction). IAIS has identified about 50 IAIGs globally, of which only a subset are American companies.[xiii]
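The IAIG size and reach criteria above amount to a simple two-part threshold test, sketched below. The function name and units are illustrative (dollar amounts, with premiums abroad expressed as a fraction of gross written premiums); they are not drawn from IAIS materials.

```python
def is_iaig(total_assets, gross_written_premium_3yr_avg,
            num_jurisdictions, share_premiums_abroad):
    """Illustrative classification under the IAIG criteria cited above."""
    # Size: at least $50 billion in assets, OR gross written premiums of
    # at least $10 billion on a rolling three-year average.
    size = (total_assets >= 50e9
            or gross_written_premium_3yr_avg >= 10e9)
    # Reach: premiums written in at least three jurisdictions, with at
    # least 10 percent of gross premiums written outside the home market.
    reach = num_jurisdictions >= 3 and share_premiums_abroad >= 0.10
    return size and reach
```

A group meeting the size test but writing premiums in only two jurisdictions would not qualify; both prongs must hold.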

G-SIIs, in contrast to IAIGs, are identified as posing systemic risk. The IAIS is in the process of revising the G-SII assessment methodology, slated for completion this year.[xiv] In the meantime, nine companies have been identified as G-SIIs—Allianz SE; American International Group, Inc.; Assicurazioni Generali S.p.A; Aviva PLC; AXA S.A.; MetLife, Inc.; Ping An Insurance Company of China, Ltd.; Prudential Financial, Inc.; & Prudential PLC.[xv] American companies AIG, Prudential Financial and MetLife have therefore been identified as both SIFIs and G-SIIs.

For reference, there are more than 6,000 insurers in the United States.[xvi] It is expected that IAIS’s initiatives may impact only a handful of the largest and most globally active insurance groups.

The initiatives in Table 2 are scheduled to take effect in 2019. FIO, FRB, NAIC and state insurance representatives all participate in IAIS working groups as representatives of American interests. Yet it is ultimately federal regulators, through the FSB, who will agree to these standards on behalf of the U.S. Importantly, however, U.S. law dictates that state-based insurance regulators are responsible for fully implementing the agreements for companies operating within their jurisdictions. And since IAIS is only a standard-setting body, all other countries must likewise independently implement any final agreement.

Potential Costs & Benefits of a Harmonized Global Regulatory Framework

As with all regulations, developing a coordinated global regulatory framework for insurance groups will have costs and benefits. If new standards require substantially higher capital without additionally harmonizing the fragmented regulatory environment, compliance costs and the cost of insurance coverage may increase while harming coverage expansion and investment.[xvii] On the other hand, proponents of global regulatory efforts argue that, if done right, IAIS’s efforts could allow American companies to more effectively compete in the global market, reduce costly inefficiencies and redundancies borne by multijurisdictional regulation, drive down compliance costs, and more comprehensively promote financial stability.[xviii]

Proponents of IAIS’s regulatory initiatives generally cite five main benefits:

Global Competitiveness: According to FIO, U.S.-based insurers anticipate 40 percent of revenue coming from outside the country in the coming years.[xix] Additionally, U.S.-based subsidiaries of foreign holding companies are active participants in the U.S. market, accounting for 13 percent of aggregate life/health (L/H) and property/casualty (P/C) premium volume. Developing markets will be a particularly important source for growth in the coming years. In an increasingly globalized marketplace, harmonized rules may help American companies remain competitive internationally.

Reduced Inefficiencies & Complexity: The currently fragmented regulatory environment for insurance groups operating in multiple countries and regulatory jurisdictions can create redundancies and conflicts that raise the cost of doing business. If supervisory roles can be streamlined, coverage costs for insureds may fall. Along with high compliance costs, the complexity of the current regulatory environment may also discourage competition. Regulatory simplification and harmonization could establish a more level playing field; instead of a market dominated by established players who are deep-pocketed enough to sort through the regulatory morass, insurance companies could fairly compete based on the superiority of their products. Ultimately this development would allow for new market entrants, spur domestic companies to grow into new markets, and benefit policyholders.

Comparability: Fragmented regulation of insurance companies and diminished trust in ratings agencies since the financial crisis have forced multinational companies to perform increased due diligence when engaging global insurers from different jurisdictions. State guaranty funds were designed largely to protect small businesses and individual policyholders from losses when an insurance company fails, typically capping payouts at $300,000,[xx] which leaves larger commercial policyholders largely unprotected if a partner fails. Harmonized insurance regulations would allow multinational companies to compare solvency across jurisdictions, lower search costs, and identify financially strong insurance partners.

Aligned Standards with the Law of Large Numbers: The insurance industry, unlike banks, is driven by the law of large numbers in which diverse and uncorrelated risks are aggregated; the larger the sample size, the more likely actual losses match expected losses. Existing capital rules, assessed at the legal entity-level and not the group-level, must be calculated in each jurisdiction in which an insurer is operating. This can prevent insurers from effectively deploying capital. If the IAIS process helps move solvency regulation to the group-level, global insurers may more easily employ the law of large numbers, amassing large portfolios of uncorrelated risks to provide cost-effective risk mitigation to policyholders, without trapping capital.   

Financial Stability: While some object to the notion that insurance companies as a whole pose any systemic risk, proponents believe IAIS’s efforts to implement greater international coordination, prescribe minimum standards, and promote best practices foster financial stability. In this view, the fragmented nature of regulatory supervision has limited the ability of regulators to perceive systemic risks; an international capital standard assessed at the group-level would remedy those gaps in regulatory oversight.  
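The law-of-large-numbers argument above can be made concrete with a small Monte Carlo sketch: as a pool of independent, identically distributed risks grows, the relative volatility of aggregate losses shrinks roughly as 1/sqrt(n), making the pool's total claims far more predictable. All parameter values below are illustrative, not drawn from the text.

```python
import random
import statistics

def pooled_loss_cv(n_policies, p_claim=0.05, claim_size=10_000,
                   trials=400, seed=1):
    """Coefficient of variation (stdev / mean) of total claims for a pool
    of independent policies, estimated by simulation. Parameters are
    illustrative stand-ins, not calibrated to any real book of business."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        # Each policy independently produces a fixed-size claim with
        # probability p_claim.
        total = sum(claim_size for _ in range(n_policies)
                    if rng.random() < p_claim)
        totals.append(total)
    return statistics.stdev(totals) / statistics.mean(totals)

# A 100-policy pool has far more volatile aggregate losses, relative to
# its mean, than a 10,000-policy pool of the same risks.
small_pool = pooled_loss_cv(100)
large_pool = pooled_loss_cv(10_000)
```

This is why assessing capital at the group level matters to insurers: the larger, more diversified pool needs proportionally less capital per policy to absorb the same tail of outcomes.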

Primary Issues Raised by Stakeholders

As is to be expected with the potential for such wholesale change in insurance regulation, market participants, academics, and other stakeholders have raised a number of issues, whether or not they support the IAIS process. The most commonly cited are the following:

International standards are duplicative and unnecessary: Some believe that only certain non-insurance activities at the largest institutions warrant scrutiny beyond what exists today, if any at all.[xxi] This line of thinking maintains that because the academic literature supports the notion that insurance companies do not pose systemic risk, international standards are simply unnecessary, adding a duplicative and likely conflicting layer of regulation.[xxii]

The IAIS is moving too aggressively and opaquely: IAIS has set itself an aggressive timeline for developing the initiatives listed in Table 2, though it recently softened its language on timing.[xxiii] Some have argued that the timeline is inappropriate given the number of issues involved in adopting global standards. Kevin McCarty, Florida’s Insurance Commissioner and past President of NAIC, testified to Congress, “We have serious concerns about the aggressive timeline of developing a global capital standard given legal, regulatory, and accounting differences around the globe.”[xxiv] Concern about the speed of the IAIS process is further exacerbated by the perception that IAIS is limiting stakeholder engagement on issues that fundamentally affect the business of insurance.[xxv] Two bills have recently been introduced in Congress partly in response to this line of criticism and to inform the process.[xxvi]

IAIS efforts could lead to a one-size-fits-all capital standard: Many stakeholders have been particularly vigilant at discouraging the adoption of one-size-fits-all or bank-centric capital standards. Kevin McCarty testified to Congress, “We are concerned that taking a uniform regulatory approach that treats insurers more like banks may actually encourage new risk-taking in the insurance industry.”[xxvii]

The current international process appears more favorable to market valuation accounting preferred in Europe: Many believe that the current process appears tilted toward adopting a market valuation accounting approach that is favored in Europe.[xxviii] In a 2013 peer review by the FSB, international regulators faulted the regulation of insurance in America for its lack of uniformity.[xxix] In that report and others, European counterparts have failed to recognize the benefits of America’s state-based approach to insurance regulation, particularly for consumers. That lack of recognition and the perception that negotiations currently favor a more European-style accounting approach have engendered concern. State-based regulators have warned that a more European-favored approach could have a negative impact on the U.S. insurance market and hurt consumers because of its volatile short-term focus that differs from the longer-term view taken in the U.S.[xxx] In response to this concern, the IAIS is including U.S. GAAP accounting standards in ongoing field testing of international capital standards, though the issue is not fully resolved.[xxxi]

The international process unduly exerts international pressure on U.S. regulators: As noted by S. Roy Woodall, Jr., the independent member of FSOC with insurance experience appointed by President Obama, in his testimony to Congress, “International regulatory organizations may be attempting to exert what I consider to be inappropriate influence on the development of U.S. regulatory policy.”[xxxii] In his view, while state insurance regulators are involved in developing international standards through IAIS, representatives from the Treasury Department, FRB, and Securities and Exchange Commission (SEC) as part of the FSB decide whether to consent to international insurance standards and policy measures.[xxxiii] Yet, ultimately it is state representatives who are responsible for fully implementing any international standards within their jurisdiction. Without strong coordination, adoption of forthcoming international agreements on insurance regulation could be fragmented to the detriment of U.S. leadership and competitiveness. And in fact, Kevin McCarty, representing state insurance regulators, recently emphasized, “We will not implement any international standard that is inconsistent with our time-tested solvency regime.”[xxxiv]

FRB involvement in insurance regulation should concern policymakers: Since the passage of Dodd-Frank, the Federal Reserve Board has become the regulator of approximately one-third of the U.S. insurance industry despite having previously supervised primarily banks.[xxxv] FRB is charged not only with regulating any insurance companies designated by FSOC as SIFIs, but also with supervising insurance holding companies that own an insured bank or thrift. Together these companies offer a range of products that until recently FRB had little expertise overseeing. For this reason, some fear that FRB will struggle to tailor regulation for insurance companies.[xxxvi] FRB’s role as an insurance regulator may also lead to greater scrutiny by Congress and threaten the Federal Reserve’s central bank independence.[xxxvii] Since Congress passed its fix to the Collins Amendment,[xxxviii] the FRB has said that a formal rulemaking is forthcoming on a domestic regulatory capital framework tailored to the insurance business.[xxxix] Representatives from NAIC have encouraged the FRB to be flexible and to remember that its standards are in addition to, not a replacement for, the state risk-based capital standards applicable to insurers within FRB-regulated groups.[xl] It also remains unclear how FRB’s recent efforts and involvement on insurance issues will affect IAIS regulatory developments.

Looking Forward

As the previous sections highlighted, stakeholders continue to raise a number of important issues that must be overcome as the IAIS process moves forward and all the costs and benefits are assessed. Importantly, Congress has recently taken a greater interest in the policy issues raised by ongoing regulatory initiatives targeting global insurers, which are still years from adoption. While FRB, FIO and other regulatory officials have promised collaboration and coordination, it is still uncertain how various entities, domestically and internationally, will work together to ensure that consumers are protected and regulations work to promote a level playing field for companies in an increasingly global marketplace.



[1] Swiss Re Sigma, “World Insurance in 2013,” (May 2014); http://media.swissre.com/documents/sigma3_2014_en.pdf

[ii] See Financial Times, “Global insurers should be supervised at scale,” (May 2015); http://www.ft.com/intl/cms/s/0/89cc9b24-ef28-11e4-87dc-00144feab7de.html & American International Group, Inc., SEC Filing Form 10-K (February 2015); http://www.sec.gov/cgi-bin/browse-edgar?CIK=0000005272&action=getcompany

[iii] Robert Shapiro & Aparna Mathur, “Unnecessary Injury: The Economic Costs of Imposing New Global Capital Requirements on Large U.S. Property and Casualty Insurers,” (November 2014); http://ssrn.com/abstract=2540589

[iv] Hua Chen, J. David Cummins, Krupa A. Viswanathan, & Mary Weiss, “Systemic Risk and the Interconnectedness Between Banks and Insurers: An Econometric Analysis,” Journal of Risk & Insurance (March 2013); http://onlinelibrary.wiley.com/doi/10.1111/j.1539-6975.2012.01503.x/abstract

[v] Marian Bell & Benno Keller, “Insurance and Stability: The Reform of Insurance Regulation,” (2009) Zurich Financial Services Group



Executive Summary 

The Dodd–Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank) was enacted in 2010. It created new agencies and bureaus, changed capital requirements, revamped securitization rules, changed the oversight of derivatives, imposed the Volcker Rule, and included corporate governance provisions.

This short paper looks at the growth impacts of the banking sector’s response to these requirements and the burden of compliance costs.  The consequences are significant – roughly $895 billion in reduced Gross Domestic Product (GDP) over the 2016-2025 period, or $3,346 per working-age person. Clearly, such a computation is subject to large uncertainties, but the order of magnitude is instructive.

Introduction

Dodd-Frank was a sweeping reform. It created new agencies and bureaus: the Financial Stability Oversight Council (FSOC), the Office of Financial Research in Treasury, the Consumer Financial Protection Bureau (CFPB), the Federal Insurance Office in Treasury, an Office of Credit Ratings within the Securities and Exchange Commission, and others. It revamped securitization rules; changed the oversight of derivatives; changed the prudential standards for risk-based capital, leverage, liquidity, and contingent capital; imposed the Volcker Rule; included corporate governance provisions; and more. And, in the process of being implemented, it has required 398 separate rulemakings that are still not complete nearly five years later.

It is widely perceived that this massive regulatory initiative has generated uncertainty that has harmed lending. It is even more likely that the banking sector's response to these requirements and the burden of regulatory compliance have acted as an effective tax on the banking sector that has harmed lending, investment, and growth. To date, however, there has been little quantitative evidence on the magnitude of these impacts.

This short paper looks at the growth impacts of these requirements and the burden of compliance costs. I modify a standard model of economic growth (the “Solow model”) to incorporate these features and then use a parameterized version to estimate the impact.

To anticipate the results, the growth consequences are significant – $895 billion in reduced Gross Domestic Product (GDP) or $3,346 per working-age person over the next 10 years. Clearly, such a computation is subject to large uncertainties, but the order of magnitude is instructive.

A Framework for Analysis

The framework focuses on the links between saving and investment in the economy as a whole. Investment, in turn, drives growth in the capital stock that, when combined with growth in labor, generates growth in output or GDP. Because the goal is to understand how fast the standard of living rises, the entire exercise focuses on growth in capital per working age individual (labor) and income per person.

The starting point is the observation that national saving finances national investment:

(1)              I = S 

However, the presence of capital and other requirements, compliance burdens and other costs means that not all savings are channeled into productive investments; in part these features serve as a “tax” on intermediation:

(1’)      I = S(1-t)

where t is the effective tax rate. Investment is, by definition, the change in the capital stock, meaning that the growth rate of the capital stock is given by:

(2)              gK = I/K = S(1-t)/K

Saving is, in turn, equal to the saving rate (s) times income or GDP (Y):

(3)              gK = sY(1-t)/K

Using lower case letters to denote capital per worker (k) and GDP per worker (y) yields: 

(4)              gK = s(1-t)y/k

Finally, notice that the difference between growth in the overall capital stock (gK) and growth in capital per worker (gk) is the rate of population growth (h):

(5)              gk = gK - h

Collecting all these results, the growth rate of capital per worker is generated by:

(6)              gk = s(1-t)y/k -  h

The last step in developing the framework is to recognize that the growth of income per worker (gy) is related to the growth of capital per worker by:

(7)       gy = qgk = q[s(1-t)y/k - h]

where q is the share of national income earned by capital (as opposed to labor).  Equation (7) is crucial to the analysis because it indicates that the change (Δ) in the growth rate when the effective tax on intermediation rises is given by:

(8)        Δgy = -qs(y/k)Δt

The remainder of this short paper is devoted to exploring the empirical magnitudes implied by the Dodd-Frank burden and equation (8).

Estimating the Growth Impact of Dodd-Frank

The starting point for fleshing out the growth implications of the increased Dodd-Frank burdens is using data from the Bureau of Economic Analysis (BEA) to develop estimates of the share of capital in national income, q (0.39 in 2013); the gross national saving rate, s (17.6 percent in 2013); and the ratio of output to capital, y/k (=Y/K, 0.33 in 2013). Collecting these results, they imply that the change in the growth rate of income will be roughly 2.3 percent of the change in the effective tax rate on the intermediation of saving and investment.
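As a quick check on that arithmetic, here is a minimal Python sketch of equation (8)'s sensitivity term, using the BEA-based values just cited:

```python
# Sensitivity of income growth to the effective tax on intermediation,
# per equation (8): dgy = -q * s * (y/k) * dt.
q = 0.39         # capital's share of national income (BEA, 2013)
s = 0.176        # gross national saving rate (BEA, 2013)
y_over_k = 0.33  # output-to-capital ratio Y/K (BEA, 2013)

sensitivity = q * s * y_over_k
# A 1 percentage point rise in t lowers annual income growth by ~0.023 points.
print(f"sensitivity = {sensitivity:.4f}")
```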

How large is that tax increase? To begin, note that Dodd-Frank is mainly concentrated on the banking sector, while other forms of transforming saving into investment are essentially unaffected. One can think of the overall effective tax rate as a weighted average of the impacts on the banking and non-banking financial sectors. For 2013, the BEA shows that fixed investment totaled $2,769.5 billion, while the Federal Reserve's Flow of Funds data put total bank lending for these purposes at $786.8 billion. Taken together, this suggests that banking has a 28.4 percent share of overall intermediation.

What did Dodd-Frank do to the effective tax rate on banks?  Consider, first, the burden of complying with the new regulations. The American Action Forum’s analysis of the Federal Register indicates that the cumulative burden (including the market value of paperwork hours for compliance) is roughly $14.8 billion annually. Notice that after-tax income in the presence of the burden is:

(9)        [rL – C – Burden](1-tB)

where r is interest on loans (L), C is the cost of acquiring funds and other operations, and tB is the tax rate on banks. Suppose that instead of a burden, the same after-tax income was generated by simply raising the tax rate to t′.  Then, by definition:

(10)         [rL – C – Burden](1-tB) = [rL – C](1-t′)

Equation (10) can be re-arranged to yield:

(11)         t′ = tB + (1-tB)[Burden/(rL-C)]

To put some empirical meat on (11), the Federal Deposit Insurance Corporation’s (FDIC) Quarterly Banking Profile (QBP) provides information on taxes ($67.5 billion) and net income ($151.2 billion) that permit one to compute an initial tax rate of 31 percent. Using the AAF burden data and (11) yields an increase to 37.8 percent from compliance burdens.
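A sketch of that calculation, with one caveat: the text does not state which income base it uses for (rL – C), and the 37.8 percent figure is reproduced (to within rounding) only when the burden is scaled by the QBP net income figure; that choice is an assumption here.

```python
# Equation (11): t' = tB + (1 - tB) * Burden / (rL - C)
taxes = 67.5        # FDIC QBP taxes, $ billions
net_income = 151.2  # FDIC QBP net income, $ billions
burden = 14.8       # AAF estimate of annual compliance burden, $ billions

t_b = taxes / (taxes + net_income)  # initial effective tax rate, ~31 percent
# Assumption: use QBP net income as the (rL - C) base; this reproduces
# the paper's ~37.8 percent within rounding.
t_prime = t_b + (1 - t_b) * burden / net_income
print(f"tB = {t_b:.1%}, t' = {t_prime:.1%}")
```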

A similar approach can be used to transform the roughly 2 percentage point rise in the leverage ratio of the banking sector (from 7.5 to 9.5 percent) between 2008 and 2014 into a rise in the effective tax rate. The banking sector responded to Dodd-Frank by holding more equity capital, thus requiring greater earnings to meet the market rate of return – the same impact as raising taxes.  In this case, the higher leverage ratio translates into a further increase in the effective tax rate to 40.3 percent, for a total rise of 9.2 percentage points.

Collecting results, the impact on economic growth is a decline in the per capita growth rate of 0.059 percentage points annually.  Is this a big deal? Consider lowering the growth rate in the Congressional Budget Office baseline projections by exactly this amount between 2016 and 2025. The lower rate of economic growth translates into a total loss of $895 billion in GDP, or $3,346 for every member of the working age (16 and older) population, over those 10 years.
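Chaining the pieces together gives the same bottom line; this is a minimal reproduction using the rounded figures quoted above, so small rounding differences from the reported 0.059 are expected:

```python
# Equation (8) applied economy-wide: the banks-only tax rise is weighted
# by banking's share of intermediation before applying the sensitivity.
sensitivity = 0.39 * 0.176 * 0.33   # q * s * (y/k), ~0.023
bank_share = 786.8 / 2769.5         # bank lending / fixed investment, ~28.4%
delta_t_banks = 0.403 - 0.31        # rise in banks' effective tax rate, ~9.2-9.3 points

delta_t = bank_share * delta_t_banks    # economy-wide rise in t
delta_growth = sensitivity * delta_t    # annual drop in per capita growth
print(f"growth falls by {100 * delta_growth:.3f} percentage points per year")
```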


The Affordable Care Act (ACA) changed the American health care system in myriad ways. The primary objectives of the ACA were to expand insurance coverage while reducing the cost of insurance, and to rein in the increasing cost of health care. Whether these goals are being achieved and at what cost to the budget and to the healthcare stakeholders are important considerations. Five years after passage of the ACA, this report attempts to synthesize many of the studies and cost estimates which have been produced in order to answer these questions.

Key Take-Aways

The number of uninsured individuals has decreased, but not by as much as the Congressional Budget Office (CBO) originally predicted.[1]

·         15 million: fewer uninsured individuals since 2010

·         35 million: individuals still without insurance

·         12 million: more people enrolled in Medicaid since 2010

·         11 million: individuals have insurance through a state or federal exchange

·         7.7 million: individuals receiving subsidies for coverage through an exchange

The cost of expanded insurance coverage is being felt at the individual, state, and federal level.

·         $300: average increase in annual deductibles for ESI from 2010-2014

·         $5,730: average annual cap on out-of-pocket expenses for plans purchased through the exchange in 2014; $2,719 more than the average for ESI plans

·         $43 billion: projected individual mandate penalties over the next 10 years

·         $167 billion: mandate penalties paid by employers over the next 10 years

·         $42.6 billion: cost of ACA regulations implemented thus far

·         $1.2 trillion: federal cost for ACA coverage provisions over the next 10 years

The growth in total health expenditures has also returned to pre-recession rates, demonstrating no bend in the cost curve.[2]

·         $3.15 trillion: national health care expenditures in 2014

·         17.9: percent of GDP spent on health care in 2014


The Uninsured

There are many varying estimates of the number of uninsured, both now and before passage of the ACA, but reducing this number was inarguably the primary goal of ACA proponents. In the chart below, you can see the estimates for the non-elderly uninsured population from several different sources, both prior to and in 2014. Most of the coverage provisions of the ACA were fully implemented in 2014, including the opening of the health insurance exchanges (where non-group individual and family coverage could be purchased) and the expansion of Medicaid eligibility in 29 states. The average of the estimates places the uninsured rate at 19 percent before 2014 and down to 15 percent in 2014, with an average estimate of 10.6 million fewer people uninsured.

 

One thing to note about estimates of the uninsured is that many of these surveys count people as uninsured if they lacked health insurance coverage at any point during the year. This can overstate the count, as it captures anyone who changed jobs and had a gap in coverage, even for just a few weeks. In 2013, the Census Bureau adjusted the wording of its question on insurance coverage to try to obtain a more accurate count. Additionally, there are many reasons why it is difficult to know how many individuals are newly insured: changes in the Census methodology, general economic growth and unemployment rates, and natural evolution in the health insurance market would all contribute to changes in the number of uninsured, regardless of the ACA’s existence. While the Department of Health and Human Services (HHS) claims 16.4 million non-elderly adults have gained coverage since passage of the ACA, it is difficult to know how many of those individuals would have gained coverage anyway.[3]

Further, the ACA has led to multiple coverage gaps which are leaving subsets of the population without any options to purchase affordable health insurance. These coverage gaps result from the complex web of provisions included in the law not fitting squarely with other provisions of the law. One example is known as the “Family Glitch,” which will leave nearly 2 million people, half of whom are children, without access to affordable coverage.[4] A similar problem arises for people who fall into the Medicaid coverage gap: these are people who are ineligible for Medicaid because their state chose not to expand eligibility, but who are below 100 percent of the federal poverty level (FPL) and therefore ineligible for subsidized coverage in the exchange as well (which is available only to people between 100 and 400 percent FPL).[5]

The differing definitions of “affordability” are also causing problems. To be exempt from the individual mandate on the grounds that available insurance is unaffordable, the lowest-cost plan must cost more than 8 percent of a person’s income. However, to meet the employer mandate’s requirement of offering affordable coverage, an employer need only offer a plan that costs the employee no more than 9.5 percent of his or her income. The 1.5 percentage point gap between these two thresholds can leave an individual without any truly affordable option. An employer offering coverage that is deemed affordable avoids the penalty under the employer mandate, whether the employee can actually afford the coverage or not. As a result, the individual becomes ineligible for subsidies to purchase coverage through the exchange, where a plan might be cheaper for that individual. So while the individual won’t have to pay the mandate penalty, he or she may also not have any insurance.
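To make the gap concrete, here is a small illustration; the income and premium figures are hypothetical, chosen to fall between the two thresholds:

```python
# Hypothetical worker: the employer plan is "affordable" for employer-mandate
# purposes (<= 9.5% of income) yet "unaffordable" for the individual
# exemption (> 8% of income), so the worker is both subsidy-ineligible
# and mandate-exempt.
income = 40_000        # hypothetical annual income
employee_cost = 3_500  # hypothetical annual employee share of the premium

employer_avoids_penalty = employee_cost <= 0.095 * income  # $3,800 threshold
worker_mandate_exempt = employee_cost > 0.08 * income      # $3,200 threshold
print(employer_avoids_penalty, worker_mandate_exempt)
```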

The Insured

The ACA has also been responsible for shifting people between health insurance markets and types of coverage. The biggest increase has been in Medicaid and the Children’s Health Insurance Program (CHIP), which has seen total enrollment grow from 57.8 million in 2013 to 70 million in 2015, an increase of 19.3 percent in just two years.[6]  Children account for nearly 42 percent of total enrollment in these two programs. According to a RAND study, 80 percent of non-elderly adults either maintained insurance through the same means in 2014 as they had in 2013, or remained uninsured; the remaining 20 percent are no longer uninsured, newly uninsured, or receiving health insurance through a different means than they had the previous year.[7]

The majority of non-elderly adults continue to be insured through an employer-sponsored plan (up 8 million in 2014 to 116.9 million), as can be seen in the charts below, though at least 2.5 million people lost their employer-sponsored insurance between 2013 and 2014. (Researchers publishing in Health Affairs found similar estimates for the number of cancelled plans.[8]) Medicaid now serves as the primary source of insurance for 9 percent of the non-elderly adult population,[9]  while the marketplace exchanges insure 2 percent.[10] Of the uninsured population prior to 2014, nearly 70 percent of them remain uninsured.


Cost of Coverage               

Medicaid Costs

With much of the gains in insurance coverage coming from increased enrollment in Medicaid (which, combined with CHIP, now covers 70 million individuals, up from almost 52 million in 2010), the cost of this joint federal-state assistance program will undoubtedly rise, increasing burdens on both federal and state budgets.[11] CBO estimates federal spending on Medicaid will be $335 billion in FY2015, with expenditures expected to grow by 75 percent over the next 10 years, bringing federal spending on the program to $588 billion by 2025.[12] States, on average, cover 43 percent of the cost of the Medicaid program.[13] This amounted to $181 billion in FY2012, before the ACA’s Medicaid expansion provisions went into effect and dramatically increased enrollment.[14] While the federal government is covering 100 percent of the cost of the “expansion population” through 2016, and will continue to cover at least 90 percent of those costs thereafter, state budgets are already tight and will likely feel the strain of this new expense. Additionally, the “woodwork effect” will cost even non-expansion states up to $700 million.[15]

Exchange Enrollment and Costs

In 2014, the first year the ACA’s health insurance exchanges were operating, 8.02 million individuals enrolled in a marketplace exchange plan. According to HHS, 2.57 million enrolled through a state exchange and 5.45 million enrolled through the federal exchange.[16] Even though the initial open enrollment period was 6 months long (October 1, 2013, to March 31, 2014), 47 percent did not enroll until the last month or during the Special Enrollment Period (which extended the deadline to April 19 in response to lower-than-expected enrollment).[17] Further, by the end of the year, only 6.7 million people were enrolled in an exchange plan.[18]

Among all enrollees, 6.67 million (83 percent) received federal financial assistance in purchasing a plan; however, many people received incorrect subsidy amounts and are having to reconcile those errors during tax season this year. Because subsidy amounts were originally calculated using 2012 income, which may be very different than actual income earned in 2014, it is estimated that between 4.5 and 7.5 million people will either have to pay back some of their subsidy ($794, on average) if income was higher than expected or will receive additional money ($773, on average) if income was lower than expected.[19]

In 2015, enrollment through the exchanges has increased to 11.69 million individuals as of February 15 (though enrollment is still ongoing since the deadline was extended to April 30). According to HHS and consistent with the CBO March 2015 estimate, 2.85 million enrolled through a state exchange and 8.84 million enrolled through the federal exchange.[20] Among all enrollees, 7.7 million received financial assistance to purchase a plan, costing the government $28 billion this year.[21]

 

Table 1. Exchange Characteristics

                                             2014            2015
Enrollment
   Federal                                   5.45 million    8.84 million
   State                                     2.57 million    2.85 million
   Total                                     8.02 million    11.69 million
Subsidies
   Enrollees Qualifying for Subsidy          6.67 million    7.7 million
   Average Monthly Premium (after subsidy)   $82             $101
   Average Subsidy Amount                    76%             72%

Costs to Individuals

With all of the shifting in the health insurance market between various types of coverage and new regulations on health insurance products, many individuals were unsure how these changes would impact their wallets.

In 2014, plan premiums, on average, were higher for those purchasing employer-sponsored insurance (ESI), compared with the average exchange premium. However, since the average exchange subsidy amount covered only 76 percent of the premium cost whereas employers covered 82 percent for their employees, on average, the premium amount paid by the individual was only slightly higher for those purchasing coverage through their employer ($90/month[22]) than those purchasing a plan through the exchange ($82/month). [23]

Further, exchange plans typically have higher deductibles ($2,910 in 2014 compared with $1,217 for ESI), co-insurance rates (20 percent compared with 19 percent), and caps on out-of-pocket expenses ($5,730 compared with $3,011), meaning a person with coverage through the exchange will be liable for more out-of-pocket expenses than an individual with ESI, on average.[24] For example, given total annual health care costs of $3,000 (in excess of premium payments), an individual with coverage through an exchange will spend $1,275 more than a person with ESI. This difference only increases as health care expenses increase. Further, CBO’s latest report estimates that premiums for benchmark exchange plans will increase 8.5 percent on average per year from 2016-2018, which could make the problem even worse if employer-sponsored plans do not see the same growth rate.[25] (All figures based on individual coverage in 2014.)
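The $3,000 example rests on simple deductible-plus-coinsurance arithmetic; here is a sketch using the 2014 averages quoted above (actual plan rules vary, so the exact dollar gap may differ from the $1,275 cited):

```python
def out_of_pocket(costs, deductible, coinsurance, oop_cap):
    """Patient's share of annual costs under a simple cost-sharing schedule."""
    if costs <= deductible:
        return min(costs, oop_cap)
    return min(deductible + coinsurance * (costs - deductible), oop_cap)

# 2014 averages for individual coverage, as quoted above.
exchange = out_of_pocket(3000, deductible=2910, coinsurance=0.20, oop_cap=5730)
esi = out_of_pocket(3000, deductible=1217, coinsurance=0.19, oop_cap=3011)
print(f"exchange enrollee pays ${exchange - esi:,.0f} more out of pocket")
```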

While we don’t yet have data on employer-sponsored plans for 2015, costs for exchange plans remain largely unchanged from last year. In 2015, the average subsidy amount covers 72 percent of premium costs and the average premium is $364 per month, meaning the cost to enrollees, on average, is $101 per month.[26] The average deductible is now $2,556 for a Silver plan.[27]

 

Overall Cost of the ACA

After examining the changes to the health insurance market and the corresponding costs to the individual, we must next examine at what cost to the taxpayer those changes have been realized. At the time of passage in March 2010, the CBO estimated the ACA (and its accompanying reconciliation legislation) would reduce the federal deficit by $143 billion between 2010 and 2019.[28] This net reduction would be the result of $788 billion in new spending on the law’s various insurance coverage provisions, a reduction in spending of $511 billion in other areas (including Medicare), and $420 billion in increased revenue from new taxes and payments. CBO projected the number of uninsured would decrease from 50 million in 2010 to 23 million in 2019, when exchange enrollment would be 24 million, and the average exchange subsidy would reach $6,000 per person, for a total 10-year cost of providing coverage through the exchanges of $358 billion.[29]

Five years later, in CBO’s March 2015 budget baseline, we can see how some of the costs have changed. Some of these changes are the result of having more accurate, timely information now that some of the provisions have gone into effect and we can see how consumers, insurers, and employers are actually responding to the effects of the law. Other changes in the numbers reflect the fact that cost estimates are done in ten year windows and many of the provisions of the legislation, particularly the costly spending provisions, were not set to go into effect until 2014 or later, nearly halfway through the original ten year window. Now, many of the provisions have gone into effect or will in the next year or two, allowing for the costs to be more fully accounted for in this latest estimate. In the ten years from 2016-2025, CBO now estimates the gross cost of the coverage provisions will be $1.707 trillion. [30] After accounting for expected revenue increases, the net cost of the coverage provisions is estimated to be $1.207 trillion over the next decade, and mandatory spending is projected to increase by $1.747 trillion under this law. The uninsured population, calculated to be 35 million in 2015, is projected by CBO to drop to 27 million in 2025, while exchange enrollment will double to 22 million by the same year, from its current level of 11 million. (Note that exchange enrollment is now predicted to be 2 million less in 2025 than it was first predicted to be in 2019.) The average subsidy in 2025 is expected to be $6,600/person, and total costs of providing coverage through the exchanges will be $849 billion from 2016-2025.[31]   

CBO originally estimated only $211 million in unfunded mandates, but AAF’s Regulation Rodeo has found that other hidden mandates in the bill—the 96 regulations implemented pursuant to the ACA thus far—are responsible for costs to our economy of $42.6 billion and 155 million paperwork hours.[32]

Bending the Cost Curve

In addition to paying for health insurance, there is the cost of actual health care, which includes hospital, physician and clinical services, medications and medical devices, public health initiatives, medical research, and more. Throughout the health care reform debate and beyond, the notion of “bending the cost curve” for providing these services has been a stated priority. As health care costs continue to rise and consume a larger share of our nation’s GDP and our wallets, the imperative is to reduce the cost of care while maintaining or improving its quality. So far, the law is not a success in this respect.

The Altarum Institute recently published its latest Health Sector Trend Report showing that total national health expenditures grew 5.2 percent in 2014 over 2013.[33] However, growth in just the last quarter of 2014 was up 6.2 percent from the same quarter in 2013, and up 6.6 percent in February of this year from a year earlier, which may indicate similar increases are to be expected as 2015 continues.[34]

While looking at the changes in growth rates from last year provides a good idea of the current trend, examining the changes over the last several years provides a more complete picture which is helpful in determining the cause of the changes. The conversation surrounding health care expenditures has been particularly interesting recently because of the occurrence of two simultaneous events, both of which may have had a large impact on health care spending: health care reform, of course, and the “Great Recession”. The trends show that since the start of the recession, which began at the end of 2007, annual growth in health care spending decreased from 6.3 percent in 2007 to 3.8 percent in 2009, and hovered around the 4 percent mark for five straight years, as shown in the chart below.[35] Then, in 2014, health expenditures grew by 5 percent, the first significant uptick in some time. Many proponents of the ACA credit the law with the slowdown in spending growth and even contend that an increase was to be expected in 2014. This is due to the various provisions that took effect in 2014 to increase coverage, which should thus increase use of services and spending on health care. However, the slowdown in health care spending, dropping to 4.8 percent in 2008, correlates much more strongly with the recent recession than it does with passage of the ACA. The legislation, of course, was not signed into law until 2010, the third year of the slowdown.

Altarum: Annual Growth in National Health Expenditures, Overall and by Selected Categories[36]

Further, when we look more closely at the breakdown of the increase in spending in 2014, we do not find a large increase in services as the ACA proponents’ theory would suggest. Spending on services grew only 4.1 percent in 2014, up just 0.2 percentage points from 2013. Breaking this down even further, the entire increase in health services is due to increases in hospital services; growth in physician services declined for a second straight year (chart below). This could imply that even though people now have insurance coverage, they either do not know how to use that coverage appropriately (continuing to seek non-emergent care at hospitals rather than a doctor’s office) or they are not seeking routine care, potentially because the cost-sharing requirements of their insurance are too high.[37]

A report from athenahealth shows that there was barely any increase in new-patient volume in 2014, despite expectations of substantial increases by many.[38] The Commonwealth Fund also found that 43 percent of non-elderly adults in both Florida and Texas, two of the country’s four most populous states, had problems accessing health care because of issues related to the cost of care.[39] Another report by the Commonwealth Fund documents other problems that people continue to have regarding health insurance and access to care, including high cost-sharing.[40] These reports underscore the issue that having insurance is not the same as having access to affordable health care.

Increased Medicaid enrollment as a result of expanded eligibility under the ACA may also be partly to blame for limited access to care; a study published by JAMA in 2007 found higher rates of hospital utilization among Medicaid patients.[41] One reason for the increased utilization among Medicaid patients is the low physician reimbursement rates. These prevent some providers from accepting Medicaid patients, causing them to seek care in the emergency room instead.  

Altarum: Health Services Spending and Component Growth[42]

Kaiser: Excess Health Spending Growth Adjusted for GDP and Inflation[43]

 

Research by the Kaiser Family Foundation looks even more deeply at health care expenditure trends over the past few decades, specifically separating out “excess” health care spending, which is the amount that health care spending increases above growth in GDP and inflation. “Excess” health care spending began to rise again in 2009, after a steady decline beginning in 2003.[44] This pattern also does not fit the theory that growth in health care costs slowed because of passage of the ACA.

Conclusion

Five years after passage, there are few clear indications that the ACA has had its intended impact on cost of care and access to it. Meanwhile, the law costs significantly more than projected. We are unsure how many previously uninsured people have truly gained coverage because of the law. For many who have gained insurance coverage, they have not, in turn, been successful at gaining access to affordable care; they are paying premiums for plans that do not meet their needs and which include deductibles and coinsurance rates which inhibit the use of such coverage. National health care expenditures continue to rise, proving that we have been unsuccessful at bending the cost curve thus far. Going forward, health coverage policy solutions will need to focus on enlisting market forces to lower costs rather than merely subsidizing them.

Appendix

Federal Spending and Insurance Coverage

                                  2010             2015               2010-2015 Change
Health Care Spending
   Total Spending                 $2.6 trillion    $3.15 trillion^    + $550 billion
   Percent of GDP                 17.4%            17.9%^             + 0.5 percentage points
Impact of the ACA
   Net Ten Year Cost              -$143 billion    --                 --
   Mandatory Spending             $483 billion     $1,747 billion     + $1,264 billion
   Annual Cost of Regulations     --               $42.6 billion      --
Insurance Coverage
   Total Uninsured                50 million       35 million         - 15 million
   Percent Uninsured              19%              15%                - 4 percentage points
   Employer Sponsored Insurance   150 million      153 million        + 3 million
   Total Exchange                 --               11 million         + 11 million
   Medicaid                       51.8 million     63.9 million†      + 12.1 million
   CHIP                           5.3 million      5.8 million*       + 0.5 million

^ As of Dec 2014
† Total Medicaid/CHIP enrollment is 69.7 million as of Dec 2014; subtracted CHIP estimate
* As of Dec 2013

Note: These numbers do not encompass the entire insured population, and thus do not add to the reduction in uninsured.

Cost of Self-Only Insurance Coverage

 

                                  Employer Sponsored Insurance    Exchange
                                  2010            2014            2014
Average Total Monthly Premium     $421            $502            $346
Average Employer Contribution     82%             82%             --
Average Employee Contribution     $75             $90             --
Average Subsidy                   --              --              76%
Average Premium After Subsidy     --              --              $82
Percent Increase from Year Prior  5%              2.4%            45%¡
Annual Deductible                 $917            $1,217          $2,910
Co-Insurance (Inpatient)          18%             19%             20%
Annual Limit on OOP Expenses      $2,134          $3,011          $5,730

 

¡ For 27 year olds in the individual market

 
         


[1] CBO estimated in 2010 that the number of uninsured individuals would be 26 million in 2015; it is now estimated by CBO to be 35 million.

[20] This is not completely accurate, as enrollment has been extended through tax season, though it is not expected that there will be a significant increase in enrollment during this period.

[21] http://www.cbo.gov/sites/default/files/cbofiles/attachments/43900-2015-03-ACAtables.pdf

[30] CBO no longer scores the overall cost of the legislation; they simply score the cost of the insurance coverage provisions.


  • The United States is in dire need of sweeping tax reform

  • The administration's proposal for a one-time levy on overseas earnings to pay for highway spending is flawed

  • Fixing the broken code offers long-term economic growth; a temporary policy offers little economic benefit

Introduction

The United States is in dire need of sweeping tax reform. The House and Senate are both in the nascent stages of considering tax reform, characterized by committee-level working groups, discussions, and hearings. Both the House and Senate budget resolutions are based on a fundamental rewrite of the tax code. While this activity is promising, history shows only a handful of overhauls of the modern code, underscoring that tax reform is an intrinsically difficult and low-probability undertaking.
 
At the same time, the administration and some members of Congress have advocated using overseas funds of U.S. multinational corporations to fund infrastructure spending. These proposals vary in nature, with the administration seeking to apply a one-time levy on these funds, while others have advocated for a “repatriation holiday.”
 
In this short paper, we review the need for tax reform, characteristics of a successful tax reform, and the policy merits of recent tax proposals. We conclude that the leading recent tax proposals fall well short of true fundamental tax reform, may impede progress toward tax reform, and suffer specific policy flaws as well.

The Need for Tax Reform

The United States is suffering from subpar economic growth and reduced long-term growth potential. In part, this stems from a corporate income tax that combines a very high rate with a worldwide base, two features that put it at odds with international norms and harm the growth and competitiveness of the U.S. A corporate tax reform that lowered the U.S. tax rate would return the U.S. to international tax norms, ridding it of the dubious distinction of having the highest statutory tax rate in the world. U.S. firms are increasingly at a disadvantage in competing for the vast majority of world consumers and international markets as other nations adopt more favorable tax treatment of foreign-source income.

Policy Criteria for Tax Reform

In light of the deficiencies of the U.S. code, tax reform should meet three key criteria:
 
  1. A permanently lower, statutory business tax rate that returns the U.S. to international norms.
  2. A permanent move toward a territorial-style treatment of overseas earnings that would reduce or eliminate taxes on repatriated earnings.
  3. A permanent broadening of the tax base to reduce economic distortions in the context of rate-reducing, overall tax reform.
 
In addition, recent tax proposals have linked the tax treatment of overseas earnings to funding infrastructure, the Highway Trust Fund (HTF) in particular. This suggests a fourth criterion for sound policy:

  4. A permanent, adequate funding source for the HTF.

Current Proposals

Two divergent approaches characterize the existing proposals for the tax treatment of foreign-source income. The first is specified in the administration’s budget. The administration would seek to impose a “one-time” tax of 14 percent on accumulated overseas earnings through 2015. The tax would be payable over 5 years and is estimated to raise $268 billion over the budget window. The administration would use the added revenue to fund the projected shortfall in the HTF and additional surface transportation spending. This proposal is similar to a provision of former Ways and Means Chairman Dave Camp’s tax reform proposal of 2014. The Camp bill would have required owners of 10 percent or more of foreign subsidiaries to determine their share of the subsidiary’s earnings and profits, based on their share of ownership, on which they would pay a rate of 8.75 percent on cash or cash equivalents and 3.5 percent on any remaining earnings. The tax would be payable over 8 years. So, if Shareholder A owned 20 percent of a foreign subsidiary, it would pay the 8.75 and 3.5 percent rates on its 20 percent share of the subsidiary’s earnings. If the foreign subsidiary had $100 million in earnings, half in cash and half in other forms, Shareholder A’s share would be $10 million of cash and $10 million of other earnings, for a tax liability of $1.225 million ($10 million x 8.75 percent + $10 million x 3.5 percent = $1.225 million). Revenue raised from this proposal would be credited to the Highway Trust Fund accounts.
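As a minimal sketch of this deemed-repatriation arithmetic, the function below applies a shareholder's ownership share and the split 8.75/3.5 percent rates; the function name and example figures are illustrative, not drawn from the bill text.

```python
def deemed_repatriation_tax(ownership_share: float,
                            cash_earnings: float,
                            other_earnings: float,
                            cash_rate: float = 0.0875,
                            other_rate: float = 0.035) -> float:
    """Tax on a shareholder's pro-rata share of a foreign subsidiary's
    accumulated earnings, split between cash (8.75%) and non-cash (3.5%)."""
    share_cash = ownership_share * cash_earnings
    share_other = ownership_share * other_earnings
    return share_cash * cash_rate + share_other * other_rate

# A hypothetical 10 percent owner of a subsidiary with $200M in accumulated
# earnings, $120M of it in cash and $80M in other forms:
liability = deemed_repatriation_tax(0.10, 120e6, 80e6)
print(f"${liability:,.0f}")  # $1,330,000
```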
 
While the administration argues that its one-time tax should be imposed as part of an overhaul of the business tax code, the revenues are excluded from the administration’s reserve for business tax reform and devoted instead to the administration’s highway bill proposal. This is quite different than the Camp provision, which was proposed strictly in the context of an overhaul of the U.S. tax code.
 
A similar approach, The Infrastructure 2.0 Act, sponsored by Rep. John Delaney, would also impose a deemed repatriation to fund infrastructure spending. The Delaney bill would impose a deemed repatriation at an 8.75 percent rate on existing overseas earnings.
 
Another approach to using foreign-source income to fund transportation spending is a “repatriation holiday,” wherein the overseas earnings of foreign subsidiaries can be repatriated at a reduced tax rate for a certain period of time. For example, Senators Rand Paul and Barbara Boxer have advocated for this policy. They have proposed to allow companies to voluntarily return their foreign earnings to the United States at a tax rate of 6.5 percent, provided the repatriations are in excess of the firm’s historic average repatriations. The proposal would devote the new funds to the HTF accounts.
 

Evaluating the Proposals

Outside of tax reform, the “one-time” levy supported by the Obama administration lacks a policy rationale beyond constituting a revenue source. Any meaningful corporate tax reform must permanently address the currently flawed international tax regime. Within the logical confines of the administration’s overall business reform, the one-time tax has a policy rationale as a transition rule. Absent an overall reform, it does not.
 
The repatriation holidays embody several flaws as well. To begin, they are temporary and do not permanently reform the tax treatment of foreign-source income. In addition, the Joint Committee on Taxation has estimated that, relative to current law, similar proposals would lose revenue (although some dispute this conclusion). If true, this is hardly an appealing feature for an HTF funding mechanism.
 
A previous repatriation holiday was included in the American Jobs Creation Act (AJCA) of 2004, and may provide additional insight.  The AJCA allowed a temporary 85 percent tax deduction on dividends received from foreign subsidiaries for one year, effectively lowering the tax rate on repatriated foreign subsidiary earnings from 35 percent to 5.25 percent. The Act, along with subsequent IRS guidance, approved the use of repatriated funds for hiring and training, infrastructure, research and development, capital investments, and financial stabilization for the purposes of job retention and creation, and disallowed using repatriated funds for executive compensation, dividend payouts, share repurchases, tax payments, and debt instrument purchases.
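The 5.25 percent effective rate follows directly from the 85 percent dividends-received deduction:

```python
statutory_rate = 0.35   # corporate rate at the time
deduction_share = 0.85  # share of the repatriated dividend deductible under the AJCA

# Only the non-deductible 15 percent of the dividend is taxed at 35 percent.
effective_rate = statutory_rate * (1 - deduction_share)
print(f"{effective_rate:.2%}")  # 5.25%
```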
 
The AJCA certainly increased the flow of funds into the U.S.: companies repatriated $362 billion in 2004 out of an estimated $804 billion of foreign earnings available for repatriation. However, there is no official report on how the repatriated earnings were actually spent, since the AJCA did not require companies to trace or segregate their use of repatriated funds. In 2008, John R. Graham, Michelle Hanlon, and Terry Shevlin addressed this question in a survey of tax executives at over 400 firms. They found that 23 percent of repatriated funds went toward job creation, 24 percent toward capital investment, and 12.4 percent toward paying down domestic debt.
 
Finally, from the perspective of highway spending, both approaches to a stand-alone foreign-source tax policy are flawed. The Highway Trust Fund is actuarially unsound: gas tax revenues are projected to fall well short of projected expenditures in perpetuity. A temporary repatriation would not fundamentally address this imbalance.

Economic Policy Considerations

Some argue that focusing narrowly on the tax policy aspects of a repatriation holiday misses broader economic benefits. Economic assessments of the AJCA offer varied conclusions as to its net economic effect. Shapiro and Mathur find that repatriated funds were used to create or retain over 2.14 million jobs and generated $34.5 billion in new federal revenues. At the other end of the spectrum, Dharmapala, Foley, and Forbes find no increase in domestic investment, employment, or R&D, and instead emphasize increases in share repurchases and evidence of round-tripping.
 
During the early stages of the recovery from the Great Recession there was considerable slack in labor markets, and a case could be made for well-designed counter-cyclical policy. Indeed, a 2011 estimate, made when unemployment averaged about 9 percent, found that a repatriation holiday would have a beneficial short-term impact, increasing GDP by $360 billion and creating approximately 2.9 million new jobs. The analysis was based on many of the assumptions that underpin the Congressional Budget Office’s (CBO) estimates of the economic effects of the American Recovery and Reinvestment Act (ARRA). The CBO estimated that ARRA would have a large near-term, positive effect, but would actually reduce economic output in the long run.
 
At present, however, the economic setting has changed considerably. The labor market displays relatively low unemployment and GDP is returning to its potential level of output. Thus, any near-term economic benefits of a stand-alone repatriation policy would be muted. Instead, the U.S. tax code should be permanently reformed to encourage long-term economic growth. Reforms should encourage firms to headquarter and invest in the United States, minimize expensive and unproductive tax-planning strategies, improve economic competitiveness, and support high-quality jobs. Lowering the corporate tax rate while scaling back the myriad targeted deductions, credits, and carve-outs currently found in the corporate tax code would increase U.S. competitiveness, stimulate the economy, and introduce a greater degree of simplicity.
 
Among the most clearly stated observations of the growth implications of corporate tax reform is one from Gordon and Lee, who found that cutting the corporate tax rate by 10 percentage points can increase the annual growth rate by between 1.1 and 1.8 percentage points. The Tax Foundation also published estimates of the potential growth effects from corporate rate reduction, finding that reducing the “federal corporate tax rate from 35 percent to 25 percent would raise GDP by 2.2 percent, increase the private-business capital stock by 6.2 percent, boost wages and hours of work by 1.9 percent and 0.3 percent, respectively, and increase total federal revenues by 0.8 percent.”

Conclusion

The current administration has proposed a one-time levy on overseas earnings to pay for highway spending, while other policymakers have proposed a stand-alone tax holiday as a financing mechanism for the same spending. These proposals suffer from a number of flaws, the most significant of which is that they are proposed separately and distinctly from a pro-growth reform of the broken U.S. tax code. Addressing the broken code offers long-term economic growth, whereas a temporary policy offers little economic benefit in the current climate and fails as an effective financing mechanism for the unsound Highway Trust Fund.
  • Small businesses take the brunt of regulatory burdens

  • Regulations matter: a 10 percent increase in cumulative regulatory costs results in a 5 to 6 percent fall in the number of businesses with fewer than 20 workers

  • The administration's $656 billion in regulatory costs hits small businesses hardest

Executive Summary

American Action Forum (AAF) research examines the private sector implications of regulatory cost burdens. In particular, we analyze the cumulative effect of regulations on the number of businesses for a range of establishment sizes and find that regulatory costs have a highly regressive impact on private industries. Specifically, with a 10 percent increase in cumulative regulatory costs, there is a 5 to 6 percent fall in the number of businesses with fewer than 20 workers. That translates to a loss of over 400 small businesses in an industry. Meanwhile, those same regulations are associated with a 2 to 3 percent increase in businesses with 500 or more workers, indicating that those larger businesses are more capable of absorbing regulatory cost burdens. Small businesses will have a more difficult time complying with the cumulative effect of regulations, which could result in lost jobs.

Introduction

The interaction between regulation and private industries is highly complex. Since 2008, the federal government has imposed $733.9 billion in regulatory costs. AAF research indicates that the cumulative cost of all regulatory compliance devastates small businesses. Specifically, for every 10 percent increase in regulatory costs in an industry, the number of small and medium-size businesses in that industry falls 3 to 6 percent. The number of large businesses, meanwhile, grows 2 to 3 percent. In sum, we find that regulations cumulatively have a highly regressive effect, substantially reducing the smallest businesses and growing the largest.

Methodology

In a series of previous papers, AAF closely examined the cumulative impact of new regulations on private industries. In the most recent of these papers, for instance, we found statistically significant evidence that every $1 billion in new regulatory costs is associated with a 3.6 percent decline in industry-level employment.

In this paper, we study the same industries and regulations, but aim to dissect the impact of cumulative regulatory cost burdens by business size. Specifically, we estimate the relationship between cumulative regulatory costs in an industry and the number of business establishments with 1 to 4 workers, 5 to 9 workers, 10 to 19 workers, 20 to 49 workers, 50 to 99 workers, 100 to 249 workers, 250 to 499 workers, 500 to 999 workers, and those with 1,000 or more workers.

Data

To examine the effect of new regulatory costs, for each business size category we estimate the change in the number of establishments in an industry associated with an increase in the affected industry’s regulatory cost burden. AAF employs industry-level data for each business size category from the Census Bureau’s 2012 County Business Patterns and uses the average number of establishments in the industries[1] in each year from 2003 to 2012.[2] The regulatory cost estimate for each industry in a year is the sum of the projected annual costs of all new regulations the industry faced from the beginning of the period through the given year. We also adjust regulatory cost projections for inflation to 2012 dollars.
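This cost-series construction can be sketched as a running sum of each year's new regulatory costs after converting them to 2012 dollars; the annual cost and deflator figures below are invented for illustration, since the actual regulatory data are not reproduced here.

```python
# Hypothetical annual new-regulation costs for one industry (nominal $ millions)
# and deflators converting each year's dollars into 2012 dollars.
new_costs = {2009: 100.0, 2010: 250.0, 2011: 0.0, 2012: 150.0}
to_2012_dollars = {2009: 1.06, 2010: 1.04, 2011: 1.02, 2012: 1.00}

cumulative = {}
running_total = 0.0
for year in sorted(new_costs):
    # Deflate this year's new costs, then add them to the running stock of
    # all regulatory costs imposed since the start of the period.
    running_total += round(new_costs[year] * to_2012_dollars[year], 2)
    cumulative[year] = running_total

print(cumulative)  # {2009: 106.0, 2010: 366.0, 2011: 366.0, 2012: 516.0}
```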

Empirical Model

Using these data, AAF performs a fixed effects cubic regression to estimate the effect of an increase in regulatory costs in an industry on the number of business establishments by business size. Both the establishment and the cost terms are transformed into logarithmic variables. The cubic model with logarithmic variables allows us to address a nonlinear relationship between industry establishments and cumulative regulatory costs. In addition, we pool the business establishment data for all sizes under one business variable and use binary variables that represent each business size category. We then interact those categorical binary variables with the cost terms to estimate the association between cumulative regulatory costs and number of establishments in each business size category. For more information on our model and its exact specifications, see the appendix.
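In a cubic log-log specification of this kind, the elasticity of establishments with respect to cumulative cost is the derivative b1 + 2·b2·log(cost) + 3·b3·log(cost)², averaged over observations. A minimal sketch of that computation; the coefficients and cost values are hypothetical placeholders, not AAF's estimates.

```python
import math

def elasticity(log_cost: float, b1: float, b2: float, b3: float) -> float:
    """d log(establishments) / d log(cost) implied by a cubic log-log model."""
    return b1 + 2 * b2 * log_cost + 3 * b3 * log_cost ** 2

# Hypothetical coefficients and a small sample of industry-year cumulative
# regulatory costs in millions of 2012 dollars.
b1, b2, b3 = -0.8, 0.05, -0.002
log_costs = [math.log(c) for c in (50.0, 120.0, 300.0, 750.0)]

# Average the observation-level elasticities to get the average marginal effect.
avg_marginal_effect = sum(elasticity(x, b1, b2, b3) for x in log_costs) / len(log_costs)
print(round(avg_marginal_effect, 3))
```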

Findings

We find that regulatory costs are cumulatively associated with statistically significant changes in the number of industry establishments for each business size.

Table 1: Results

Business Size      Average Marginal Effect
1 to 4             -5.0%***
5 to 9             -5.5%**
10 to 19           -5.8%*
20 to 49           -4.0%***
50 to 99           -3.6%***
100 to 249         -2.3%**
250 to 499         -0.7%***
500 to 999         +1.7%***
1,000 or more      +3.4%**

*Jointly Significant at the 10% Level

**Jointly Significant at the 5% Level

***Jointly Significant at the 1% Level

Average marginal effect of a 10 percent increase in cumulative regulatory cost burden on number of business establishments

 

The results in Table 1 indicate that regulations cumulatively have a highly regressive effect on businesses. While regulations cumulatively reduce the number of small and medium-size businesses, they are associated with an increase in the number of large businesses. Moreover, the results reveal that regulations harm the smallest businesses the most. In an average industry, a 10 percent increase in the cumulative cost of regulations is associated with a 5.0 percent decrease in the number of businesses with 1 to 4 employees, a 5.5 percent decrease in the number with 5 to 9 employees, and a 5.8 percent decrease in the number with 10 to 19 employees. To put these figures in perspective, an average industry in 2012 had 4,848, 1,617, and 1,311 businesses with 1 to 4 workers, 5 to 9 workers, and 10 to 19 workers respectively. If in the following years, an average industry faced a 10 percent increase in cumulative regulatory costs, it would lose 240.5 businesses with 1 to 4 workers, 88.9 with 5 to 9, and 75.5 with 10 to 19. This means the industry would lose over 400 businesses that have fewer than 20 workers.

Table 2: Implications

Business Size      Average Change in Number of Businesses
1 to 4             -240.5
5 to 9             -88.9
10 to 19           -75.5
20 to 49           -42.0
50 to 99           -7.6
100 to 249         -3.1
250 to 499         -0.4
500 to 999         +0.6
1,000 or more      +1.5
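The implied changes in Table 2 follow from multiplying each size class's average 2012 establishment count by its marginal effect from Table 1. A sketch for the three smallest size classes, using the rounded Table 1 effects (so the total differs slightly from the paper's figures, which use unrounded estimates):

```python
# Average 2012 establishment counts (from the text) and the rounded Table 1
# marginal effects of a 10 percent increase in cumulative regulatory costs.
small_sizes = {
    "1 to 4":   (4848, -0.050),
    "5 to 9":   (1617, -0.055),
    "10 to 19": (1311, -0.058),
}

losses = {size: count * effect for size, (count, effect) in small_sizes.items()}
total_lost = -sum(losses.values())

print(round(total_lost, 1))  # 407.4 -- over 400 businesses with fewer than 20 workers
```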

 

Our results reveal that the number of medium-size businesses is negatively related to cumulative regulatory costs as well, although to a lesser degree. A 10 percent increase in the cumulative cost of regulations is associated with a 4.0 percent decline in the number of businesses with 20 to 49 workers, a 3.6 percent decline in the number with 50 to 99, a 2.3 percent decline in the number with 100 to 249, and a 0.7 percent decline in the number with 250 to 499. Notice that as the business size gets larger, the negative impact of regulations becomes weaker, perhaps because larger businesses are more capable of absorbing regulatory costs than smaller businesses.

To put these results in perspective, in 2012 an average industry in our sample had 1,045, 213, 132, and 53 businesses with 20 to 49 workers, 50 to 99 workers, 100 to 249 workers, and 250 to 499 workers respectively. Our results indicate that if in the following years an average industry faced a 10 percent increase in cumulative regulatory costs it would lose 42 businesses with 20 to 49 workers, 7.6 businesses with 50 to 99 workers, 3.1 businesses with 100 to 249 workers, and less than 1 business with 250 to 499 workers.

Finally, our results indicate that the number of establishments in the largest business categories actually grows when regulatory costs increase. Specifically, a 10 percent increase in cumulative regulatory costs is associated with a 1.7 percent increase in the number of businesses with 500 to 999 workers and a 3.4 percent increase in the number of businesses with 1,000 or more workers. In 2012, the industries in our sample averaged 34 businesses with 500 to 999 workers and 45 with 1,000 or more workers. Thus, the results indicate that if an average industry faced a 10 percent increase in cumulative regulatory costs, it would gain less than one business with 500 to 999 workers and 1.5 businesses with 1,000 or more workers.

Why would large businesses grow, despite facing regulatory costs? It could be that the largest businesses are most able to absorb regulatory costs, giving them a competitive advantage over smaller companies.[3] Previous research has found that this regressive trend is particularly apparent in the financial services industry. According to a 2012 Government Accountability Office (GAO) report, “[r]esearch suggests that one area in which large banks are able to take advantage of economies of scale is regulatory compliance, which contributes to their advantage in terms of operational efficiency.”

Discussion

Generally, regulatory costs are fixed, meaning that if all businesses are forced to deal with hundreds of hours of new paperwork, the cost of hiring an additional compliance officer will fall disproportionately on small institutions. Today, there are more than 236,000 regulatory compliance officers, and they command average salaries of about $66,000 annually. AAF has written on the steady growth of regulatory compliance staff, noting a “canary in the coal mine” effect: regulation likely increased during the recession because regulatory compliance staff grew by 18 percent between 2009 and 2012, a period of poor U.S. economic growth.

A 2013 Minneapolis Fed study emphasized the paperwork burdens and what being forced to hire compliance staff means for small banks. The study found that hiring two additional compliance officers reduced profitability by 45 basis points (roughly half a percentage point) and that one-third of the small banks studied would become unprofitable.[4] Moreover, Federal Reserve Board Governor Daniel Tarullo has warned policymakers about regressive regulation: “Any regulatory requirement is likely to be disproportionately costly for community banks since the fixed costs associated with compliance must be spread over a smaller base of assets.”

Beyond the empirical evidence that regulations disproportionately affect smaller institutions, regulators themselves admit that rulemaking hurts small businesses. In a set of recently proposed efficiency standards for furnaces, the Department of Energy (DOE) noted that conversion costs for the measure would equal 18 percent of small business revenue, compared to just three percent of large business revenue. Furthermore, in EPA’s greenhouse gas reporting rule, the agency noted that costs per entity for the smallest firms would be 1.32 percent, compared to 0.02 percent for the largest companies. In other words, for the reporting rule, regulatory costs are 65 times more burdensome for small businesses. Finally, in one regulation, DOE stated that the rule would likely cause small businesses to leave the air conditioning market or merge with larger competitors. DOE wrote, “It is possible the small manufacturers will choose to leave the industry or choose to be purchased by or merged with larger market players.” Regulations unquestionably have regressive impacts; just ask the regulators.

Conclusion

Regulation does not just affect small businesses; it hurts the smallest more significantly than medium-to-large-sized establishments. The data are clear: as regulatory burdens increase, the smallest businesses (1 to 19 employees) shed hundreds of establishments while the largest (1,000 or more employees) actually grow by 3.4 percent. Despite regulatory reform designed to protect small businesses, sheer economies of scale and unchecked regulators have made life for small employers incredibly burdensome.

Appendix

As previously mentioned, AAF performs a fixed effects cubic regression to estimate the effect of an increase in regulatory costs on number of business establishments by business size. Both the establishment and the cost terms are transformed into logarithmic variables. The cubic model with logarithmic variables allows us to address a nonlinear relationship between industry establishments and cumulative regulatory costs.

We pool the business establishment data for all establishment sizes under one business variable and insert categorical binary variables that indicate which business size is being examined. For instance, when estimating the impact of regulatory costs on establishments with 1 to 4 workers, the binary variable representing that business category equals 1 and all other business binaries equal 0. Those binary variables also interact with each regulatory cost term in order to measure the cumulative impact of regulations on the number of businesses in each size category. As a result, the three cost variables (log(Cost), log(Cost)², and log(Cost)³) and the three terms that interact each cost variable with the business size’s categorical binary variable (Business Size x log(Cost), Business Size x log(Cost)², and Business Size x log(Cost)³) capture the total effect of regulatory costs on business establishments for any business size. For instance, the variables log(Cost), log(Cost)², log(Cost)³, (1 to 4) x log(Cost), (1 to 4) x log(Cost)², and (1 to 4) x log(Cost)³ capture the impact of cumulative regulatory costs on businesses with 1 to 4 workers.[5]

We use industry fixed effects to control for characteristics that vary across industries, but not over time. Also, to account for macroeconomic forces that change over time, such as the loss of businesses and jobs during the Great Recession, we control for year. To account for changes in prices during the time period, we control for the industry chained Consumer Price Index. Also, any fixed effects model can face the problem of autocorrelation, in which a variable is correlated with itself over time and biases the results. Our model addresses this issue by using heteroskedasticity- and autocorrelation-consistent standard errors. The exact model is displayed below.

log(Establishments)_it = α_i + β1·log(Cost)_it + β2·log(Cost)²_it + β3·log(Cost)³_it + Σ_s [δ_s·Size_s + Size_s·(γ1,s·log(Cost)_it + γ2,s·log(Cost)²_it + γ3,s·log(Cost)³_it)] + θ·CPI_it + Σ_t λ_t·Yr_t + ε_it

The subscripts i and t denote industry and year of the observations, respectively. There are three variables representing the regulatory cost burden: log(Cost), log(Cost)², and log(Cost)³. These are then interacted with each binary business size variable: (1 to 4), (5 to 9), (10 to 19), (20 to 49), (50 to 99), (100 to 249), (250 to 499), and (500 to 999). CPI represents the annual average chained Consumer Price Index, as reported by the Bureau of Labor Statistics. Finally, each Yr variable is a binary variable representing the year of an observation.[6]



[1] The industries we examine can be found in “The Cumulative Impact of Regulatory Cost Burdens on Employment,” American Action Forum, May 2014, http://americanactionforum.org/research/the-cumulative-impact-of-regulatory-cost-burdens-on-employment

[2] U.S. Record Layout, County Business Patterns: 2012, Census Bureau, http://www.census.gov/econ/cbp/download/

[3] In some instances, medium businesses could be adding employees and growing into the larger business categories, resulting in a reduction in the number of medium businesses and an increase in the number of large businesses. This could be true for the 250 to 499 category, as some businesses with close to 499 workers may add more employees, resulting in fewer businesses in that category and more businesses in the 500 to 999 worker category. However, it is highly unlikely that businesses in the smaller categories are adding workers in this manner, simply because in each small business category, the relationship between regulations and establishments is negative.

[4] Marshall Lux and Robert Greene, “The State and Fate of Community Banking,” available at http://www.hks.harvard.edu/centers/mrcbg/publications/awp/awp37.

[5] To avoid the dummy variable trap, the binary variable that represents businesses with 1,000 or more workers was excluded from the model. As a result, the total effect of regulatory costs on those businesses was captured only by the three log(Cost) terms.

[6] We purposefully omit Yr1 from the regression model to avoid the dummy variable trap.

  • The administration's January regulatory review finds $3 billion in new regulatory costs

  • Without the Department of Transportation, the January regulatory review would have found nearly 9 million additional paperwork hours

  • Find out how seriously agencies have taken the order to “modify, streamline, expand, or repeal” existing regulations.

The administration’s recent attempt to eliminate red tape actually resulted in nearly $3 billion in additional regulatory burdens for Americans. Under President Obama’s executive orders (13,563 and 13,610), agencies were told to “modify, streamline, expand, or repeal” existing regulations. Under this administration, however, regulations are too often expanded and rarely repealed or modified.

Perhaps more troubling, these retrospective reviews of existing regulations seldom follow the law. Under Executive Order 13,610, agencies are required to submit retrospective plans to the White House “on the second Monday of January.” Agencies then must make these reports public within “three weeks from the date of submission of the draft reports.” This means that all reports were supposed to be public the first week of February. Instead, it wasn’t until March 17, more than six weeks after the deadline, that regulatory czar Howard Shelanski outlined the new plans.

The American Action Forum (AAF) reviewed all publicly released plans from cabinet agencies and found:

  • The updating agencies listed 438 rulemakings and amended paperwork requirements, with a median of 22 per agency;
  • Among the listed rulemakings, net costs increased by more than $2.9 billion, with just three agencies reducing burdens; and
  • Among the listed rulemakings, there was a decrease of 60.1 million paperwork hours, driven entirely by the Department of Transportation (DOT).

 

Analysis of January 2015 Retrospective Review Plans

Agency             Number of Rules Reviewed   Cost (in millions)   Burden Hours
Agriculture        54                         -$0.2                267,033
Commerce           45                         -$0.1                -3,774
Defense            77                         $194                 10,263
DHS                24                         $257                 1,018,170
Education          22                         $4,339               7,358,599
Energy             18                         --                   --
EPA                21                         --                   --
HHS                32                         $392                 --
HUD                11                         --                   --
Interior           22                         $3                   160,427
Justice            9                          --                   1,404
Labor              14                         $35                  --
State              28                         --                   -12,500
Transportation     47                         -$2,253              -68,916,917
Treasury           13                         --                   -18,385
Veterans Affairs   1                          --                   5,550
Totals             438                        $2,967               -60,130,180

Results

As noted, the headline figures are incredibly “top heavy.” Without the Department of Transportation (DOT), the totals would change to increased costs of $5.2 billion and increased paperwork of 8.7 million hours. Once again, DOT saves the report from total failure, but that should not excuse other agencies that continue submitting new rules that expand regulations, add costs, and pile on more paperwork.

For example, the Department of Health and Human Services (HHS) included a rule in its retrospective plan that adds more than $390 million in long-term burdens. The administration admits the rule implements part of the Affordable Care Act (ACA). In other words, the administration claims a new rule that imposes hundreds of millions of dollars in new burdens and implements the ACA is a retrospective review designed to streamline the regulatory process.

In addition, the Department of Education once again included its controversial “Gainful Employment” rule in its plan, even though it will cost $4.3 billion and add more than 6.9 million paperwork burden hours. The regulation, previously struck down by an Obama appointee, would not streamline the current regulatory environment; it would single out for-profit education for onerous new rules. In the rule’s brief acknowledgement of the president’s executive order, it notes, “[T]he Department believes that these final regulations are consistent with the principles in Executive Order 13563.” However, despite billions of dollars in total costs, the administration fails to provide a single benefit figure and can’t explain how nearly seven million hours of new paperwork will streamline the regulatory system.

As a result of the gainful employment rule, the Department of Education’s cumulative paperwork burden is now 94.8 million hours. By contrast, in 2008 that figure stood at 58.5 million hours. Despite the president’s effort to cut red tape, the amount of education-related paperwork has increased by 62 percent.

Thankfully, the Department of Transportation (DOT) saved all other agencies by finalizing or proposing $2.5 billion in cost-cutting measures and reducing 68.9 million paperwork burden hours. For perspective, DOT’s current paperwork burden is 317 million hours.

 

DOT achieved these goals through its “Driver-Vehicle Inspection Report” (DVIR), an example of an agency revisiting its regulatory slate and making substantive revisions, rather than simply adding another layer of rules and pretending that the measure was “retrospective.” The DVIR rule is projected to save the trucking industry $1.7 billion in costs and eliminate more than 46 million hours of paperwork. The agency also proposed to eliminate 22 million hours of paperwork by revising its “Hours of Service” reporting.

 

A Culture of Retrospective Review?

 

Harvard law professor and former regulatory czar Cass Sunstein wanted to instill a “consistent culture of retrospective review” when he helped advance the president’s executive orders. Looking at the number of new initiatives in the retrospective reviews reveals that many agencies simply “cut and paste” from their previous work.

 

For example, the Departments of Energy (DOE), Housing and Urban Development (HUD), and the Environmental Protection Agency (EPA) failed to quantify a single new cost-cutting rulemaking in their recent reports. Even though there were 438 entries provided in the January update, only 140 of them were new. The table below outlines the number of new reviews by agency and the percentage of previous rulemakings that agencies borrowed from past reports.

 

Updates from Previous Reports

Agency | Number of New Reviews | Percentage of Old Rules
Agriculture | 31 | 42.6%
Commerce | 13 | 71.1%
Defense | 30 | 61%
DHS | 1 | 95.8%
Education | 14 | 36.4%
Energy | 0 | 100%
EPA | 2 | 90.5%
HHS | 10 | 68.8%
HUD | 8 | 27.3%
Interior | 8 | 63.6%
Justice | 5 | 44.4%
Labor | 3 | 78.6%
State | 5 | 82.1%
Transportation | 6 | 87.2%
Treasury | 3 | 76.9%
Veterans Affairs | 1 | 0%
Totals | 140 | Average: 64.3%

 

 

As noted above, three agencies recycled more than 90 percent of their retrospective reviews. DOE couldn’t conjure a single new rulemaking to review, and for manufacturers subject to expensive regulations, that’s perhaps a positive development. In previous updates, DOE pretended that new economically significant efficiency standards were somehow designed to cut or streamline the regulatory state. DOE’s last two reports contained $10.5 billion and $3.3 billion in net regulatory cost increases.
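The “Percentage of Old Rules” figures follow directly from the counts in the two tables: each agency’s recycled share is simply listed rulemakings minus new reviews, divided by the listed total. A quick check (counts copied from the tables above; the computed average lands near the table’s 64.3 percent, with small rounding differences):

```python
# Agency counts from the two tables above: (rulemakings listed, new reviews).
agencies = {
    "Agriculture": (54, 31), "Commerce": (45, 13), "Defense": (77, 30),
    "DHS": (24, 1), "Education": (22, 14), "Energy": (18, 0),
    "EPA": (21, 2), "HHS": (32, 10), "HUD": (11, 8),
    "Interior": (22, 8), "Justice": (9, 5), "Labor": (14, 3),
    "State": (28, 5), "Transportation": (47, 6), "Treasury": (13, 3),
    "Veterans Affairs": (1, 1),
}

# Share of each agency's entries recycled from earlier reports.
old_share = {a: (listed - new) / listed for a, (listed, new) in agencies.items()}

print(f"DHS recycled {old_share['DHS']:.1%} of its entries")  # 95.8%
average = sum(old_share.values()) / len(old_share)
print(f"Average share of old rules: {average:.1%}")  # roughly 64%
```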

 

The Department of Veterans Affairs (VA) is also an outlier here. Despite the long-catalogued list of problems at VA, it listed only one rulemaking in its report, and that measure actually increased paperwork. VA did mention that it is pursuing other updates to its regulatory slate, but given the current problems at VA, it is hard to believe the agency can muster only one rulemaking designed to improve the system.

 

Conclusion

 

Out with the old, in with the new. This might be a sensible aphorism for cleaning out the regulatory thicket, but instead the administration would rather pile on new rules. Only one agency has consistently shown an effort to streamline federal regulations. Despite DOT’s efforts, the administration has added approximately $3 billion in costs.

  • Observers are now concerned that too many patents are issued and too many activities are protected under US patent law.

  • Patent trolls and the policies that will change how companies interact with "patent assertion entities."

  • The US must find a way to provide strong patents for legitimate inventors while stopping the fraudulent behavior of patent asserters.

Patents are a government-granted right that allows individuals or companies to exclude others from making, using, selling, or importing inventions for a set amount of time, typically 20 years. By imitating a number of key features of real property, patents have allowed innovators to reap the gains of their effort.[1] The United States has a highly successful system for issuing patents; indeed, observers are now concerned that too many patents are issued and too many activities are protected.

This concern spurred significant changes to patent law in 2011. Despite this recent legislation, Congress is again considering reforms, with the focus on so-called patent trolls. Patent trolls are entities that earn income from their patents by asserting them against others. These companies issue letters demanding payment for alleged infringement. Because the license fee these firms demand is cheaper than going to trial, which can cost nearly $3 million, most companies opt to pay.[2] The extent and cost of this practice has been widely debated. This primer reviews the current literature on patent trolls, documents changes made to the law in 2011 with the America Invents Act (AIA), explains the five current areas of proposed reform, and finishes with a discussion of software patents.

As Marc Andreessen, co-founder of Netscape Communications, first said, software is eating the world. Getting the contours of patent law right is important for this highly productive industry. However, ensuring that it succeeds must not come at the expense of others, like those in the high-tech manufacturing industry and the medical sciences.

The Trouble With “Trolls”

Patent trolls fall under the umbrella of patent assertion entities (PAEs), which in turn fall within the broader class of non-practicing entities (NPEs). Because the license fee these firms demand is cheaper than going to trial, most companies opt to pay.

Reform proponents point to an explosive increase in these kinds of lawsuits in recent years. According to a number of reports, abusive claims accounted for over half of all patent litigation in 2012, while there was a nearly 40 percent increase in suits by NPEs between 2007 and 2011.[3] Thus, there has been pressure to change some of the aspects of patent laws to stop firms from engaging in this kind of behavior. 

At first blush, anecdotal evidence supports the reformers’ case. Famously, Innovatio IP Ventures sent demand letters to a number of small businesses like coffee shops and bakeries for their use of off-the-shelf Wi-Fi equipment, because they allegedly violated patents that Innovatio bought from Broadcom, a semiconductor company. In reality, many of the largest Wi-Fi router manufacturers already retained licenses for those patents, so consumers were protected from any litigation. Because the original demand letters called for those small businesses to pay between $2,500 and $5,000, many in the hardware industry, like Cisco, Netgear, and Motorola Solutions, intervened. Cisco eventually settled with the company for 3.2 cents per infringement.[4]

Ultimately, the patents that Innovatio IP Ventures was asserting were used within an industry standard. Such patents must typically be licensed to others on fair, reasonable, and non-discriminatory (FRAND) terms. Under these terms, Innovatio IP Ventures should have gotten at most 9 cents for each infringement, which makes the settlement all the more interesting.[5] Broadly speaking, the cost and structure of FRAND agreements has been a serious point of contention for those in the software industry, which now encompasses the vast majority of patent cases.

 

In practice, empirical evidence of the pervasiveness of abuse in sectors other than the software industry has yet to be firmly established. For example, one of the most highly cited reports, the NPE Litigation Report, counts not just patent assertion entities but also every other NPE. This other category includes a number of organizations many would not consider abusive, like universities, research institutions, individual inventors, and non-competing entities, a term describing companies that assert patents outside their areas of products or services.[6] So it is unclear how much of this activity is abusive rather than a natural development within the market. Knowledge and innovative ideas are in demand, so firms are likely to specialize in order to meet this demand, which includes NPEs. Even so, following the decision in Alice v. CLS Bank, an important and recent case detailed later, estimates have charted a decline in patent cases from their 2012 high, with 2014 seeing over 1,000 fewer patent cases than the previous year.[7]

Courts can and have had an impact. Yet, many of the provisions in a recently passed piece of patent legislation have not been worked through the courts. In order to understand current calls for action, it is important to understand those changes as well.

The America Invents Act

The 2011 America Invents Act (AIA) was designed to improve patent quality and reduce strain on the judicial system. Among the most notable changes was the move from a first-to-invent standard to a first-to-file standard, bringing US law in line with most other countries’. To help reduce abusive claims, the Act provided new ways to challenge questionable patents through the Patent and Trademark Office’s (PTO) newly created Patent Trial and Appeal Board (PTAB). Proponents argued that if dubious patents could be invalidated through new, faster review processes, then trolls could not assert them against vulnerable producers who are unable to fight the patents themselves. The Act thus established a way for anyone to challenge any claim of any patent through a process called inter partes review (IPR). In this proceeding, the challenger presents evidence that a patent should be considered invalid. After agreeing to hear the petition, the PTAB must render its decision within twelve months, acting as a cheaper, faster alternative to the courts with a lower burden of proof.

The covered business method review (CBMR) works in a similar way. This process subjects patents on financial “business methods,” a controversial field of patents, to additional scrutiny, as they are vulnerable to challenge by any party with standing. AIA-instituted proceedings have cut down on the patent caseload of courts by 18 percent, but they are not without critics.[8]

As of November 2014, the PTAB had cancelled 80 percent of claims challenged under IPR and 96 percent of claims challenged under CBMR.[9] Some have argued that the structure of the review process also lends itself to abuse, as the system can be and is being gamed by petitioners to keep patents tied up in review.[10] For example, a patent can be challenged again and again even once the PTAB has ruled in its favor. CBMR rules are even weaker and allow for evidence to be withheld only to be introduced in later challenges. While there was an 18 percent decrease in patent litigation in courts, trials at the PTO increased by 212 percent.[11] Thus, the AIA has clearly affected patents, but in ways that are not yet fully discernible, since it takes nearly 2.5 years to bring a case to trial.[12]

Patent Reform Proposals in 2015

Five changes to the current patent law regime have been proposed to reduce the ability of firms to engage in abusive behavior: fee shifting, customer stay provisions, heightened pleading standards, transparency, and demand letter reform. 

Fee shifting would change U.S. law so that the loser in a patent infringement lawsuit would also have to pay the winner’s attorneys’ fees. Only when the court determines that the loser advanced objectively reasonable positions would the loser escape those fees. Currently, each party pays its own fees, and only in exceptional cases is the loser expected to pay. Both versions of the rule have their rightful critics. While there could be more frivolous lawsuits when each party pays its own legal costs, small but meritorious claims are just as likely to be deterred when the loser pays all of the fees.[13] If the goal of the legislation is to undercut claims that lack merit but are still paid off by small businesses, then fee shifting is unlikely to be the most direct solution.[14] By most accounts, only 4 to 7 percent of cases go to trial.[15] The other 93 to 96 percent of cases never reach a judgment, and thus would never test a new fee-shifting standard. Changes that simply make loser-pays less exceptional would likely be a better interim path, and they have garnered far wider support because they are seen as a middle road between the two regimes.

Customer stay protections make it easier for the manufacturer of an allegedly infringing good to step in to defend its product while putting a temporary halt to proceedings against an end customer. For example, if a PAE begins litigation against a cafe over its use of a refrigerator on which the PAE claims to hold a patent, the refrigerator manufacturer can halt the case against the cafe while it defends its product against the PAE. This change would allow the larger manufacturing firm, with more resources and, perhaps, patent lawyers at its disposal, to defend its product while protecting end customers, who tend to be smaller, more vulnerable businesses.

Pleading standards are another area slated for reform. At the beginning of a lawsuit, each side must formally submit its claims and defenses. The information required in patent cases is far less than in other parts of law. (This is due to the loophole called Rule 84, which allows complaints that follow Form 18 to be sufficient.) Changes to pleading standards are already underway within the courts.[16] Late last year, the Judicial Conference, which establishes uniform policy for the U.S. courts, suggested that Rule 84 be abolished. In all likelihood, the Supreme Court will approve the changes, which will then be sent to Congress. Thus, Congress will likely have to address pleading standards one way or another. The question for Congress is whether it should issue a new standard or be subject to the broader standard.

Because many of the worst offenders have created shell corporations to shelter the identity and nature of the plaintiff, transparency has been another goal for legislators. New rules would require that parties with a financial interest in the patents asserted in a court action be disclosed. While this does create a burden for large integrated firms and could potentially reveal proprietary information, many court proceedings already require such disclosures under seal to protect confidential information.

Demand letters mark the beginning of an infringement claim and have garnered attention in the press. A couple of years ago, a company called MPHJ sent over 16,000 of these letters demanding payment for a “scan to email” patent.[17] While there have been various bills designed to deal with this behavior, the Federal Trade Commission (FTC) brought a suit against MPHJ which resulted in a consent agreement restricting it from sending deceptive letters. While these actions by the FTC have lessened the need for a directed bill, the agency has been restrained on this issue, so a bill clarifying the exact purpose of the FTC’s authority and giving it clear guidance on the kinds of behaviors that need to be curbed could deter abusive demand letters.

Patents and Software Post Alice

More than any other, the software industry has been at the heart of recent patent debates. Software patents are controversial, with some even calling for their abolishment.

The patentability of software was brought to the forefront in last summer’s Supreme Court ruling in Alice Corp. v. CLS Bank International.[18] Alice held patents on computer operated escrow services. CLS sued to have those patents declared invalid. In a 9-0 decision the Court ruled that Alice’s service was merely using a computer to carry out the already well known practice of intermediated settlement. That practice, the Court held, is an abstract idea, and using a computer to implement it does not transform it into a patentable invention.

The fallout of the Alice decision is ongoing, but it has sparked renewed debate over software patents in general. Even though it is still early, these patents have been invalidated more often, which has led some to hail the new legal standard as a victory. It is important to note that the Court’s decision did not rule out all software patents; it applies only to patents on art that is only innovative insofar as it carries out an abstract idea with a computer. Some have interpreted the ruling as teeing up a later invalidation of all software patents because all software is, at its core, merely feeding abstract information into a machine which can read it. While some of the most vocal opponents of this approach acknowledge that computer code is probably more suitable for copyright rather than patent protection, software is more than specific code.[19] The real value of a piece of software is in its function, and the invention of new functionality is no less patentable when it is implemented by a computer than when it is implemented by a mechanical device.

Conclusion

Congress is now considering a variety of reform proposals, each of which has strong support from some groups and strong skepticism from others. The main concern is finding a way to provide strong patents for legitimate inventors while stopping the fraudulent behavior of patent asserters.



[1] Will Rinehart, Intellectual Property Underpinnings of Pharmaceutical Innovation: A Primer, http://americanactionforum.org/research/intellectual-property-underpinnings-of-pharmaceutical-innovation-a-primer

[2] Chris Neumeyer, Managing Costs of Patent Litigation http://www.ipwatchdog.com/2013/02/05/managing-costs-of-patent-litigation/id=34808/

[3] Zach Warren, Facts and Figures: Patent troll lawsuits rising,  http://www.insidecounsel.com/2014/02/07/facts-and-figures-patent-troll-lawsuits-rising

[4] Joe Mullin, Wi-Fi “patent troll” will only get 3.2 cents per router from Cisco, http://arstechnica.com/tech-policy/2014/02/cisco-strikes-deal-to-pay-wi-fi-patent-troll-3-2-cents-per-router/

[5] Thomas C Lundin Jr., Tony V Pezzano , Jeffrey M. Telep and Taryn Koball Williams, Second judicial determination of FRAND rate in Innovatio IP Ventures Patent Litigation, http://www.lexology.com/library/detail.aspx?g=453f612b-fff8-474d-9777-512342ff789c

[6] RPX, 2014 NPE Litigation Report, http://www.rpxcorp.com/wp-content/uploads/sites/2/2015/03/RPX_Litigation_Report_2014_FNL_03.30.15.pdf

[7] Gene Quinn, Decrease in patent litigation questions need for patent reform,  http://www.ipwatchdog.com/2015/03/30/decrease-in-patent-litigation-questions-need-for-patent-reform/id=56159/

[8] Ryan Davis, Fewer Patent Cases In Court, Way More At PTAB, Report Finds, http://www.law360.com/articles/622450/fewer-patent-cases-in-court-way-more-at-ptab-report-finds

[9] Greg Dolin, The Costs of Patent “Reform”: The Abuse of the PTO’s Administrative Review Programs,  http://cpip.gmu.edu/wp-content/uploads/2014/04/Dolin-Abuse-of-PTO-Review-Programs.pdf

[11] See footnote 7

[13]  Robert V. Percival & Geoffrey P. Miller, The Role of Attorney Fee Shifting in Public Interest Litigation, http://scholarship.law.duke.edu/cgi/viewcontent.cgi?article=3755&context=lcp

[14] Avery W. Katz and Chris William Sanchirico, Fee Shifting in Litigation: Survey and Assessment, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1714089 

[15] Michael A. Albert, Is Patent Litigation Worth It?, http://www.wolfgreenfield.com/files/albert_is_patent_litigation_worth_it_.pdf

[16] Vin Gurrieri, Judges Vote To Nix Rule Creating Patent Complaint Forms, http://www.law360.com/articles/578149/judges-vote-to-nix-rule-creating-patent-complaint-forms

[17] Alison Griswold, The FTC Has Settled With America’s Most Notorious Patent Troll, http://www.slate.com/blogs/moneybox/2014/11/07/ftc_patent_troll_settlement_mphj_charged_with_deceptive_sales_claims_and.html

[18] Timothy B. Lee, Will the Supreme Court save us from software patents?, http://www.washingtonpost.com/blogs/the-switch/wp/2014/02/26/will-the-supreme-court-save-us-from-software-patents/

[19] Adam Mossoff, A Brief History of Software Patents (And Why They’re Valid), http://cpip.gmu.edu/wp-content/uploads/2013/08/A-Brief-History-of-Software-Patents-Adam-Mossoff.pdf

According to the Environmental Protection Agency’s (EPA) own estimates, its proposed power plant regulation could eliminate one-fifth of existing coal generation facilities and 80,000 energy jobs. The regulation, set for final publication this summer, would limit emissions at existing coal and natural gas power plants while also ensuring that consumers use less energy from coal facilities. Based on American Action Forum (AAF) research, this means that more than 90 coal-fired power plants could be retired across the country. Secondary employment impacts suggest that EPA’s power plant regulation could eliminate 296,000 jobs, about the population of Cincinnati, Ohio, and more than the total number of jobs the economy created in February 2015.

Methodology

Buried in one of EPA’s technical support documents, among more than 1,600 other supporting documents for the proposed rule, the administration detailed the economic implications of complying with building blocks one (heat rate improvements at coal plants) and two (increased natural gas dispatch rates and decreased coal usage). EPA estimated one-fifth of existing coal could be retired, around 49 gigawatts (GW) of power, and the coal industry could lose upwards of 80,000 jobs.

Facilities Closure

AAF used the 49 GW figure and EPA’s eGRID data to find the least efficient affected power plants, as measured by heat rate and pounds of CO2 emitted per megawatt hour (CO2/MWh). Anticipating that the least efficient coal facilities are most at risk for closure, AAF identified the 93 least efficient power plants that produced about 50 GW of power.
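The screening step described above can be sketched in a few lines: rank coal units by heat rate (Btu/kWh, where higher means less efficient) and flag the least efficient units until their combined capacity reaches a retirement target. The plant names, heat rates, and capacities below are made up for illustration; AAF applied the idea to EPA’s eGRID data with the 49 GW figure.

```python
# Illustrative screening: flag the least efficient plants (highest heat rate)
# until their cumulative capacity reaches the retirement target.
# All numbers below are hypothetical, not eGRID data.
plants = [
    ("Plant A", 11_200, 0.9),  # (name, heat rate in Btu/kWh, capacity in GW)
    ("Plant B", 10_400, 1.5),
    ("Plant C", 12_100, 0.6),
    ("Plant D",  9_800, 2.0),
]

TARGET_GW = 3.0  # toy target; the analysis above uses EPA's 49 GW estimate

at_risk, cumulative = [], 0.0
for name, heat_rate, capacity in sorted(plants, key=lambda p: p[1], reverse=True):
    if cumulative >= TARGET_GW:
        break
    at_risk.append(name)
    cumulative += capacity

print(at_risk)  # least efficient first: ['Plant C', 'Plant A', 'Plant B']
```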

Below is a map of facilities most susceptible to closure because of EPA’s proposed power plant regulation.

Employment Impacts

EPA predicts that if states adopt only options one and two of the administration’s plan for power plants, 80,000 energy industry jobs will be lost to EPA climate regulations.

For perspective, 80,000 jobs is larger than the population of Napa, CA. But this is only the first part of the story. EPA never quantifies the secondary employment effects of these lost jobs. A 2009 PricewaterhouseCoopers study found that one energy job supports 3.7 additional jobs. Using a jobs multiplier of 3.7, applied to the 80,000 lost jobs that EPA concedes, yields about 296,000 lost jobs across the U.S. 

To put the figure of 296,000 lost jobs in context, the average annual pay in the “fossil fuel electric power generation” industry is $103,645 and the average coal mining salary is $82,068. This means that by 2030, the economy could lose $27.7 billion in wages, larger than the GDP of Jamaica. However, nowhere in EPA’s Regulatory Impact Analysis (RIA) does the agency monetize the loss of these jobs or wages.

Professors Jonathan Masur and Eric Posner of the University of Chicago have devised a central estimate of the cost of a lost job: $100,000. Using this figure, the broader economic implications of 296,000 lost jobs become bleak: the employment costs could eclipse $29.6 billion. That figure alone would make it one of the costliest regulations of all time, but it is absent from EPA’s RIA because agencies typically refuse to incorporate employment projections into regulatory analysis.
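The arithmetic behind these headline numbers is simple enough to check directly; the inputs below are the figures cited above (EPA’s 80,000 direct jobs, the PricewaterhouseCoopers multiplier, and the Masur-Posner cost per lost job):

```python
# Reproduce the employment-cost arithmetic cited above.
direct_jobs = 80_000    # EPA's own estimate of lost energy-sector jobs
multiplier = 3.7        # PricewaterhouseCoopers jobs multiplier

total_jobs = round(direct_jobs * multiplier)
print(total_jobs)       # 296000 lost jobs economy-wide

cost_per_job = 100_000  # Masur & Posner's central cost of a lost job
employment_cost = total_jobs * cost_per_job
print(f"${employment_cost / 1e9:.1f} billion")  # $29.6 billion
```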

Below is a snapshot of how EPA anticipates these jobs will be lost throughout the energy industry.

Employment Category | Year 2020 | Year 2025 | Year 2030
Retired Coal | -15,600 | -14,100 | -12,300
Retired Oil and Natural Gas | -700 | -500 | -400
Coal Extraction | -10,500 | -12,400 | -13,500
Total Loss by 2030: 80,000 Jobs


AAF determined how job losses are distributed across states. Jobs lost through coal and oil and natural gas retirements were allocated according to which states will see the most generation losses because of the proposed EPA climate regulations. EPA provided estimates of historic energy generation and how coal and oil and gas facilities will be redispatched or retired under the rule. Jobs lost in coal extraction were allocated among states according to coal employment data from the Bureau of Labor Statistics.

State Impacts

The distribution of these 296,000 lost jobs is not evenly scattered throughout the United States.

The redispatch or retirement of coal facilities falls heavily on a small number of states that EPA will hold to stricter compliance goals. For example, Georgia is expected to lose 13.7 terawatt hours of coal through redispatch or retirement, about 3.6 percent of the total lost coal generation in the U.S. Thus, Georgia’s share is roughly 5,909 fewer jobs. Losses in oil and gas generation are significantly smaller but are likewise concentrated in the states with the largest generation losses. California will lose 10.4 terawatt hours of oil and gas steam generation, amounting to 956 lost jobs.

Jobs lost in coal extraction will also fall disproportionately on a small number of states. While coal extraction in the Appalachian Basin is far more labor-intensive than coal extraction in the western U.S., western states account for a greater percentage of coal generation. The result is job losses distributed across centers of production. For example, Virginia accounts for 6 percent of all coal extraction jobs and will see 7,452 jobs lost.

It is important to note that EPA anticipates that nearly half of all coal extraction jobs will be lost. Of the 76,000 jobs in coal extraction today, EPA climate regulation is anticipated to eliminate 36,400 by 2030.

The following map displays how many jobs each state could lose as result of redispatch, retirement, and lost coal extraction employment.  

Conclusion

It’s no surprise that EPA’s greenhouse gas rule will eliminate tens of thousands of jobs in the energy sector; the agency admits the rule will cause steep job cuts and could retire up to one-fifth of coal generation. But that’s not the entire story. The 80,000 in possible direct job losses could easily translate into more than 296,000 lost jobs throughout the economy and $29 billion in employment costs. EPA might tout the benefits of its proposal, but the significant job losses are just as noteworthy.

Chairman Lamar Alexander (R-TN) and Ranking Member Patty Murray (D-WA), leaders of the Senate Health, Education, Labor, and Pensions (HELP) Committee, introduced a bill last week to reauthorize the Elementary and Secondary Education Act (ESEA). The Committee is scheduled to mark up the bill this week, and the Senate may consider the bill this summer. The Every Child Achieves Act (ECAA) is the product of months of bipartisan negotiations that the authors hope will break the log-jam that has stalled ESEA’s reauthorization for eight years. The 600-page bill includes a number of reforms to all ten titles of ESEA, but below are a few notable takeaways.

Student Testing

Despite Senator Alexander’s hinting at the possibility of eliminating federal requirements for annual testing, the ECAA continues to mandate annual assessments. The assessments apply to math and English/language arts in grades 3 through 8 and once in high school. For science, the assessments would occur once in grades 3 through 5, once in grades 6 through 8, and once in high school. However, the bill also affords states the option of using one summative test or a number of short-term formative assessments that can be combined to reflect a single summative assessment. This is likely a welcome policy change for those school districts already using such measures to inform daily instruction. Also retained in the ECAA is the requirement to disaggregate assessment data for subgroups of students, including minorities, low-income students, English learners, and those with disabilities, considered by many the sole redeeming quality of the No Child Left Behind Act, the current iteration of ESEA.

Accountability System

The ECAA provides states the freedom to design and establish accountability systems without federal requirements or consequences. This would be a substantial shift from the current federal policies, which call for punitive and prescriptive actions when schools fail to meet adequate yearly progress (AYP) in closing achievement gaps and ensuring every student is proficient in English, math, and science. These systems would give school districts the responsibility for designing evidence-based interventions for low-performing schools, with technical assistance from the states. The federal government would be prohibited from mandating, prescribing, or defining the specific steps school districts and states must take to improve those schools. States would be required to monitor interventions implemented by school districts and take steps to further assist school districts if interventions are not effective.[1]

State Education Standards

There is little debate that states should establish clear and consistent guidelines for what students should know and be able to do at each grade level in math and English language arts in order to graduate high school with the ability to succeed in college and the workforce. Many, including the business sector, consider it imperative that states take action by adopting rigorous academic standards to ensure a strong economy. Still, for conservatives, the defining principle when adopting academic standards is that the states themselves would have the fundamental responsibility and right to establish them. The ECAA recognizes this responsibility by prohibiting the federal government from determining or approving state academic standards, and specifically names the Common Core State Standards Initiative as an example of prohibited federal overreach.

Title I Portability

Not included in the ECAA is a provision that would allow federal funding to be used at any public school of choice, including charter schools, commonly known as Title I portability. This idea, which dates back to the Reagan Administration, has gained momentum in recent months, particularly among conservatives who view federalism and choice as the way to provide equality for disadvantaged students in a system that consigns them to inferior schools with records of persistent failure. The omission of this policy is likely the result of bipartisan negotiations, but given the history of support for such policies by the likes of Chairman Alexander, Senator Burr, and other non-committee Republicans, it is likely to come back up during the legislative process.

Charter School Programs

The legislation strengthens and streamlines the Charter Schools Program (CSP) by consolidating state start-up grants and facilities aid grants into one program while authorizing, for the first time, grants for the replication and expansion of high-performing charter schools. This would be a victory for parents and students, since the additional funding for replication and expansion would support the growth of high-performing charter schools that have demonstrated success in improving academic achievement, particularly among disadvantaged students. However, questions and concerns have been raised as to whether Charter Management Organizations (CMOs) will squeeze out funding for new, innovative, upstart charter schools if CMOs are allowed to ‘double dip’ in both state start-up grants and replication and expansion grants.

Redefining Core Subjects

One note of interest is that the ECAA expands the definition of 'core academic subjects,' which has historically included English, reading or language arts, mathematics, science, foreign languages, civics and government, economics, arts, history, and geography. The updated definition adds computer science, music, physical education, and any other subject as determined by the state or local educational agency. This can be interpreted as a victory by those who have claimed that NCLB precipitated a narrowing of the curriculum, leading to the elimination of the arts in some schools.

Early Childhood Education

Perhaps one of the most anticipated changes, albeit woefully inadequate for early childhood education advocates, is that ESEA funds may now be used for early childhood education programs. Specifically, the ECAA would allow funds from Title I (programs to support disadvantaged students), Title II (programs to support teachers and school leaders), and Title III (programs serving English language learners) to be used to strengthen and expand early childhood education programs. Historically, these funds have been restricted to use in elementary, middle, and high schools.

Amending the Draft

The Student Success Act (SSA), which has a formidable contingent of support but has still fallen short of the votes needed to pass the House, is viewed as a baseline for what Congressional Republicans are seeking in a reauthorization. And if the rhetoric from Representative Bobby Scott (D-VA) and left-leaning organizations such as the Council of the Great City Schools, National Council of La Raza, National Education Association, and The Education Trust is any indication, the ECAA as drafted will change quite a bit (if it is not scuttled altogether) through the legislative process. Here are a few changes to watch for:

Targeted Assistance vs. Funding Flexibility

A key provision in Chairman Kline’s SSA replaces the maze of targeted federal dollars with a single Local Academic Flexible Grant, which supporters view as providing states and school districts the flexibility needed to improve student learning. However, the ECAA and some organizations that describe themselves as advocates for the disadvantaged view such measures as diluting or diverting federal funds away from their intended purposes. These groups, as the Education Trust stated in a press release, are “pleased that the [Senate bill] maintains targeting of federal dollars to the districts and schools serving the highest concentrations of low-income students.” Given the support among conservatives and the battle cry of ‘local control,’ it is likely that we will see amendments offered that provide for funding flexibility.

School Choice and Federal Funds that Follow the Student

Given Chairman Alexander’s history of support and advocacy (consider, for example, his Scholarship for Kids Act), provisions for school choice seem glaringly absent from the ECAA. Add to that the fact that Senator Tim Scott (R-SC), self-described champion of choice and author of the CHOICE Act (Creating Hope and Opportunity for Individuals and Communities through Education), sits on the Committee, and you have the makings of an amendment that would establish a new funding stream for school choice programs. Or we could see an amendment that calls for the portability of Title I funds to ensure students have the option to choose a school that best fits their educational needs.

Conclusion

The key provisions described above, and the two descriptions of likely amendments, are just a drop in the bucket when it comes to the number of issues up for debate when the ECAA is marked up in committee (and, if by some procedural miracle, given full consideration on the Senate floor). The last time ESEA was reauthorized, the debate in the Senate lasted seven weeks and 150 amendments were offered. That reauthorization pushed the number of programs to 89, up from the 55 authorized in then-current law, while the price tag ballooned to $33 billion, up from the $19 billion in President Bush’s plan and $23 billion in the House’s version. Should the ECAA survive this gauntlet in the Senate, the legislative process would still be far from finished. The more conservative House of Representatives would have to put its mark on the legislation, potentially crippling the ability to reach the 60-vote minimum required for cloture in the Senate. It is clear that the law needs to be replaced. Everyone from the president to governors to school leaders agrees that the current law has become unworkable. Most state leaders agree that the Obama Administration’s waivers have added to the confusion and increased federal burdens. An updated version of the law would help ensure that every child has access to an excellent education.


When the American Action Forum (AAF) analyzed the total costs of the immigration system, we found close to $30 billion in annual regulatory compliance costs. The specific toll on American employers is just as significant: these burdens increase the cost of doing business and create a barrier to firms hiring qualified workers. AAF found that a hypothetical firm hiring an immigrant would have to manage up to six federal forms, totaling 118 pages, at a cost of approximately $2,200 per firm, per hire. For some small businesses, this amounts to a “regulatory tax” of 3.6 percent.

Methodology

As with AAF’s previous study on the total costs of the immigration system, we examined paperwork requirements via the Office of Information and Regulatory Affairs (OIRA). We found 20 requirements that dealt specifically with the labor implications of hiring an immigrant worker. Of this sample, seven paperwork burdens applied specifically to employers. AAF used agency estimates of the amount of time for each requirement, the number of forms, the length of applications, and the number of applicants. When an agency failed to provide a cost for a paperwork burden, AAF used the Department of Labor’s estimate of “Real GDP Per Hour Worked”: $60.59. For the cost of an immigration attorney, AAF assumed $180 per hour.

Findings

Before hiring an immigrant with H-1B or H-2B status, the Immigration and Nationality Act requires employers to verify that a foreign worker will not adversely affect the wages and working conditions of U.S. workers comparably employed. To comply with the statute, the U.S. Citizenship and Immigration Services (USCIS) rules require that the wages offered to a foreign worker must be the prevailing wage rate for the occupational classification in the area of employment.

The prevailing wage rate is defined as the average wage paid to similarly employed workers in a specific occupation in the area of intended employment. Effective January 4, 2010, employers can obtain this wage rate by submitting a request to the National Prevailing Wage Center, or by accessing other legitimate sources of information such as the Online Wage Library.

Employers must submit this prevailing wage determination at least 60 days in advance of the “initial recruitment efforts.” Every year, employers submit more than 890,000 applications, for a total national cost of $24 million. According to USCIS, the four-page form should take employers slightly less than one hour to complete. Yet, even after submission to the federal government, the wait time can last up to two months.

Once recruitment efforts have started, the largest employer burden, in terms of time spent on each requirement, is the I-129 form, or the “Petition for Nonimmigrant Worker.” Every year, USCIS receives more than 333,800 responses from businesses for this form, for a total of 1.6 million hours. The reported annual cost is $128 million. Doing the math, this means that the average business spends roughly five hours on this 36-page form, at almost $80 per hour. Employers who forgo the assistance of an attorney must navigate the instructions for form I-129, which alone run 29 pages.

Any employee who is eventually hired will then have to face the E-Verify program or the I-9 form, “Employment Eligibility Verification,” which the employer also manages. The I-9 form, which virtually all newly hired Americans must complete, generates 176 million submissions to USCIS annually, for a total of 40.6 million hours of paperwork. The federal government doesn’t monetize the cost of this single form, but applying the figure for real GDP per hour worked yields roughly $2.4 billion in aggregate burdens, about the GDP of Aruba.
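The per-form arithmetic in the last two paragraphs can be checked directly from the agency totals; the following is a back-of-the-envelope sketch using only the figures quoted in the text and the $60.59 Real GDP Per Hour Worked value from the Methodology section:

```python
# Sanity check of the I-129 and I-9 figures quoted above, using only
# the agency totals cited in the text.
i129_responses = 333_800       # annual I-129 petitions
i129_hours = 1_600_000         # total annual burden hours
i129_cost = 128_000_000        # reported annual cost, dollars

print(round(i129_hours / i129_responses, 1))  # hours per form, ~4.8
print(round(i129_cost / i129_hours, 2))       # dollars per hour, 80.0

# The I-9 form is not monetized by the government; valuing its
# 40.6 million annual hours at DOL's "Real GDP Per Hour Worked"
# ($60.59) gives the roughly $2.4 billion aggregate cited above.
i9_hours = 40_600_000
gdp_per_hour = 60.59
print(round(i9_hours * gdp_per_hour / 1e9, 2))  # billions of dollars, ~2.46
```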

The contentious E-Verify program, run primarily by employers, is the most burdensome requirement in terms of pages, at 50. Annually, employers submit more than 23 million forms, which takes about 3.5 million hours. The time per E-Verify submission is 3.6 hours. Obviously, for a business that must submit several forms, the time and economic burden can escalate quickly.

H-1B Skilled Immigrants

The H-1B program employs workers with specialized knowledge, including scientists, engineers, and computer programmers. Unfortunately, for businesses seeking to hire skilled workers, there is a cap of roughly 65,000 H-1B Visas annually and demand for these employees is high.

For employers who brave the process, they’ll need to submit an application for an H-1B immigrant. Despite the 65,000 cap, USCIS receives more than 340,000 responses annually, for a total of 310,000 hours. According to the federal government, each form takes an hour to complete. USCIS does not monetize the employer burden to complete this information, but assuming $60.59 per hour yields a total cost of $18.7 million. 
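The $18.7 million figure is simply the stated hour total valued at the Real GDP Per Hour Worked rate from the Methodology section; the conversion (which lands at roughly $18.8 million before rounding) is:

```python
# Monetizing the H-1B application burden at DOL's "Real GDP Per Hour
# Worked," since USCIS does not monetize this paperwork itself.
h1b_hours = 310_000     # total annual burden hours, from the text
gdp_per_hour = 60.59    # dollars per hour

total_cost = h1b_hours * gdp_per_hour
print(round(total_cost / 1e6, 1))   # millions of dollars, ~18.8
```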

Once the paperwork is submitted for employers and potential U.S. workers, the wait time for this process varies from eight weeks to more than six months. The average wait time is roughly 250 days, depending on the processing center. This significantly increases the cost of hiring skilled workers in the U.S.

H-2B Temporary Jobs

The H-2B program allows employers to hire temporary, non-agricultural employees. Like its sister H-1B Visa, the H-2B program also has a federal cap: 66,000 per fiscal year. There is an internal quota as well: no more than 33,000 visas may be issued in the first half of the fiscal year, with the remainder available in the second half.

Employers interested in H-2B immigrants must submit an application, which totals only six pages. Amazingly, USCIS estimates that employers will spend 2 hours and 45 minutes completing the form, but that this will cost businesses just $1 an hour. For the entire application, it estimates annual costs of $19,000, although many burdened employers across the nation would likely disagree with that estimate. Once the application process is complete, the waiting game remains. The national average wait time for an H-2B Visa is one month, according to USCIS.

A Hypothetical Process

Stepping back from the macro-level view of national paperwork burdens on businesses, hiring a foreign worker under current law is a complicated, costly procedure for an individual employer. While the general steps may seem similar to hiring a domestic employee, many of the aforementioned forms and requirements add layers of complexity and cost. The following is an abbreviated examination of the issues, questions, and procedures an employer must address when hiring a foreign worker.

1.      First Step: Hire a lawyer. While this may not be as much of an issue for larger employers with scores of compliance staffers and attorneys attuned to immigration law, many smaller employers will need help to avoid some of the pitfalls of the process. Even minor, honest mistakes can leave a company or employer open to serious legal issues. AAF assumes a typical immigration attorney charges $180 per hour.

2.      Second Step: Interview and Assess Prospective Employee. Prior to seeking applicants, an employer must take approximately an hour to make the wage declaration described above. Much as one would for any employee, an employer would hold an interview and check references. However, holding an in-person interview likely involves more complicated travel arrangements, including possibly acquiring a visitor Visa (a process that can take multiple days). In addition, assessing the applicant’s background and prospective job responsibilities is particularly crucial to the next step.

3.      Third Step: Determine and Petition for Employee Status. This is the stage where an employer has to determine what kind of Visa is appropriate for a prospective employee. Part of this involves completing the 4.8-hour I-129 form. Under this process, there are several types of Visas that could apply to the prospective employee. The H-1B and H-2B Visas are the best known; their applications would take an employer roughly one hour and 2.75 hours, respectively. Even the offer letter extended to the prospective employee needs careful wording to align with these filings. For instance, the University of Michigan’s International Center prescribes an offer letter template as part of its hiring practices.

4.      Fourth Step: Hiring and Verifying Employee. Finally, once the prospective employee becomes an actual employee, the employer must verify their legal status. As mentioned above, the I-9 and E-Verify process takes roughly 3.6 hours. What’s described here applies only to a temporary worker; employees and employers who seek permanent residency at some point must undertake an entirely different process.

The following chart details possible employer burdens from hiring a single skilled immigrant:

Total Cost: Even without counting the potentially months-long wait times for government approvals, the process of simply filling out forms for a single hire could take more than 12 hours, depending on the type of Visa. These hours translate to between $727 and $2,187 in burdens. In other words, because of federal regulations, hiring a single immigrant worker acts as a regulatory tax of approximately 3.6 percent for small firms.
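The dollar range above can be reconstructed from the per-step hour figures, valued at the two hourly rates in the Methodology section ($60.59 Real GDP Per Hour Worked for the low end, the assumed $180 attorney rate for the high end). The per-step hours below are the approximate figures quoted in the text, so treat this as an illustrative sketch rather than official USCIS accounting:

```python
# Illustrative reconstruction of the per-hire burden range from the
# approximate per-step hours quoted in the text (not official totals).
step_hours = {
    "prevailing wage determination": 1.0,   # Step 2
    "I-129 petition": 4.8,                  # Step 3
    "H-2B visa application": 2.75,          # Step 3 (H-1B would be ~1.0)
    "I-9 / E-Verify": 3.6,                  # Step 4
}
total = sum(step_hours.values())            # just over 12 hours

gdp_per_hour = 60.59    # low end: employer's own time
attorney_rate = 180.0   # high end: everything billed to an attorney

print(round(total, 2))                      # 12.15
print(round(total * gdp_per_hour, 2))       # ~736
print(round(total * attorney_rate, 2))      # 2187.0
```

The high end matches the cited $2,187; the low end comes out near $736, a few dollars above the cited $727, which corresponds to rounding the hour total down to 12.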

Related Research

The significant and persistent U.S. energy boom is happening in spite of government obstacles to resource development. AAF research found that federal land use management agencies keep more than 60 percent of federal oil and 40 percent of federal natural gas resources closed to development. To maintain the U.S. position as the world’s leading oil and gas producer and continue to challenge the power of the Organization of the Petroleum Exporting Countries (OPEC), Russia, and other diplomatic antagonists, the government must find ways to improve its management of domestic energy resources. This paper outlines six steps the U.S. government can take to do just that.

One: Be Aware of Resources

AAF research was based on the Inventory of Onshore Federal Oil and Natural Gas Resources and Restrictions to Their Development (Inventory), a 2008 study commissioned by Congress and authored by the Departments of the Interior, Agriculture, and Energy. As its title suggests, the report was a thorough assessment of energy resources on federally managed lands and it examined how the federal land use management agencies, especially the Bureau of Land Management (BLM) at the Department of the Interior and the Forest Service at the Department of Agriculture, approach their stewardship responsibilities.

The Inventory used data through 2006, however, and so excluded updated resource and recovery estimates made possible by advancements in imaging and fracking technology in the shale and tight oil deposits driving the energy renaissance. Note from Figure 1 that estimates of oil and natural gas resources have increased dramatically since the Inventory release date. 

Figure 1. U.S. Oil and Natural Gas Reserve Estimates

A 2012 report from the Congressional Budget Office estimated that about 70 percent of federal oil and gas reserves are open to leasing, a far higher figure than determined by the Inventory. Regularly updating this document will provide reliable planning information on the distribution of resources and compel regular performance evaluations of how federal agencies are managing lands and resources.

Two: Give States a Larger Role

The federal government owns and manages 28 percent of all U.S. lands. In ten states, the government manages more than one third of all lands; in five, it manages more than half (Table 1). Nearly half of all federally owned lands are managed for underlying oil and gas resources, and a full 60 percent of oil and gas lands are entirely closed to development. Combined, these factors produce huge distortions at the state level that remove local voices and interests from land use management decisions.

Table 1. Portion of State Lands Managed by the Federal Government

State         Federal Lands Portion
Nevada        81%
Utah          66%
Alaska        62%
Idaho         62%
Oregon        53%
Wyoming       48%
California    48%
Arizona       42%
Colorado      36%
New Mexico    35%

Source: Congressional Research Service

 

These policies are unsustainable. Oil and gas development is thwarted, sure, but so are land-intensive clean energy plans necessary to comply with state-level renewable energy targets and pending federal greenhouse gas regulation. To enable flexible, informed land use strategies that best address local energy goals, the federal government must give states a larger voice in land management decisions or devolve ownership or management authority to the states. States are the wisest users of their own resources, and best know how to protect the local environment, people, and prospects for growth; their role must be strengthened.

Three: Plan Wisely, Be Timely

At the time of the Inventory, 20 percent of federal lands with oil and gas resources were closed to development while undergoing land use planning and environmental review processes. While it would be reckless to rush through the assessments that will inform wise land use, wise use also requires that these assessments be completed in a timely manner. The energy boom revealed that new technologies can identify enormous resources in unlikely places. Tying up lands in endless review processes limits their utility and defers development, employment, and revenue benefits ultimately due to American taxpayers.

Federal land managers should focus on completing open planning processes in a timely manner, especially those in resource-rich areas like Alaska’s North Slope, Nevada’s Eastern Great Basin, and Montana’s Thrust Belt. In all three areas, more than 28 percent of lands are awaiting final plans. 

Four: Clarify the Rules of the Road

It is apparent that federal land managers have unclear processes for leasing and permitting that undermine the ability of oil and gas operators to access and develop resources. For example, AAF research showed that leasing activity at BLM declined 20 percent over 2011-2013 at the same time that industry nearly doubled its requests for new leased acreage.

BLM permitting processes also generate confusion and delays. On average, it took 222 days to process an Application for Permit to Drill (APD) at BLM over the last decade (Figure 2). Industry operators took 135 of those days to file and complete a permit application. Any application that takes 135 days to accurately complete is far too convoluted. BLM should clarify and clearly communicate permit application requirements with the goal of reducing annual average APD process times back to the 2005 low of 154 days. While this will still be far longer than state permit approval timelines (which take a few weeks), it is a clear and achievable target.

Figure 2. Average APD Process Times at BLM

Five: Manage the Managers

There are real and serious ongoing management issues at BLM. Both the Government Accountability Office (GAO) and the Department of the Interior Inspector General released reports in the last two years suggesting that management issues at BLM slow down the permitting process and prevent effective oversight of development projects. Even the 10-year Federal Permit Streamlining Pilot Project, intended to improve the efficiency and speed with which BLM processes permits, has gone without the oversight necessary to determine its effectiveness.

Poor supervision and shoddy record keeping deliver an agency without direction. Interior must develop appropriate internal oversight methods, craft clear and achievable performance metrics, establish more rigorous procedures for processing APDs, and collect lessons learned from the permit streamlining pilot project.

Six: Move Past Scoring

Congress should not judge the benefits of natural resource policy based strictly on a 10-year budget score. This is a common tool for evaluating legislation, but congressional action to open lands or increase the pace of permitting will not result in a favorable assessment for two important reasons.

First, it is extremely difficult to positively score a more rapid leasing and permit process. About 80 percent of lands managed for oil and gas resources are open to development by law and limited only by various restrictions and lease stipulations imposed by federal land managers. As the Congressional Budget Office describes, “Federal agencies can impose or lift those restrictions at any time, so their long-term effects on leasing, production, and federal receipts are not quantifiable.”

Second, opening up new areas to development will not yield significant revenues in a budget score. Untapped areas will require environmental analysis, permitting, and infrastructure construction, which can delay oil and gas production and resulting revenues well past the 10-year scoring window. Some land use plans analyzed in the Inventory have been in effect since 1976; land use planning by nature has to focus on good policy over the long-term.

Congressional actions that would expedite or improve federal permitting and leasing should not be judged solely by a favorable 10-year budget score. Creating meaningful and lasting progress in land management requires a long-term view that won’t necessarily improve the federal budget outlook today.

Conclusions

The U.S. economy has relied on proactive states and companies to drive the energy boom thus far. It’s time for the federal government to move past inefficient and obsolete land management policies and be a proactive participant in the energy economy.

 


Introduction

Tucked away in the reforms to the Elementary and Secondary Education Act (ESEA) that the House is currently considering is a provision that has raised the ire of the administration and organizations that prefer to maintain the status quo. At issue is whether states should be given the option to allow federal funding to be used at any public school of choice, including charter schools. Commonly known as Title I portability, or financial backpacks, the idea is not new and dates as far back as the Reagan Administration. Support for the idea is mounting, particularly among conservatives who view federalism and choice as the way to counter “a system [that] consigns the poor and immobile to inferior schools and leaves the control of schools in the hands of those who benefit most from the status quo.” From this perspective, it becomes imperative to change the way Title I funds are allocated, to a per-pupil basis rather than a formulaic system that has shown little evidence of significantly improving academic achievement. By doing so, schools would be incentivized to attract, retain, and improve outcomes for disadvantaged students.

Overview of Title I

As part of ESEA, Title I aims to provide all students with an equitable education by providing financial assistance to high-poverty schools and school districts. Four different types of formula grants are used in this funding, and school districts must demonstrate that the funds supplement, not supplant, services provided by state and local agencies. As low-income status correlates strongly with developmental delays and poor academic achievement, it is extremely important that these students receive extra aid.

 

For the 2014 fiscal year, Title I, Part A was the single largest federal investment in K-12 education, with an estimated $14.4 billion allocated. The funding goes through a long top-down process, which starts at the federal level, moves to state education agencies, down to the school districts, and finally to schools. Title I does not fund the low-income student directly. Instead, funding is directed toward schools with the highest concentration of low-income students. Basing funding on the concentration of low-income students has been shown to measure poverty inaccurately, particularly in high schools, which tend to have larger student populations.

Problems with the allocation of Title I funding are surfacing anew as Congress addresses reauthorization of ESEA. In fact, diminishing Title I funding throughout the education process of low-income students and costly, time-consuming regulations have been some of the most discussed issues in the debate. There is growing consensus that “pumping all this money into districts to boost the budgets of schools serving disadvantaged students hasn’t done much good by way of improved academic achievement.”[1]

Table 1. Title I Formula Grants

Federal funds are currently allocated through four statutory formulas that are based primarily on census poverty estimates and the cost of education in each state.

 

1. Basic Grants provide funds to Local Education Agencies (LEAs) in which the number of children counted in the formula is at least 10 and exceeds 2 percent of an LEA's school-age population.

2. Concentration Grants flow to LEAs where the number of formula children exceeds 6,500 or 15 percent of the total school-age population.

3. Targeted Grants are based on the same data used for Basic and Concentration Grants, except that the data are weighted so that LEAs with higher numbers or higher percentages of children from low-income families receive more funds. Targeted Grants flow to LEAs where the number of schoolchildren counted in the formula (without application of the formula weights) is at least 10 and at least 5 percent of the LEA's school-age population.

4. Education Finance Incentive Grants (EFIG) distribute funds to states based on factors that measure:

   - a state's effort to provide financial support for education compared to its relative wealth as measured by its per capita income; and
   - the degree to which education expenditures among LEAs within the state are equalized.

Source: US Department of Education
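The eligibility thresholds in the four formulas can be summarized in a short sketch. This is a simplification for illustration only: the real statutory formulas also apply pupil weights, state expenditure factors, and hold-harmless rules, all omitted here, and EFIG allocates to states rather than LEAs.

```python
# Simplified screen of Title I LEA-level grant eligibility, using only
# the headline thresholds from Table 1. Illustrative only; the
# statutory formulas involve weighting and hold-harmless provisions.
def eligible_grants(formula_children: int, school_age_pop: int) -> list:
    share = formula_children / school_age_pop
    grants = []
    if formula_children >= 10 and share > 0.02:
        grants.append("Basic")
    if formula_children > 6_500 or share > 0.15:
        grants.append("Concentration")
    if formula_children >= 10 and share >= 0.05:
        grants.append("Targeted")
    # EFIG is allocated to states on effort/equity factors, so it is
    # not screened at the LEA level here.
    return grants

# A hypothetical LEA: 1,200 formula children out of 6,000 school-age
# children (a 20 percent share) qualifies for all three LEA grants.
print(eligible_grants(1_200, 6_000))   # ['Basic', 'Concentration', 'Targeted']
```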

 

Title I Portability

In response to the issues with Title I funding, Congress has offered an answer with Title I portability. Within the House’s Student Success Act, states would be given the choice to employ Title I portability for public schools. And although the Senate is still drafting its version of a reauthorizing bill, Senate Health, Education, Labor and Pensions Committee Chairman Lamar Alexander has, in the past, proposed a similar structure with the Scholarship for Kids Act. This emphasis on portability would greatly impact the current Title I funding system, but as with any change in policy, there are positive and negative outcomes associated with the reforms.

Impact of Title I Portability

School Choice

As the National Center for Education Statistics[2] reports, the use of school choice throughout the nation has been increasing for the past twenty years and is projected to continue increasing. The defining principle of school choice policies is that parents should be able to send children to a school that fits their child’s educational needs. Multiple research studies have concluded that these policies, along with financial aid support, give disadvantaged students a better chance to receive a high-quality K-12 education by improving opportunities to access private schools, charter schools, traditional public schools, and others. By allowing federal dollars to follow low-income students, Title I portability would give eligible students access to much-needed additional financial support, allowing parents and students to explore these other school options.

Reallocation of Funding

Title I portability would cause a shift in overall funding in many districts. According to the Education Trust, the shift could cause areas with a high poverty concentration to lose funding, while areas with a low poverty concentration would receive more. For example, it is estimated that Pennsylvania’s highest poverty quartile would lose twenty-one percent of its funding, while the state’s lowest poverty quartile would gain fifty-six percent. States would have the option to implement a weighted funding formula that takes into account the increased burdens of poverty concentration and/or regional differences in costs.
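A weighted formula of the kind such a provision contemplates might look like the following sketch. All parameter values here are invented for illustration; the actual weights would be set by each state:

```python
# Hypothetical weighted per-pupil Title I allocation (all parameter
# values invented for illustration; states would set their own).
def weighted_per_pupil(base: float, poverty_share: float,
                       concentration_weight: float, cost_index: float) -> float:
    # Boost the base amount for poverty concentration, then adjust
    # for regional cost-of-education differences.
    return base * (1 + concentration_weight * poverty_share) * cost_index

# High-poverty, high-cost urban district vs. low-poverty, low-cost district.
urban = weighted_per_pupil(1_000, poverty_share=0.60,
                           concentration_weight=0.5, cost_index=1.10)
rural = weighted_per_pupil(1_000, poverty_share=0.10,
                           concentration_weight=0.5, cost_index=0.95)
print(round(urban, 2), round(rural, 2))   # 1430.0 997.5
```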

Stabilization of Funding Throughout Schooling

Currently, Title I at the local level uses the percentage of low-income students to divide the pool of funding. However, overall school population tends to increase from elementary to high school. Therefore, even if the number of low-income students is the same at a student’s former elementary school and current high school, the high school will not receive as much Title I funding per low-income student as the elementary school did. Under the current distribution, the funding a low-income student generates each year decreases as the student moves through K-12 education: seventy-six percent of Title I funding goes to elementary schools, while only ten percent goes to high schools. Title I portability would give low-income students access to funding throughout their education.
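The dilution effect can be made concrete with hypothetical numbers (all invented for illustration), assuming a district divides its Title I pool in proportion to each school's low-income percentage, as described above:

```python
# Hypothetical illustration of the dilution effect: the same 150
# low-income students generate less funding at a larger high school.
pool = 600_000  # invented district Title I pool, dollars

schools = {
    #              (total enrollment, low-income students)
    "elementary":  (300, 150),     # 50 percent low-income
    "high school": (1_500, 150),   # same 150 students, 10 percent
}

shares = {name: li / total for name, (total, li) in schools.items()}
weight_sum = sum(shares.values())  # 0.50 + 0.10 = 0.60

for name, (total, li) in schools.items():
    dollars = pool * shares[name] / weight_sum
    print(name, round(dollars), round(dollars / li))
```

With identical low-income enrollment, the elementary school draws five times the high school's allocation, roughly $3,333 versus $667 per low-income student.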

Transparent Funding for Teachers, Principals

Allowing Title I portability would empower school principals and teachers to be more in charge of their own funding. A large portion of Title I funding for students gets absorbed into administrative costs. For instance, prior to 2008, less than half of Hartford, Connecticut’s education funding made it to the classroom.[3] Now, with an emphasis on portability, over seventy percent goes directly to the schools. As principals and teachers are closest to students, they are best positioned to address student needs and improve academic achievement. As a result, the district’s schools posted the largest gains, over three times the average increase, on the state’s Mastery Tests in 2007-2008.

Because of the “Supplement not Supplant” provision in Title I, funding is difficult to allocate efficiently to special education students. State officials must oversee and prove the impact of each dollar of Title I funding spent.[4] For special education students, Title I funding cannot overlap with other funds. This well-intentioned policy adds time-consuming budget analyses. Because there is a high correlation between special education students and low-income students, it is important that schools be able to deploy funding to best benefit each student.

Implementing Title I portability would move funding decisions to the local level and eliminate the administrative burden of “Supplement not Supplant.” Principals and teachers would have more autonomy over the money. With a focus on portability, funding will be easier to track, and so will its results.

Conclusion

As Congress continues to explore the reauthorization of ESEA, it is extremely important to ensure that every student has access to a quality education. Title I has a history of attempting to bridge the economic gap in education, but the formula behind its funding needs revamping to effectively aid every student throughout his or her education. By enabling the portability of Title I funds, families will be better equipped to choose the education provider that is in the student’s best interest.

Karla Luetzow co-authored this report. Follow her at Policy Interns



[1] Chester E. Finn, Jr. Even with Limited Leverage, Uncle Sam Can Promote School Choice (EducationNext, August 2012) http://educationnext.org/even-with-limited-leverage-uncle-sam-can-promote-school-choice/

[2] National Center for Education Statistics. 2007. Trends in the Use of School Choice: 1993 to 2007. http://nces.ed.gov/pubs2010/2010004.pdf

[3] Reason Foundation. 2009. New Weighted Student Formula Yearbook. http://reason.org/blog/show/reason-foundations-new-weighte

[4] Katie Furtick and Lisa Snell, Federal School Finance Reform: Moving Toward Title I Funding Following the Child (Reason Foundation. 2014.) http://reason.org/files/federal_school_finance_reform.pdf


The starting salary for an employee at Walmart is below the poverty line. Now, the American government subsidizes Walmart to the tune of 7.8 billion dollars a year, by issuing food stamps to over one in ten of its workers. But here's the scary part. Fifteen percent of all food stamps are actually used at Walmart. Meaning Walmart gets to double dip into the federal government's coffers.

--Heather Dunbar, House of Cards

Introduction

Heather Dunbar may be fictional, but the argument is quite real and oft-repeated. It is also 1000 percent wrong. The reality is that low-wage employers compete with income support programs for the time of workers. If the programs become more generous, the value of not working increases, and employers have to raise wages to attract workers. Far from subsidizing the employers of low-wage workers, income support raises their cost of doing business. In the process, those programs may contribute to pricing low-skilled workers out of jobs and increase the incentive to substitute technology for labor.

This is not an argument against the provision of income support or a social safety net. Even valuable government programs, however, have economic consequences, and these should be clearly understood.

In this brief essay, I pursue this objective by exploring the economic foundations and real-world magnitudes of the Welfare Wage Multiplier (WWM) – the percent increase in market wages driven by the expansion of income support programs. To anticipate the results, I find that expansions in the scale of taxpayer-provided support will raise wages by as much as 19 percent.

Economic Foundations[1]

The economics of the impact of more generous income support programs on wages and low-wage work are quite simple. Other things being the same, more generous programs will lure some workers, or some part of their current hours of work, out of the employment market. This is an entirely understandable and predictable response to having more money. At the same time, restaurants, drinking establishments, retail stores and other low-wage employers will find themselves competing for a shrunken pool of workers and forced to raise pay to get the employees needed to satisfy their customers.

The actual increase in wages in response to more generous income support depends on the “welfare wage multiplier” (WWM). A shorthand capturing the relationship is: %w = W x %s, where %w is the percent increase in low-skill wages, %s is the percent increase in the generosity of income support, and W is the WWM. The WWM is, in turn, built from the characteristics of the labor market, specifically: W = (q x h)/(e - g). The multiplier, while arcane, makes intuitive sense. For example, h measures the sensitivity of workers to having a larger income. If being more affluent does not affect the desire to work, h equals zero, income support lacks a channel to influence wages, and %w is zero. Alternatively, the more sensitive workers are, the greater the impact on wages.

Similarly, q is the fraction of potential income contributed by income support programs. Obviously, if the programs are not a key component of income, making them more generous will not affect wages. However, when they are a central part of the economic lives of workers, a given expansion will have a bigger impact.

The final pieces of the WWM are the responsiveness of employers (e) and workers (g) to increases in wages: negative for employers and positive for workers. So when employers or workers are more responsive to wage increases, the denominator (e - g) grows in absolute value, which reduces the WWM. That makes sense. If employers are quite sensitive to wage increases, it will only take a small increase to bring jobs down to the new level of labor supply after an expansion of income support. Similarly, if workers are quite sensitive, it will take only a small increase in wages to offset the lure of more generous income support programs.
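
The pieces above can be written down directly. In this sketch, h and e carry their negative signs as in the discussion, and the function is simply W = (q x h)/(e - g); the parameter values in the checks are illustrative, not estimates:

```python
def wwm(q, h, e, g):
    """Welfare wage multiplier W = (q * h) / (e - g).

    q -- transfer income as a fraction of potential income
    h -- workers' responsiveness to income (negative or zero)
    e -- employers' responsiveness to wages (negative)
    g -- workers' responsiveness to wages (positive)
    """
    return (q * h) / (e - g)

# If affluence does not change the desire to work (h = 0), the channel closes:
assert wwm(q=0.15, h=0.0, e=-0.5, g=0.2) == 0.0
# More responsive employers (larger |e|) need a smaller wage rise to rebalance:
assert wwm(0.15, -0.1, -1.0, 0.3) < wwm(0.15, -0.1, -0.2, 0.1)
```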

How Big is the WWM?

A formula and a pile of Greek letters are great, but what do they tell us about the impact of government programs on low-skill wages?  Let’s start plugging in some numbers. A rough summary of the evidence is that the responsiveness of workers to income (h) is quite limited (-0.1 to -0.2), while the responsiveness to wages (g) is similarly modest (0.1 to 0.3).

Using the Survey of Income and Program Participation (SIPP), I compute a rough measure of transfer income as a fraction of potential income (q). Transfer income includes household income received from energy assistance, social security, supplemental security income, unemployment insurance, veteran’s assistance, Supplemental Nutrition Assistance Program (SNAP, “food stamps”), and Temporary Assistance for Needy Families (TANF). The figure does not include the value of Medicaid (an in-kind form of income support). While SIPP tracks whether someone receives Medicaid, it does not report the value of the insurance; in this regard the estimates are likely conservative.

Potential Income: Potential income includes total potential labor earnings in a household (hourly pay rate x 84 hours x 50 weeks for each household worker), total household property income, and total transfer income.

If one focuses on those who receive public assistance and work, the average value of transfer income as a proportion of potential income is roughly 0.15. Of course, it could be that the programs raise reservation wages and provide sufficient income that some people do not work at all. Using all households that receive any assistance yields a value of 0.28.

The final piece of the puzzle is the sensitivity of employers to more costly labor (e). The limited research suggests this could range from a moderate response (-0.2) to great sensitivity (-1.0). 

Collecting these estimates suggests that the WWM could be as small as 0.01 and as large as 0.19.  Put differently, suppose that one expanded income support by $1,000 annually. As a fraction of the average receipt ($13,228) in the SIPP, this would constitute a 7.6 percent increase in the generosity of income support. In turn, this would raise low-skill wages by as little as 0.076 percent or as much as 1.4 percent.
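
These bounds can be reproduced by plugging the end-point values from the text into the multiplier (the pairings of parameter values below are my reading of which combinations generate the extremes, with h and e entered as negative numbers):

```python
# Welfare wage multiplier, W = (q * h) / (e - g), with signed inputs:
# h (income responsiveness) and e (employer responsiveness) are negative.

def wwm(q, h, e, g):
    return (q * h) / (e - g)

# Smallest case: low transfer share, weak income effect, responsive employers.
w_lo = round(wwm(q=0.15, h=-0.1, e=-1.0, g=0.3), 2)   # 0.01
# Largest case: high transfer share, strong income effect, less responsive employers.
w_hi = round(wwm(q=0.28, h=-0.2, e=-0.2, g=0.1), 2)   # 0.19

# A $1,000 expansion against the $13,228 average receipt in the SIPP:
pct_s = 100 * 1000 / 13228      # ~7.6 percent more generous support
wage_lo = w_lo * pct_s          # ~0.076 percent wage increase
wage_hi = w_hi * pct_s          # ~1.4 percent wage increase
```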

Or, consider that the data underlying the House Budget Committee report on poverty programs indicates that income support spending – excluding the Earned Income Tax Credit, which has a clear positive impact on labor supply – has expanded by roughly 50 percent since 2003. Using the range (above), this implies that the expansion has driven up wages by anywhere from 0.5 percent to nearly 10 percent.

These examples may or may not seem significant, but consider the cumulative impact of small expansions in income support by imagining the reverse: complete elimination of the safety net. This would generate as much as a 19 percent decline in wages.


[1] The logic in this section is the same that underlies Marcus Hagedorn, Fatih Karahan, Iourii Manovskii, and Kurt Mitman, “Unemployment Benefits and Unemployment in the Great Recession: The Role of Macro Effects,” Federal Reserve Bank of New York Staff Reports, no. 646, October 2013 (revised February 2015), which examined how the duration of unemployment benefits affected unemployment during the Great Recession. The authors note that higher unemployment is due to an increase in the equilibrium wage, which reduces vacancies and labor market tightness (defined as the ratio of vacancies to unemployment).

On February 23, President Obama announced that his administration was moving forward with new regulations on financial advisers, commonly known as fiduciary standards. The U.S. Department of Labor (DOL) first proposed such a rule in 2010 to protect consumers from professional advisers who financially benefit from recommending certain investments. Fiduciary standards legally bind an adviser to act in a client’s best interest. This paper provides background on the regulation and its purpose, while also outlining the policy and market implications of DOL’s rulemaking.

Background

TABLE 1. TIMELINE OF ACTIONS, RULEMAKINGS, & RELATED EVENTS ON FIDUCIARY STANDARDS

JULY 2010: DODD-FRANK ACT EFFECTIVE
OCTOBER 2010: NPRM: CONFLICT OF INTEREST RULE-INVESTMENT ADVICE FROM DOL/EBSA
JANUARY 2011: SEC RELEASES STUDY ON FIDUCIARY STANDARDS FOR BROKER-DEALERS & INVESTMENT ADVISERS
MARCH 2011: DOL/EBSA HOLDS PUBLIC HEARINGS ON CONFLICT OF INTEREST RULE-INVESTMENT ADVICE
SEPTEMBER 2011: DOL/EBSA WITHDRAWS NPRM ON CONFLICT OF INTEREST RULE-INVESTMENT ADVICE
MARCH 2013: RFI: DUTIES OF BROKERS, DEALERS, & INVESTMENT ADVISERS FROM SEC
JANUARY 2015: CEA MEMO ON FIDUCIARY STANDARD REPROPOSAL LEAKED
FEBRUARY 2015: WHITE HOUSE RELEASES FACT SHEET & CEA REPORT ON CONFLICTED ADVICE; REMARKS BY PRESIDENT OBAMA TO AARP

Note: Request for Information (RFI) & Notice of Proposed Rulemaking (NPRM)

Section 913 of the Dodd-Frank Act directed the Securities and Exchange Commission (SEC) to study the existing regulatory framework governing broker-dealers and investment advisers and the impact of adding more stringent protections for investors.[1] It empowered the SEC to put in place those protections by issuing new rules on broker-dealers and investment advisers based on its findings.[2] Yet DOL moved forward with a notice of proposed rulemaking (NPRM) in October 2010, before the SEC was able to complete its study of the issue or promulgate related regulations.

DOL, with its original proposed rule, would have broadly redefined the circumstances under which a person is considered a fiduciary under the Employee Retirement Income Security Act of 1974 (ERISA). But following public comment[3] and bipartisan legislation passed in the House of Representatives,[4] DOL ultimately withdrew its proposal in September 2011.

A reproposal of that rule has now been sent to the White House Office of Management and Budget (OMB) for standard review and a formal NPRM will be published in the coming months, according to DOL.[5] While the text of the regulation has not yet been released, market participants will be looking for material changes from the original. With the release of a memo and report by the President’s Council of Economic Advisers (CEA) and a speech to the AARP, the Obama Administration has touted the DOL rulemaking as an effort to stop Wall Street firms from hurting middle class families and workers.[6] SEC Chair Mary Jo White also recently stated her personal desire for the SEC to move forward with its own rulemaking on fiduciary standards, setting up a potential for overlapping regulations.[7]

That potential for overlap between DOL and SEC rulemakings was one concern repeatedly cited by both lawmakers and industry stakeholders when DOL’s original proposal was released and will likely reappear in response to the news that both agencies are moving forward on the issue. Table 2 outlines how the authority to issue regulations relating to fiduciary standards compares between the two government agencies.

TABLE 2. COMPARISON OF DOL & SEC REGULATORY AUTHORITY ON FIDUCIARY STANDARDS

WHO:
• DOL: Advisers for retirement account investments
• SEC: Broker-dealers and investment advisers

WHAT:
• DOL may redefine the circumstances under which a financial adviser would be considered a “fiduciary,” though those standards would be limited to advisers giving advice on retirement plan investments that fall under ERISA
• SEC could impose a uniform fiduciary standard of care for all broker-dealers and investment advisers providing personalized investment advice

WHEN:
• DOL: A reproposal of the withdrawn regulation is expected to be formally proposed in the coming months, followed by a period of public comment
• SEC: The SEC Chair has stated a desire to move forward with a uniform fiduciary standard and SEC staff has been studying the issue, but there is no formal timeline for when a regulation may be proposed

HOW:
• DOL can set standards for retirement plans under the Employee Retirement Income Security Act of 1974 (ERISA)
• SEC: Section 913 of the Dodd-Frank Wall Street Reform & Consumer Protection Act gave the SEC authority to establish a uniform standard of care for broker-dealers and investment advisers

Potential Impacts on Low- and Middle-Income Savers

Advocates both favoring and opposing DOL’s actions have cited the potential impacts on low- and middle-income savers to support their respective arguments. In particular, administration officials have emphasized how regulations could curtail the costs of conflicted advice on retirement savings as part of the president’s “middle-class economics” agenda. Yet industry stakeholders and policymakers have expressed concern that such a singular focus on the cost of conflicted advice (a cost under dispute) ignores the merits of alternative proposals and the potential for far-reaching impacts on consumers, including the possibility of higher costs for savers seeking investment advice.

In its 2010 NPRM, DOL included an impact analysis associated with its rulemaking.[8] In its assessment, DOL estimated that the monetized costs of the rule would be $17.7 million on affected entities.[9] Yet many found this cost estimate narrowly focused (since it only factored in the costs for companies to review and implement the regulation) and fundamentally inadequate. For example, a comment letter from the American Bankers Association noted, “…In its regulatory analysis, the Department does not fully or realistically quantify the Proposal’s burdens and costs.”[10] In particular, DOL did not study any impact the proposal could have on the costs and availability of professional financial advice to consumers or comprehensively assess viable regulatory alternatives to their proposal. For its reproposal, DOL has assured the public, “The new proposal will include a robust economic analysis, which will detail the costs of conflicts of interest and the expected impact of the rule.”[11]

Closer Look at CEA’s Findings & Market Perspectives

In conjunction with the president’s recent push forward with DOL’s fiduciary standard reproposal, the CEA released a white paper that includes a review of academic research on the topic and its own estimate of the costs to consumers of conflicted advice, an estimate being widely cited by administration officials in support of DOL’s rulemaking.[12] The report’s authors use academic research to produce their own estimate of a simplified cost of conflicted advice, relying heavily on a paper by Christoffersen et al.[13]

The CEA’s paper makes many assumptions but ultimately concludes that investments influenced by conflicted advice underperform by 100 basis points. The authors then apply that estimate of underperformance to the entire value of load mutual funds and annuities in IRA assets, approximately $1.66 trillion, and conclude that conflicted advice costs Americans approximately $17 billion annually.  However, a recent paper by NERA Economic Consulting analyzed the CEA report and noted at least six major flaws with its analysis of the economic literature and cost estimate[14]:

  1. The academic literature CEA cited does not straightforwardly support the conclusion that all IRA assets in load mutual funds and annuities underperform by 1 percent annually. Instead, CEA made many assumptions and extrapolations from more complex academic research to arrive at a simplistic aggregate cost on American savers; the reality is much more nuanced.
  2. CEA did not justify or support with academic literature why annuities in IRAs were included along with load mutual funds as total assets when calculating their cost of conflicted advice.
  3. No attempt was made to quantify or factor in the benefits provided by brokers, some intangible, such as customer service, risk reduction, and encouragement to invest more that may outweigh fees.
  4. The assumptions made by CEA rely more heavily on studies that look at the average performance of funds instead of the more realistic tracking of investor performance within a fund.
  5. The paper does not explore the costs and benefits of an alternative system. For example, while it assumes that implementing DOL’s proposal would not materially impact the cost of investment advice for American savers, that assessment is not supported with any kind of quantitative analysis.
  6. Comparisons to similar initiatives abroad categorically ignore studies that have shown those actions increased brokers’ fees and decreased access to investment advice for some consumers. 
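
For reference, the CEA’s headline figure is simple arithmetic on the two numbers cited above: a 100-basis-point haircut applied to roughly $1.66 trillion in affected assets.

```python
# CEA headline cost of conflicted advice: 100 basis points of annual
# underperformance applied to load mutual fund and annuity assets in IRAs.
underperformance = 0.01      # 100 basis points = 1 percent
affected_assets = 1.66e12    # ~$1.66 trillion, per the CEA report
annual_cost = underperformance * affected_assets  # ~$16.6 billion, rounded up to "$17 billion"
```

Whatever one makes of the inputs, the aggregate estimate is only as good as the underperformance figure and the choice of asset base, which are precisely the points NERA disputes.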

Primary Concerns & Criticisms

Here are a few of the most commonly cited criticisms of DOL’s rulemaking on fiduciary standards for retirement investment advisers:

DOL’s fiduciary standard rulemaking will reduce access to retirement savings options and increase the costs of seeking investment advice for low- and middle-income Americans.

While the Obama Administration attempted to examine the cost of conflicted advice in its CEA report and memo, some argue that the imposition of DOL’s fiduciary standards has its own costs and benefits, which DOL may not have fully explored despite widespread criticism of its original proposal. A new regulatory regime could increase costs on brokers, who may then elect to raise fees to cover compliance with new rules and/or drop low-balance clients who are no longer profitable. Additionally, as SEC Commissioner Daniel Gallagher remarked, “The White House memo is clearly premised on a belief that the status quo is deficient. But it ignores the main reason for the mitigation-based approach to conflicts and related disclosures: Investors benefit from choice; choice of products, and choice in advice providers.”[15] A number of other policymakers, academics, and industry stakeholders have also expressed concern over how these unintended consequences may make DOL’s proposal better politics than policy.[16][17][18]

DOL failed to consult SEC in drafting both the original proposal and re-proposal.

Concern that DOL’s original rule would conflict with and confuse the regulatory regime governing investment advisers and broker-dealers prompted lawmakers and others to push for greater cooperation on any reproposal. Addressing those concerns, DOL Assistant Secretary for the Employee Benefits Security Administration Phyllis Borzi testified that DOL and SEC “are actively consulting with each other and coordinating our efforts.”[19] Yet according to SEC Commissioner Gallagher, “…despite public reports of close coordination between DOL and SEC staff, I believe this coordination has been nothing more than a ‘check the box’ exercise by DOL designed to legitimize the runaway train that is their fiduciary rulemaking.”[20] Chairman John Kline (R-MN) and Subcommittee Chairman Phil Roe (R-TN) of the House Committee on Education and the Workforce agreed and have since requested that documents and communications related to DOL’s consultation with the SEC be furnished to the Committee.[21]

Professional financial advice is already highly regulated; the appropriate avenue for further regulatory changes is not through DOL.

Lawmakers and others have expressed concern about increased regulations on professional financial advice stemming from DOL instead of first allowing the SEC to finish its ongoing review of the current regulatory regime and rulemaking. A bipartisan bill passed the House of Representatives in October 2013 that would have cemented the need for SEC to move before DOL, but was not taken up by the Senate.[22] That bill has been reintroduced to prevent DOL from moving forward with its current reproposal.[23] SEC Chair Mary Jo White’s recent comments that she intends for the SEC to pursue a uniform fiduciary standard of care for broker-dealers and investment advisers may add further concern that regulators are pursuing two separate and costly rulemakings to tackle the same issue.[24] Professional financial advisers are additionally subject to disclosure, examination, and other reporting regulations from SEC and the Financial Industry Regulatory Authority (FINRA).

Evidence from similar regulatory initiatives abroad does not support adoption of DOL’s rule.    

Administration officials and the recent CEA report on conflicted advice reference the experiences of other countries that have enacted similar reforms on investment advisers, claiming no negative market impacts, despite much of the literature expressing uncertainty about the full effects of those regulatory changes and documenting some negative impacts for low-balance investors.[25]

Conclusion

DOL’s rulemaking on fiduciary standards may have serious impacts not only on financial advisers but also on American workers saving for retirement. While the rule may be well intentioned, policymakers and industry stakeholders have argued that moving forward with it could fundamentally impact the ability of low- and middle-income savers to affordably seek qualified investment advice for their retirement futures. In the coming months, DOL will release the text of its rulemaking. As the public, industry stakeholders, and policymakers assess that proposal, a thorough analysis of its costs and benefits and of how it dovetails with ongoing efforts at the SEC will be vital.

[1] Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, § 913, 124 Stat. 1824 (2010); http://www.gpo.gov/fdsys/pkg/PLAW-111publ203/pdf/PLAW-111publ203.pdf

[2] Dodd-Frank Wall Street Reform and Consumer Protection Act, Pub. L. No. 111-203, § 913(f), 124 Stat. 1827 (2010); http://www.gpo.gov/fdsys/pkg/PLAW-111publ203/pdf/PLAW-111publ203.pdf

[3] U.S Department of Labor, “News Release: US Labor Department's EBSA to re-propose rule on definition of a fiduciary,” (September 19, 2011); http://www.dol.gov/ebsa/newsroom/2011/11-1382-NAT.html

[4] See H.R. 2374, the Retail Investor Protection Act of 2013; http://hdl.loc.gov/loc.uscongress/legislation.113hr2374

[5] See DOL’s FAQs on Conflicts of Interest Rulemaking; http://www.dol.gov/featured/ProtectYourSavings/faqs.htm

[6] White House Office of the Press Secretary, “Fact Sheet: Middle Class Economics: Strengthening Retirement Security by Cracking Down on Backdoor Payments and Hidden Fees,” (February 23, 2015); http://wh.gov/iDigO

[7] Justin Baer & Andrew Ackerman, “SEC Head Backs Fiduciary Standards for Brokers, Advisers,” (March 17, 2015); http://www.wsj.com/articles/sec-head-seeks-uniformity-in-fiduciary-duties-among-brokers-advisers-1426607955

[8] “Definition of the Term ‘Fiduciary;’ Proposed rule,” 75 Federal Register 204 (October 22, 2010), pp. 65269-65275; http://www.gpo.gov/fdsys/pkg/FR-2010-10-22/pdf/2010-26236.pdf

[9] Ibid.

[10] Timothy E. Keehan, American Bankers Association, Comment Letter Re: Public Hearing on Definition of Fiduciary, (April 12, 2011); http://www.dol.gov/ebsa/pdf/1210-AB32-PH053.pdf

[11] See DOL’s FAQs on Conflicts of Interest Rulemaking; http://www.dol.gov/featured/ProtectYourSavings/faqs.htm

[12] White House Council of Economic Advisers, “The Effects of Conflicted Advice on Retirement Savings,” (February 2015); https://www.whitehouse.gov/sites/default/files/docs/cea_coi_report_final.pdf

[13] Susan Christoffersen, Richard Evans & David Musto, “What Do Consumers’ Fund Flows Maximize? Evidence from Their Brokers’ Incentives, ” (January 11, 2013); Journal of Finance, 68: 201–235; http://onlinelibrary.wiley.com/doi/10.1111/j.1540-6261.2012.01798.x/abstract

[14] Jeremy Berkowitz, Renzo Comolli, & Patrick Conroy, “Review of the White House Report Titled ‘The Effects of Conflicted Advice on Retirement Savings,’” (March 15, 2015); http://www.nera.com/content/dam/nera/publications/2015/PUB_WH_Report_Conflicted_Advice_Retirement_Savings_0315.pdf

[15] SEC Commissioner Daniel M. Gallagher, “Remarks at the SEC Speaks in 2015,” (February 20, 2015); http://www.sec.gov/news/speech/022015-spchcdmg.html#.VQLqbEYi8cx

[16] See Footnote #14

[17] Douglas Holtz-Eakin, “Eakinomics: Regulation is just another word for no election left to lose,” (February 24, 2015); http://americanactionforum.org/daily-dish/february-24th-edition1

[18] Debevoise & Plimpton, “Memorandum Concerning Expected Department of Labor Conflict of Interest Rule,” (February 17, 2015); http://fsroundtable.org/debevoise-memo-fsr-dol-fiduciary-duty/

[19] Testimony of Phyllis C. Borzi Before the House Committee on Education and the Workforce, Subcommittee on Health, Employment, Labor, & Pensions, (July 26, 2011); http://edworkforce.house.gov/uploadedfiles/07.26.11_borzi.pdf

[20] See Footnote #15

[21] Letter from Reps. John Kline & Phil Roe, M.D., to Secretary Thomas E. Perez, (March 4, 2015); http://edworkforce.house.gov/uploadedfiles/3-4-15-secretary_perez-fiduciary_liability.pdf

[22] See H.R. 2374, the Retail Investor Protection Act of 2013; http://hdl.loc.gov/loc.uscongress/legislation.113hr2374

[23] See H.R. 1090, the Retail Investor Protection Act of 2015; https://www.congress.gov/bill/114th-congress/house-bill/1090/

[24] See Footnote #7

[25] See Footnotes #14 & 18

Congress has the chance to eliminate an annual legislative nightmare, fix the reimbursement of doctors under Medicare, introduce substantive, structural changes to an entitlement program, and ensure the continued insurance coverage of needy children – all without raising a dime of taxes. Reports indicate that there is a bipartisan, bicameral leadership agreement for Congress to repeal the Sustainable Growth Rate (SGR) mechanism, as well as extend for two years the Children’s Health Insurance Program (CHIP) and numerous other health provisions. The legislation isn’t perfect – more on that below – but it is an important step forward.

The virtues of the CHIP provisions are obvious; the main focus should be on repealing the SGR. This would end the time-consuming annual legislative folly known as the “doc fix” and would genuinely fix for the long term the reimbursement of physicians under Medicare. Repeal of the SGR is costly, however – roughly $175 billion over the next 10 years. The crafters of the legislation owned up to the fact that the bill will pay doctors more than the current policy of a pay freeze would; that accounts for $35 billion of the cost. The remaining $140 billion is the cost of not cutting doctors’ pay as the SGR would have required.

The proposed bill contains structural reforms that roughly offset the $35 billion cost of raising pay in the first 10 years (and another $30 billion in other offsets), but not the full remaining $140 billion. That makes this bill a tougher call for a fiscal conservative.

However, those structural reforms to Medicare will continue to reap benefits in the years beyond the budget window. Specifically, for new retirees (and only new retirees) above the low-income threshold, it would restrict certain Medigap insurance plans from covering the first dollar of health spending. In addition, after 2017 the bill would reduce the subsidy of Medicare premiums for higher-income beneficiaries, lowering it to 35 percent for those with incomes between $133,500 and $160,000 and to 25 percent for those above $160,000.

It makes sense to improve incentives with less premium subsidy and a modest deductible, thereby making beneficiaries more cognizant of the costs of their health care decisions. The equivalent of Medigap policies that cover the first dollar of care simply does not exist in the employer, individual, small-group, and other insurance markets.

Importantly, because these policies are phased in they don’t affect Medicare much in the first 10 years. But the savings will continue to rise, grow faster than physician reimbursements, and on balance lower projected Medicare spending indefinitely into the future. A rough projection is that the combination of the Medigap policies and the reduced premium subsidies will cut Medicare outlays by $230 billion over the second 10 years, 2026-2035.

Put differently, one could imagine issuing the $140 billion as a Treasury security. The additional savings from these structural reforms would be sufficient to pay off the IOU and interest by the end of the second 10 years.
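
That thought experiment can be checked with a rough compounding calculation; the 2.5 percent borrowing rate below is an illustrative assumption, not a figure from the bill or the text.

```python
# Back-of-the-envelope check on the "issue an IOU" framing. The 2.5 percent
# borrowing rate is an illustrative assumption, not a figure from the bill.
iou = 140e9      # unpaid-for cost of SGR repeal, dollars
savings = 230e9  # projected 2026-2035 Medicare savings, dollars
rate = 0.025     # assumed annual Treasury borrowing cost (illustration only)
years = 20       # through the end of the second 10-year window

fv = iou * (1 + rate) ** years  # principal plus compounded interest
covered = savings >= fv         # savings cover the IOU at this assumed rate
```

At this assumed rate the $140 billion grows to roughly $229 billion over 20 years, just inside the projected $230 billion in second-decade savings; a materially higher borrowing cost would tip the comparison the other way.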

The proposed SGR repeal bill is not perfect. But it will fix the reimbursement of doctors, introduce structural changes to Medicare, extend CHIP, and more than balance over the next 20 years – without raising taxes. 

Summary

  • The federal government currently controls over 279 million acres of land, but has determined 60 percent of that to be inaccessible for energy exploration.
  • Over 18.9 billion barrels of oil and 94.5 trillion cubic feet of natural gas remain out of reach.
  • The government has reduced leasing activity by 20 percent, even though applications have more than doubled.

Introduction

This year we’ll close the first decade of the fracking revolution, which inverted long-standing trends in energy production and turned the United States into the largest oil and gas producer in the world. As new trade opportunities open up, energy prices drop, and OPEC desperately tries to hold its market share, federal energy policy must be updated to reflect this new reality. Unfortunately, the administration is tethering us to the past.

Our oil and gas policies were largely written in the 1970s, a period of scarce energy resources and high prices. They were designed to insulate the U.S. market from oil price shocks, reduce reliance on foreign oil, and preserve scarce oil and gas reserves for the future and for national defense. Today, none of these policies works well: oil and gas production is collapsing the trade deficit, we are in a position to export our resources to key international partners, and advances in drilling technology have opened up access to major shale and tight oil formations. It’s time to develop a policy that makes sense today.

To realize a pro-growth energy strategy, the first step is land use policy that provides access to relevant resources. This paper focuses on policies that provide for or limit access to the oil and gas resources that have been driving economic growth.

The federal government owns 28 percent of U.S. lands. The vast majority is managed by four agencies (the Forest Service in the Department of Agriculture and the National Park Service, Bureau of Land Management (BLM), and Fish and Wildlife Service in the Department of the Interior) for the purposes of preservation, conservation, resource development, and recreation. To allocate acreage across purposes, agencies use a lengthy and comprehensive land use planning process that involves all levels of government, the public, and stakeholder groups, and ensures compliance with the National Environmental Policy Act.

Once purposes for the lands have been established, land management agencies can begin the process of leasing acreage for the purpose of oil and gas development. Once a developer acquires leasing rights, it then applies for drilling permits to access resources on these lands. Major issues arise along all steps in this process.

Closed for Business

With the Energy Policy and Conservation Act Amendments of 2000 (EPCA) and the Energy Policy Act of 2005 (EPAct 2005), Congress directed the Secretary of the Interior to create an inventory of oil and natural gas resources on federal onshore lands and to determine the restrictions inhibiting resource development. The Departments of the Interior, Agriculture, and Energy relied on 18 sample geologic provinces to assess not just our natural resources, but also the bureaucratic obstacles to bringing those resources to market.

The findings of the 2008 report, Inventory of Onshore Federal Oil and Natural Gas Resources and Restrictions to their Development (Inventory), are dismal. Federal policies restrict access to the majority of federal land and oil reserves. It’s not just the energy industry that is held back by restricted production. Lease and royalty payments generate about $15 billion in federal revenue each year, revenue that is used to execute our resource conservation policy and manage public resources. Refer to Table 1 for a high-level summary of Inventory findings.

Table 1. U.S. Lands and Resources Captured in EPCA Inventory

Access Category                          Area (thousand acres / %)   Total Oil (MMbbl / %)(a)   Total Natural Gas (bcf / %)(a)
Total Holdings                           279,039 / 100               30,503 / 100               230,975 / 100
Inaccessible                             165,882 / 60                18,976 / 62                 94,502 / 41
Accessible with Restrictions              65,186 / 23                 9,260 / 30                112,919 / 49
Accessible under Standard Lease Terms     47,972 / 17                 2,268 / 8                  23,554 / 10

Percentages are shares of total federal holdings.
(a) Undiscovered technically recoverable resources and reserves growth

Source: Inventory, Table ES-1

Let’s begin with the acreage most open to development – lands accessible under standard lease terms. The lease is a straightforward, four-page document that outlines blanket lease terms, like rental and royalty rates and operational requirements. Even if the land use planning process yields no additional considerations, the lessee must still comply with the numerous environmental, conservation, and cultural requirements imposed by the standard lease under a variety of federal statutes. Lands with only these baseline requirements account for less than one tenth of federal oil and gas resources.

Federal lands open under restricted lease conditions can come with caveats that vary widely in scope. Some restrictions may be relatively minor operationally, like additional safeguards for local cultural landmarks or practices. Others can be quite arduous, like development windows restricted to less than three months to protect endangered species or fragile environments. Across the 125 land use plans sampled in the Inventory, land management agencies heaped on 3,125 individual lease stipulations. Leases under these plans carried an additional 157 unique conditions of approval. These lands, accessible to varying degrees, account for 30 percent of all oil resources and nearly half of natural gas resources.

Finally, there are lands that are closed to development. Lands may be closed by congressional or executive action, by administrative decisions of the land management agency, because they have not yet undergone the land use planning process, or because the government owns only the mineral rights, not the surface rights. Together, these categories render inaccessible more than 62 percent of federal oil reserves and 41 percent of federal natural gas reserves.

Winners and Losers

It comes as no surprise that the energy boom has been most prolific in states with relatively minor shares of federal ownership, including Texas (1.8 percent), North Dakota (3.9 percent), and Pennsylvania (2.1 percent).[1] The Inventory reveals that federal land managers can and do tie up great portions of land and resources in states where they are major landholders. Particularly in the Western states and Alaska, significant amounts of oil and gas are removed from potential production for environmental protection, lagging land use planning processes, or administration priorities. Table 2 details the portion of land managed by federal agencies, the portion of that land managed for oil and gas resources, and how much of those resources are rendered unavailable.

Table 2. Oil and Gas Resources Closed to Development

Region            Land Managed by the       Federal Lands with Oil and    Inaccessible Oil    Inaccessible Natural Gas
                  Federal Government        Gas Resources                 (MMbbl, %)          (bcf, %)
                  (millions of acres, %)    (millions of acres, %)
Alaska            225.8 (61.8%)              77.67 (34.4%)                14,212 (75.6%)      57,658 (67.8%)
Eastern Region     27.4 (5.0%)               20.51 (74.9%)                    98 (49.0%)       1,921 (42.0%)
Western Region    374.7 (27.7%)             180.85 (48.3%)                 4,661 (40.5%)      34,916 (24.7%)

Source: Inventory, CRS Report R42346

According to the Inventory, 87.9 percent of all Alaskan acres managed by the federal government are closed to oil and gas operations. This ties up nearly half of all federally managed oil and a quarter of all federally managed natural gas resources. In a state that receives about 90 percent of its budget from oil and gas revenues, this limits state solvency and economic opportunity. In all, the Inventory estimates that 14 billion barrels of oil and 58 trillion cubic feet of natural gas are closed to development in the state of Alaska.

In January 2015, President Obama proposed extending wilderness designation to 64 percent of the Arctic National Wildlife Refuge (ANWR).  A wilderness designation is the highest level of protection available for federal lands, prohibiting resource development, roads, and even permanent structures. Alaska has pushed back on this proposal; state officials are concerned about federal overreach and the budgetary implications of closing off an enormously resource-rich area (an area amounting to just 7 percent of the 19-million-acre refuge holds on the order of 8 percent of all U.S. undiscovered oil reserves).

Western states don’t fare much better. Across the 12 study areas and extrapolated oil and gas lands, nearly half the acreage is closed to development, representing 41 percent of oil and 25 percent of gas resources in the West. Some resource-rich regions are particularly tied up: 92 percent of the Ventura Basin in Southern California is closed to leasing by executive order and administrative priorities, and arduously slow land management planning shuts in most of eastern Nevada. In all, some 4.7 billion barrels of oil and 35 trillion cubic feet of natural gas are closed to development.

Eastern states have better prospects simply because federal agencies have very few land holdings east of the Mississippi River. These states largely have the ability to manage oil and gas resources themselves. Of that small portion of eastern lands with oil and gas resources managed by the federal government, some 51 percent are accessible to oil and gas interests. It’s worth noting that the Inventory may dramatically understate oil and gas resources available in the East; the study captured 201 million barrels of oil and 4.6 trillion cubic feet of natural gas, but relied on pre-fracking boom data that fails to capture the shale gas resources in the Appalachian Basin.

Other Enduring Obstacles at the Bureau of Land Management

The BLM has struggled to process applications and requests from industry. In 2013 and 2014, both the Government Accountability Office (GAO) and the Department of Interior Inspector General released reports suggesting that management issues at BLM slow down the permitting processes and prevent effective oversight of development projects.

EPAct 2005 attempted to improve BLM operations in two important ways. First, it initiated the Federal Permit Streamlining Pilot Project to improve the efficiency and speed with which BLM processed permits. The pilot afforded seven Western offices additional staff and funding. While the project is popular in the states where pilots were established, BLM has yet to report on whether the pilot has been successful.

Second, it imposed a 30-day time limit on processing Applications for Permit to Drill (APD). GAO found that BLM has been unable to meet that target and fails to record the data necessary to determine compliance across the agency. The Inspector General identified issues with prioritization, supervision, and planning that make the APD approval process virtually indefinite. The result, as displayed in Figure 1, is an approval process that takes 7.5 months on average, significantly longer than in the states, which typically process permits within two months.

Figure 1. Average APD Process Times at BLM

Figure 2. Total Number of APD Approvals at BLM

Note, too, that the number of approved APDs declined rapidly just prior to the rebound in oil production, reflecting industry’s preference to work with states (Figure 2).

The BLM is also minimally receptive to industry requests to open lands for leasing. Industry can express interest in certain federal lands by nominating acreage for a lease sale. In the last three years of data, such nominations have nearly doubled, reflecting enthusiastic interest in pursuing development on public lands. This enthusiasm has not been reciprocated at BLM. Over the same 2011-2013 period, leased acreage actually declined by one fifth.  Figure 3 highlights the diverging trends between industry interest and BLM leasing activity.

Figure 3. BLM Acreage Leased vs. Requested by Industry

 

The issues at BLM are deeply rooted and certainly not restricted to the leasing and permitting process for oil and gas operations. The GAO and Inspector General reports also documented significant issues with environmental compliance, inspections, staffing, and performance metrics, suggesting BLM is due for an operational overhaul.

Conclusion

This administration’s land management policy as pertains to oil and gas is simply obsolete. Despite dramatic changes in domestic production technologies and trends, international trade, and domestic reserves, federal agencies manage oil and gas resources as if we were stuck in the volatile and scarce energy market of the 1970s. More than half of all oil resources and 40 percent of natural gas resources are kept locked away and off the market, directly impacting economic growth and job creation in resource-rich areas.

In regions where the federal government is a significant landholder, decisions about resource development are routed through a land use planning process that is predisposed to withdrawing resources from development. Moreover, land use management agencies, especially BLM, are plagued with management issues that further restrain access to the limited acreage open to energy development. There is a clear and defined need to improve land use planning processes and operations if the U.S. wishes to maintain dominance in oil and gas production.



[1] For a more thorough discussion of the energy boom, see AAF report “Strengthening the U.S. Position as the Leading Oil and Gas Producer.” Government land ownership data from CRS report R42346.

Summary
  • Removing all undocumented immigrants would shrink the workforce by 11 million workers

  • Removing all undocumented immigrants would shrink the economy by nearly 6 percent or $1.6 trillion

  • It would cost the federal government $400 billion to $600 billion to address all 11.2 million undocumented immigrants

Executive Summary

We examine the budgetary and economic implications of alternative strategies for addressing undocumented immigrants. In particular, we focus on the implications of immediately and fully enforcing current law, and find that doing so would be fiscally and economically costly. The federal government would have to spend roughly $400 billion to $600 billion to address the 11.2 million undocumented immigrants and prevent future unlawful entry into the United States. To remove all undocumented immigrants, each immigrant would have to be apprehended, detained, legally processed, and transported to his or her home country. In turn, this would shrink the labor force by 11 million workers and reduce real GDP by $1.6 trillion. The fiscal and economic costs are illustrated in Table 1.

Table 1: The Cost of Enforcing Current Law

Budgetary Costs ($ billions)

Category                                             Lower Estimate    Upper Estimate
Total Deportation and Continuing Enforcement Cost        $419.6            $619.4
Total Deportation Cost                                   $103.9            $303.7
  Apprehension                                            $43.5            $243.3
  Detention                                               $35.7             $35.7
  Legal Processing                                        $13.4             $13.4
  Transportation                                          $11.3             $11.3
20 Years Continuing Enforcement                          $315.7            $315.7
  Customs and Border Protection                          $207.2            $207.2
  Immigration and Customs Enforcement                    $108.5            $108.5

Economic Costs

Category         Percent Reduction    Reduction (dollars or people)
Real GDP               5.7%           $1,556.1 billion
Labor Force            6.4%           11,024,100 people

Depending on how the government conducts its apprehensions, it would need to spend $100 billion to $300 billion arresting and removing all undocumented immigrants residing in the country, a process that we estimate would take 20 years. In addition, to prevent any new undocumented immigrants going forward, the government would at a minimum have to maintain current immigration enforcement levels. This results in an additional $315 billion in continuing enforcement costs over that time period.

Not only would enforcing current law cost taxpayers, it would also burden the economy. Removing all undocumented immigrants would cause the labor force to shrink by 6.4 percent, which translates to a loss of 11 million workers. As a result, 20 years from now the economy would be nearly 6 percent or $1.6 trillion smaller than it would be if the government did not remove all undocumented immigrants. While this impact would be found throughout the economy, the agriculture, construction, retail and hospitality sectors would be especially strongly affected.

Introduction

Immigration reform is a multi-faceted issue, encompassing legal issues, security issues, employer issues, and social issues. But at the heart of immigration reform are a number of economic policy issues: reforms to the core visa program, temporary worker programs, and sectoral programs in areas like agriculture and hi-tech.[1]

Included among the key economic issues is policy toward undocumented immigrants. At one end of the spectrum is full lawful permanent residency for all undocumented immigrants. Government spending associated with the passage of such a law would be essentially balanced out by corresponding economic growth and labor market benefits. Previous research has found that in the context of comprehensive reform, such a proposal would have beneficial economic impacts for the United States.[2] AAF found that immigration reform that raises population growth would increase the annual real GDP growth rate by nearly a percentage point over 10 years. As a result, during that time period the federal budget deficit would be reduced by a cumulative $2.7 trillion.[3]

At the other end of the spectrum are those who oppose any legalization of the undocumented population and advocate full enforcement of current immigration law independent of any other reforms. The economic consequences of this approach merit investigation.

In this paper we focus on the budgetary and economic consequences of full implementation of current law. In effect, we seek to estimate the fiscal costs of enforcement and the economic impact of removing these immigrants from the economy.

We find that the costs of completely enforcing current law for all 11.2 million undocumented immigrants, while keeping any new immigrants from entering unlawfully, are quite large.  We estimate that the federal government would have to spend $400 billion to $600 billion over 20 years to accomplish these objectives. Of that, $100 billion to $300 billion would be spent on removing the entire current undocumented immigrant population from the United States, and an additional $315 billion would be needed to keep new immigrants from unlawfully living in the country.

Full enforcement would not only cost budget dollars, it would also greatly burden the economy. The labor force would shrink by 6.4 percent and, as a result, in 20 years U.S. GDP would be almost 6 percent lower than it would be without fully enforcing current law. This equates to 11 million workers and $1.6 trillion lost. While this impact would be felt throughout the economy, the agriculture, construction, retail, and eating and drinking sectors would be especially strongly affected.

How Large is the Enforcement Issue? 

According to most recent estimates by the Pew Research Center, the undocumented population is approximately 11.2 million people.[4] Pew estimates that undocumented immigrants account for 3.5 percent of the U.S. population and 26 percent of the U.S. foreign born population.[5]

As such a significant population, undocumented immigrants are increasingly woven into American life. According to Pew research, in 2010 about two-thirds had lived in the U.S. for at least 10 years and almost half were parents of minor children. Only 15 percent had lived in the U.S. for less than five years.[6] As a result, by 2012, 6.9 percent of all U.S. students (kindergarten through 12th grade) had at least one undocumented immigrant parent. Of the students with an undocumented parent, 79.7 percent were U.S.-born and automatically U.S. citizens. The remainder were undocumented immigrants themselves. Meanwhile, undocumented immigrants make significant contributions to the U.S. economy, as 8.1 million were working or looking for work in 2012, making up 5.1 percent of the labor force.[7]

Many lawmakers have stated the U.S. government needs to “fully enforce current law” which would ultimately mean the departure of all undocumented immigrants. One possibility is that up to 20 percent might leave the United States voluntarily. If so, it would still take years and resources to remove the remaining 8.96 million undocumented immigrants. In this paper, we estimate the direct fiscal and economic costs of enforcing current law for all 11.2 million undocumented immigrants, which involves forcibly removing at least 8.96 million people. While our estimated costs are quite large, we consider them conservative in nature because we are unable to account for the capital expenditures needed to expand the immigration removal infrastructure, such as building additional prisons and court rooms, and for the possibility that a number of lawful immigrant residents would leave the United States with their undocumented family members. 

Fiscal Costs of Enforcing Current Law[8] 

To calculate the ballpark fiscal cost of enforcing current law, we make a few key assumptions. In particular, how many undocumented immigrants will the U.S. government actually have to forcibly remove and how long will it take? 

An announcement that the government would begin enforcing mass deportation could lead to a large number of undocumented immigrants leaving voluntarily before being contacted by officials. Although it is impossible to predict exactly how many will leave voluntarily, we believe those most likely to leave are the undocumented immigrants who have been in the United States for the least amount of time. 

According to the Pew Research Center, 15 percent of all undocumented adult immigrants have been in the United States for less than 5 years.[9] While we do not expect undocumented immigrants who have resided in the United States less than 5 years to be the only group to leave on their own, it is the group most likely to leave voluntarily. So, we assume that at most 20 percent of undocumented immigrants would leave the United States voluntarily.[10] This means that the U.S. government would have to forcibly remove 8.96 million undocumented immigrants. 

It is important to emphasize that 20 percent is an upper bound estimate of the percent of those who would leave voluntarily. For this reason, the cost estimates below are best thought of as conservative in character; the U.S. government would likely have to forcibly remove more than 8.96 million immigrants. 

Another key assumption is how long it would take the U.S. government to remove 8.96 million undocumented immigrants from the United States. The Department of Homeland Security’s Immigration and Customs Enforcement (ICE) is the primary federal agency responsible for deporting immigrants who live in the United States unlawfully. In 2013, ICE removed 330,651 immigrants who were unlawfully residing in the United States.[11] ICE has stated that it has the capacity to remove at most 400,000 immigrants per year.[12] This means that absent any significant investment in the immigration removal infrastructure, it would take the federal government roughly 20 years to remove 8.96 million undocumented immigrants.[13]
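
The population and timeline assumptions above reduce to simple arithmetic. As a back-of-envelope check (an illustration, not part of the original analysis), in Python:

```python
# Illustrative check of the removal assumptions; figures are the paper's.
total_undocumented = 11_200_000      # Pew estimate of the undocumented population
voluntary_share = 0.20               # upper-bound share assumed to leave on their own

to_remove = total_undocumented * (1 - voluntary_share)
print(f"{to_remove:,.0f} people to forcibly remove")   # 8,960,000

ice_max_removals_per_year = 400_000  # ICE's stated maximum annual removal capacity
years = to_remove / ice_max_removals_per_year
print(f"about {years:.0f} years at maximum capacity")
```

Strictly, 8.96 million removals at 400,000 per year works out to just over 22 years; the paper treats this as roughly 20.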

The Four Major Stages of Enforcing Current Law 

When enforcing the legal prosecution of undocumented immigrants, there are four stages in the process. Local, state, and federal officers must investigate, pursue, and apprehend the undocumented immigrants currently residing within the United States. After apprehending the suspected undocumented immigrants, officials must detain them in a prison. Then the undocumented immigrants must be processed legally in the immigration courts. Finally, after a judge determines the suspects to be in the country unlawfully, the federal government must transport them to their countries of origin. 

Apprehension Costs 

There are two primary types of apprehensions made by ICE, criminal arrests and administrative arrests. Criminal arrests occur when federal ICE agents investigate undocumented immigrants, pursue them, and arrest them on their own. Administrative arrests frequently occur when state and local law enforcement officers arrest immigrants for another (often traffic) violation.[14] If the local or state officers suspect the arrested persons are inside the United States unlawfully, they contact ICE. Then if ICE determines the suspects are indeed undocumented immigrants, it conducts an administrative arrest in which custody of the prisoners changes from the local officers to federal ICE agents.[15],[16]

In FY 2013, administrative arrests accounted for the vast majority of ICE’s total arrests and significantly lowered the average cost of apprehending an undocumented immigrant. We estimate that in FY 2013, ICE’s total budget for apprehension expenditures was $1.17 billion. Dividing that figure by the total number of criminal and administrative arrests (241,694) recorded in 2013 yields an estimated cost of $4,856 per arrest.[17] This means that it would cost $43.5 billion to make the majority of the 8.96 million arrests administratively.

However, if ICE were to rely solely on its own investigators and Fugitive Operations Teams (FOT) to arrest the 8.96 million immigrants, it would be far more expensive.[18] Dividing ICE’s $1.17 billion budget by the total number of apprehensions by those entities (43,218) yields an estimated cost of $27,155 per arrest.[19], [20] This means ICE would need $243.3 billion to make 8.96 million criminal arrests.
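
Using the rounded per-arrest unit costs derived above, the two apprehension bounds can be reproduced in a few lines (an illustrative sketch, not the paper's own model):

```python
arrests_needed = 8_960_000

# Per-arrest unit costs derived above from ICE's FY 2013 apprehension budget
admin_cost_per_arrest = 4_856      # mostly administrative arrests (budget / 241,694)
criminal_cost_per_arrest = 27_155  # ICE-initiated criminal arrests only (budget / 43,218)

low = arrests_needed * admin_cost_per_arrest
high = arrests_needed * criminal_cost_per_arrest
print(f"${low / 1e9:.1f} billion to ${high / 1e9:.1f} billion")
# $43.5 billion to $243.3 billion
```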

Sources of Apprehension Costs 

In FY 2013, the ICE offices that played a primary role in apprehending undocumented immigrants were the Office of the Principal Legal Advisor (OPLA), Homeland Security Investigations (HSI), and Enforcement and Removal Operations (ERO). 

OPLA 

OPLA is ICE’s legal representative in apprehension and deportation proceedings. OPLA is responsible for handling the prosecutions necessary to apprehend undocumented immigrants in the United States. In FY 2013, OPLA’s budget was $207 million.[21]

HSI 

HSI is the central investigative team in the Department of Homeland Security. It pursues “transnational criminal enterprises seeking to exploit America’s legitimate trade, travel, and financial system.” In doing so, it investigates a mix of unlawful drug and immigration matters. In FY 2013, HSI arrested 47,052 individuals[22] with a budget of $1.834 billion.[23] Among those arrests, 11,996 or 25.5 percent were undocumented immigrants.[24] Assuming HSI devoted 25.5 percent of its resources to undocumented immigrants, it spent roughly $467.6 million arresting them.

ERO 

ERO’s Fugitive Operations Teams (FOT), Criminal Alien Program (CAP), and Comprehensive Identification and Removal of Criminal Aliens (Secure Communities) are the primary ways it apprehends undocumented immigrants. Between the three programs, in FY 2013 ERO made 229,698 arrests.[25] Outside of the FOT’s 31,222 arrests of fugitive and non-fugitive aliens, the vast majority of arrests were made administratively. Across these programs, ERO spent $499 million on apprehending undocumented immigrants.[26]

Detention Costs 

After apprehending 8.96 million undocumented immigrants, ICE would have to detain them until a judge formally rules on their removal. The lack of bed space is a primary reason ICE is only able to remove at most 400,000 undocumented immigrants each year. In FY 2013, ICE only had 34,000 detention bed spaces.[27] Going forward, ICE also plans to reduce its detention bed spaces to 30,539.[28]

In FY 2013, on average undocumented immigrants were detained by ICE for 33.5 days. Meanwhile, the Department of Homeland Security reports that it cost ICE on average $118.88 per day to detain a single undocumented immigrant.[29] As a result, in FY 2013, it cost on average $3,982.48 to detain each immigrant. This means that it would cost about $35.7 billion to detain 8.96 million undocumented immigrants over the course of 20 years.[30]
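
The detention estimate is the product of the three figures quoted above; a quick arithmetic check:

```python
avg_days_detained = 33.5     # average ICE detention stay, FY 2013
cost_per_day = 118.88        # DHS-reported daily cost per detainee

per_detainee = avg_days_detained * cost_per_day
print(f"${per_detainee:,.2f} per detainee")        # $3,982.48

total = per_detainee * 8_960_000
print(f"${total / 1e9:.1f} billion")               # $35.7 billion
```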

Legal Costs 

Once ICE detains undocumented immigrants, it must also legally process them. Prosecutions are handled by ICE’s OPLA, the cost of which was already captured in the apprehension costs. This section covers only the cost of adjudications in the immigration courts after immigrants have been apprehended.

The Justice Department’s Executive Office of Immigration Review (EOIR) is primarily responsible for legally processing apprehended undocumented immigrants. In FY 2013, EOIR received 193,350 cases[31] with a budget of $289.1 million.[32] This means that it cost $1,495 to legally process each undocumented immigrant. As a result, in 2013 dollars, it would cost $13.4 billion to legally process 8.96 million undocumented immigrants through EOIR. 
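
The per-case legal cost is simply EOIR's budget divided by its caseload; sketched with the figures quoted above:

```python
eoir_budget = 289_100_000    # EOIR budget, FY 2013
cases_received = 193_350     # immigration court cases received, FY 2013

per_case = eoir_budget / cases_received
print(f"${per_case:,.0f} per case")        # $1,495

total = 1_495 * 8_960_000
print(f"${total / 1e9:.1f} billion")       # $13.4 billion
```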

Transportation Costs 

After undocumented immigrants have been apprehended, detained, and legally processed, ICE must transport them back to their countries of origin. The cost of transportation is quite high because, contrary to popular conception, not all undocumented immigrants are originally from Mexico. Pew reports that in 2012, 52.4 percent were from Mexico and the rest came from all over the world: 15.2 percent from Central America, 12.4 percent from Asia, 6.3 percent from South America, and the remainder from Europe, the Caribbean, the Middle East, Africa, and elsewhere.[33]

The U.S. government, however, would not be required to transport all 8.96 million undocumented immigrants. In some cases, a judge can determine that an immigrant who is being forced to leave is eligible for a voluntary departure order. These types of orders allow immigrants who have been found guilty of being inside the United States unlawfully to leave the country on their own, giving them more time and flexibility to prepare for their departure. 

In 2013, of ICE’s 330,651 removals, 9.6 percent were able to return on their own.[34] Assuming that 9.6 percent of the 8.96 million undocumented immigrants would be able to return voluntarily, the total number ICE would have to transport is 8.1 million. According to the U.S. Marshals Service, the average unit cost of transporting federal detainees, including deportees, was $1,400 in FY 2013.[35] This suggests that in 2013 dollars it would cost about $11.3 billion to transport 8.1 million undocumented immigrants. 
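
The transportation figure nets out voluntary departures before applying the per-person unit cost; a quick check of the arithmetic:

```python
removals = 8_960_000
voluntary_departure_rate = 0.096   # share of FY 2013 removals who returned on their own

transported = removals * (1 - voluntary_departure_rate)
print(f"{transported:,.0f} people to transport")   # 8,099,840 (about 8.1 million)

cost_per_person = 1_400            # U.S. Marshals average unit transport cost, FY 2013
total = transported * cost_per_person
print(f"${total / 1e9:.1f} billion")               # $11.3 billion
```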

Total Costs of Enforcing Current Law 

Let’s review. Enforcing current law toward 11.2 million undocumented immigrants entails that the U.S. government would have to forcibly remove at least 8.96 million people over the course of 20 years.[36] The government would spend $43.5 billion to $243.3 billion to apprehend the undocumented immigrants, $35.7 billion to detain them, $13.4 billion to process them legally, and $11.3 billion to transport them to their home countries. In total, the U.S. government would have to spend $103.9 billion to $303.7 billion to remove 8.96 million immigrants.

Continuing Enforcement Costs 

If the government were to enforce mass deportation, in addition to removing the 11.2 million undocumented immigrants currently residing in the United States, it would also have to continue enforcing policies aimed at preventing the arrival of new undocumented immigrants. This involves the current enforcement practices of keeping immigrants from entering unlawfully, tracking and removing those who are still able to enter, and removing those who violate the legal terms of their residency, such as overstaying a visa.

These tasks are carried out by the Department of Homeland Security’s Customs and Border Protection (CBP) and ICE. While ICE identifies and removes individuals who are in the United States unlawfully, CBP is primarily responsible for securing the nation’s borders and preventing unauthorized entry. In FY 2013, CBP’s budget was $10.4 billion and ICE’s was $5.4 billion, for a combined $15.8 billion in enforcement expenditures.[37] Despite massive growth in immigration enforcement expenditures over the past decade, the number of undocumented immigrants in the United States has grown modestly over that time period.[38] So, AAF assumes that to prevent future unlawful entry, enforcement expenditures would at a minimum have to remain at these levels each year. To put that cost in perspective, the annual enforcement costs are equivalent to roughly four Freedom Towers (it cost $3.8 billion to build One World Trade Center).[39]

So in addition to the $103.9 billion to $303.7 billion the government would need to spend removing the 11.2 million current undocumented immigrants, it would also have to spend $315.7 billion ($15.785 billion per year for 20 years) on keeping new immigrants from living in the United States unlawfully. This means that in 2013 dollars, it would cost a total of $419.6 billion to $619.4 billion to remove all undocumented immigrants currently living in the United States and keep any new undocumented immigrants from entering.
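
The grand totals are straightforward sums of the component estimates; assembled in one place (figures in billions of 2013 dollars, as quoted in the text):

```python
# Deportation cost components, billions of 2013 dollars
apprehension_low, apprehension_high = 43.5, 243.3
detention, legal, transport = 35.7, 13.4, 11.3

deport_low = apprehension_low + detention + legal + transport
deport_high = apprehension_high + detention + legal + transport
print(round(deport_low, 1), round(deport_high, 1))     # 103.9 303.7

continuing = 15.785 * 20     # CBP + ICE budgets held flat for 20 years
print(round(continuing, 1))  # 315.7

print(round(deport_low + continuing, 1),
      round(deport_high + continuing, 1))              # 419.6 619.4
```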

Economic Costs of Enforcing Current Law 

Undocumented immigrants comprise a significant portion of the U.S. labor force.[40] Deporting all of these individuals would have negative effects on the economy as a whole. Industries that rely on undocumented labor would be devastated. Our population and workforce would age, putting more pressure on Social Security and Medicare as older workers retire.

The Bipartisan Policy Center analyzed the economic impact of this dramatic drop in the workforce and found profound negative effects on GDP, the deficit, and the housing market.[41] Specifically, it found that removing all undocumented immigrants from the United States would reduce the U.S. labor force by 6.4 percent.[42] This means that, compared to the Congressional Budget Office’s (CBO) baseline projections, the labor force would decrease by 11 million workers by 2034.[43]

As a result, in the first ten years average annual economic growth would decrease by 0.5 percent. Most startling, 20 years from now the economy would be 5.7 percent smaller than it would be if the government did not remove all undocumented immigrants.[44] For purposes of comparison, note that the decline in real GDP during the Great Recession was quite similar – 6.3 percent. This suggests that real GDP would be about $1.6 trillion lower in 2034 than CBO’s baseline estimate.[45]

Even housing would suffer. Residential construction spending would decline by over $100 billion per year because removing all present and future undocumented immigrants would cause a large decline in the U.S. population.[46]

Removing the entire undocumented population would have negative effects on the deficit, too, although these effects are harder to calculate. Undocumented immigrants are low users of social services: they are not legally allowed to collect any federal entitlement benefits, though they do receive emergency medical care and care from federally funded Community Health Centers. Consequently, removing them would not result in large decreases in the cost of federal entitlement programs.[47] In addition, the vast majority of children in school with an undocumented parent were born in the United States and are U.S. citizens. In fact, only 1.4 percent of all students are undocumented immigrants.[48] Thus, removing all undocumented immigrants would not result in significant declines in public school spending, because the government would not be able to forcibly remove most of these children. The tax revenues that sustain these programs, however, would decrease, although it is difficult to say by how much. Estimates from the Social Security Administration and the CBO suggest that at least 50 percent of undocumented immigrants pay federal taxes. Since this population does not frequently use social services, the loss in tax revenue associated with removing them from the country would, on net, cause the federal deficit to grow. According to the Bipartisan Policy Center, removing all undocumented immigrants would increase the deficit by $800 billion over 20 years.[49]

Conclusion 

The costs of enforcing current law toward all 11.2 million undocumented immigrants, while keeping any new immigrants from entering unlawfully, are quite large. We estimate that the federal government would have to spend $400 billion to $600 billion over 20 years to accomplish these objectives. Of that, $100 billion to $300 billion would go toward removing the entire current undocumented immigrant population from the United States, and an additional $315 billion would be needed to keep new immigrants from unlawfully living in the country. Beyond the fiscal costs, full enforcement would also greatly burden the economy. The labor force would shrink by 6.4 percent, or 11 million workers, and, as a result, in 20 years U.S. GDP would be 5.7 percent, or $1.6 trillion, lower than it would be without fully enforcing current law.



[2] http://www.remi.com/immigration-report

[3] Douglas Holtz-Eakin, “Immigration Reform, Economic Growth, and the Fiscal Challenge,” American Action Forum, April 2013, http://americanactionforum.org/research/study-immigration-reform-economic-growth-and-the-fiscal-challenge

[4] Jeffrey S. Passel & D’Vera Cohn, “Unauthorized Immigrant Totals Rise in 7 States, Fall in 14: Decline in Those From Mexico Fuels Most State Decreases,” Pew Research Center, November 2014, p. 6, http://www.pewhispanic.org/files/2014/11/2014-11-18_unauthorized-immigration.pdf

[5] Ibid., p. 7

[6] Paul Taylor, Mark Hugo Lopez, Jeffrey S. Passel, & Seth Motel, “Unauthorized Immigrants: Length of Residency, Patterns of Parenthood,” Pew Research Center, December 2011,  p. 3, http://www.pewhispanic.org/files/2011/12/Unauthorized-Characteristics.pdf

[7] Jeffrey S. Passel & D’Vera Cohn, “Unauthorized Immigrant Totals Rise in 7 States, Fall in 14: Decline in Those From Mexico Fuels Most State Decreases,” Pew Research Center, November 2014, p. 8, http://www.pewhispanic.org/files/2014/11/2014-11-18_unauthorized-immigration.pdf

[8] The methodology employed in this paper is similar to that used by Marshall Fitz, Gebe Martinez, and Madura Wijewardena in “The Costs of Mass Deportation: Impractical, Expensive, and Ineffective,” Center for American Progress, March 2010, http://cdn.americanprogress.org/wp-content/uploads/issues/2010/03/pdf/cost_of_deportation.pdf

[9] Paul Taylor, Mark Hugo Lopez, Jeffrey S. Passel, & Seth Motel, “Unauthorized Immigrants: Length of Residency, Patterns of Parenthood,” Pew Hispanic Center, December 2011,  p. 3, http://www.pewhispanic.org/files/2011/12/Unauthorized-Characteristics.pdf

[10] Notice that there would be citizens, Lawful Permanent Residents and others that could also leave because of family and other ties. A specific estimate is beyond the scope of this paper.

[11] John F. Simanski, “Immigration Enforcement Actions: 2013,” Office of Immigration Statistics, Department of Homeland Security, September 2014, pp. 5-7, http://www.dhs.gov/sites/default/files/publications/ois_enforcement_ar_2013.pdf

[12] John Morton, “Memorandum on Civil Immigration Enforcement: Priorities for the Apprehension, Detention, and Removal of Aliens,” U.S. Immigration and Customs Enforcement, March 2011, http://www.ice.gov/doclib/foia/prosecutorial-discretion/civil-imm-enforcement-priorities_app-detn-reml-aliens.pdf

[13] Notice that this assumes that there are no new undocumented immigrants arriving during that period.

[14] “Secure Communities: Criminal Alien Removals Increased, but Technology Planning Improvements Needed,” Report to the Ranking Member, Committee on Homeland Security, House of Representatives, Government Accountability Office, July 2012, pp. 8 & 14, http://gao.gov/assets/600/592415.pdf

[15] For more detail on arresting process, see Marc R. Rosenblum & William A. Kandel, “Interior Immigration Enforcement: Programs Targeting Criminal Aliens,” Congressional Research Service,  December 2012, http://fas.org/sgp/crs/homesec/R42057.pdf

[16] Our estimates focus on federal budget costs, and do not incorporate any burdens on state and local budgets.

[17] Number of arrests derived from John F. Simanski, “Immigration Enforcement Actions: 2013,” Office of Immigration Statistics, Department of Homeland Security, September 2014, http://www.dhs.gov/sites/default/files/publications/ois_enforcement_ar_2013.pdf

[18] In creating their own cost estimates, the Center for American Progress assumes arrests would only be made through ICE investigators and FOTs.

[19] Simanski reports that Homeland Security Investigations made 11,996 arrests.

[20] “FY 2015 Budget in Brief,” Department of Homeland Security, p 61, http://www.dhs.gov/sites/default/files/publications/FY15BIB.pdf

[21] William L. Painter, “Department of Homeland Security: FY2014 Appropriations,” Congressional Research Service, July 2013, p. 31

[22] “FY 2015 Budget in Brief,” Department of Homeland Security, p 59, http://www.dhs.gov/sites/default/files/publications/FY15BIB.pdf

[23] William L. Painter, “Department of Homeland Security: FY2014 Appropriations,” Congressional Research Service, July 2013, p. 31

[24] John F. Simanski, “Immigration Enforcement Actions: 2013,” Office of Immigration Statistics, Department of Homeland Security, September 2014, p. 3, http://www.dhs.gov/sites/default/files/publications/ois_enforcement_ar_2013.pdf

[25] Ibid.

[26] William L. Painter, “Department of Homeland Security: FY2014 Appropriations,” Congressional Research Service, July 2013, pp. 31-32

[27] William L. Painter, “Department of Homeland Security: FY2014 Appropriations,” Congressional Research Service, July 2013, p. 34

[28] “Budget-in-Brief: Fiscal Year 2015,” Department of Homeland Security, p. 66, http://www.dhs.gov/sites/default/files/publications/FY15BIB.pdf

[29] Homeland Security, “FY 2013-2015 Annual Performance Report,” Department of Homeland Security, p. 63, https://www.dhs.gov/sites/default/files/publications/MGMT/DHS-FY-2013-FY-2015-APR.pdf

[30] Our estimates are conservative in the sense that we do not explicitly account for new capital expenditures required to erect, for example, additional detention facilities to house the flow of undocumented immigrants.

[31] “FY 2013 Statistics Yearbook,” Executive Office for Immigration Review, Department of Justice, p. B1, http://www.justice.gov/eoir/statspub/fy13syb.pdf

[32] “FY 2015 Congressional Budget Submission: Administrative Review and Appeals,” Executive Office of Immigration Review, Department of Justice, p. 11, http://www.justice.gov/sites/default/files/jmd/legacy/2014/05/15/ara-justification.pdf

[33] Jeffrey S. Passel & D’Vera Cohn, “Unauthorized Immigrant Totals Rise in 7 States, Fall in 14: Decline in Those From Mexico Fuels Most State Decreases,” Pew Research Center, November 2014, p. 18, http://www.pewhispanic.org/files/2014/11/2014-11-18_unauthorized-immigration.pdf

[34] Derived from John F. Simanski, “Immigration Enforcement Actions: 2013,” Office of Immigration Statistics, Department of Homeland Security, September 2014, pp. 5-7, http://www.dhs.gov/sites/default/files/publications/ois_enforcement_ar_2013.pdf

[35] “FY 2014 Performance Budget: President’s Budget Submission: Federal Prisoner Detention Appropriation,” United States Marshals Service, Department of Justice, April 2013, p. 21, http://www.justice.gov/sites/default/files/jmd/legacy/2014/03/21/fpd-justification.pdf

[36] Obviously, attempting to shorten the 20-year period would result in even higher costs.

[37] William L. Painter, “Department of Homeland Security: FY2014 Appropriations,” Congressional Research Service, July 2013, pp. 21-22

[38] Marshall Fitz, Gebe Martinez, & Madura Wijewardena, “The Costs of Mass Deportation: Impractical, Expensive, and Ineffective,” Center for American Progress, March 2010, pp. 15-16, http://cdn.americanprogress.org/wp-content/uploads/issues/2010/03/pdf/cost_of_deportation.pdf

[39] Eliot Brown, “Tower Rises, And So Does Its Price Tag,” Wall Street Journal, January 30, 2012, http://www.wsj.com/articles/SB10001424052970203920204577191371172049652

[40] Passel, Jeffrey S., D’Vera Cohn, Jens Manuel Krogstad and Ana Gonzalez-Barrera. “As Growth Stalls, Unauthorized Immigrant Population Becomes More Settled” Washington, D.C.: Pew Research Center’s Hispanic Trends Project, September.

[41] “Immigration Reform: Implications for Growth, Budgets, and Housing,” Immigration Task Force, Bipartisan Policy Center, October 2013, p. 7, http://bipartisanpolicy.org/wp-content/uploads/sites/default/files/BPC_Immigration_Economic_Impact.pdf

[42] Ibid., p. 15

[43] “The 2014 Long-Term Budget Outlook,” Congressional Budget Office, July 2014, http://www.cbo.gov/sites/default/files/45471-Long-TermBudgetOutlook_7-29.pdf

[44] Bipartisan Policy Center, p. 17

[45] Congressional Budget Office, http://www.cbo.gov/publication/45308

[46] Bipartisan Policy Center, p. 19

[47] U.S. citizen children of undocumented immigrants are entitled to receive federal benefits just as any other U.S. citizen who meets the eligibility requirements. If minor children who are citizens leave the U.S. when their undocumented parents are removed, we will realize a cost savings in those programs. According to the Pew Research Center, there were 4.5 million U.S. citizen minor children living with at least one undocumented parent in 2012. (Passel, Jeffrey S., D’Vera Cohn, Jens Manuel Krogstad and Ana Gonzalez-Barrera. “As Growth Stalls, Unauthorized Immigrant Population Becomes More Settled” Washington, D.C.: Pew Research Center’s Hispanic Trends Project, September.)

[48] Jeffrey S. Passel & D’Vera Cohn, “Unauthorized Immigrant Totals Rise in 7 States, Fall in 14: Decline in Those From Mexico Fuels Most State Decreases,” Pew Research Center, November 2014, p. 8, http://www.pewhispanic.org/files/2014/11/2014-11-18_unauthorized-immigration.pdf

[49] Bipartisan Policy Center, pp. 22-23


Summary

Recently, as part of a larger initiative, the administration announced a plan calling for the Federal Communications Commission (FCC) to overturn state laws that determine how cities deploy their own Internet services. AAF examined all of the residential fiber broadband offerings by municipalities and found that municipal plans were 20 to 50 percent more costly to consumers than those of private broadband providers.

Methodology

In order to compare municipal and private broadband projects, AAF collected and averaged information on prices and speeds for 218 plans from 58 different municipal providers. Municipal fiber consumers on average pay $4.36 for every 1 Mbps of download speed. At this price, these customers pay at least 50 percent more than for private offerings: the New America Foundation puts the average private broadband cost at $2.19 per Mbps. Ookla, which performs Internet speed tests, supports this finding; its average price is $3.51 per Mbps, making municipal fiber at least 20 percent more expensive than the national average.
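
The percentage comparisons can be checked directly. A quick sketch using the per-Mbps averages cited in the text:

```python
# Price-per-Mbps averages cited in the text.
muni = 4.36      # AAF average for municipal fiber, $/Mbps
private = 2.19   # New America Foundation average for private broadband, $/Mbps
national = 3.51  # Ookla national average, $/Mbps

# Premiums paid by municipal-fiber customers. Both computed premiums exceed
# the conservative "at least 50 percent" and "at least 20 percent" claims.
premium_vs_private = (muni - private) / private    # ~0.99, i.e. roughly double
premium_vs_national = (muni - national) / national # ~0.24
```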

The Role Of Private Networks

Looking at selected cities, AAF finds that municipal broadband projects are often not in the interest of consumers, regardless of a city’s population or density. Even in the densely populated city of Evanston, Illinois, just north of Chicago, costs for a new fiber project in 2013 ran to $2,500 per household.[1] The often-lauded fiber network in Chattanooga, TN cost at least $3,022 per household to build.[2] And Seattle estimated that connecting the city would cost at minimum $1,900 per household and could top out at $2,200 per household.[3] Meanwhile, Google was able to build its network in Kansas City at a cost of $564 per household,[4] a city with around the same population density as Chattanooga and one-fifth the density of Seattle.[5] Similarly, estimates place Verizon’s fiber build-out at a slightly higher cost of $882 per household.

Countless problems arise when a local municipality attempts to build networks, contributing to the overall project cost. Building a completely new broadband network is costly, not just in the physical materials but in the need for new labor and project management. As economist Brian Deignan found, these locally run networks do little to increase private employment, but they expand government employment by about 6 percent.[6]

In spite of these costs, the administration is pressing the FCC to help overturn state laws that govern municipally owned and operated broadband networks. However, the Supreme Court has already rejected this kind of federal power once, in Nixon v. Missouri Municipal League. In essence, the high court was skeptical that the federal government was better positioned than the states to determine local arrangements on this issue.

About a third of all municipal fiber networks reside in states that have various limitations.[7] What, then, needs to be overturned? Colorado specifies that municipalities must conduct a referendum before offering a service. Utah places certain administrative obligations on the projects and stipulates that a feasibility study be conducted.[8] Under Louisiana law, certain benchmarks must be met, while Florida requires that the project break even within four years and that a tax be applied.[9] It is hardly out of the ordinary to have these kinds of limitations, especially since Wisconsin’s statute on municipal borrowing and bonds runs just over 19,000 words, roughly the length of an 80-page novella.[10] In other words, many states are doing proper fiscal due diligence. Overturning these laws would be poor governance.

History is also on the side of these laws. Before the recent push for community broadband, there was an equally feverish push for community WiFi in the early 2000s. Most of the cities that pursued it, including St. Louis, San Francisco, and Chicago, abandoned their municipal wireless plans.[11] The culprit? Each found it exceedingly difficult to create a sustainable business model. For example, Philadelphia’s experiment with city WiFi collapsed after years of tinkering with business models, due to tepid demand and a budget that ballooned to nearly triple the initial projections.[12]

Despite being cheaper than public versions, private broadband projects bear a hidden cost that public projects do not. To lay wires, companies have to work with governments. As the project director of Google Fiber noted before Congress, this kind of government regulation “often results in unreasonable fees, anti-investment terms and conditions, and long and unpredictable build-out timeframes.”[13] Cities know how much this is worth; San Francisco even called this knowledge one of its best assets.[14] Taking a closer look at these laws is the last of the five points in the administration’s plan, but it should be front and center.

We need to get broadband right, but to do this, there needs to be recognition of reality. Consumers are not getting a better deal with broadband provided by cities and municipalities. In fact, these jurisdictions have long been the biggest hurdle for broadband development. Real broadband reform, the kind we need, would work to address these issues.  



[2] EPB Fiber received a $50 million grant in the planning stage and about $162 million was raised via bonds to serve 70,139 households. See http://www.nyls.edu/advanced-communications-law-and-policy-institute/wp-content/uploads/sites/169/2013/08/ACLP-Government-Owned-Broadband-Networks-FINAL-June-2014.pdf

[3] http://www.seattle.gov/broadband/docs/080422FeasibilityReport.pdf

[6] http://grad.mercatus.org/sites/default/files/MGPE_Deignan_0.pdf

[7] In total, 49 of the projects identified exist in states with some kind of restriction as noted by the Community Broadband Networks’ Community Network Map. See http://www.muninetworks.org/communitymap

[8] http://www.muninetworks.org/communitymap

[9] Id.

[11] http://www.businessweek.com/stories/2007-08-15/why-wi-fi-networks-are-flounderingbusinessweek-business-news-stock-market-and-financial-advice

[13] http://oversight.house.gov/wp-content/uploads/2012/01/testimonyofmilomedin_1.pdf

The U.S. House is expected to vote soon on two pieces of legislation to streamline the regulatory system and strengthen its oversight. The Sunshine for Regulatory Decrees and Settlements Act would curb the practice known as “sue and settle,” in which special interests are given a say in significant federal regulations. The SCRUB Act (Searching for and Cutting Regulations that are Unnecessarily Burdensome) would establish an independent commission with the goal of reducing cumulative regulatory burdens by at least 15 percent. According to research from the American Action Forum (AAF), savings from these bills could total $48 billion annually and save 1.5 billion paperwork burden hours.

Methodology

To estimate the regulatory savings from curbing sue and settle regulations, AAF used the White House’s public database of significant regulations with a judicial deadline. We reviewed all “economically significant” and major final regulations published between January 21, 2009 and January 15, 2014. This yielded 25 total rules, but only 21 regulations with either monetized costs or benefits. The total net present value burden was $164 billion, with $23.9 billion in annual costs and more than 5.7 million paperwork burden hours. In the previous five-year window (2004 to 2009) during the Bush Administration, there were 19 major final rules with a judicial deadline.

For the SCRUB Act analysis, AAF divided federal regulatory burdens into cumulative paperwork burdens and annualized cost figures, as compiled by AAF’s “Regulation Rodeo,” our database of regulations compiled from the Federal Register. The SCRUB Act would charge an independent commission with the goal “to achieve a reduction of at least 15 percent in the cumulative costs of federal regulation.” AAF took this 15 percent goal and applied it to current paperwork burdens and annual average regulatory burdens since 2008. Obviously, cumulative regulatory burdens predate 2008, but AAF only has quantified figures since that time. However, total paperwork figures do span the history of federal regulation. 

The Sunshine for Regulatory Decrees and Settlements Act

The legislation, “The Sunshine for Regulatory Decrees and Settlements Act,” would require regulatory agencies to give public notice when they learn of a lawsuit that could eventually impose a federal rule. The act would curb sue-and-settle practices by giving outside parties an opportunity to intervene in the court case and by requiring federal agencies to publish a notice of the proposed settlement in the Federal Register. In 2014, a version of the act passed the U.S. House on a bipartisan vote.

How Sue and Settle Works

“Sue and settle” is a process by which special interest lawyers use the power of the courts and complicit regulators to initiate and expedite pricey rulemakings. A lawsuit is filed against the regulatory agency and is then “settled” out of court through a mutual consent decree, forcing an expedited regulatory process that is legally binding for the agency.

Sue and Settle Findings

The Sunshine for Regulatory Decrees and Settlements Act has the ability to increase public oversight, possibly limiting lawsuits that eventually lead to expensive regulations.

A recent GAO report noted the somewhat limited nature of sue and settle lawsuits, but its findings only covered EPA rulemakings. Examining all recent rules with judicial deadlines reveals seven rules from the Department of Energy (DOE) and one rule from the Department of Transportation (DOT). In fact, the burden from DOE rules with judicial deadlines is $3 billion annually; the DOT’s lone rule would add $470 million in burdens. These are hardly trivial figures, although they are dwarfed by EPA’s total of $20.4 billion in annual costs.

In addition to the astronomical direct costs, there are also associated paperwork burdens arising from these lawsuits. The 11 rules that quantified paperwork will impose more than 5.7 million burden hours. To put this in perspective, assuming a 2,000-hour work year, it would take 2,866 employees working full-time to complete the new paperwork from these sue and settle rules.

For a local perspective, the map below details how the costs of sue and settle suits would affect states. By examining which industries are impacted by the legislation, and using Census data on the geographic distribution of industry establishments, AAF is able to approximate which states would be most affected.

The SCRUB Act

SCRUB would address cumulative regulatory burdens by establishing an independent commission to evaluate the effectiveness of past rules, with the goal of reducing the nation’s cumulative regulatory cost burden by 15 percent.

SCRUB Act Findings

Depending on the perspective, a 15 percent reduction in regulatory burdens is either a historic victory or simply a first step. Regardless, the paperwork savings and regulatory cost reductions are significant. Currently, the government imposes more than 9.9 billion paperwork burden hours. A 15 percent reduction in those hours would yield almost 1.5 billion hours in savings for taxpayers and American businesses. For comparison, the U.S. individual income tax generates 2.6 billion hours of paperwork. A reduction of 1.5 billion hours would bring the nation’s paperwork burden to its lowest level since 2004.

If we were to quantify the time savings, there are two main metrics: the hourly cost of a regulatory compliance officer ($32.10) and Gross Domestic Product (GDP) per hour worked ($60.59 in 2011 dollars). Using the compliance officer figure, 1.5 billion hours translates to $48.1 billion in regulatory savings, a figure larger than the GDP of Paraguay. The GDP-per-hour-worked figure yields more than $90.8 billion in possible savings. For the purposes of this study, AAF used the lower figure of $48.1 billion.
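
The dollar figures follow from multiplying the rounded 1.5 billion hours by each hourly value. A sketch, using the rates cited in the text:

```python
hours_saved = 1.5e9  # ~15% of the 9.9 billion current paperwork burden hours

compliance_rate = 32.10  # $/hour, regulatory compliance officer
gdp_rate = 60.59         # $/hour, GDP per hour worked (2011 dollars)

savings_low = hours_saved * compliance_rate / 1e9   # ~$48.1 billion
savings_high = hours_saved * gdp_rate / 1e9         # ~$90.8 billion
```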

For more recent numbers, AAF has data on every regulation since 2008. The graph below displays annualized regulatory costs for final rules from 2008 to 2014.

During this period, annualized regulatory costs have averaged $15 billion. A 15 percent reduction in average new burdens would save Americans $2.25 billion annually; over a ten-year period, this amounts to $22.5 billion in savings. In total, the goal of cutting regulatory burdens by 15 percent is hardly radical in the context of overall regulatory costs.
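
Applied to the flow of new rules rather than the existing stock, the same 15 percent goal works out as follows:

```python
avg_new_costs = 15.0  # $ billions, average annualized costs of new final rules, 2008-2014

annual_saving = 0.15 * avg_new_costs    # $2.25 billion per year
ten_year_saving = 10 * annual_saving    # $22.5 billion over a decade
```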

Conclusion 

Adopting The Sunshine for Regulatory Decrees and Settlements Act and SCRUB won’t completely transform the regulatory state, but it will provide needed oversight of a regulatory system that some view as opaque. The possible savings from reform, however, are crystal clear: $48 billion in annual burdens and 1.5 billion paperwork burden hours.

Executive Summary

  • The federal direct student loan program will cost taxpayers an additional $21.8 billion in FY2016 according to the president’s budget proposal. This is reported to be the largest estimate adjustment for any federal credit program in budget history.
  • $90 billion has been borrowed from the U.S. Treasury to keep the student loan program afloat since 2009. When the Department of Education can’t cover its loans, it borrows from the Treasury.
  • The program was initially projected to save $87 billion between 2010 and 2019.

Introduction

Included in President Obama’s $4 trillion FY 2016 budget are a number of fiscal and policy changes for higher education. The budget acknowledged that the federal government would be on the hook for $21.8 billion more than expected for direct student loans in fiscal year 2016. That has been reported as the largest adjustment for any federal credit program in federal budget history. Sadly for the taxpayer, it is indicative of a larger, very expensive, but mostly hidden problem with the federal direct loan program. Suffering from a chronic cash flow shortage, the federal direct loan program has required nearly $90 billion of Treasury borrowing (deficit spending) to keep it afloat.

Reestimates

Every year, the federal government estimates the cost of credit programs – programs where the federal government acts as a lender or guarantor of loans. These costs are estimated on a net present value basis, as required by the Federal Credit Reform Act, which replaced the previous cash flow accounting with a more credit-oriented method of accounting for federal loan programs. The government estimates the costs of new cohorts of loans for the coming fiscal year, and re-estimates the costs of prior years’ cohorts. In other words, in 2015 the government provides a cost estimate for new loans in the coming fiscal year, but also reviews the originally estimated cost of a group of student loans issued in, say, 2005, to see what the current cost estimate looks like. If the estimated cost has changed, the government reports the revised cost estimate as part of the federal budget. If the cost is higher than anticipated, the budget will show a shortfall; if lower, the budget will show a surplus for that year’s worth of loans or loan guarantees.
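
To illustrate how a re-estimate can flip a projected savings into a shortfall, here is a minimal sketch. The discount rate, loan size, and repayment schedules are hypothetical, and this simplifies the actual Federal Credit Reform Act methodology considerably:

```python
def npv(cashflows, rate):
    """Present value of (year, amount) cashflows discounted at `rate`."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

rate = 0.03        # hypothetical discount rate
disbursed = 100.0  # hypothetical principal lent out in year 0, $ billions

# Original projection: $12B repaid each year for 10 years.
original = [(t, 12.0) for t in range(1, 11)]
# Re-estimate: repayments now projected at only $11B per year.
revised = [(t, 11.0) for t in range(1, 11)]

# Budget cost = amount disbursed minus present value of expected repayments.
cost_original = disbursed - npv(original, rate)  # negative: a projected savings
cost_revised = disbursed - npv(revised, rate)    # positive: a shortfall
reestimate = cost_revised - cost_original        # upward adjustment in the budget
```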

Shortfalls

This year, however, the sheer size of the shortfall in the federal direct student loan program was sufficient to attract national media attention, and for good reason. Even in a $4 trillion budget, $21.8 billion is a lot of money. As a reference point, the $21.8 billion that the Department of Education (ED) is estimated to spend to cover the shortfall in direct lending is:

•$6 billion more than the president proposed spending on low-income K-12 education programs (Title I);

•$10 billion more than the president proposed spending on special education (IDEA, Part B);

•70 times more than the president proposed spending on charter schools; and

•Nearly 7 times more than the president’s annual request for universal preschool, his signature education initiative.

Given the size of the shortfall, one would expect ED to do a better job of anticipating the costs of the federal direct student loan program. After all, any program that’s reported to save $87 billion over 10 years at its inception, only to have the savings reduced to $67 billion less than one year later, and then to have a third of that evaporate in a single budget year, should be on a budget watch list.

Federal law requires estimates of federal credit programs to be calculated under very specific standards, stipulating that credit programs are scored on the basis of net present value, excluding administrative costs. In other words, once an estimate has been calculated, it’s basically set in stone. All any Secretary of Education can do when the costs are re-estimated and show a shortfall is to point to the law – even when those estimates are off by $21.8 billion.

In fact, the estimates are often off. Of the 21 annual cohorts of loans issued by the Department of Education, only three didn’t cost more than expected. The remaining 18 cohorts racked up over $41.2 billion in extra costs to the taxpayer, an error rate of 85 percent. As the federal government now issues more student loans than any financial institution (or possibly all financial institutions combined), errors like this are especially profound. Every minor error in the original estimate will be compounded given the tremendous volume of loans issued every year by Uncle Sam.

Compounding matters, these estimates are revised annually; this year’s $21.8 billion shortfall is just part of the additional budget pain that’s coming as the cost estimate is revised throughout the life of each cohort of loans. It might be a $21.8 billion hole this year, but new loans aren’t expected to be paid off for at least 10 years. With the administration’s pay as you earn (PAYE) initiatives, it could be 20 or 25 years until pay-off, or never, for some students. It’s a gift that keeps on giving, draining taxpayer money for decades.

Borrowing from Treasury

Still, that’s not even the whole picture. To see the true damage being done by the direct lending program, observers would have to dust off the monthly borrowing statements issued by the Department of the Treasury. These statements tell the story of how the government pays its debts. Some debts are owed between agencies, with the Treasury footing the bill if programs run over budget or if agencies operating credit programs don’t recover what they expect to. In the case of direct lending, ED has borrowed nearly $90 billion from the Treasury since 2009 to keep the direct loan program afloat. The program has put nearly $800 billion of student debt on the Treasury’s books.

 

Fiscal Year    Agency Borrowing        FDLP Issuances    Over-borrowing
               Attributed to FDLP      ($ billions)      Attributed to FDLP
               ($ billions)                              ($ billions)

2009                 61.6                   41.4               20.2
2010                103.3                   97.9                5.4
2011                153.8                  133.9               19.9
2012                155.4                  140.7               14.7
2013                145.3                  129.3               16.0
2014                118.3                  105.6               12.7
Total               737.7                  648.8               88.9

Note: figures are taken from the Monthly Treasury Statement (MTS) and Credit Supplement stating annual FDLP loan volumes.
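
The table's internal arithmetic can be verified directly: over-borrowing in each year is agency borrowing minus loan issuances, and the yearly figures sum to the $88.9 billion total. A quick check:

```python
# {fiscal year: (agency borrowing, FDLP issuances)}, $ billions, from the table above.
rows = {
    2009: (61.6, 41.4), 2010: (103.3, 97.9), 2011: (153.8, 133.9),
    2012: (155.4, 140.7), 2013: (145.3, 129.3), 2014: (118.3, 105.6),
}

# Over-borrowing = borrowing minus issuances, per year and in total.
over_borrowing = {y: round(b - i, 1) for y, (b, i) in rows.items()}
total_over = round(sum(b - i for b, i in rows.values()), 1)  # 88.9
```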

What’s happening is that each year the FDLP returns less in principal and interest than expected. The Treasury then has to lend ED additional money so that ED can repay the debts it already owes to the Treasury. Catch that? Since ED is short on its IOU to Treasury, it has to come up with the money somewhere, and that somewhere is another IOU from the taxpayer in the form of additional Treasury debt. Outside of Washington, this process would seem comical: it looks a lot like asking a bank for a second loan to pay off the first.

Compounding the issue is that Treasury debt is not interest free. When ED comes up short, the Treasury still has to pay interest on the unpaid debt. That is also true for the new debt, a double whammy for taxpayers who are propping up Treasury’s borrowing with their tax dollars. According to Congressional Budget Office estimates, interest payments on the national debt are expected to hit $227 billion this year alone, more than doubling to $480 billion by 2019 and more than tripling to $722 billion by 2024. Given the sheer volume of federal student loans, and the fact that the government borrows to make those loans, the interest payments resulting from the government’s operation of the FDLP are not inconsequential.

Conclusion

All told, the direct loan program is creating a chronic cash flow shortage within the Department of Education that hits taxpayers in the form of increased deficits and associated Treasury borrowing. This is an extraordinary loss given that the program was originally estimated to save $87 billion. Just as troubling is the lack of transparency around the program’s poor performance. Other than a handful of lines tucked into thousands of pages of budget documents, there is no real accountability for the program’s shortfalls.

Taxpayers should be concerned about the $21.8 billion shortfall, but they should be even more troubled by the nonchalance with which billions of dollars in additional debt are being swept under the rug by the direct loan program and its administrators.


The Federal Housing Finance Agency (FHFA), regulator and conservator of government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac, announced in December 2014 it would allow the GSEs to pay into the Housing Trust Fund and Capital Magnet Fund. This paper briefly provides background on those funds and their purpose, while also outlining the policy implications of FHFA’s decision.

Background

TABLE 1. TIMELINE OF HTF/CMF RELATED EVENTS & RULEMAKINGS

JULY 2008 | HOUSING & ECONOMIC RECOVERY ACT EFFECTIVE
SEPTEMBER 2008 | FANNIE MAE & FREDDIE MAC ENTER CONSERVATORSHIP UNDER FHFA
NOVEMBER 2008 | FHFA SUSPENDS GSE HTF/CMF ALLOCATIONS
MARCH 2009 | RFC: DESIGN & IMPLEMENTATION OF CMF FROM TREASURY DEPT
DECEMBER 2009 | PR: HTF ALLOCATION FORMULA FROM HUD
MARCH 2010 | NPR & RFC: REGULATIONS GOVERNING THE CMF FROM TREASURY DEPT
OCTOBER 2010 | PR: REGULATIONS GOVERNING THE HTF & COORDINATION WITH HOME PROGRAM FROM HUD
DECEMBER 2010 | IFR & RFC: REGULATIONS GOVERNING THE CMF FROM TREASURY DEPT
JULY 2013 | NLIHC ET AL FILE LAWSUIT AGAINST FHFA TO COMPEL PAYMENTS TO HTF
SEPTEMBER 2014 | NLIHC ET AL LAWSUIT DISMISSED FOR LACK OF STANDING
DECEMBER 2014 | FHFA SENDS LETTER TO GSEs REINSTATING CONTRIBUTIONS; IFR & RFC: PROHIBITION ON PASSING ON COST OF ALLOCATIONS FROM FHFA
JANUARY 2015 | IFR: REGULATIONS GOVERNING THE HTF FROM HUD

Note: Notice of Proposed Rulemaking (NPR), Proposed Rule (PR), Interim Final Rule (IFR), & Request for Comment (RFC)

With the Housing and Economic Recovery Act of 2008 (HERA), Congress established the Housing Trust Fund (HTF) and Capital Magnet Fund (CMF), assigning a portion of revenues from Fannie Mae and Freddie Mac as the dedicated funding source. Each GSE must set aside 4.2 basis points of each dollar of unpaid principal balance of its total new business purchases (equivalent to 4.2 cents for every $100) and then allocate those reserved funds following each fiscal year. The funding is divided, with the HTF receiving 65 percent and the CMF receiving 35 percent.
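The set-aside described above is straightforward arithmetic. A minimal sketch, using an illustrative $500 billion in new business purchases (a hypothetical volume, not a figure from the statute):

```python
# HERA set-aside: 4.2 basis points (0.042%) of each dollar of unpaid
# principal balance of new business purchases, split 65/35 between the
# Housing Trust Fund (HTF) and Capital Magnet Fund (CMF).
BASIS_POINTS = 4.2 / 10_000      # 4.2 bp = 0.00042, i.e. 4.2 cents per $100

def hera_set_aside(new_business: float) -> tuple[float, float]:
    """Return (HTF, CMF) allocations for a given volume of new purchases."""
    total = new_business * BASIS_POINTS
    return total * 0.65, total * 0.35

# Hypothetical $500 billion in new business purchases:
htf, cmf = hera_set_aside(500e9)
print(f"HTF: ${htf/1e6:.1f}M, CMF: ${cmf/1e6:.1f}M")
```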

In December 2014, FHFA lifted its suspension of GSE allocations, directing them to set aside funding for the HTF and CMF this year. According to the president’s budget, the GSEs are projected to distribute $120 million and $64 million to the HTF and CMF respectively in fiscal year 2016.[1] Another estimate projects up to $400-500 million would go to the funds based on previous GSE volumes.[2] Congress may also elect to appropriate funds in the future.

The HTF is administered by the Department of Housing and Urban Development (HUD), which developed the formula by which money will be handed out to states and state-level housing agencies to increase and maintain the supply of affordable rental housing and boost homeownership for low-income Americans. The CMF, an account within the Community Development Financial Institutions Fund overseen by the Treasury Department, similarly funds a competitive grant program wherein community development financial institutions and nonprofit housing corporations apply for funding to be used to boost affordable housing projects as part of a larger community stabilization or revitalization strategy.  

Purpose of the HTF and Implementation

Attempts to establish a national housing trust fund date back more than 25 years, a goal ultimately accomplished with HERA.[3][4] The program is meant to provide a stable funding source to increase the supply of affordable housing for low-income Americans. Funds from the HTF can be combined with Low Income Housing Tax Credits (LIHTC), HOME, Choice Neighborhoods funding, Rental Assistance Demonstration, and other programs to accomplish that aim. Specifically, at least 80 percent of money from the HTF must be used for rental housing; up to 10 percent of the funds can be used for homeownership programs and 10 percent to cover recipients’ administrative and planning costs.

In January, HUD issued an interim final rule establishing the regulations that will govern the HTF. In determining state allocations of funds, HUD will weigh several factors:

  • State’s relative shortage of rental housing for extremely low-income (ELI) families[5] (weighted 50 percent)
  • State’s relative shortage of rental housing for very low-income (VLI) families (weighted 12.5 percent)
  • State’s relative number of ELI families in substandard, overcrowded or unaffordable housing (weighted 25 percent)
  • State’s relative number of VLI families in substandard, overcrowded or unaffordable housing (weighted 12.5 percent)
  • Local construction costs  
  • Minimum allocation per state of $3 million
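The first four weighted factors above can be combined into a score, with the statutory $3 million floor applied afterward. A minimal sketch, where the `need` values and normalization are hypothetical simplifications (HUD’s actual formula also incorporates local construction costs and other adjustments):

```python
# Hypothetical sketch of HUD's weighted-need allocation with a $3M floor.
# `need` values are illustrative shares of national need for each factor;
# the real formula also adjusts for local construction costs.
WEIGHTS = {"eli_shortage": 0.50, "vli_shortage": 0.125,
           "eli_substandard": 0.25, "vli_substandard": 0.125}

def state_grant(need: dict, total_funds: float, floor: float = 3e6) -> float:
    """Weighted combination of need factors, scaled to the fund, floored at $3M."""
    share = sum(WEIGHTS[k] * need[k] for k in WEIGHTS)
    return max(share * total_funds, floor)

# A state holding 5% of national need on every factor, with a $375M fund:
print(state_grant({k: 0.05 for k in WEIGHTS}, 375e6))
# A state with negligible need still receives the $3M minimum:
print(state_grant({k: 0.001 for k in WEIGHTS}, 375e6))
```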

State-Level Impacts

As part of its rulemaking, HUD completed a regulatory impact analysis that includes an estimate of how much money each state would receive using its allocation formula should the HTF receive either $375 million or $1 billion (see map below).[6] With the funding formula emphasizing affordability and existing housing stock, states with large populations and higher housing costs generally receive more HTF funding than their share of the U.S. population. 

With $375 million in HTF funding, California would predictably receive the most, totaling $61 million. California and New York, with higher than average housing costs, would stand to receive 16.3 percent and 9.6 percent of a $375 million trust fund allocation despite making up only 11.9 percent and 6.2 percent of the country’s population, respectively.[7] If funding to the HTF increased to $1 billion, California would receive 18.7 percent of HTF funding and New York would receive 11 percent. Many small states such as Wyoming and Vermont would also receive an outsized share of HTF funding relative to their share of the population due to the $3 million minimum, though that advantage would diminish as funding to the HTF increases. In total, 21 states, the District of Columbia, and Puerto Rico would receive the minimum $3 million allocation if the HTF receives $375 million from the GSEs.

In its interim final rule, HUD also made a few significant changes to the regulations governing the HTF from its 2010 proposed rule. HUD reversed course on a proposal to encourage transit-oriented development. HUD will also now allow HTF funds to be used on public housing. And perhaps most notably, if less than $1 billion is allocated to the HTF, all funds must be used for ELI housing so that funding can prioritize worst case needs.

End of Suspended Assessments

Shortly after HERA was passed in 2008, the GSEs entered into conservatorship overseen by FHFA and allocations to the HTF and CMF were suspended. FHFA Director Mel Watt reversed that decision in December 2014, sparking some controversy; in particular, many see FHFA’s action conflicting with mandates under HERA. By statute, FHFA must suspend allocations under a number of scenarios[8], shown below:

Furthermore, FHFA has broader mandates as conservator of the GSEs[9], shown below:

In letters to Fannie Mae and Freddie Mac, FHFA Director Mel Watt reasoned that the temporary suspension was no longer justified because the allocations “would not contribute to the financial instability” of the GSEs, citing their recent dividends to the Treasury Department.[10] He argued the two other provisions allowing the suspension of allocations were also no longer applicable because of changes to the GSE stock purchase agreement that sweep all GSE profits to the Treasury Department. Furthermore, FHFA expects the GSEs to maintain profitability in the near future and reserves the right to reverse the decision if they do not.  

Criticisms and Policy Implications

1. Funds are not subject to the scrutiny and approval inherent in the regular appropriations process.

Advocates of the HTF have acknowledged the importance of gaining funds necessary for its capitalization from outside the regular appropriations process. Sheila Crowley, President and CEO of the National Low Income Housing Coalition, stated at a Senate Banking Committee hearing, “…if we thought we could get the appropriated funds…to solve this problem, then we would not need to have this conversation.”[11] Critics argue that this mentality undermines the integrity of the congressional appropriations process, which forces lawmakers to weigh the tradeoffs of policy priorities in distributing finite budget resources.[12] Furthermore, by relying on funding from outside the appropriations process, Congress forfeits the opportunity to scrutinize the HTF’s effectiveness at meeting stated policy objectives.

2. Government support for affordable housing is redundant with appropriated funding of more than 30 programs.

The federal government supports housing in many different ways—from tax credits and deductions to the more than 30 programs in HUD. In fact, the GAO concluded that the federal government “incurred about $170 billion in obligations for federal assistance and forgone tax revenue in fiscal year 2010” to provide housing aid to homebuyers, renters, and state and local governments.[13] A complex web of existing federal support has raised doubts about the costs, effectiveness, and efficiency by which the federal government boosts housing affordability. The HTF adds further complexity without reforming existing programs that work to accomplish similar aims, such as the HOME Investment Partnerships Program and the LIHTC.

3. FHFA’s decision ignores its statutory obligations and obligations to taxpayers. 

With obligations under the law to “preserve and conserve” GSE assets, suspend allocations that contribute to “financial instability,” and put the GSEs in a “sound and solvent condition” or receivership, some argue that FHFA is ignoring statutory obligations.[14] Additionally, all net income from the GSEs sweeps to the Treasury Department. HTF payments reduce amounts swept to Treasury, which are meant to compensate taxpayers for the risks they take on by supporting the GSEs.[15]

4. Regardless of the decision by FHFA, the unsustainable nature of GSE conservatorship undermines the efficacy of the HTF and discourages housing finance reform.   

The HTF’s aim (and the reasoning behind putting it outside the regular appropriations process) was to establish a stable, permanent source of funding for affordable housing. Yet using the GSEs as the funding source stands in opposition to that aim. The GSEs’ exact future is uncertain, though broad bipartisan and public support exists for their elimination.[16] Relying on the GSEs to fund federal programs additionally adds a disincentive for lawmakers to finally tackle housing finance system reform; growing the GSEs would elicit more money for the HTF and CMF despite increased risks to taxpayers.  

HTF allocations are expected to proceed sometime in 2016 unless FHFA reverses its decision and suspends allocations. Congress could also act to prevent the HTF allocations from moving forward. Legislation has been introduced in the House of Representatives that would apply GSE dividends toward reducing the federal budget deficit instead of funding the HTF.[17]


[2] John Griffith, Enterprise Community Partners, “Fannie and Freddie will soon start funding the Housing Trust Fund—now what?” (December 2014); http://blog.enterprisecommunity.com/2014/12/freddie-funding-housing

[3] See for example: H.R. 918, the Jesse Gray Housing Act of 1987, https://www.congress.gov/bill/100th-congress/house-bill/918; H.R. 4959, the National Housing Trust Act of 1988, https://www.congress.gov/bill/100th-congress/house-bill/4959; H.R. 5275, the Federal Housing Trust Fund Act of 1994, https://www.congress.gov/bill/103rd-congress/house-bill/5275; & S. 2997, the National Housing Trust Fund Act of 2000, https://www.congress.gov/bill/106th-congress/senate-bill/2997

[4] Housing and Economic Recovery Act of 2008 (HERA), Public Law 110-289; http://www.gpo.gov/fdsys/pkg/PLAW-110publ289/pdf/PLAW-110publ289.pdf

[5] Note: extremely low-income (ELI) is defined as 30 percent of area median income while very low-income (VLI) is 50 percent of area median income.

[6] Housing Trust Fund Interim Final Rule – Regulatory Impact Analysis, Dept. of Housing & Urban Development (January 2015); http://www.regulations.gov/#!documentDetail;D=HUD-2010-0101-0095

[7] Author’s Calculation based on U.S. Census Bureau Annual State Population Estimates

[10] FHFA Letters to Fannie Mae and Freddie Mac, Statement on the Housing Trust and Capital Magnet Fund, (December 2014); http://www.fhfa.gov/Media/PublicAffairs/Pages/FHFA-Statement-on-the-Housing-Trust-Fund-and-Capital-Magnet-Fund.aspx

[11] GPO, “Housing Finance Reform: Essential Elements to Provide Affordable Options for Housing,” Senate Committee on Banking, Housing, & Urban Affairs, (November 2013); http://www.gpo.gov/fdsys/pkg/CHRG-113shrg86711/pdf/CHRG-113shrg86711.pdf

[12] See Douglas Holtz-Eakin, “Testimony Before the Senate Banking Committee on Housing Finance Reform,” (November 2013); http://americanactionforum.org/testimony/holtz-eakin-testimony-before-senate-banking-committee-on-housing-finance-re

[13] Government Accountability Office, “Housing Assistance: Opportunities Exist to Increase Collaboration and Consider Consolidation,” (August 2012); http://gao.gov/assets/600/593752.pdf

[14] House Committee on Financial Services, “FHFA Director Delivers Lump of Coal to Every Taxpayer,” (December 2014); http://financialservices.house.gov/news/documentsingle.aspx?DocumentID=398566

[15] Peter Wallison, AEI, “A Test for Mel Watt,” (February 2014); http://www.aei.org/publication/a-test-for-mel-watt

[16] AAF Releases New Poll of Public Attitudes on Fannie Mae, Freddie Mac, and Housing Reform (July 2013); http://americanactionforum.org/survey/aaf-releases-new-poll-of-public-attitudes-on-fannie-mae-freddie-mac-and-hou

[17] H.R. 574, the Pay Back the Taxpayers Act of 2015; http://hdl.loc.gov/loc.uscongress/legislation.114hr574

  • Franchises employed 7.1% of the private sector and created 10.8% of all new private industry jobs in 2014

  • Last year 83.8% of all new automobile dealer jobs were created by franchises

  • In 2014, franchises added 52.5% of all new restaurant jobs

Executive Summary

Recently, policymakers and regulators have enacted an abundance of laws and regulations affecting labor policy, including raising the minimum wage, expanding overtime pay coverage, and attempting to change the legal definition of “joint employer.” Interestingly, franchised businesses seem to bear a considerable amount of the burden from these new rules. One of the key sources of private sector job growth in the nation, franchises employed 7.1 percent of private industry workers in 2014, and created 10.8 percent of all jobs added last year. Moreover, franchises have been central to job creation for automobile dealers, restaurants, and accommodation services since 2012. This report examines the role of franchises in today’s economy and provides an analysis of associated regulations.

Introduction

The franchise business model has been among the most dependable sources of new jobs over the last few years. According to the payroll processing firm ADP, franchises employ more than 8.4 million workers, representing 7.1 percent of private sector workers.[1] While the most common types of franchise businesses are restaurants and fast-food chains, the franchise model has taken hold in several industries: workers are employed by franchise supermarkets, gasoline and auto repair stores, drug stores, real estate services, and even professional, scientific, and technical services. The three industries that most frequently employ the franchise model are automobile dealers, restaurants, and accommodation services.
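As a quick consistency check on the two ADP figures just cited (a back-of-the-envelope sketch, not an ADP statistic):

```python
# Back-of-the-envelope: if 8.4 million franchise workers are 7.1% of all
# private sector workers, the implied private employment base is ~118 million.
franchise_workers = 8.4e6
franchise_share = 0.071

implied_private_employment = franchise_workers / franchise_share
print(f"{implied_private_employment/1e6:.1f} million")  # ~118.3 million
```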

Franchises and Economic Recovery

During the past few years, franchise industries have been a central source of job creation. Looking at the entire economy, ADP data suggest that private sector job creation has not been accelerating. As illustrated in table 1, from 2012 to 2014, private sector employment grew 2.0 percent to 2.2 percent each year. While that growth is steady, the lack of acceleration is worrisome.

Table 1: Job Creation in Franchise Businesses

Year | Total Private Sector Job Growth (y-o-y %) | Franchise Job Growth (y-o-y %) | Portion of New Jobs in Franchise Business (%)
2012 | 2.1 | 2.1 | 6.8
2013 | 2.0 | 3.1 | 10.9
2014 | 2.2 | 3.4 | 10.8

 

In franchise businesses, however, the story is quite different. In the same period, job creation in franchise businesses has grown more rapidly each year. Franchise employment grew 2.1 percent in 2012, 3.1 percent in 2013, and 3.4 percent in 2014. As a result, franchise businesses are responsible for a significant portion of new jobs created. For instance, in 2013 and 2014 franchise businesses added 246,100 and 273,100 jobs, while the entire private sector added 2.2 million and 2.5 million. So while franchises employ 7.1 percent of the private sector, they have created 10.9 percent and 10.8 percent of all new private industry jobs in 2013 and 2014.[2]

Moreover, employment in industries that have many franchise businesses is growing quite rapidly relative to employment in the entire private sector. Comparing total industry employment data from the Bureau of Labor Statistics (BLS)[3] to ADP franchise employment data in automobile dealers, restaurants, and accommodation services highlights how franchises in those industries are driving private sector growth.

Figure 1 illustrates the jobs created by all automobile dealers and just their franchises over the past few years.

At the end of last year, 67.4 percent of those employed by the automobile dealer industry worked for a franchise. BLS data reveal that from 2012 to 2014, that industry grew far more rapidly than the entire private sector. As table 2 reveals, employment in automobile dealers grew 2.5 percent to 3.6 percent each year during that time. Meanwhile, franchise employment within that industry grew even more rapidly, indicating that franchises drove that growth.

Table 2: Job Growth in Automobile & Parts Dealers

Year | Total Private Sector Annual Job Growth (y-o-y %) | Auto & Part Dealer Job Growth (y-o-y %) | Franchise Auto & Part Dealer Job Growth (y-o-y %)
2012 | 2.1 | 2.5 | 3.7
2013 | 2.0 | 3.6 | 2.9
2014 | 2.2 | 3.6 | 4.5

 

ADP data reveal that from 2012 to 2014, franchise employment in automobile dealers grew 2.9 percent to 4.5 percent annually. As a result, last year 83.8 percent of all automobile dealer jobs added were created by franchises.

Figure 2 reveals that the high rate of employment growth in the restaurant industry was driven by job growth in that industry’s franchises.

By the end of 2014, franchises employed 43.3 percent of people in the restaurant industry. As in the case of automobile dealers, employment in the restaurant industry grew far more quickly than in the entire private sector.

Table 3: Job Growth in Restaurants

Year | Total Private Sector Annual Job Growth (y-o-y %) | Restaurant Job Growth (y-o-y %) | Franchise Restaurant Job Growth (y-o-y %)
2012 | 2.1 | 3.8 | 1.7
2013 | 2.0 | 3.6 | 3.4
2014 | 2.2 | 3.0 | 3.6

 

While the annual job growth rate in the restaurant industry was much higher than the private sector as a whole, it decelerated over this period from 3.8 percent in 2012 to 3.0 percent in 2014. However, the data indicate that this trend was not due to franchise restaurants, where the annual job growth rate accelerated from 1.7 percent in 2012 to 3.6 percent in 2014. As a result, in 2014 franchises added 52.5 percent of restaurant jobs.

Finally, figure 3 demonstrates the employment growth that occurred in the accommodation industry.

At the end of last year, 37.5 percent of accommodation employees worked for a franchise. In this industry, the evidence suggests that franchise businesses have perhaps been the only source of stable job growth.

Table 4: Job Growth in Accommodation

Year | Total Private Sector Annual Job Growth (y-o-y %) | Accommodation Job Growth (y-o-y %) | Franchise Accommodation Job Growth (y-o-y %)
2012 | 2.1 | 1.4 | 1.2
2013 | 2.0 | 1.5 | 2.0
2014 | 2.2 | 0.7 | 2.6

 

Job growth in the accommodation industry lagged the rest of the economy, as the industry’s annual rate of job creation decelerated from 1.4 percent in 2012 to 0.7 percent in 2014. What caused this deceleration? It certainly was not accommodation franchises: in that period, the job growth rate in franchises accelerated from 1.2 percent to 2.6 percent. While job creation in accommodation franchises was not necessarily more rapid than in the rest of the economy, those franchises were a bright spot in an otherwise struggling industry. In fact, in 2013 accommodation franchises accounted for 49.2 percent of jobs added in the industry, and in 2014 these franchises created 45 percent more jobs than the entire industry added on net, meaning the rest of the industry lost jobs that year. In this case, the franchise model stood strong in a year the accommodation industry struggled.
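The claim that franchises created more jobs than the whole industry added on net can look odd at first. The arithmetic, sketched with the 45 percent figure from above and a hypothetical net industry gain, shows why it implies losses elsewhere in the industry:

```python
# If franchises added 1.45x the industry's net job gain, the non-franchise
# remainder of the industry must have lost jobs on net.
def non_franchise_change(industry_net: float, franchise_ratio: float = 1.45) -> float:
    """Jobs added (negative = lost) by the rest of the industry."""
    franchise_added = franchise_ratio * industry_net
    return industry_net - franchise_added

# Any positive net industry gain implies a loss for the rest of the industry.
# Hypothetical net gain of 10,000 industry jobs:
print(round(non_franchise_change(10_000)))  # -4500
```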

Jobs and Mobility

Public policymakers have made economic mobility a primary concern over the past year. Indeed, evidence suggests that mobility has not improved over the past several decades. Ideally, policies to improve economic mobility should do so without over-burdening legitimate job creators.

Contrary to the typical American dream, mobility has been lacking in the United States over the past few decades, particularly for those at the extreme ends of the income distribution. According to researchers at the Federal Reserve Bank of Richmond, 33.5 percent of those born in the bottom quintile stay there, while only 7.4 percent reach the top quintile. Meanwhile, 37.8 percent of those born in the top quintile stay there and only 10.9 percent fall to the bottom.[4]

To make matters worse, a 2014 paper in the American Economic Review provides evidence that economic mobility has been stagnant over the past several decades. Specifically, the researchers found that since 1970, the probability that someone born in the bottom quintile reaches the top has been stable at 8 to 9 percent.[5]  While the authors aimed to dispel the myth that mobility has worsened in America, the fact that it has not improved is still worrisome.

According to the researchers at the Federal Reserve Bank of Richmond, for most people to increase the value of their labor income, they must acquire greater skills. They suggest that investment in college and early childhood education could improve the chances that people at the bottom of the income distribution move up the ladder.[6] Another way to build labor skills, however, is simply on-the-job experience, which franchises provide to many employees. Franchise businesses often hire young and less-educated workers, giving them direct experience and skills that can translate into higher incomes in the future. For instance, 28.1 percent of all workers aged 16 to 24 were employed by leisure and hospitality businesses, such as restaurants and hotels, which are frequently franchises.[7] If laws and regulations that burden franchises continue to pile up, however, those opportunities are likely to become less available.

Laws and Regulations Targeting Franchises

In 1986, President Ronald Reagan quipped, “Government’s view of the economy could be summed up in a few short phrases. If it moves, tax it. If it keeps moving, regulate it.” Over the last few years, the franchise model has been substantially affected by new laws and regulations. Franchises are facing a slew of new minimum wage laws, overtime pay rules, and “joint employer” standards. Combined, these pose a risk to an industry that employs 8.4 million Americans and created 38,000 jobs in December. The following regulations pose a unique threat to the franchise business model.

Home Health Care Workers

Recognizing that franchises are a potential source of new union employees and additional legal liability, the federal government initially targeted only specific industries. In 2013, the Department of Labor (DOL) imposed additional wage controls and overtime rules on home health “domestic service” employees, who are routinely employed by third parties and franchises. The rule, designed to protect domestic service employees, will actually “disemploy” thousands of them every year: according to the agency’s own math, the measure will force 1,531 home health care providers out of work annually. The rule also contributes to deadweight losses because consumers “must now pay more to receive the same hours of service.”

Despite the costs, lost employment, and regulatory hurdles, the administration moved forward with the regulation. DOL specifically noted AFSCME and SEIU support for its rulemaking: “Comments from labor organizations, non-profits and civil rights organizations, and worker advocacy groups generally supported the proposal.”

Minimum Wage

Since President Obama proposed increasing the minimum wage in his 2013 State of the Union Address, lawmakers and labor activists have worked to enact laws and rules to raise federal, state, and local minimum wages. These laws hit franchise-heavy industries, such as restaurants and hotels, particularly hard since they frequently employ low-skilled workers. Despite no increase in the federal minimum wage, several states have increased the minimum wage and the president used his executive authority for federal contractors. In 2014, President Obama issued an executive order to raise the minimum wage of all federal government contractors to $10.10 per hour. Meanwhile, 21 states began the year by increasing their minimum wage. Three other states and Washington, DC are slated to increase their minimum wage later this year. While many hope these laws will help low-wage workers, research consistently shows that the exact opposite is true.

Overtime Pay Regulation

In March 2014, President Obama directed DOL to issue new regulations that update existing overtime standards. Under the current rule, all wage and salary workers are entitled to time-and-a-half overtime pay for working over 40 hours per week. However, many workers can be exempt from the rule if they are classified as an executive, administrative, or professional employee. Regulations require that the exempt employee must earn a salary of at least $455 per week and the employee’s duties must match their exemption classification. Under President Obama’s order, however, the DOL will adjust the exemption requirements and likely raise the minimum salary needed to classify an employee as exempt from overtime pay. Intriguingly, the White House specifically cited “a convenience store manager or a fast food shift supervisor,” as beneficiaries of this new rule, many of whom work for a franchise. AAF found, however, that this rule change will likely be a dismal tool to boost earnings among low income workers.
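The mechanics of the current overtime rule can be sketched as follows (a simplified illustration of the time-and-a-half calculation and the $455-per-week salary test, not DOL’s regulatory text):

```python
# Sketch of the overtime rule described above: non-exempt workers get
# time-and-a-half for hours beyond 40 per week. The $455/week salary test
# is one condition of the executive/administrative/professional exemption.
SALARY_TEST = 455.0  # minimum weekly salary for exemption (pre-update rule)

def weekly_pay(hourly_rate: float, hours: float) -> float:
    """Pay for a non-exempt hourly worker under the 40-hour overtime rule."""
    base = hourly_rate * min(hours, 40)
    overtime = hourly_rate * 1.5 * max(hours - 40, 0)
    return base + overtime

# 45 hours at $10/hour: 40 regular hours plus 5 overtime hours
print(weekly_pay(10.0, 45))  # 475.0
```

Raising the salary threshold would move more salaried workers (such as the shift supervisors cited by the White House) into the non-exempt category covered by this calculation.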

NLRB Joint Employer Decision

Among all of the policy changes affecting franchises, none threatens to do more harm to the industry than the National Labor Relations Board’s (NLRB) joint employer decision. Today, NLRB stands ready to fundamentally alter the legal definition of “joint employer” as it applies to franchises, leaving the franchise business model subject to more lawsuits and aggressive union campaigns. Since 1984, NLRB has held a firm to be a joint employer only if it exercised direct control over employees in another business; hiring, firing, and supervision, for example, constitute direct control. That is not the case in the franchise model, where all of these tasks are left to the independent franchisee owner, not the franchisor. Even before the 1980s, NLRB had long held that franchisors are not joint employers because their interactions with franchisees are aimed at preserving the franchise brand, not at telling franchisees how to manage their workers.

However, it seems that the NLRB wants to reverse course. In December, NLRB General Counsel Richard Griffin started taking formal action against franchises when he issued 13 complaints involving 78 labor practice charges against McDonald’s USA and several McDonald’s franchisees, labeling them joint employers.

This regulatory campaign will not end with McDonald’s. NLRB is currently considering a case known as “Browning-Ferris,” which could formally upend the generation-old franchise business model. Thankfully, some courts have already stepped in to halt this movement. In January, U.S. District Court Judge Roger Benitez found that the parent franchisor lacked direct control over the day-to-day operations of the franchisee and that there was no evidence that the franchisor controlled work schedules and direct pay. It was an initial legal success for franchises, but it is unclear if the NLRB will still move to implement the rule.

Conclusion

With labor force participation rates at record lows, there are only a few bright spots in today’s economy. Where growth is strong, as in franchises, job creators are unfortunately uniquely burdened by recent regulatory efforts. The rising cost of doing business and additional legal liability pose a threat, particularly when outdated regulations remain on the books, unamended. It is left to the courts and Congress to stem the tide of new rules.



[2] Figures are authors’ analysis of ADP reported data

[3] Bureau of Labor Statistics, http://www.bls.gov/data/

[4] Kartik Athreya & Jessie Romero, “Land of Opportunity? Economic Mobility in the United States,” Economic Brief, Federal Reserve Bank of Richmond, July 2013, p. 3, https://www.richmondfed.org/publications/research/economic_brief/2013/pdf/eb_13-07.pdf

[5] Raj Chetty, Nathaniel Hendren, Patrick Kline, Emmanuel Saez, & Nicholas Turner, “Is the United States Still a Land of Opportunity? Recent Trends in Intergenerational Mobility,” American Economic Review, Volume 104, Issue 5, pp. 141-147, http://eml.berkeley.edu/~saez/chettyetalAERPP2014.pdf

[6] Federal Reserve Bank of Richmond, pp. 4-5

[7] Authors’ analysis of BLS data available at http://www.bls.gov/news.release/youth.t03.htm

Executive Summary

Medicare Advantage (Part C or MA) and the Medicare Prescription Drug Program (Part D) share the feature that the government pays privately-run plans a monthly fee to provide benefits to enrollees. In both cases, the base monthly fee is determined according to a well-known formula, but there is also a process for adjusting the fees paid to plans based on the health history and status of the beneficiaries actually enrolled in each particular plan. This process, known as “risk adjustment,” is intended to increase payments for enrollees who will cost more to take care of, and decrease payments for healthier enrollees. This has two related benefits: first, it decreases the risk to each plan of attracting a disproportionate number of relatively unhealthy enrollees, and second, it decreases the incentive for plans to attempt to disproportionately attract healthier enrollees.

The risk adjustment methodology for MA takes into account each patient's previous diagnoses, as well as demographic factors. The system is “prospective”—that is, it uses a patient's diagnoses from one year to calculate a risk adjustment factor used for payments for the following year. These risk scores are calculated by statistical analysis of diagnoses and expenditures for “fee-for-service” (FFS) patients. Because MA plans are paid based on diagnoses and FFS Part B providers are paid based on procedures, FFS patients tend to have fewer documented diagnoses. To better align the programs, a uniform “coding intensity adjustment” factor is applied to reduce each MA patient's risk score before payments are calculated.

The basic structure of the risk adjustment process for Part D is the same as for MA. The same diagnoses are used, but instead of using FFS costs, only prescription drug costs are taken into account. In addition, instead of using the entire cost, the risk adjustment includes only those costs for which a Part D plan is liable (that is, copayments and deductibles are excluded). Also, while the MA risk scores are based on non-MA patients, Part D risk scores are based on the prescription claims of Part D enrollees.

In the case of MA’s coding intensity adjustment, the incentives that attend a shift from services-based payment to diagnoses-based payment highlight coding discrepancies between FFS and MA managed plan structures. In an attempt to neutralize these discrepancies, Congress, through the Affordable Care Act (ACA), imposed across-the-board cuts to MA plan risk factors to align them with FFS risk factors, without any evidence to support this move. A better approach would use information about coding differences between MA and FFS beneficiaries to identify a more appropriate risk adjustment factor that rewards plan efficiencies. Without this important payment evidence as a regulatory guideline to inform a better risk adjustment model, the Department of Health and Human Services (HHS) has set up a counterproductive structure that could very well penalize efficient plans caring for sick patients and give others a competitive advantage. The agency would be wise to first “do no harm”: compile the necessary data, and devise a risk adjustment system that accomplishes the important goal of paying appropriately for specific patient populations.

Program Background – Medicare Advantage (MA)

When Medicare was implemented in 1966 it included two main components, “Part A” for hospital costs (funded by a payroll tax), and “Part B” for physician services (initially voluntary, with a subsidized premium). Both were set up using a FFS system, with hospitals (“Part A”) and physicians and other professional providers (“Part B”) being paid specified fees for each service performed for a Medicare beneficiary. Beneficiaries paid a portion of those fees through deductibles and copayments, as well as a subsidized premium for Part B.

The FFS program continues and is often referred to as “traditional” Medicare. It was—and still is—plagued by rapid spending growth, delivery system fragmentation, and insufficient coverage for beneficiaries due to high co-payments and limitations on benefits. In 1982, Congress provided for an alternative, which gave beneficiaries access to private sector coverage options under what was known as the “Risk Contracting Program” (also called “Part C”). It was renamed as “Medicare+Choice” and modified in 1997, then renamed again as “Medicare Advantage” (MA) and further modified in 2003.[1]

Under MA, health plans are paid a fixed amount each month for each enrolled Medicare beneficiary, in exchange for providing at least the benefits offered by Parts A and B of “traditional” Medicare. Typically, plans provide additional benefits not offered in the FFS program, as well as lower copayments and deductibles. Often, they have networks of preferred providers that are smaller than the total universe of FFS providers, but some MA plans include providers who do not participate in FFS.

Any Medicare Part B beneficiary may enroll in any MA plan that covers the beneficiary's county of residence, and MA plans must accept all beneficiaries who wish to enroll. Each year, MA plans submit “bids” specifying their additional benefits, cost-sharing rules, provider networks, and premium for each service area in which they will accept enrollees. The Centers for Medicare and Medicaid Services (CMS) calculates—and publishes—a “benchmark” monthly rate for each county in the United States, using a formula specified by law and linked to average FFS spending on non-MA beneficiaries in that county.[2]  Beneficiaries pay their regular Part B premium, plus any difference between their MA plan’s premium and their county benchmark.[3] Plans can also receive “bonuses” based on CMS evaluations of plan quality.[4] MA plans compete for enrollees by offering lower cost-sharing than FFS Medicare, additional benefits, or both.

It would be natural to expect plan sponsors to try to structure their plans and marketing strategies to disproportionately attract more-healthy enrollees, in order to reduce their costs, and perhaps to discourage less-healthy enrollees for the same reason. In order to reduce plans' incentive to engage in this sort of behavior, and to avoid penalizing those who end up with a relatively less-healthy enrollment pool, CMS applies a “risk adjustment” methodology to adjust payments. That is, instead of simply paying plans the specified benchmark for each enrollee in each county, the payment is actually the benchmark multiplied by a factor that reflects a beneficiary's expected cost based on health history and relevant demographic factors. This methodology is explained below.
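Mechanically, the adjustment described above reduces to a single multiplication of the county benchmark by the enrollee's risk score. A minimal sketch with hypothetical numbers (actual benchmarks and scores come from CMS's published rates and the risk model):

```python
# Hedged sketch of the risk-adjusted MA payment described above:
# payment = county benchmark x enrollee risk score.
# The benchmark and risk scores below are hypothetical, not CMS figures.

def ma_payment(benchmark: float, risk_score: float) -> float:
    """Monthly payment for one enrollee in one county."""
    return benchmark * risk_score

# A healthier-than-average enrollee (score < 1.0) draws less than the
# benchmark; a sicker-than-average enrollee (score > 1.0) draws more.
print(round(ma_payment(800.0, 0.85), 2))  # 680.0
print(round(ma_payment(800.0, 1.40), 2))  # 1120.0
```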

Program Background – Prescription Drugs (Part D)

When Medicare was implemented in 1966, it did not include coverage for outpatient prescription drugs. Eventually, a broad consensus developed that with the increased use of pharmaceutical treatment for both acute and chronic conditions, the absence of drug coverage from Medicare represented a significant shortcoming in the program. The result was the establishment of Medicare Part D as part of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), to be implemented starting in 2006.

Part D was designed such that private plans would offer drug coverage to Medicare beneficiaries subject to minimum benefit requirements while still allowing substantial flexibility in terms of cost-sharing structure and the ability to offer enhanced features. Plan sponsors would submit bids to CMS each year based on their expected costs for providing the benefit, and then a national average of submitted bids would be used to determine the amount of the government subsidy (74.5 percent of the national average bid) as well as the monthly premium paid by beneficiaries (the actual bid minus the government subsidy). With beneficiaries offered a choice among plans, the bidding process would allow plans to compete for enrollment based on benefit offerings and premium. 
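The bid-to-premium arithmetic above can be sketched directly; the 74.5 percent subsidy share is from the text, while the dollar figures are purely illustrative:

```python
# Part D payment arithmetic as described above. The subsidy is 74.5 percent
# of the national average bid; the beneficiary premium is the plan's own bid
# minus that subsidy. Dollar amounts are hypothetical.

SUBSIDY_SHARE = 0.745

def part_d_split(plan_bid: float, national_average_bid: float):
    """Return (government subsidy, beneficiary premium) for one plan."""
    subsidy = SUBSIDY_SHARE * national_average_bid
    premium = plan_bid - subsidy
    return subsidy, premium

# A plan bidding above the national average charges a higher premium.
subsidy, premium = part_d_split(plan_bid=100.0, national_average_bid=92.0)
print(round(subsidy, 2), round(premium, 2))  # 68.54 31.46
```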

As with MA, one might expect insurers to try to structure their plans and marketing strategies to disproportionately attract more-healthy enrollees, in order to reduce their costs. And as with MA, CMS applies a “risk adjustment” methodology to adjust payments to mitigate this problem. That is, instead of simply paying plans the national average bid for each enrollee, the payment is actually that amount multiplied by a factor that reflects a beneficiary's expected prescription drug cost based on health history and relevant demographic factors. While the basic approach is the same as for MA, the adjustment factors for Part D naturally take into account outpatient pharmaceutical costs rather than medical and hospital costs.

Principles of Risk Adjustment

The basic principles of risk adjustment are the same, whether applied to the MA program, the Part D program, or some other similar system. While the average cost of treating patients across the entire population over the course of a year is in theory simple to calculate, that average is not a particularly accurate estimate of the cost of treating any particular patient. Furthermore, when patients choose among plans with different characteristics to best suit their individual circumstances, it is quite likely that the average cost of patients in a particular plan will not be the same as the average cost over the entire population.

Thus, in the context of MA and Part D, it would be inappropriate to pay plans based on population averages without risk adjustment. Doing so would penalize plans that do a good job taking care of—and therefore disproportionately attract—patients with expensive chronic conditions, or higher probabilities of developing expensive acute diseases. Neglecting risk adjustment would also encourage plans to find ways of attracting healthier patient pools and discouraging sicker patients from enrolling. If risk adjustment works as intended, then plans will not suffer from enrolling the patients who need them the most.

Risk Adjustment in the Medicare Advantage Program

Prior to 2000, risk adjustment for MA plans was limited to “demographic factors” such as age, gender, and residence location.[5] From 2003 to 2007, a new system was phased in that takes into account demographic factors and each patient's previous diagnosis codes. The system is “prospective”—that is, it uses a patient's diagnoses from the past year to calculate a risk adjustment factor which is used for payments for the following year. Diagnosis codes are from the International Classification of Diseases, 9th Revision (ICD-9), published by the World Health Organization.[6] Several thousand diagnoses are grouped into 79 Hierarchical Condition Categories (HCCs). Each HCC is assigned a risk score that reflects its relative contribution to health care expenditures, after accounting for age, gender, and residence. These risk scores are calculated by statistical analysis of expenditures for FFS patients in one year, using their diagnoses, condition categories, and demographic factors for the previous year. This produces an estimate of the amount by which a given condition category known in one year can be expected to increase expenditures on a particular patient in the following year.[7]

The risk scores are calculated separately for Medicare beneficiaries in different broad categories. The largest categories are for those living in the “community,” those living in institutions,[8] and new enrollees. There are also separate risk score calculations for beneficiaries with end-stage renal disease (ESRD).

Because the scores are supposed to reflect relative risk, the model is calibrated to ensure that the average risk score across beneficiaries in each subpopulation is 1.0. Because costs change over time—and do not generally change uniformly for different conditions—the model is recalibrated each year. This can cause an individual beneficiary's risk score to change, even if his or her diagnoses are the same.
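A minimal illustration of that calibration, using hypothetical predicted costs: dividing each beneficiary's predicted cost by the subpopulation mean yields relative scores that average exactly 1.0.

```python
# Sketch of relative-risk calibration as described above: risk scores are
# predicted costs scaled so that the subpopulation average is 1.0.
# The predicted costs are hypothetical.

predicted_costs = [4000.0, 9000.0, 12000.0, 7000.0]
mean_cost = sum(predicted_costs) / len(predicted_costs)  # 8000.0

risk_scores = [cost / mean_cost for cost in predicted_costs]
print(risk_scores)                           # [0.5, 1.125, 1.5, 0.875]
print(sum(risk_scores) / len(risk_scores))   # 1.0
```

If next year's costs shift non-uniformly across conditions, recomputing the mean and rescaling changes individual scores even when diagnoses are unchanged, which is the recalibration effect the text describes.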

It is worth noting that the risk scores are based on costs (i.e., claims paid) in the FFS system, even though those scores are used to adjust payments for patients in MA. The plans are not required to use the FFS payment rates or structures; indeed, one of the reasons the MA plans can offer additional benefits is that they can make arrangements with providers that are completely different from the FFS system. The result is the paradoxical fact that MA risk scores are based on the costs of treating those patients who are not enrolled in MA.

Risk Adjustment in the Medicare Prescription Drug Program

The basic structure of the risk adjustment process for Part D is the same as for MA. The same diagnoses are used, but the category groupings and the costs are necessarily different. Instead of using FFS costs, only prescription drug costs are taken into account. In addition, instead of using the entire cost, the risk adjustment includes only those costs for which a Part D plan is liable (that is, copayment and deductibles are excluded). This makes it a “plan liability” risk model, rather than a “drug costs” model as such.

In general, the Part D model will produce different risk scores than the MA model for the same beneficiary since a given set of diagnoses will generally have a different impact on outpatient prescription drug costs than on physician and hospital costs.

Also, while the MA risk scores are based on non-MA patients, Part D risk scores are based on the prescription claims of Part D enrollees. Part D plans are required to report detailed data on beneficiaries' prescription claims, including the plans' cost of filling those prescriptions. CMS is able to match that data with diagnosis records for those same beneficiaries, obtained from Part A and Part B claims.

Risk Adjustment Challenges:  Coding Adjustment

As noted above, the MA risk model is based on diagnoses and expenditures for FFS patients. MA payments, on the other hand, are adjusted based on that risk model applied to the diagnoses of MA enrollees. It would be neither practical nor desirable to base the risk model on expenditures of MA plans to care for MA enrollees. 

One of the features of MA is that plans have an incentive to find efficient ways to care for their patients; that includes finding ways to care for them at lower costs. If MA plans received lower payments for treating patients at lower costs, this would, in effect, be penalizing them for doing their job well. If MA plans received higher payments for treating patients at higher costs, this would be rewarding them for doing their job poorly. More to the point, it would give plans an incentive to artificially increase costs in order to obtain higher payments. It should be clear, therefore, that MA plans should be paid based on the health status of patients who enroll, rather than on the costs they incur treating those patients.

However, there is an additional problem that arises from using FFS expenditure data to calibrate MA payments. Physicians or other Part B providers treating FFS patients are paid based on the specific services they provide to patients. They are not paid for the health status, or diagnoses, of the patient, and have no incentive to record every diagnosis a patient has; it is sufficient to include only those diagnoses necessary to justify the services provided.[9]

On the other hand, MA plans are specifically paid based on their patients' diagnoses. Therefore, there is an incentive to document every diagnosis. Although payments are not changed immediately for new diagnoses, the patient might stay with the same MA plan the following year, and more diagnoses would result in a higher risk score and therefore a higher payment.

The result is that, even for patients with exactly the same actual conditions, a patient in an MA plan is likely to have more documented diagnoses. Because risk scores are necessarily based on documented diagnoses, MA patients will tend to have higher risk scores than identical patients in the FFS program. Because the FFS program is older and more “traditional,” the phenomenon is commonly referred to as “MA up-coding.” Conceptually, however, it is more accurate to think of it as FFS “down-coding,” since it more likely results from a lack of documentation for existing diagnoses of FFS patients than from spurious diagnoses of MA patients.

This phenomenon does not reflect any dishonesty on the part of providers or MA plans. An FFS Part B provider who is paid on the basis of services performed has an incentive to fully document all services performed, but not necessarily all diagnoses a patient has; an MA plan paid on the basis of diagnoses has an incentive to fully document all diagnoses, but not necessarily all services performed. 

CMS dealt with the “up-coding” issue by applying a uniform “coding intensity adjustment” factor to all risk scores. In 2010, the adjustment was 3.41 percent, meaning that all MA enrollee risk scores were reduced by 3.41 percent before being used to calculate payments.

This is a brute-force approach that does not, for example, take into account that over- and under-diagnosis might be more prevalent in some disease categories than in others, or in some demographic categories than in others. Just as FFS utilization varies across regions, coding intensity differentials might vary geographically as well; it is likely, for example, that the coding intensity difference is smaller in regions where FFS utilization is higher. Also, because hospitals (Part A providers) are paid in part on the basis of diagnoses, it is likely that there is less “up-coding” for diseases that often require hospital care than for diseases that are primarily treated on an outpatient basis under Part B.
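Whatever its shortcomings, the mechanics of the uniform adjustment are a one-line calculation; the 3.41 percent figure is the 2010 value cited above, and the example risk score is hypothetical:

```python
# The 2010 coding intensity adjustment reduced every MA risk score by a
# uniform 3.41 percent before payment, regardless of disease category,
# demographics, or region. The example score is hypothetical.

CODING_INTENSITY_ADJUSTMENT = 0.0341  # 3.41 percent, the 2010 value

def adjusted_risk_score(raw_score: float) -> float:
    return raw_score * (1 - CODING_INTENSITY_ADJUSTMENT)

print(round(adjusted_risk_score(1.20), 5))  # 1.15908
```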

Complications in the ACA

The ACA imposed an even more brute-force adjustment for future years, by requiring that the MA coding adjustment factor be increased by at least 1.3 percentage points in 2014, and by at least an additional 0.25 percentage points in each subsequent year until 2018, and remain no less than 5.7 percent in 2019 and thereafter.[10] This schedule is not based on any data demonstrating that up-coding will increase at that rate in future years.
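Under the stated schedule, the statutory floor can be computed year by year. One assumption made for illustration: the 1.3-point increase in 2014 is taken relative to the 3.41 percent adjustment applied in 2010.

```python
# Statutory minimums for the MA coding intensity adjustment under the ACA,
# as described above: at least +1.3 points in 2014, at least +0.25 points per
# year through 2018, and no less than 5.7 percent from 2019 onward. Treating
# the 2010 level of 3.41 percent as the baseline is an assumption for
# illustration.

BASE_2010 = 3.41  # percent

def minimum_adjustment(year: int) -> float:
    if year < 2014:
        return BASE_2010
    capped_year = min(year, 2018)
    floor = BASE_2010 + 1.3 + 0.25 * (capped_year - 2014)
    if year >= 2019:
        floor = max(floor, 5.7)
    return floor

for y in range(2014, 2020):
    print(y, round(minimum_adjustment(y), 2))
```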

In 2014, CMS changed the HCC categorization, combining, splitting, and changing categories. The net effect was that average risk scores were lowered. CMS states that some HCC changes were made to “address MA coding intensity.” It also proposed using an across-the-board adjustment factor to prevent the change in average risk scores from reducing average MA plan payments in that year. The Medicare Payment Advisory Commission (MedPAC), in official comments, noted the presence of the existing brute-force coding intensity adjustment in the ACA and argued that the risk adjustment model should not be used for coding intensity adjustment as well.

Furthermore, the law now specifies that these coding adjustment factors be used “until the Secretary implements risk adjustment using Medicare Advantage diagnostic, cost, and use data.” As noted above, implementing MA risk adjustment using MA cost and use data is counterproductive to the goals of the MA program, and is likely to result in penalizing efficient plans, rewarding inefficient plans, and providing incentives for MA plans to increase costs.

A Better Approach

A solution would be to actually measure the differences in coding intensity. If this is not possible for the entire Medicare population, it is at least possible to measure differences in coding for Medicare beneficiaries who move in or out of the MA program, by comparing their diagnoses in consecutive years. This would not be a perfect solution, since changing diagnoses might motivate someone to move in or out of an MA plan. One way to check this would be to examine the differences in coding intensity changes between people who switch from MA to FFS because their MA plan exited their area, compared to those who switch from MA to FFS voluntarily, without any significant change in their MA plan.
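One way to make the switcher comparison concrete: for beneficiaries observed in both settings in consecutive years, the ratio of documented risk while in MA to documented risk while in FFS estimates the coding intensity differential. A hedged sketch; the function and scores below are purely illustrative, not an actual CMS method:

```python
# Illustrative estimate of coding intensity from program switchers: compare
# each beneficiary's documented risk score in an MA year against the same
# beneficiary's score in a FFS year. All data here are hypothetical.

def coding_intensity_ratio(pairs):
    """pairs: (score_in_MA_year, score_in_FFS_year) per switcher."""
    ma_total = sum(ma for ma, _ in pairs)
    ffs_total = sum(ffs for _, ffs in pairs)
    return ma_total / ffs_total

switchers = [(1.10, 1.00), (0.95, 0.90), (1.40, 1.30)]
print(round(coding_intensity_ratio(switchers), 3))  # 1.078
```

Comparing this ratio for involuntary switchers (whose plan exited) against voluntary switchers, as the text suggests, would help separate a genuine coding differential from selection effects.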

While this approach would not be perfect, it would certainly be better than a brute-force multiplier applied across all diagnoses and all demographics based on a legislative schedule rather than any knowable difference in actual coding intensity. It would also be better than the proposed alternative of penalizing efficient plans, rewarding inefficient plans, and providing incentives for MA plans to increase costs.


[1] For more background on the Medicare Advantage program, see Robert A. Book and James C. Capretta, “Reductions in Medicare Advantage Payments: The Impact on Seniors by Region,” Heritage Foundation Backgrounder No. 2464, September 14, 2010, pp. 2-3, at http://report.heritage.org/bg2464.

[2] See Book and Capretta, op. cit., pp. 4-5.

[3] In the event that a plan bids less than the benchmark, the plan must “rebate” 25 percent of the difference to the government, and 75 percent to the beneficiaries in the form of additional benefits, lower copays, and/or a rebate of a portion of the Part B premium.

[4] Douglas Holtz-Eakin, Robert A. Book, and Michael Ramlet, “Medicare Advantage Star Ratings: Detaching Pay from Performance,” American Action Forum, at http://americanactionforum.org/sites/default/files/Medicare%20Star%20Ratings%20Detaching%20Pay%20from%20Performance.pdf.

[5] It is well established that FFS expenditures are correlated with the geographic location of beneficiaries, although the reasons for this are not well understood. See, for example, John E. Wennberg, Elliot S. Fisher, and Jonathan S. Skinner, “Geography and the Debate over Medicare Reform,” Health Affairs Web Exclusive, February 13, 2002, at http://content.healthaffairs.org/cgi/content/full/hlthaff.w2.96v1.

[6] The 10th Revision, known as ICD-10, has been published, and CMS plans to transition to ICD-10 in the near future.

[7] For more details on the model calculation, see CMS, Medicare Managed Care Manual, Chapter 7, “Risk Adjustment,” at http://www.cms.gov/Regulations-and-Guidance/Guidance/Manuals/downloads/mc86c07.pdf.

[8] Although Medicare does not generally cover the cost of long-term institutional (e.g., nursing home) care as such, it does cover Part A and Part B services needed by Medicare-eligible individuals who are also receiving long-term care.

[9] While a diagnosis does not directly affect payment in the FFS program, it might be required in the case of an audit of claims filed by the provider.

[10] Section 1102(e) of the HCERA, replacing section 3203(e) of the PPACA, amending section 1853(a)(1)(C)(ii) of the Social Security Act.


  • The House introduced a bill, aptly named H.R. 529, which would amend the Internal Revenue Code to improve 529 college savings plans.

  • There are two types of 529 plans: pre-paid tuition plans and college savings plans.

  • 529 savings plans ensure affordability without the burdens of student loan debt.

As a response to the recent attempts to eliminate the tax-free treatment of the popular 529 College Savings Plans, the U.S. House of Representatives has taken a different approach: moving legislation to strengthen them. The House introduced a bill, aptly named H.R. 529, which would amend the Internal Revenue Code to improve 529 college savings plans.

529 College Savings Plans

Named after Section 529 of the Internal Revenue Code, a 529 college savings plan is a state or educational institution sponsored savings plan designed to encourage saving for future college costs. There are two types of 529 plans: pre-paid tuition plans and college savings plans. Prepaid plans let you lock in tuition at today’s rate, and pre-pay all or part of the costs of an in-state public college education. They may also be converted for use at private and out-of-state colleges. Savings plans work much like a 401K or IRA by investing contributions in mutual funds or similar investments. In general, earnings from both plans are not subject to federal tax, and in most cases, state tax, as long as withdrawals are for eligible college expenses, such as tuition and room and board. Money withdrawn from 529 plans and not used for eligible expenses is subject to income taxes, as well as a 10 percent federal tax penalty.
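The non-qualified withdrawal treatment described above is simple to sketch. One assumption made for illustration: the tax and penalty are applied to the earnings portion of the withdrawal, at a hypothetical marginal rate.

```python
# Sketch of the tax treatment of a non-qualified 529 withdrawal described
# above: ordinary income tax plus a 10 percent federal penalty. Applying
# both to the earnings portion, and the 25 percent marginal rate used below,
# are illustrative assumptions.

PENALTY_RATE = 0.10

def nonqualified_cost(earnings: float, marginal_rate: float) -> float:
    """Total income tax plus penalty owed on non-qualified earnings."""
    return earnings * marginal_rate + earnings * PENALTY_RATE

print(round(nonqualified_cost(2000.0, 0.25), 2))  # 700.0
```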

Over the last fourteen years, investments in these plans have grown from $8 billion to nearly $227 billion (as reflected below). In 2013, $14 billion was withdrawn from some 1.6 million plans to pay for costs related to higher education. Estimates have shown that 70 percent of account holders earn less than $150,000, with another 10 percent having incomes below $50,000. On average, families contribute $175 a month, amassing roughly $19,700 in their accounts.

Total Assets in State-Sponsored Section 529 College Savings Plans in 2013 Dollars (in Billions), 1999 to 2013

House Proposal

The legislation (H.R. 529) would expand, modernize, and strengthen tax-free 529 college savings plans. Specifically, as explained by Rep. Lynn Jenkins, the bill's sponsor, the bill:

  • Expands qualified expenses for 529 account funds;
  • Eliminates unnecessary paperwork regarding aggregation requirements; and
  • Removes the penalty and taxes associated with re-deposit of refunds from colleges, provided that the re-deposit occurs within 60 days of the student withdrawing from the college.

These are modest changes to a program that America’s middle class has clearly embraced as a simple, straightforward way to invest in their children’s future education costs. Additionally, improving the savings plans further reinforces the policy objective of ensuring that students use a savings-based, rather than a loan-based, method of financing higher education.

Conclusion

Policymakers should consider ways to make college more affordable for all students, middle- and low-income alike. Historically, however, a basic role for the federal government in higher education has been to ensure access and affordability so that more low-income students can enroll in post-secondary education. Over the last 30 years, as the cost of higher education has spiraled upwards, the need for policies assisting the middle class with affordability has led to a myriad of tax incentives. In recent years, participation in 529 plans has grown considerably, incentivizing middle-class families to plan and save for college costs. For most families, the tax treatment of 529s is common sense. 529 savings plans ensure affordability without the burdens of student loan debt, and they are an example of a policy that is worth strengthening.

In the coming days, Congress is expected to consider the REINS Act (Regulations from the Executive in Need of Scrutiny). Broadly, the bill would allow Congress to approve or disapprove “major” regulations before they take effect. According to American Action Forum (AAF) research, if adopted, the REINS Act could save more than $27 billion in annual regulatory costs and 11.5 million paperwork burden hours.

Methodology

AAF reviewed all recent proposed rules by their annual regulatory costs and paperwork burden hours. All of the rules discussed have annual costs exceeding $100 million or impose more than one million paperwork burden hours. If REINS passed, Congress would likely scrutinize every major rule, but the following regulations have high cost burdens and national effects. Although there are dozens of major rules issued annually, AAF’s sample contains rules still in their proposed form and likely eligible for a resolution of disapproval vote once finalized.

Findings

The following rules are strong candidates for Congress to consider during disapproval votes. Under the proposed law, Congress would have the freedom to approve or disapprove all major regulations; if Congress disapproved these regulations, it could save Americans $27 billion in annual costs and 11.5 million paperwork burden hours.

Possible REINS Act Regulations

Rule                                          Cost (in millions $)   Paperwork Hours
EPA’s Ozone Standards                         $15,000                339,930
EPA Standards for Power Plants                $8,880                 316,217
CFPB’s Home Mortgage Proposal                 $527                   90,000
DOE’s Efficiency Standards for A/Cs           $430
DOE’s Efficiency Standards for Dishwashers    $413
FDA’s Imported Food Proposal                  $397                   2,917,603
FDA Standards for Produce                     $386                   1,197,369
FDA’s Preventive Controls for Human Food      $371                   74,692
DOT’s Tank Car Standards                      $291                   93,808
SEC’s Standards for Clearing Agencies         $225                   14,124
EPA’s Definition of Waters of the U.S.        $166
EPA’s Agricultural Worker Standards           $73                    6,540,862
Totals                                        $27,080                11,584,605
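The paperwork-hour total in the table can be checked by summing the reported figures; rules without a reported burden are treated as zero:

```python
# Sum of reported paperwork burden hours for the proposed rules listed
# above (rules with no reported hours are counted as zero).

paperwork_hours = [
    339_930,    # EPA ozone standards
    316_217,    # EPA power plant standards
    90_000,     # CFPB home mortgage proposal
    2_917_603,  # FDA imported food proposal
    1_197_369,  # FDA produce standards
    74_692,     # FDA preventive controls for human food
    93_808,     # DOT tank car standards
    14_124,     # SEC clearing agency standards
    6_540_862,  # EPA agricultural worker standards
]
print(sum(paperwork_hours))  # 11584605
```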


Many of these measures are familiar to Members of Congress and relevant committees. The revised ozone proposal could be the most expensive regulation the White House has approved in a generation. New greenhouse gas standards for power plants could raise electricity prices by more than six percent and eliminate 42,000 jobs, and that’s according to EPA’s math. From higher energy prices to more expensive consumer goods, there are plenty of regulations that Congress could examine if REINS passes.

However, those twelve rules are still just a fraction of the major rules issued annually. Last year alone, the administration published 79 major rules. In 2010, the Obama Administration set a modern record by finalizing 100 major rules. The graph below displays the number of major rules issued during the last ten years (based on Government Accountability Office data).

During the last ten years, regulators have issued 755 major rules, or measures that have an annual effect on the economy of $100 million or more. Yet, given the pace of rulemaking, Congress has scrutinized just a fraction of these regulations and held disapproval votes on an even smaller proportion. In practice, REINS wouldn’t disapprove of all major rules, but it is likely that a handful of regulations, perhaps five to ten a year, could receive significant debate.

State Impact

For a local perspective, the map below details how the possible costs of the twelve rules would affect states. By examining which industries are impacted by the legislation, and using Census data on the geographic distribution of industry establishments, AAF is able to approximate which states would be most affected.

Conclusion

As regulators continue to publish dozens of major rules every year, Congress is naturally motivated to enhance its oversight. The REINS Act attempts to claw back some of the power that Congress has delegated to the executive branch over the generations. If exercised aggressively, REINS could save more than $27 billion in regulatory costs and 11.5 million paperwork burden hours.

On December 26, 2013, the president signed into law the Bipartisan Budget Act (BBA), which established discretionary spending levels and enforcement provisions for FY2014 and FY2015. This was designed to add certainty to the appropriations process and avoid government shutdowns. The Act raised overall discretionary spending by a combined $63 billion above the lowered caps for FY2014 ($45 billion) and FY2015 ($18 billion). The BBA set defense caps for FY2014 and FY2015 at $520.5 billion and $521.3 billion, respectively, with allowances for adjustments to accommodate additional funding for overseas contingency operations (OCO) and emergencies. For both FY2014 and FY2015, appropriations have hewed to the caps established under the BBA (Table 1).

This modest relief from cuts to defense spending acknowledges that the “post-sequestration” defense caps do not reflect a defense policy or national strategy, but rather the political failure of Congress to find deficit savings. Indeed, the downwardly revised defense caps follow $487 billion in existing reductions to future defense spending. Accordingly, the current defense spending path represents an inadequate resourcing of national security needs, a challenge reflected in testimony from senior military leaders, the president’s budget, and congressional action.

Table 1: Status of Defense Spending Limits and Appropriations[1]

Billions ($)      FY2014    FY2015
Defense Cap       $520.5    $521.3
OCO               $85.4     $64.5
Emergency         $0.225    $0.112
Adjusted Caps     $606.3    $585.8
Appropriations    $606.3    $585.8

The Outlook for Defense Spending for FY2016 and Beyond

The relief provided by the BBA from the downwardly revised defense caps ends for FY2016 and thereafter. For FY2016, the original Budget Control Act (BCA) cap was set at $577 billion. This was revised down to an estimated $523 billion, essentially representing a flat-lining of base defense spending for four consecutive years. The spending caps grow, albeit modestly, through 2021 (Table 2).

Table 2: Estimated Original and Revised Defense Discretionary Funding Caps[2]

Billions ($)          FY2016   FY2017   FY2018   FY2019   FY2020   FY2021
BCA Cap               577      590      603      616      630      644
Spending Reductions   -54      -54      -54      -54      -54      -54
Revised Caps          523      536      549      562      576      590

Compared to the original BCA caps, the downward revision on the defense discretionary spending caps owing to the failure of the “Super Committee” amounts to $324 billion over the next 6 fiscal years.  The president’s FY2015 budget has proposed to partially relieve this reduction and has proposed an “Opportunity, Growth, and Security Initiative” that would increase the cap by $157 billion through FY2021 (Table 3).
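The cap arithmetic described above can be checked directly; a minimal sketch using the figures from Tables 2 and 3 (all values in billions of dollars):

```python
# Sanity-check the BCA cap arithmetic: the revised caps are the original
# BCA caps less a $54 billion reduction applied each year through FY2021.
bca_caps = {2016: 577, 2017: 590, 2018: 603, 2019: 616, 2020: 630, 2021: 644}
ANNUAL_REDUCTION = 54  # post-"Super Committee" cut applied to each year

revised_caps = {fy: cap - ANNUAL_REDUCTION for fy, cap in bca_caps.items()}
total_reduction = sum(bca_caps.values()) - sum(revised_caps.values())

# The president's proposed cap increases from Table 3.
initiative = {2016: 38, 2017: 33, 2018: 29, 2019: 24, 2020: 19, 2021: 14}

print(revised_caps[2016])        # 523
print(total_reduction)           # 324 -- the $324 billion cited above
print(sum(initiative.values()))  # 157 -- the initiative's total increase
```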

Table 3: Revised Discretionary Caps vs. the President's Budget[3]

Billions ($)                                    FY2016   FY2017   FY2018   FY2019   FY2020   FY2021
Revised Caps                                    523      536      549      562      576      590
Opportunity, Growth, and Security Initiative    38       33       29       24       19       14
President's Request                             561      569      578      586      595      604

 

Possible Policy Approaches to Defense Spending

Under current law, defense funding for FY2016, excluding OCO, will be capped at $523 billion. Absent a change to the BCA caps, even if Congress appropriates more funding, OMB would have to cancel any budget authority appropriated above the cap. As a result, any relief from the revised caps must involve a change to the BCA itself, as was required with the Bipartisan Budget Act. Accordingly, relief from the cap would require several steps.

The first opportunity to provide funding above the cap would be during the formulation of, and deliberation on, the budget resolution. In formulating a budget resolution, Congress agrees to discretionary spending levels, which are then enforceable through points of order that can be waived only with super-majorities.[4] Should Congress seek to provide funding for defense above the revised caps, it could include that level in the budget resolution. The advantage of this approach would be to cement bicameral agreement, to the extent one exists, on higher defense funding well in advance of debate on the defense appropriations measure. Moreover, if a budget resolution assumed defense funding at levels consistent with the cap, a subsequent defense appropriations bill that violated the level in the budget resolution could be subject to a point of order, which would require 60 votes to waive. There are drawbacks to this approach. First, a budget resolution is often a partisan document and rarely receives the support of the minority; accordingly, it is not a meaningful vehicle for developing bipartisan agreement. Also, the president does not sign a budget resolution, so it is not an appropriate vehicle for engaging the executive branch. Lastly, funding national defense above the discretionary spending caps will ultimately require two changes in law – a funding bill and a change to the BCA – and both will face 60-vote thresholds during deliberation in the Senate. Accordingly, shielding a defense appropriations bill from a budget point of order is less meaningful.

A key element of many budget resolutions is the inclusion of reconciliation instructions. Reconciliation is an optional legislative process that allows for the filibuster-proof consideration of legislation that achieves budget outcomes specified by the budget resolution. Special rules, however, limit the type of legislation that may be passed through reconciliation. Key among these limitations is the “Byrd Rule,” which, among other restrictions, precludes inclusion in reconciliation bills of any provision that does not change spending or revenues. Changes to the spending caps would not themselves change spending or revenues, and thus could not be made through reconciliation. The spending caps have no direct effect on spending levels; rather, the appropriations acts that adhere to them do. Accordingly, the cuts to defense currently embedded in the spending limits cannot be altered through reconciliation. However, to the extent that any increase in the defense spending cap would have to be offset, the offsets could be achieved through the reconciliation process.[5]

The second key step in providing for relief under the current spending caps would be the passage of the appropriations bill itself. This measure would have to reflect bicameral and bipartisan agreement in Congress, and be signed by the president. Accordingly, how the caps are adjusted matters a great deal. Any increase in the caps and the associated increase in defense spending would likely have to be offset through mandatory spending. This reflects several important dynamics. First, the Republican Congress has no appetite for tax increases, and any such measures would be unlikely to receive any support. Second, non-defense discretionary spending has been cut as deeply as defense spending, and Congress, Democrats in particular, would be unwilling to finance higher defense spending through reductions in non-defense spending. Indeed, if the Bipartisan Budget Act is any indication, any increase in the caps on defense spending would likely be paired with an increase in the non-defense cap. That dynamic leaves mandatory spending as the only available source of potential offsets.

The president has proposed a $38 billion increase in defense funding relative to the current cap. He has proposed an equal increase in non-defense spending, for a combined increase of $76 billion in discretionary spending for FY2016. If Congress must offset this increase, it would have to find an equal amount of mandatory savings that Democrats and the president would be willing to endorse. One initial possibility is an extension of the automatic spending reductions affecting mandatory spending for an additional two years. The Bipartisan Budget Act extended these reductions for two years, until 2023, and raised $28 billion. Congress could consider extending these reductions an additional two years, raising about $30 billion or more. Additional savings can and should be found within the defense budget itself, particularly related to health, retirement, and compensation, as well as essential reforms to how the federal government procures major weapon systems and technologies. The most recent Future Years Defense Program, for example, called for redirecting $31 billion in personnel savings and devoting them to other priorities. This could include the areas of readiness harmed by the discretionary cuts. The president’s budget, while replete with proposals anathema to Republicans, does include some savings that could be redirected to offset an increase in discretionary spending for FY2016. Indeed, the president proposed $27.7 billion in non-tax offsets for the Opportunity, Growth, and Security Initiative.[6] Combined with other reforms, these could more than offset a paired increase in the discretionary caps for FY2016. These savings could be enacted through a reconciliation bill.
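As a rough tally, the candidate offsets named above can be compared against the proposed increase; the grouping below is illustrative, not a scored package (figures in billions of dollars, drawn from this paper):

```python
# Compare the proposed FY2016 discretionary increase to the candidate
# offsets discussed in this paper (all figures in $billions).
defense_increase = 38
non_defense_increase = 38
needed_offsets = defense_increase + non_defense_increase  # 76

# Illustrative grouping of the savings ideas named above.
candidate_offsets = {
    "extend mandatory sequester two more years": 30,   # "about $30 billion or more"
    "president's non-tax OGSI offsets": 27.7,
    "FYDP personnel savings redirection": 31,
}

print(needed_offsets)                                     # 76
print(sum(candidate_offsets.values()) > needed_offsets)   # True: 88.7 > 76
```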

Having agreed on the right funding level and the appropriate offsets, the last step to relieving the defense cap for FY2016 would be to amend the BCA to accommodate the higher level of spending. This would require a change in law that would again have to gain approval in the House, the Senate, and the White House. Accordingly, it would likely be included in the funding measure itself, as was the case with the Bipartisan Budget Act.

Appendix: Recent Defense Funding History and The Evolution of the Budget Control Act, American Taxpayer Relief Act, and Sequestration

On August 2, 2011, the president signed the Budget Control Act of 2011 (BCA, P.L. 112-25). The BCA re-imposed a regime of discretionary spending caps that had previously been in place through the 1990s and lapsed in 2002. The BCA provided an enforcement mechanism – an across-the-board rescission of budget authority, a sequester – in the event that those caps were breached.

The Budget Control Act also imposed an additional mechanism, variously referred to as the “Joint Committee Reductions,” “Automatic Spending Reductions,” or more colloquially just the “sequester,” which was designed to reduce the deficit by $1.2 trillion (including interest costs) over and above the reductions imposed by the BCA discretionary caps. This mechanism was designed to come into force if the Joint Select Committee on Deficit Reduction, also referred to as the Super Committee, failed to produce a plan that reduced the deficit by an equivalent amount. The Committee failed to do so, thus triggering the automatic enforcement provisions of the BCA through FY2021. The Act required OMB to issue a sequestration order on January 2, 2013 cancelling $109 billion in budget authority, split evenly between defense and non-defense categories, already enacted for FY2013 – a true sequester of budget authority already in place.[7] For FY2014-2021, discretionary savings result from spending caps lowered about $90 billion per year below the original BCA discretionary caps. Remaining savings come from cancelation of mandatory budget authority through an annual sequestration order.[8]

On January 2, 2013 (effective for January 1), the president signed the American Taxpayer Relief Act (ATRA) of 2012, which addressed a number of major expiring provisions contributing to what was referred to as the “fiscal cliff.” Among these provisions was the sequestration set to take place on January 2. The Act delayed these cuts by two months, pushing the order to March 1, 2013. It also reduced the amount to be sequestered to $85 billion, again split evenly between defense and non-defense funding.[9]

OMB issued a sequester order pursuant to ATRA on March 1, 2013, and cancelled $85 billion in enacted budgetary resources for the balance of the fiscal year. The order reduced defense funding by $42.6 billion, non-defense discretionary funding by $25.8 billion, and non-defense mandatory spending by $16.9 billion.[10]
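The composition of that order can be summed to confirm it roughly matches the $85 billion required under ATRA; the small excess reflects rounding in the published category figures:

```python
# Components of the March 1, 2013 sequester order (in $billions),
# as cited in this paper from OMB's report.
defense = 42.6
non_defense_discretionary = 25.8
non_defense_mandatory = 16.9

total = defense + non_defense_discretionary + non_defense_mandatory
print(round(total, 1))  # 85.3 -- roughly the $85 billion required by ATRA
```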

Federal funding faced a troubled road in the remainder of 2013, including a partial government shutdown beginning October 1, 2013 owing to the failure of the House and Senate to agree on discretionary spending levels and other policy matters. On October 16, Congress passed a continuing resolution through January 15, 2014 that provided $986 billion in overall discretionary budget authority on an annualized basis for FY2014, essentially extending FY2013 post-sequester defense spending at $518 billion on an annualized basis. However, at this funding level and in the absence of a change to the BCA, defense spending would have faced a $20 billion sequester in January, because the lowered FY2014 spending cap for defense was $498.1 billion.[11] The enactment of the Bipartisan Budget Act then set the defense spending limits in place for FY2014 and FY2015.

 




  • FHA’s reduction in annual premiums will expand the government’s role in housing and cast doubt on the agency’s fiscal recovery.
  • Reducing annual premiums gives FHA a pricing advantage over private companies for many low-downpayment loans, particularly those to borrowers with FICO scores under 720.
  • Despite receiving more than $4 billion from legal settlements and the Treasury Department, FHA’s capital reserves remain below their congressionally mandated level, and the recent premium announcement will slow their recovery.

In laying out his Administration’s agenda for 2015, President Obama announced in early January that the Department of Housing & Urban Development (HUD) would reduce Federal Housing Administration (FHA) mortgage insurance premiums from 1.35 percent to 0.85 percent.[1] While some have labeled the announcement “welcome news for prospective FHA borrowers,”[2] the costs merit further exploration: HUD’s decision will certainly affect FHA’s finances and lead to an expanded government role in the mortgage market.

Market-Shifting Implications

The outsized role the government plays in housing continues to be a primary bipartisan concern, and lowering FHA premiums will exacerbate the problem. The effort will likely preserve or even expand FHA’s market share by making its mortgage insurance cheaper for prospective borrowers than what private companies offer. In Table 1, boxes are shaded red if FHA-insured mortgages produce lower monthly payments than conventional mortgages (private mortgage insurance with a GSE[3] guarantee); conversely, they are shaded green if conventional mortgages would have lower monthly payments than FHA.

The 50 basis point (bp) reduction in premiums allows FHA to undercut conventional pricing for most low-downpayment (i.e. high loan-to-value or LTV) mortgages and those for borrowers with low FICO scores, loan characteristics generally acknowledged to have greater inherent risk. With the reduction, FHA dominates mortgages with FICOs below 680 and narrows the gap for FICOs between 680 and 720, regardless of LTV. Though other factors play into choosing whether to opt for private mortgage insurance or FHA, pricing premiums lower across the board makes FHA a cost effective choice for many more borrowers.
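The pricing math behind this comparison is straightforward; a minimal sketch under simplifying assumptions (the $200,000 balance is hypothetical, and the premium is computed on a constant balance rather than FHA’s average-outstanding-balance method):

```python
# Illustrative monthly cost of FHA's annual MIP before and after the cut.
# Simplification: the premium is applied to a fixed hypothetical balance;
# FHA actually assesses it on the average outstanding balance each year.
balance = 200_000          # hypothetical loan balance (our assumption)
old_rate = 0.0135          # 1.35% annual MIP (pre-reduction)
new_rate = 0.0085          # 0.85% annual MIP (post-reduction)

old_monthly = balance * old_rate / 12
new_monthly = balance * new_rate / 12

print(round(old_monthly, 2))                # 225.0
print(round(new_monthly, 2))                # 141.67
print(round(old_monthly - new_monthly, 2))  # 83.33 saved per month
```

The 50 bp cut lowers the insurance component of the monthly payment by roughly a third, which is the margin that lets FHA undercut private insurers on many high-LTV, low-FICO loans.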

Adding to the likelihood that FHA captures business that might otherwise go to private mortgage insurers, the implementation window for the premium reduction was curiously short compared to prior premium changes, giving companies little time to adjust. Shown in Table 2, the announced 50 bp decrease in annual premiums becomes effective January 26, only nine business days after FHA released a mortgagee letter instructing lenders on the change. Earlier premium changes gave mortgage market participants anywhere from 22 to 82 business days between announcement and effective dates.

TABLE 2. FHA PREMIUM CHANGES SINCE 2010

CHANGE                                              MORTGAGEE LETTER   ANNOUNCEMENT DATE    EFFECTIVE DATE     DAYS TO IMPLEMENT*
50 BP UFMIP INCREASE                                2010-02            JANUARY 21, 2010     APRIL 5, 2010      50
125 BP UFMIP DECREASE & 35 BP ANNUAL MIP INCREASE   2010-28            SEPTEMBER 1, 2010    OCTOBER 4, 2010    22
25 BP ANNUAL MIP INCREASE                           2011-10            FEBRUARY 14, 2011    APRIL 18, 2011     43
10 BP ANNUAL MIP INCREASE & 75 BP UFMIP INCREASE    2012-04            MARCH 6, 2012        APRIL 9, 2012      23
25 BP ANNUAL MIP INCREASE**                         2012-04            MARCH 6, 2012        JUNE 11, 2012      67
10 BP ANNUAL MIP INCREASE                           2013-04            JANUARY 31, 2013     APRIL 1, 2013      40
45 BP ANNUAL MIP INCREASE***                        2013-04            JANUARY 31, 2013     JUNE 3, 2013       82
50 BP ANNUAL MIP DECREASE                           2015-01            JANUARY 9, 2015      JANUARY 26, 2015   9

* BUSINESS DAYS ONLY – EXCLUDES WEEKENDS AND FEDERAL HOLIDAYS

** FOR LOANS EXCEEDING $625,500

*** FOR LOANS WITH LTV < 78 PERCENT

NOTE: FHA CHARGES TWO FEES – AN UPFRONT MORTGAGE INSURANCE PREMIUM (UFMIP) AND AN ANNUAL MORTGAGE INSURANCE PREMIUM (ANNUAL MIP). THE RECENT ANNOUNCEMENT IS A 50 BP REDUCTION IN THE ANNUAL MIP, WHICH IS PAID OVER THE LIFE OF THE LOAN, WHEREAS THE UFMIP IS DUE WHEN THE LOAN IS INITIALLY MADE.

The short implementation window stands in contrast to previous changes and is unexpected given the market-shifting and controversial nature of the decision. More importantly, when coupled with FHA’s resulting price competitiveness, private mortgage insurers have been given little time to respond despite the obvious impacts on their businesses. Mortgage insurers may find it even more difficult to compete given ongoing efforts to raise their capital standards in anticipation of the still-pending Federal Housing Finance Agency (FHFA) private mortgage insurer eligibility requirements (PMIERs).[4][5]

Effect on FHA’s Fiscal Outlook

While the effect FHA’s premium reduction will have on private competition certainly merits attention, FHA’s decision affects all taxpayers, not just companies and prospective homebuyers. Lawmakers have voiced concerns over whether a price reduction is warranted given that FHA needed a $1.7 billion appropriation from the Treasury Department to bolster its Mutual Mortgage Insurance Fund (MMIF) just over a year ago. Many are concerned that FHA is not suitably prepared to weather a future economic downturn since its capital buffer has not been restored to the minimum level mandated by Congress.[6] In particular, Rep. Jeb Hensarling (R-TX), chairman of the House Committee on Financial Services, has requested in a letter to the HUD Secretary the written analysis and data used to justify the premium change.[7]

These concerns have merit given that lowering premiums stands to substantially alter FHA’s projected fiscal outlook. First, lower prices mean less revenue than anticipated. FHA must therefore attract enough new volume, whether from borrowers previously priced out of buying or from borrowers who would have chosen private mortgage insurance, to make up for the lower revenues. Yet both options come with caveats.

Buyers who found FHA cost-prohibitive previously and were therefore unable to purchase a home are likely to have high LTVs and/or low FICO scores that private mortgage insurers would not serve. Shown in Table 3, lower credit score borrowers are more likely to become delinquent and therefore result in losses for FHA, which could easily turn to taxpayer losses without a restored capital reserve. The second option, pulling borrowers away from private companies, runs firmly against the bipartisan policy objective of limiting the government’s involvement in mortgage markets following years of unprecedented and risky government support.

TABLE 3. ALL FHA SINGLE FAMILY LOANS BY CREDIT SCORE

CREDIT SCORE RANGE    SERIOUSLY DELINQUENT RATE (%)
UNDER 500             27.64
500-579               25.83
580-619               18.79
620-659               8.84
660-719               4.11
720-850               1.79
ALL                   6.59

SOURCE: FHA[8]

Uncertainty surrounding the effects the premium reduction will have on the FHA’s fiscal health is made all the more pressing by FHA’s documented history of missing financial projections. Shown in Figure 1, in every actuarial review since 2004 the economic value of FHA’s single-family fund has come in lower than what was projected the previous year, and 2014 was no exception.[9] For many, this enhances the perception that FHA downplays risks borne by taxpayers and casts doubt on the assumption that FHA will continually improve as projected, particularly as it lowers prices to entice high LTV, low FICO borrowers.

 

In fact, legislative attempts to reform FHA in the last Congress would have raised its mandated capital reserve ratio even higher, to either 3 percent or 4 percent – levels FHA’s MMIF is not expected to reach until 2018 and 2019, respectively.[10] FHA’s capital buffer is meant to protect taxpayers in an economic downturn while preserving FHA’s ability to fulfill its mission. The decision to lower premiums may jeopardize the MMIF’s return to its mandated capital level. Furthermore, many worry that FHA’s current economic value is overstated due to the influx of money from major mortgage-related legal settlements and the one-time appropriation of $1.7 billion from the Treasury Department (see Table 4).

 

TABLE 4. MAJOR SETTLEMENTS

INSTITUTION                                          SETTLEMENT PAYMENT TO FHA   AUDIT REPORT    DATE
BANK OF AMERICA/COUNTRYWIDE                          $471,000,000                2012-CF-1809    JUNE 2012
DEUTSCHE BANK/MORTGAGEIT                             $196,000,000                2012-CF-1811    JULY 2012
CITI                                                 $122,800,000                2012-CF-1814    SEPTEMBER 2012
ALLY FINANCIAL, BANK OF AMERICA, CITI,
JPMORGAN CHASE, & WELLS FARGO                        $315,200,000                2012-CH-1803    SEPTEMBER 2012
JP MORGAN CHASE                                      $336,000,000                2014-CF-1807    SEPTEMBER 2014
REUNION                                              $1,040,000                  2014-CF-1810    SEPTEMBER 2014
BANK OF AMERICA                                      $437,600,000                2014-FW-1808    SEPTEMBER 2014
US BANK                                              $144,199,970                2014-CH-1801    SEPTEMBER 2014
SUNTRUST                                             $300,000,000                2015-PH-1802    DECEMBER 2014
TOTAL                                                $2,323,839,970

 

 

SOURCE: HUD OFFICE OF INSPECTOR GENERAL (HUDOIG) AUDIT REPORTS

Since 2012, the MMIF has been bolstered by approximately $4 billion in funds from legal settlement money and a one-time Treasury infusion.
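That approximation follows directly from the figures above:

```python
# Check the "approximately $4 billion" figure against the Table 4 total
# and the one-time $1.7 billion Treasury appropriation cited in this paper.
settlements_total = 2_323_839_970   # total of Table 4 settlement payments
treasury_infusion = 1_700_000_000   # one-time Treasury appropriation

combined = settlements_total + treasury_infusion
print(round(combined / 1e9, 2))  # 4.02 -- roughly $4 billion
```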

Conclusion

While the decision to lower FHA premiums has been lauded as a way to boost housing markets and aid low-income borrowers, it is not costless. It has the potential to hurt private businesses, expand the government’s role in housing markets, and add a host of risky borrowers to FHA’s already weak portfolio. Without a restored capital reserve, FHA may well need another taxpayer-funded appropriation to shore up its finances. Though implementation of the plan is already underway, revisiting the costs and benefits may be warranted given the risks involved.

 

Note: A previous version of this paper listed the FHA pricing for a 95 LTV loan after premium reductions as $1,262. This has been corrected. 


[1] Remarks by the President on Housing – Phoenix, AZ, (January 8, 2015); http://www.whitehouse.gov/the-press-office/2015/01/08/remarks-president-housing-phoenix-az

[2] Karen Kaul & Bing Bai, Urban Institute, “Four impacts of the Federal Housing Administration’s premium cut;” (January 21, 2015); http://blog.metrotrends.org/2015/01/effects-federal-housing-administrations-premium-cut/

[3] Government-sponsored enterprises, i.e. Fannie Mae and Freddie Mac

[5] Joe Light, Wall Street Journal, “Regulator Updates Goals for Fannie, Freddie,” (January 14, 2015); http://www.wsj.com/articles/housing-regulator-updates-goals-for-fannie-and-freddie-1421252306

[6] Letter from Senators Corker (R-TN) & Vitter (R-LA) to HUD Secretary Julián Castro (January 8, 2015); http://www.corker.senate.gov/public/index.cfm/2015/1/corker-and-vitter-call-on-administration-to-protect-taxpayers-and-reconsider-recent-fha-decision

[7] Letter from Rep. Hensarling (R-TX), Chairman of the Committee on Financial Services, to HUD Secretary Julián Castro (January 8, 2015); http://www.scribd.com/doc/252088964/Hensarling-Letter-to-Castro#scribd

[8] FHA, “Single Family Loan Performance Trends – Credit Risk Report,” (October 2014); http://portal.hud.gov/hudportal/documents/huddoc?id=FHALPT_Oct2014.pdf

[9] Andy Winkler, “Reviewing the Financial Health of the FHA,” (November 18, 2014); http://americanactionforum.org/insights/fha-actuarial-review

[10] See S.1376, §9 - FHA Solvency Act of 2013; https://www.congress.gov/bill/113th-congress/senate-bill/1376 and H.R. 2767, §256(b) - the Protecting American Taxpayers and Homeowners Act of 2013; http://thomas.loc.gov/cgi-bin/bdquery/z?d113:h.r.02767