19 December 2007

one last push


16 November 2007

02 November 2007

Guest Blogizzle

Posted by: The Wife

What?! Whose wife? Yes, sorry ladies, mastro is married. I know you must be surprised since he has never mentioned that fact, but you must have wondered how a man who obviously has many, many interesting topics and insights to converse about has not been snatched up. Well, I did snatch him. I guess marriage is just not related to economics enough for him to mention his marital status. Wait a minute -- "economics of families"? Seriously, mastro, the opportunity to come clean was all over that blog post! We are very happily married, btw.

Anyway, mastro does not know I am posting today, but he has suggested that I guest blog for him in the past, and I find myself with nothing else to do at the moment, so the time has come. He also told me that my post does not have to be related to economics... but isn't everything?? We actually do converse about economics quite often, and mastro thinks I have great ideas. But apparently I use too many phrases like "some ratio type thing or something" and not enough like "the exponential integral of a finite array of sets" to be a good economics grad student.

Here are my latest thoughts related to economics.

I have been considering whether or not I should get a flu shot this year. Buzzword: externalities. Flu shots provide positive externalities. If I get a flu shot, I won't get the flu, and you won't catch the flu from me. Well, there's my solution -- I'm not getting a needle stuck in my arm (or any muscle of my choosing), so y'all better get flu shots, and if not, I'll see you next spring, okay?

Two of my favorite things to do are probably running and cooking. I guess you could call them hobbies, although I don't really know if a lot of people drag their tight butts out of bed at 5:30 a.m. to collect stamps, nor do I know anyone who considers their responsibility for the survival of their spouse to be a hobby... But let's call them my hobbies for this economic analysis. Buzzwords: tradeoffs and opportunity cost. I only have so much free time, so I have to figure out how much of it I am going to devote to my hobbies, because if I am running, I sure as heck can't be cooking. And vice versa, obviously (although I'm not sure if that is a safe assumption in real economics). So I have to decide how much I like each activity (or rather how much utility I get from each) and I must maximize my total utility. Sounds easy enough. The combination of running and cooking, however, brings a slight complication to the standard tradeoff problem. You see, the more I cook (and inevitably eat tons of chocolate, lick all kinds of beaters, and "test" the outcome), the more I must run. This particular issue is getting a little scientific maybe, so I will save the details for when I guest blog for a biology student. Back to the main problem here: tradeoffs say the more you do one thing, the less you may do the other, but like I said, the more I cook, the more I must run. And opportunity cost says that if I run 1 extra mile, I cannot cook one extra cookie. But I say if I run one extra mile, I can cook (and will eat) one extra cookie. What a predicament. I need a solution. I don't know if there is a buzzword to describe this crazy situation yet. If not, this is my gift to one lucky grad student. Do your dissertation about it, and let me know how in the world I can solve this problem!
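For any lucky grad student who wants a head start, here is one toy way to write the problem down. All the numbers and the square-root utility are invented for illustration: a time budget creates the usual tradeoff, while a second constraint links the two activities (at most one cookie "earned" per mile run).

```python
import math

# All numbers below are invented for illustration: utility is
# u = sqrt(miles) + sqrt(cookies), a mile takes 10 minutes, a cookie takes 15,
# and the weekly free-time budget is 300 minutes.
TIME_BUDGET = 300
MIN_PER_MILE, MIN_PER_COOKIE = 10, 15

def utility(miles, cookies):
    return math.sqrt(miles) + math.sqrt(cookies)

# Feasible bundles: the usual time tradeoff, plus the "must run it off" link
# (cookies cannot exceed miles).
best = max(
    ((m, c) for m in range(31) for c in range(31)
     if m * MIN_PER_MILE + c * MIN_PER_COOKIE <= TIME_BUDGET and c <= m),
    key=lambda mc: utility(*mc),
)
print(best)  # -> (18, 8): run 18 miles, bake (and eat) 8 cookies
```

Under these made-up numbers the calorie-link constraint doesn't even bind at the optimum; with a utility function more indulgent toward cookies it would, and that binding link is exactly the predicament described above.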

I hope you all enjoyed today's guest blog. Unfortunately, it is not out on podcasts.

Keepin' it economical,


13 October 2007

Back in Full Swing (of grad school)

As many of you have probably noticed, if you still check this site, posts have dropped off dramatically. That is mostly due to a killer courseload of grad classes. Don't worry, I haven't stopped following the FCC auction or worrying about the economic efficiency of policy. I've just been preoccupied with developing my economic analysis toolbox. This semester I'm taking 4 homework- and paper-crazed classes. It's a lot of work, but all great material.

I'm in Industrial Organization with Ken Hendricks. We study the operation of markets: competitive allocations and prices when firms are competing for profits in a host of environments -- horizontally differentiated goods, vertically differentiated goods, firms with varying degrees of market power, uncertainty in the environment, dynamic time settings (when repeated competitions are considered), goods that are complements or substitutes, and auction environments. We look primarily at Nash, subgame-perfect, and perfect Bayesian equilibrium allocations and prices. We are also concerned with bringing the theory to data, to test whether firms are acting competitively or collusively and what the impact is on consumer surplus. This class is advanced micro meets game theory meets econometrics.

Also, I'm in Computational Macro with Dean Corbae. In this class, so far we have been occupied with calculating steady-state equilibrium allocations, prices, and wealth distributions in intricate dynamic macroeconomic models. We will also be solving for dynamic equilibria, equilibrium tax functions, and social welfare gains in macro and political economy models, to assess the economic efficiency of policy. This class is Stokey-Lucas meets Matlab.
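For a flavor of what "calculating steady-state equilibria" looks like in practice, here is a minimal sketch of value function iteration, a workhorse method in such courses. This is my own toy example (in Python rather than Matlab, and not from the class): a deterministic growth model with log utility, Cobb-Douglas production, and full depreciation, chosen because its steady state has a known closed form to check against.

```python
import numpy as np

# Illustrative parameters, not from any particular model in the course.
alpha, beta = 0.3, 0.95
k_grid = np.linspace(0.05, 0.5, 200)   # capital grid
V = np.zeros(len(k_grid))              # initial guess for the value function

for _ in range(500):                   # iterate on the Bellman equation
    # consumption for every (k, k') pair; infeasible pairs get -inf utility
    c = k_grid[:, None] ** alpha - k_grid[None, :]
    u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)
    V_new = (u + beta * V[None, :]).max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = k_grid[(u + beta * V[None, :]).argmax(axis=1)]   # optimal k' at each k
k_ss = (alpha * beta) ** (1 / (1 - alpha))                # analytic steady state
print(round(k_ss, 4))  # -> 0.1664
```

In this special case the true policy is k' = alpha * beta * k**alpha, so the numerical policy can be checked against it; richer models with wealth distributions need the same machinery but give up the closed form.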

Then, I'm also in Public Finance with Rob Williams. We are studying optimal tax schedules in environments where the government has an exogenous budget requirement for which revenue must be generated. By optimal tax we mean a tax schedule that achieves a desired blend of Pareto efficiency and equality. So far we've studied optimal commodity taxes, optimal commodity and income taxes, and optimal nonlinear income taxes. We've also compared these tax schedules with the observed US tax code and attempted to reconcile the differences. Prof. Williams's lectures are crystal clear and deliver both the intuition and the mathematics.

Last, but not least, I'm also in Econometrics II with Professor Donald. In this class we are developing skills in statistical inference for nonlinear regression models, under only general assumptions on our data, using advanced estimation techniques. We work through the theory, solve some simple theory exercises on paper, and solve some large-scale real-world examples with the help of Stata and Matlab code.

Anyway, the point of this post was just to give a brief insight into what I've been up to, what I'm going to be up to, and why that does not include a ton of blogging time. Also, I wanted to say that on the off chance I plan to do something fun with some of our Austin friends, I plan to post the logistics here on this site. Since our group of friends has been growing, and it isn't always feasible to stay in real-time phone contact on logistics, I'll just post them here. So, for those loyal econ readers or FCC readers, don't be surprised to see logistics for some Friday night Texas high school football games posted here. Check y'all later.

09 August 2007

Forums Comment

Recently Steven Levitt, author of Freakonomics and of a very successful blog, moved to the NY Times. One of his first postings there proved to be very controversial. The subject was “If you were a terrorist, how would you act?” Levitt provided his ideas and rationale and asked that readers do the same. Some readers saw the idea as progressive and therefore participated, while others saw it as definitely non-economic and culturally subversive. What’s going on here? Let’s take out our economic balance sheet to identify the costs and benefits of such a posting.

(It’s important to note that Levitt is not being cavalier in his question. It is how his mind works as a microeconomist working in the science of incentives. Thinking this way is a characteristic of intelligence. It is how Einstein came up with relativity: he was able to imagine the properties of physics while traveling at c. It is the same skill that allows humans to empathize.)

First, let’s look at the costs and benefits to Dr. Levitt. He has stated before on his blog that he appreciates all types of commentary. Well, he received ~600 (and counting) comments on this posting. Also, I’m sure the number of hits his site got spiked this week and may stay high at least for the near future. On the other hand, he may also have permanently lost some readers, and some readers will visit less frequently because of the post. Dr. Levitt probably also cares a little about the NY Times’s perspective, which depends on their costs and benefits from Levitt’s post.

For the NY Times there are benefits similar to Dr. Levitt’s. Hits and exposure go up. Costs include the possibility that long-run hits decrease, but also the corporate reputation risks involved. The NY Times does not want to be potentially culpable for publishing methods that terrorists may learn from or even use. Interestingly, there is also a small associated benefit. It might be a positive for the NY Times to be part of a project that identifies terrorist plots as they hatch and thereby becomes the initiating step in preventing terrorist activities. Econometrically, this is a difficult benefit to quantify. Every time a terrorist activity does not occur, does it become a feather in the cap of Levitt and the NY Times? (Actually, Dr. Levitt is probably the one with the best ability to quantify that benefit.)

There is a glimmer of brilliance in the post. I believe that what Levitt and the NY Times are trying to tap into, wittingly or unwittingly, is a concept outlined in a book called The Wisdom of Crowds by James Surowiecki. The concept states that if there exists a well-structured public forum, then decisions made with crowd input can far exceed those made without it.

The way Surowiecki’s concept would work for Levitt’s blog is this: if you believe that Levitt’s blog-and-commentary format constitutes a well-structured forum, then Levitt’s posting can successfully improve security by preempting plots that bureaucratic experts and homeland security contractors may not have considered.

Thankfully Surowiecki has provided us some guidance into what characteristics are needed for a well-structured forum: diversity, independence, and practicality. If these conditions are met (this is starting to sound like a theorem), then the existence of the forum guarantees that the decision made will be (weakly) superior to a decision made without the forum, according to any metric of superiority. We are not saying that the forum should equally weight all input, nor are we specifying any other aggregation technique (that is a discussion for Social Choice theory). What we are saying is that, for any definition of superior, input provided by a well-structured forum yields policy superior to having no forum available, to having only the input of a small group of “experts”, or to any forum (information-market) failure.

Maybe it’s me, but this claim sounds a lot like the Fundamental Theorem of Welfare Economics, which says that under some conditions (if the government has intervened to correct the market failures of imperfect competition and externalities) the resulting competitive equilibrium will be Pareto efficient, a generalized notion of optimality. What we’ve just stated in the preceding paragraph about the potential Wisdom of Crowds is that, under some conditions (independence, diversity, practicality, which the government can intervene to guarantee), the resulting policy is superior under a class of metrics.
Perhaps what we’re driving at is a Fundamental Theorem of Public Policy.

The Fundamental Theorem of Welfare Economics is a mathematically proven statement. Its concept (first postulated by Adam Smith) was eventually translated into precise conditions and a conclusion justified by mathematical tools and reason.

Paul Samuelson, in his graduate dissertation advised by an economist and physicist (a polymath academic descendant of Gibbs), imported the tools of thermodynamics into Economics and placed them on the proper shelves. It is with these tools and this reasoning that the Fundamental Theorem of Welfare Economics was developed and formalized.
Is it time or is it even possible to again import those tools into the field of Public Policy?

Upon consideration, what Surowiecki purports makes sense.
Diversity is critical because it can illuminate the subject matter of the discussion from different perspectives, leaving no facet shadowed and the pull of potential factions negated.
Independence is important so that members do not correlate with each other prior to providing their input. It is well understood that in failed forums, those who speak first, those who speak most frequently, those who are verbose, those with better rhetoric, and those who speak with intimidation enjoy disproportionately large influence relative to the content of their input.

How do Levitt’s and Mankiw’s blog-and-comment forums measure up to these desired characteristics?

Diversity. Given the large number of people with access to the internet, and subsequently to their blogs, both have the potential to garner sufficient diversity. For Levitt, now being associated with the NY Times site should further his forum’s diversity. As self-noted on his blog, however, his comments are subject to a self-selection bias.

Independence. This has become a major concern in Mankiw’s blog and comment forum. It is not so much bias from rhetoric or intimidation, but rather bias generated from mass. With the quantity of comments generated in his forum, there is an enormous amount of repetition of the same factors; the equivalent of individual verbosity, or of a group over-representing a factor. Independence, lack of bias, and lack of pre-correlation are crucial. There is, however, a balance. Should individuals have access to the factors already presented, in case they had failed to consider certain aspects? I think so, minimally. You may argue that this necessarily introduces bias. I claim that it should be the role of a forum governing body to intervene minimally to provide this service, in order to correct the potential market failure of information asymmetries. What do you think now? Or did I just present too much bias?

Practicality. This is the most dangerous failure of Levitt’s forum given his post, and its greatest detriment in my mind. Since there is no enforcement agency to actively address the results of the forum’s discussion, we have only illuminated failures without providing the means to correct them. Hence we have a major forum failure.
Indirectly we do have such a means, though. It is possible that the post and comments will generate enough interest to actually guide public resources toward addressing these failures. It is worth pointing out that the comments submitted in response to Levitt’s post need not be published at all. Levitt could instead offer an incentive, such as one of his yo-yos, for the comments he deems most insightful, in order to compensate readers for losing the enjoyment of seeing their comments published.

There appears to be a real science to generating efficient forums, just as there is a real science to generating efficient markets (called Economics).

I would like to highlight one of Milton Friedman’s quotations.
“Abraham Lincoln talked about a government of the people, by the people, for the people. Today, we have a government of the people, by the bureaucrats, for the bureaucrats, including in the bureaucrats the elected members of Congress because that has become a bureaucracy too.
And so undoubtedly the most urgent problem today is how to find some mechanism for restructuring our political system so as to limit the extent to which it can control our individual lives. You know, people have the image, have the idea, that somehow ‘we the people’ are speaking through the government. That is nonsense.”

Maybe that is what our discussion is pointing at; working to find some efficient mechanism for our political system to listen to the people.
For the people to be heard we need a forum, and as we’ve seen, that forum must satisfy some conditions in order to be efficient.

Just as efficient market operation is important from the regional to the global environment, so is the efficient forum important in local, state, federal, and international policy. Efficient forums are needed in academia, private industry, and government.

I believe that structuring efficient forums can in the long run refine an umbrella of policies; from tax to national security to campaign finance (normally a direct enemy to efficient forums and our political system by biasing input with wealth and generating sustainable constitution-endangering factions).

Surowiecki provides a number of examples of forums successfully at work in a variety of situations.
1) The average group guess of the weight of an ox at a fair is consistently, significantly closer to the true weight than any individual’s guess. [The reason is that the group comprises individuals who will overestimate the weight (a cattle-truck driver who may associate the animal with all of its transport gear) and individuals who will underestimate it (a butcher who views the animal as a combination of steaks and forgets about pre-trimmed portions).]
2) The significant, consistent accuracy of “ask the audience” in Who Wants to Be a Millionaire relative to the performance of “ask a friend/expert.”
3) A navy vessel lost at sea, discovered only thanks to a forum effective at gathering the input of weathermen, coastal-current experts, sailors, sociologists familiar with the behavior of lost crews, and committees well versed in subtle but critical techniques of rescue operations.
4) The significant accuracy and consistency of bettors’ predictions in sports games relative to the bookies’/experts’ initial lines (remembering that bookies have an incentive to predict games accurately in order to maximize their vigorish).
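Example 1 is easy to reproduce numerically. Here is a quick simulation under stylized assumptions of my own (800 guessers with unbiased Gaussian errors; 1,198 lbs is the weight reported in Galton's famous version of the story):

```python
import random

random.seed(0)                      # reproducible run
TRUE_WEIGHT = 1198                  # lbs, reportedly the figure in Galton's story
# Each guesser sees the true weight plus idiosyncratic noise (my assumption).
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(800)]

crowd_error = abs(sum(guesses) / len(guesses) - TRUE_WEIGHT)
individual_errors = [abs(g - TRUE_WEIGHT) for g in guesses]
beaten = sum(e > crowd_error for e in individual_errors)   # guessers the crowd beats
print(round(crowd_error, 1), beaten)
```

The crowd average has a standard error of roughly 100/sqrt(800), about 3.5 lbs, so it beats nearly all 800 individuals. The magic is entirely in the errors canceling, which is exactly why the independence condition matters so much: correlated errors do not cancel.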

Beginning to tread into the world of research-needed speculation, I might venture that there exists a preferred location on the political compass, dependent on the present position of society relative to the Fundamental Theorems of Economics and (our new) Fundamental Theorem of Public Policy. http://en.wikipedia.org/wiki/Political_compass
Milton Friedman hints at something like this in Capitalism and Freedom.

My claim would state that the optimal position on the fiscal axis of the political compass should depend upon the present market situation in the economy.
An economy whose market failures are perfectly compensated needs no further government intervention and would belong at the right end of the fiscal spectrum.
Similarly, an economy in desperate need of government intervention to correct gaping market failures would be optimally situated at the left end of the fiscal spectrum.
Further, a policy-making forum without any forum failures (including the information asymmetries and other failures outlined earlier) would be optimally situated at the full-personal-liberty end of the social spectrum.
Similarly, a policy-making forum desperately suffering from lack of diversity, lack of independence, and total information asymmetries would be optimally situated toward the power-vested-leader end of the social spectrum.
Therefore, our (US) optimal position on the political compass depends on our individual perspectives on our current situation; that is, on the health of our free markets and public forums. Again, this is not one particular point, but a set characterizing the generalized notion of optimality, representing different individuals’ relative weightings between security and freedom.

07 August 2007

About Pareto Efficiency

In response to Adam’s comment about the idea of Pareto Efficiency.

Following the last posting, Adam asked the definition of “Pareto Efficiency,” namely is it unique and what metric is it based on?

Let me first point to a technical definition - http://en.wikipedia.org/wiki/Pareto_efficiency

Let me say that this is a good question. Pareto efficiency is a subtle point. It means only that resources are not being wasted and that no one can improve his/her welfare (“utility” to economists) without lowering someone else’s welfare. So, as long as a government is enforcing property rights, once a Pareto Efficient allocation has been achieved, no one can do better from further transactions.

What Pareto Efficiency does not say is which of the many possible Pareto Efficient allocations will be achieved when individuals begin with non-efficient initial allocations (A would like to trade to improve A’s welfare, there is a B out there who is willing to trade with A, and both A and B end up at least as well off as before they transacted). A might increase his utility while B stays the same in one efficient allocation. B might increase while A stays the same. They might both increase the same amount…. Because it is seemingly arbitrary to place weights on the importance of different individuals’ welfare, economists sometimes only characterize optimality of allocations up to identifying the set of Pareto-improving efficient allocations, a more generalized notion of optimality.
Again, this can all be seen diagrammatically in Figure 2 and explained with the accompanying discussion at http://cepa.newschool.edu/het/essays/paretian/paretoptimal.htm

If initial allocations are not efficient (as they usually are not), which of the very many Pareto Efficient allocations gets realized depends on who holds the bargaining power in the transaction. If A has full bargaining power, then A can make B a take-it-or-leave-it offer, and A will enjoy the surplus from transacting while B’s utility remains the same, or vice versa.

Bargaining power may depend on multiple factors: your desperation, your outside alternative options, whether the bargaining allows bantering back and forth, and your response time in the bantering relative to the depreciation of the good being bargained for.

In the presence of an externality, when there exist indirect effects on others’ welfare, which Pareto Efficient point gets achieved depends on the government’s intervention in determining who holds property rights. In essence, politicians weight the welfare of individuals or firms against each other by decreeing initial property rights. In the common example, the government’s decree about whether a construction firm has the right to build an airport near a subdivision, or whether the homeowners have the right to quiet surroundings, determines which Pareto Efficient point in the set will be realized.

05 August 2007

Economics in Family Decisions

Economics in Complicated Family Decision Making Situations

I was talking to my sister, an MSW (Masters in Social Work), about how Economics has changed my perspective on how markets and societies work.

I said that I am a big believer in near laissez faire behavior.
The response was, “well, what about within marriage and families?”
In the interest of conversational simplicity, I had left out a significant stipulation, and the retort question hit that stipulation right on the head. I didn’t, and don’t, have a simple short-winded answer; good questions don’t usually have one. But here’s something.

The Fundamental Welfare Theorem of Economics says that under some basic conditions, if everyone does what is directly best for them, then the resulting competitive equilibrium allocates resources efficiently.
[Efficiency (Pareto efficiency) means that nobody can do better without someone else necessarily being made worse off.]
This is a good thing in that resources get used without waste and nobody ends up preying on anybody else.

But those basic conditions are crucial. They say that markets need to have perfect competition, and (this next one is the important one in our conversation) that there should not exist externalities (spillover effects). It is the role of government to intervene minimally to correct those potential market failures.

Externalities occur whenever an individual’s actions indirectly affect the welfare of others. They probably happen all the time in sociology. In economics, they don’t happen all the time because most effects are reflected in the price, but still they happen a lot. For example, if you buy a loud stereo, then the direct effect is the change in your wealth and the firm’s profit, but the indirect effect is the degradation of your neighbors’ welfare. There are positive externalities too. For example, when you pay a good street performer, the direct effects are a deduction in your wealth and an increase in his, but the indirect effect is an increase in the welfare of the other passersby who also get to enjoy watching the performance.

Normally, as stated before, the remedy should be minimal government intervention to restore fair pricing. In the stereo example, it could be a tax on loud stereos, with the government's revenue used to compensate the disturbed. In the street performer example, it would be each individual paying according to their benefit, so that the performer is rewarded for the total effect of his performance. For the economists out there, such a corrective tax is a Pigouvian tax; the related result that clearly assigned property rights plus costless bargaining can also correct externalities, even without a tax, is the Coase Theorem (http://en.wikipedia.org/wiki/Coase_theorem).
(New Orleans used to actually do this, in that they would help subsidize some of the performers based upon how they viewed their impact. And other governments do similar things. Good moves. Granted, governments do some pretty bone-head moves also, but that is another story).
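To put rough numbers on the stereo example (all figures invented for illustration): a purchase that looks good to the buyer alone can be a net social loss once the neighbors' harm is counted, and a tax equal to that harm makes the buyer's private calculation agree with the social one.

```python
# Made-up numbers for the loud-stereo externality and its Pigouvian fix.
PRIVATE_BENEFIT = 120.0    # buyer's enjoyment of the stereo ($)
PRICE = 100.0              # market price ($)
NEIGHBOR_HARM = 50.0       # externality: neighbors' lost peace and quiet ($)

def buys(tax):
    """The buyer purchases iff private benefit covers price plus tax."""
    return PRIVATE_BENEFIT >= PRICE + tax

# Total social value of the purchase: benefit minus price minus harm.
social_value = PRIVATE_BENEFIT - PRICE - NEIGHBOR_HARM
print(buys(tax=0.0))            # -> True: untaxed, the buyer ignores the neighbors
print(buys(tax=NEIGHBOR_HARM))  # -> False: the tax internalizes the harm
print(social_value)             # -> -30.0: the purchase was a bad trade overall
```

Setting the tax exactly equal to the marginal external harm is the key design choice: it makes the privately wasteful purchase unattractive without discouraging purchases whose private benefit genuinely exceeds price plus harm.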

The parallel here with marriage and families is that the externality effect is huge. A decision by your spouse has huge indirect effects upon your welfare, and vice versa. And similarly, the solution should be like a government intervention: a separate entity, the family (or the couple or group), should make that decision for the family (or the couple or group). What are this aggregate unit’s values? What is the best decision for this aggregate unit? Consequently, communication is critical. All component parties should know exactly the effect that their decision will have upon the other parties, so that it can be accurately incorporated into the aggregate’s decision. For instance, if a solution really benefits one party, then that member could use some of their surplus to appease the other party. It is good that a solution exists, but it comes with a real burden in the form of ample communication.

For large groups, deciding on the organization and fair operation of the group is a very complicated problem. One must protect against the possible formation of factions that can deviate for subgroup improvement at the cost of the other members. This was the source of the majority of the debate by the framers of our Constitution in the Federalist Papers. In many ways they succeeded, but in some ways factions have definitely achieved real power. That is another discussion for another post.

So, there is the long-winded answer. Good in theory, but I'm not sure if it’s any good in practice.

If you're still reading, or still interested, the intro on the Wikipedia page explains roughly the same thing.
Also, see the introduction and the discussion associated with Figure 2 on the site.

01 August 2007

Communications Technology 4 (commentary on FCC auction decision)

Before I finish filling in the details of my vision of communications, we should take a momentary detour to discuss the results of yesterday’s FCC auction announcement.

In my mind, and also for Google corporate, the announcement was definitely a disappointment. The FCC did choose to endorse “hardware interoperability”, but they did not endorse pure “network neutrality.”

As outlined in CommTech 2, the loss here is that the website equality we currently enjoy will not be preserved. Website hosts with deeper pockets will be able to pay for access to faster download speeds and therefore preferred status. The result will be good for some high-definition media viewing, but it will really hurt new startup sites and the diversity of news sources. The usual info-tainment news sources (FOX, CNN, NBC…) will continue to enjoy a major access advantage with viewers.

Even those of us who are fiscal right-wingers should easily recognize that news and information are nearly pure public goods. That is, informing yourself about the situations in the world does not prevent others from doing the same. In fact, the more informed everyone is, the more everyone benefits. Therefore even fiscal right-wingers should promote government intervention in the form of ensuring equality among news and information sources, à la net neutrality and the internet.

Without information access equality, the news corporations with preferred status become even more preferred. As a timely example, note Rupert Murdoch’s latest media conquest; control of the Wall Street Journal. Now, even less diversity is readily available.

Fundamental economics tells us that when free-market failures exist (lack of competition, existence of public goods and externalities, information asymmetries, incomplete markets…) laissez-faire competition will not lead to efficient resource allocations. Even if the poor continue to work and compete at their utmost, even if the whole pie gets larger, the poor can end up worse off. These are precisely the situations that call for government intervention. It is the government’s role to enforce market corrections (in a minimalist fashion if you are fiscally right-wing, or in an authoritative fashion if you are left-wing).

Information is like national defense. Citizens benefit from their personal contribution to national defense, but they also benefit from everyone else’s contributions. Citizens benefit from their decision to become informed, but they ultimately also benefit from their neighbors’ decisions to become informed. Because of these indirect (un-internalized) effects, left to their own devices, normal citizens would acquire an insufficient, suboptimal amount of that good, information. Therefore, in this case there should exist a governing body to step in and provide for those untapped benefits. In this particular case, citizens benefit from their own access to diverse news sources, but they also benefit from their neighbors becoming informed by diverse news sources. The way for the government to step in and provide for those untapped benefits is to support the availability of those diverse news sources; i.e., to support network neutrality.
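The under-provision logic above can be put in toy numbers (all invented): each citizen pays a cost to become informed, captures a private benefit, and confers a small spillover benefit on every other citizen. Privately the investment looks bad; socially it is clearly good, so, left alone, citizens under-invest in information.

```python
# Invented numbers for the public-good/free-rider logic of becoming informed.
N = 1000                # citizens
COST = 5.0              # effort cost of becoming informed
PRIVATE_BENEFIT = 4.0   # benefit captured by the informed citizen
SPILLOVER = 0.02        # benefit conferred on each OTHER citizen

private_payoff = PRIVATE_BENEFIT - COST                       # what one citizen sees
social_payoff = PRIVATE_BENEFIT + SPILLOVER * (N - 1) - COST  # what society sees
print(private_payoff, round(social_payoff, 2))   # negative privately, positive socially
```

Because each citizen's payoff calculation ignores the spillover term, nobody invests even though everybody investing would be far better; that wedge between the private and social payoff is exactly the opening for a governing body to act.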

How did this happen? With altruistic economic reasons and Google’s money, why was their motion for network neutrality unsuccessful?

Perhaps hubris. Google seemed to think that by playing according to the rules (building good products, generating sizeable profits, and then putting their money on the table) they would be able to affect policy; they were mistaken. The reality appears to be that the money under (and around) the table is what matters. What do I mean?

Google had a pretty strong lobby at ~$4.6 billion, but that is really not much compared to the lobby of AT&T: a telecom giant in existence for over 100 years, with major subsidiaries employing politicians’ constituents in tens of states and contributing large amounts of money to finance those politicians’ campaigns.

$4.6B might seem like a lot to the FCC, but getting reelected while still being expected to bring in $15B from the entire auction speaks more loudly. Google participates in campaign finance too, just not as well as AT&T. The role of campaign finance is another story for another day. Does campaign finance reform infringe on freedom of speech, as the Supreme Court says it does? Even if it does, is it fair for citizens’ constitution-granted vote to be distorted by the economic inequalities of the day?

Overall, the FCC’s policy on the auction does not spell death for Google. As outlined in CommTech 3, it definitely does make it harder and more costly for them to maintain their preeminence in the internet application market. They will have to pay the profit-extracting fast-access tolls to maybe hold on to their market share.

Yes, the FCC decision means more business competition for Google, but it’s competition from the wrong end. Rather than perpetually competing with entrepreneurial web innovators for market share, Google will be competing against the already supremely rich and dominant Murdoch-and-company media empires.

30 July 2007

Communications Technology 3

In the last post (CommTech 2), we mentioned that the Sprint/Google relationship was about more than just securing finances for the FCC 700 MHz auction. It’s also about posturing for a strategic post-auction position.

Sprint is a nationwide leader in physical infrastructure for mobile networks. They have excellent nationwide coverage thanks to access rights on thousands of antenna towers, plus regional offices and neighborhood stores. With the advent of WiMAX, that is all you need in order to deliver a high-speed broadband connection to the public.

Previously, WiFi had to piggyback its high-speed connection on in-ground cable and then broadcast from millions of cable-fed hotspots. Now, WiMAX signals with extended range can broadcast and receive from the Sprint towers, and higher bandwidth means the signal can provide high-speed service to everyone within range.

All Google needs for success with their internet application products is for the mobile public to be connected. Google currently does decent business with the stationary consumer, but mobile high-speed users are the audience they could really appeal to.

Stationary (primarily home and office) users already routinely use Google’s search engine, Google Earth, and Google Maps. But stationary consumers aren’t desperately in love with Google Desktop, Gmail, Google Calendar, or Google Chat. Yes, Gmail is really nice in that you get free forwarding, free POP access, and syncing with your Outlook calendars and contacts, but the stationary consumer already has those things with Outlook, IM, Messenger, or other similar programs.

If the FCC supports net neutrality, mobile users will experience an internet similar to today’s high-speed connections, and all the Google applications will thrive. Gmail, Google Calendar, and Google Chat will become much more desirable. Mobile high-speed users will not want to carry around fragile hard drives and expensive processors in bulky handheld devices. Why should they? All they would need is a good touchscreen display and an antenna (and maybe a memory card slot for enough media to last the flight). They can have all their calendars, contacts, and files synced, and when they hit the road, they will have everything available to them on Google servers via Google applications. With Sprint providing a high-speed connection and Google providing quality applications, consumers will choose this option. Plus, the bandwidth will still support other goodies such as streaming internet radio, YouTube videos, and even some television broadcasts. And Sprint will no longer need to worry about losing customers to Google Chat and voice chat. (Skype, on the other hand, should be pretty scared.)

If the FCC does not support net neutrality, mobile users will not necessarily choose Google internet applications anymore. It depends on the business relationships between the service providers and Google. Even Sprint would only favor Google as long as Google was willing to shell out market price for fast-lane data access speeds. And even if Google did pay the money necessary to be top-tiered, the natural oligopoly power of the major service providers would let them vertically extract the majority of Google’s profits. Sure, Google would still have good market share, but not great profits.

This may all seem like a pretty decent characterization of the motivations of some of the major communications technology corporations, but it leaves out a couple of giants. What about Microsoft and Apple? What will be their influence and desired positioning in the community? And what will become of the other media in the short term? Next posting.

27 July 2007

Communications Technology 2

Because of developments 1 & 3 (the impending auction of the 700 MHz blocks and the FCC’s decision on how to hold the auction), Google sought to persuade the FCC to allow some “open blocks” in order to maintain “net neutrality” and “interoperability of hardware.”

“Open blocks” are regions of the bandwidth that will not be controlled by a natural monopoly. For the gov’t, the cost is that mandating some open blocks means foregoing auctioned resources and making less money. Mandated openness may also make the controllable blocks worth less, because in the future they will be up against more competition. So the government might lose some revenue, which should translate into a negative for the American citizen-consumer. On the upside for the consumer, open blocks give entrepreneurs the opportunity to introduce new technology to compete with and challenge the products and services of the blocks controlled by Verizon or whoever wins rights to the other blocks. Are the entrepreneurs’ future product, service, and price improvements to the quality of life of American citizen-consumers (in how they communicate and access information, now and in the future) worth the foregone revenue of open blocks? It depends on what economists call your individual beta: your patience in discounting the future relative to the present. Probably, but it’s up to you to decide.
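
To make the beta tradeoff concrete, here is a minimal sketch (every dollar figure is hypothetical, chosen only to illustrate the mechanics, not an estimate of anything): a one-time revenue loss today versus a stream of future consumer benefits discounted by beta.

```python
# Compare a one-time revenue loss today against a stream of future
# consumer benefits discounted at factor beta. All numbers are invented.

def present_value(annual_benefit, beta, years):
    """Sum of annual_benefit * beta**t for t = 1..years."""
    return sum(annual_benefit * beta**t for t in range(1, years + 1))

foregone_revenue = 2.0   # $B lost today by reserving open blocks (hypothetical)
annual_benefit = 0.5     # $B/yr consumer gain from extra competition (hypothetical)

patient = present_value(annual_benefit, beta=0.95, years=20)
impatient = present_value(annual_benefit, beta=0.70, years=20)

print(round(patient, 2))    # 6.09 -- a patient citizen favors open blocks
print(round(impatient, 2))  # 1.17 -- an impatient one prefers the revenue
```

Same policy, same (made-up) numbers; only the beta differs, and the recommendation flips. That is all “it depends on your individual beta” means.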

“Interoperability of hardware” refers to your ability to buy any phone or mobile communications device and have it work regardless of who your service plan provider is. Having “interoperability of hardware” and “open blocks” might mean that entrepreneurs would only need to challenge major providers on their products or their services, but not necessarily both. At this point, the trade-off in the “interoperability of hardware” decision is a personal one between the bulkiness of the phone and being tethered to your service provider. This is not a small decision. As the phone becomes a PDA-phone becomes a souped-up media-player-phone-PDA-GPS-videocamera, are you going to want to trash it every time your provider treats you poorly? I believe major service providers generate a significant amount of profit from consumers feeling tethered through their phones and incurred service plans. Instituting an “interoperability of hardware” requirement in the auction guidelines would also most likely decrease how much service providers and their phone-making partners would be willing to bid. Do you believe that simplifying the phone- and service-buying process is worth the foregone FCC revenue? This is a close call. Let me know what you think. Post a comment. Are there any factors I’m leaving out here? I’d love to know.

“Net neutrality” refers to major service providers being unable to favor certain internet users. Currently all websites can be viewed with equal access and download speeds. If major service providers were allowed to favor certain websites and users, the providers (currently Time Warner, Cox, ...; but after the auction, Verizon, ...) could generate higher revenues and profits through a freer, more open market of selling preferences. There are also pros for the consumer; you could download videos, news sites, and popular sites in higher quality. The downside for the consumer is that it would be harder for website newcomers to break through. You would also have less access to a diversity of news. Is the information held by society, and its ability to network, a “public good”? It’s non-rival (my becoming informed doesn’t use up your ability to), so I think so. That means even if you are a fiscal right-winger, you should believe this is one of those rare instances where the gov’t should at least moderately intervene. Sure, Time Warner, Cox, and in the future Verizon would suffer slightly smaller short-term profits, but total quality of life would improve for everyone in the long run. I think that net neutrality is a must.

Why does Google want to persuade the FCC to mandate “open blocks, interoperability of hardware, and net neutrality”? Are they altruistic, competition-loving, or just crazy? Maybe a little bit altruistic, but I believe they mostly want to maintain their strategic position on the World Wide Web. They are the premier internet application provider, and rightfully so. Their products are excellent.

How is Google going to persuade the FCC to do these things? If you noticed, all three of those actions result in less direct revenue for the FCC. This week Google offered to provide the FCC with a guaranteed ~$4.6 billion from the auction if those items were implemented. This is a big auction, and to drive policy you will need big money.
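
Google’s pledge works roughly like a reserve price: the FCC would be guaranteed ~$4.6B for those blocks no matter how the bidding goes. A minimal sketch of why a guaranteed floor shrinks the FCC’s revenue risk (the bid scenarios are invented, not real bid data):

```python
# How a guaranteed floor (like Google's ~$4.6B pledge) changes revenue
# in a simple highest-bid sale. Bid scenarios are invented for illustration.

def revenue(bids, floor=0.0):
    """Seller collects the highest bid, but never less than the floor."""
    return max(max(bids, default=0.0), floor)

scenarios = [[3.1, 4.0], [4.7, 5.2], [2.0]]  # $B, hypothetical bid sets

without_floor = [revenue(b) for b in scenarios]
with_floor = [revenue(b, floor=4.6) for b in scenarios]

print(without_floor)  # [4.0, 5.2, 2.0]
print(with_floor)     # [4.6, 5.2, 4.6] -- the floor only binds when bidding is weak
```

In other words, the pledge costs the FCC nothing in the strong-bidding scenarios and protects it in the weak ones, which is exactly what makes it a credible bargaining chip for Google’s conditions.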

The latest speculation has the FCC splitting the auction into different geographic regions, making it harder for a nationwide open block to come about. That is, probably 3 of the 4 Google requests (see Google’s blog) will be met, an outcome with which Google is less than satisfied. It appears they would have to guarantee the FCC even more money if they would like all 4 objectives to be met. Hence Google’s plans to partner with Sprint/Nextel and some Canadian financing sources. Mind you, though, Google’s partnering with Sprint/Nextel is now about more than just financing. It is also about preparing themselves for a nice post-auction teammate.

Take note that Google isn’t the only company trying to partner up. I believe Verizon and Vodafone are partnering to build a large financial base in order to fare well in the auction. Others will do the same to be in solid financial shape for auction day.

So what exactly does all this have to do with which technologies (WiFi, satellite, cell phone, in-ground cable, in-ground phone, and WiMAX) will develop to be used for mainstream communication, and in which ways?

Stay posted and you’ll see how I see this picture unfolding and how the different players (Google, Sprint, Microsoft) should best respond.

Communications Technology

"In the beginning" [meaning a couple of years ago] there were 5 major communications media (there are more if you count CB radio, walkie-talkie, ham radio...).
They were WiFi, cell phone (CDMA/...), satellite, in-ground cable, and in-ground phone.

In-ground cable-
inaccessible for the mobile user
great bandwidth
(services provided by TimeWarner, Cox, SBC, ...)

Satellite-
impractical for common 2-way communication (unless you have a huge transmitter)
great coverage
(GPS, DirecTV, Sirius, XM)

WiFi-
2-way accessible
decent bandwidth (~60 Mbps)
weak range (1/4 mile at best from a standard router antenna, without a relay)
currently implemented mostly by piggybacking on the cable signal and broadcasting from cable hotspots
(802.11a/b/g)

Cell phone-
2-way accessible
weak bandwidth (constantly being reoptimized; you can now get mediocre transmission speeds on your cell phone, but nothing you’d want to use consistently)
decent range (a couple of miles, depending on the cell tower and phone antennas)

In-ground phone-
2-way accessible
original voice-transmission mission being picked up by in-ground cable
current focus is to support cell phone communications

With all these contenders, which technology was going to win out?
Which technologies would develop to be used for mainstream communication, and in which ways?

The answer depends on 3 critical developments:
1. the digitization of old TV broadcast frequencies, which frees up a whole new swath of spectrum, the 700 MHz blocks
2. the creation of WiMAX
3. how the FCC decides to auction the blocks in the 700 MHz band: whether they keep a block reserved as open, whether they make "net neutrality" a priority, and where the communications companies end up when the auction is said and done

How do I see the communication picture unfolding?
Stay tuned for the next posting.

First Post

first blogging. check check check