I wasn't initially moved to respond to this article. But I wanted to "weigh in" after reading some of the initial responses.
I was at Automation Fair '07 when one of the presenters discussed the approach of allowing multiple control architectures "in house." As I looked around the room to gauge the general response, via body language, I saw some people nodding in agreement and others laughing off the suggestion. As I talked to people afterward, it seemed that most of those laughing it off were "plant level."
This topic goes beyond the purely "technical" into how a typical manufacturing company is set up. Of course, there are a myriad of organizational structures, but to simplify, I would say that companies have either a strong, centralized engineering group or a de-centralized engineering structure. Within a strong, centralized group, the engineers are not "based" in the plant. Sure, they understand their company's business; but in reporting through a corporate structure, their concern, based on leadership direction, is to execute at the absolute lowest cost possible and to look at the "installed cost." They view this topic as one method of getting there.
Within a de-centralized engineering structure, the engineers are based in the plant. Sometimes they even report through the plant leadership. Theoretically, they are more in tune with what that plant needs to be profitable and successful, and they take more of a "total cost of ownership" view that extends past the installation phase. (I am sure this will elicit many responses from corporate engineering groups that take issue with what I am arguing here, and I will concede that some corporations, and individuals, do a better job than others of examining the total cost of ownership, even within a central engineering structure.)
I don't personally recommend one structure over the other. Each has its pros and cons, depending on the metric you use to judge its ability to execute. But one of the areas that I think is misrepresented in this discussion is the impact of this decision on the plant. With multiple architectures in place, you have more training. This is typically "written off" by saying that you should rely more heavily on your suppliers for technical support and that industry standards are driving development languages toward a common structure (though most would agree we are not there today). But who pays for this support? The "corporate project"? No, the plant. Another concern at the plant level is spare parts cost and inventory. Most plants do not want to purchase and store many different components. There is a trend in lean manufacturing to push this "stock" to the supplier, typically the local distributor in these cases, but even then, when a component fails, you are going to be down for a relatively significant amount of time (not to mention that the distributors don't want to hold stock either!). Finally, the diagnostic/development software is different for each architecture, which is another significant cost for a plant to incur in accepting all of these different architectures. In a nutshell, there is an "installed cost" and a "long-term cost" to consider, and these long-term costs hit the plant, not the corporate project. A rough illustration of that split follows below.
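To make the "installed cost" versus "long-term cost" distinction concrete, here is a minimal back-of-the-envelope sketch in Python. Every number and cost category (training, spares, software licenses) is a hypothetical placeholder of my own, not a figure from the article; the only point is that the recurring plant-side items scale with the number of architectures even when the installed cost goes down.

```python
# Hypothetical comparison: one architecture vs. several in the same plant.
# All figures below are made-up placeholders; only the structure of the
# comparison (installed cost vs. recurring plant-side cost) matters.

def total_cost_of_ownership(num_architectures,
                            installed_cost,
                            annual_training_per_arch=10_000,
                            annual_software_per_arch=5_000,
                            spares_inventory_per_arch=15_000,
                            years=10):
    """Installed cost plus the recurring costs the plant carries."""
    recurring = num_architectures * years * (
        annual_training_per_arch + annual_software_per_arch
    )
    spares = num_architectures * spares_inventory_per_arch
    return installed_cost + recurring + spares

# A corporate project might shave the installed cost by mixing suppliers...
single = total_cost_of_ownership(num_architectures=1, installed_cost=1_000_000)
mixed = total_cost_of_ownership(num_architectures=3, installed_cost=900_000)

print(f"Single architecture, 10-yr TCO: ${single:,.0f}")
print(f"Three architectures, 10-yr TCO: ${mixed:,.0f}")
# ...yet the plant ends up paying more over time in training, spares,
# and development/diagnostic software.
```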
To get where everyone is saying we need to go, the technology providers need to get to a common hardware platform. Let's use the standard analogy here: if I go buy a PC, it is basically standard and will work the same as another PC (I'll conveniently ignore the existence of Apple in this discussion). If you can gain hardware independence, then these other concepts open up. With hardware independence, I wouldn't care whether I purchased a controller from Allen-Bradley, Siemens, GE, or Elau; they would all be the same. The IEC would also need to continue to define and standardize the programming languages, but take it a step further by defining which instructions are allowed in each language (think of how C++, Java, etc. are defined). Then the development/diagnostic software becomes capable of supporting different platforms as well, because the hardware and languages are standardized. (By the way, both of these concepts open these markets up to increased competition and low-cost suppliers. Think the existing players want to see that happen? Think IBM was happy with what happened to "their" PC market when clones started appearing on the scene?)
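As a software analogy for what that kind of hardware and language independence could look like, here is a minimal sketch in Python. It assumes a hypothetical common controller interface; the vendor classes are stand-ins of my own invention and do not correspond to any real vendor API or existing standard.

```python
# Hypothetical sketch of "hardware independence": the plant's tooling targets
# one abstract controller contract, and any standards-compliant device could
# sit behind it. None of these classes represent real vendor APIs.
from abc import ABC, abstractmethod


class Controller(ABC):
    """The standardized contract every compliant controller would honor."""

    @abstractmethod
    def download_program(self, iec_source: str) -> None: ...

    @abstractmethod
    def read_tag(self, name: str) -> float: ...


class VendorAController(Controller):
    def download_program(self, iec_source: str) -> None:
        print("Vendor A: loading the standardized IEC source")

    def read_tag(self, name: str) -> float:
        return 0.0  # placeholder value


class VendorBController(Controller):
    def download_program(self, iec_source: str) -> None:
        print("Vendor B: loading the same standardized IEC source")

    def read_tag(self, name: str) -> float:
        return 0.0  # placeholder value


def commission(plc: Controller, program: str) -> None:
    # The commissioning tool would not care which brand is plugged in.
    plc.download_program(program)
    print("LineSpeed tag reads:", plc.read_tag("LineSpeed"))


commission(VendorAController(), "(* standardized source *)")
commission(VendorBController(), "(* standardized source *)")
```

The design point is the same one the PC analogy makes: if the contract is standardized, the device behind it becomes interchangeable, which is exactly why incumbents may resist it.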
Now you still have some of the same concerns/arguments. How does one supplier innovate and differentiate its product when the hardware becomes a commodity with no value add? How do you bridge the gap between PLCs, motion controllers, DCS systems, PACs, etc.? These are all valid points that the technology providers and industry would need to work through. And back to the analogy: even though the base hardware of every PC is basically the same, there is room for add-on hardware (sound cards, graphics boards, etc.) for specific applications and tasks.
My personal opinion is that industry will not accept the "PC concept" because of existing fears of "blue screen of death" operating systems, device-specific drivers that can cause instability, etc. And maybe I am not creative enough, but I don't see how you achieve what industry is asking for under the current system, or a simple derivative of that same core system, once you factor in the long-term costs identified above.