Saturday, September 15, 2007

Interview with Craig Mundie

Craig Mundie, Microsoft's technology chief, talks about everything from Vista's low sales numbers to cloud computing, competing with Google, the Linux desktop, and his SPOT watch in this long interview. No surprises so far.
  • Cloud computing: He believes that the world is not black and white and that people won't ditch their desktop clients anytime soon to migrate completely to software as a service. I think this makes sense. Microsoft is investing significantly in cloud computing via its "Live" initiative, but the strategy is to complement the on-premise model and achieve a client-server-service deployment model.
  • Vista: He admits that the number of Vista copies sold so far is a small fraction of Microsoft's overall customer base. He calls it a cycle of diffusion and exploitation, with Vista being in the diffusion phase, waiting to be exploited. This is a chicken-and-egg problem: there are not enough Vista consumers out there, so developers are not that excited, but consumers have little incentive to migrate to Vista unless the development community adopts it and adds value to it.
  • Google: This is my favorite: "Google's existence and success required Microsoft to have been successful previously to create the platform that allowed them to go on and connect people to their search servers." This is a twisted argument. He makes it clear that Microsoft's play is not only an infrastructure play but a combined infrastructure-and-client play to reach the consumers. He is betting on people needing a desktop and other clients to connect to whatever Google offers.
  • Office Open XML Standard: Not there yet, but he promises that it is not far off. He says, "There are a lot of people who have raised a great many issues which we don't think have a lot of practical merit, but serve the purpose of creating some anxiety during this process." This is a classic standards problem, and people are extra cautious when it comes to Microsoft. It is not about technology but about how you come clean and convince people that you are actually listening to them.

Friday, September 14, 2007

Design thinking and designers

The conversation with Brandon Schauer, design strategist at Adaptive Path, about design thinking is worth reading. Brandon talks about topics such as critical thinking and design thinking, design attitude versus decision attitude, and the importance of business fluency amongst designers.

I agree that for a business problem you do want to apply design thinking to explore as many alternatives as you can, but you also want to think critically through all the alternatives before you reach a solution, and keep your stakeholders informed about your decisions. Not only is business fluency critical to doing this, but a designer needs to have empathy for the stakeholders as well. Traditional ethnography techniques such as contextual inquiry can be used to understand users' goals and aspirations, but designers need to go a step further and understand their stakeholders better, and for that they need to acquire skills in the business and strategy areas.

Sunday, September 9, 2007

The eBay way to keep infrastructure architecture nimble

eBay has come a long way in infrastructure architecture, from a system that didn't have any database at all to the latest Web 2.0 platform that supports millions of concurrent listings. An interview with eBay's V.P. of systems and architecture, James Barrese, The eBay way, describes this journey well. I liked the summary of the post:

"Innovating for a community of our size and maintaining the reliability that's expected is challenging, to say the least. Our business and IT leaders understand that to build a platform strategy, we must continue to create more infrastructure, and separate the infrastructure from our applications so we can remain nimble as a business. Despite the complexity, it's critical that IT is transparent to our internal business customers and that we don't burden our business units or our 233 million registered users with worries about availability, reliability, scalability, and security. That has to be woven into our day-to-day process. And it's what the millions of customers who make their living on eBay every day are counting on us to do."

eBay's strategy of identifying pain points early on, solving those problems first, and keeping the infrastructure nimble enough to adapt to growth has paid off. eBay focused on an automated process to roll out weekly builds into the production system and to track down the code change that could have destabilized a certain set of features. The most difficult aspect of sustaining engineering is isolating the change that is causing an error; fixing the error once the root cause is known is relatively easy most of the time. eBay also embraces the fact that if you want to roll out changes quickly, limited QA efforts, automated or otherwise, are not going to guarantee that there won't be any errors. Anticipating errors and having a quick plan to fix them is a smart strategy.
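The post doesn't describe eBay's actual tooling, but the classic way to isolate a destabilizing change is a binary search over the ordered list of changes in a build. Here is a minimal Java sketch, where BuildTester and isStable are hypothetical stand-ins for deploying a candidate build and running a regression suite against it:

    public class ChangeBisector {

        // Hypothetical stand-in for deploying a build that contains all
        // changes up to and including index `lastChange` and testing it.
        interface BuildTester {
            boolean isStable(int lastChange);
        }

        // Binary search for the first change that broke the build, assuming
        // everything before `lo` is known good and `hi` is known bad.
        static int findBreakingChange(BuildTester tester, int lo, int hi) {
            while (lo < hi) {
                int mid = lo + (hi - lo) / 2;
                if (tester.isStable(mid)) {
                    lo = mid + 1;  // still stable: the bad change is later
                } else {
                    hi = mid;      // broken: the bad change is mid or earlier
                }
            }
            return lo;
        }
    }

Each probe halves the suspect range, so a weekly build with, say, 1,024 changes takes about ten test runs to pin down instead of hundreds.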

If you read the post closely, you will observe that all the efforts seem to be related to infrastructure architecture: high availability, change management, security, third-party APIs, concurrency, and so on. eBay did not get distracted by the Web 2.0 bandwagon early on and instead focused on a platform strategy to support its core business. This is a lesson many organizations could learn: be nimble, do what your business needs, and don't get distracted by disruptive changes; embrace them slowly instead. Users will forgive you if your web site doesn't have rounded corners and doesn't do AJAX, but they won't forgive you if they lost an auction because the site was too slow or unavailable for them to raise their bid.

One of the challenges eBay faced was the lack of good industry practices for similar requirements, since eBay was unique in the way it grew exponentially and had to keep changing its infrastructure based on what it believed was the right way to do it. eBay is still working on a grid infrastructure that could standardize parts of its infrastructure and service delivery platform architecture. This would certainly alleviate some of the pain of a proprietary infrastructure and could potentially become the de facto best practice for the entire industry for delivering the best on-demand user experience.

eBay kept it simple - a small list of trusted suppliers, infrastructure that can grow with the users, and a good set of third-party APIs and services to complete the ecosystem and empower users to get the maximum juice out of the platform. That's the eBay way!

Thursday, September 6, 2007

Are RDBMS obsolete?

Today Slashdot picked up a story on column-oriented databases. The story claims that the one-size-fits-all approach does not work well for current data warehousing requirements and that organizations should explore options beyond the legacy RDBMS. The post says, "Hence, my prediction is that column stores will take over the warehouse market over time, completely displacing row stores."

The fundamental assumption here is that data warehousing workloads are drastically different from OLTP ones and therefore have different storage, or I should say access, needs. What the post misses is that many modern OLTP applications require real-time analytics side by side and cannot really depend upon a separate data warehouse. Technologies such as in-memory databases and materialized views that run on top of an OLTP RDBMS make it feasible for an application provider to have just one hybrid system - OLTP or data warehousing, whatever you want to call it. This was obviously not the case a few years back, when you could get shot for proposing to run analytics on a production (OLTP) database. I do believe that there is a need for special-purpose databases with different architectures for very specific kinds of applications, but the RDBMS is far from obsolete. I heard similar arguments in the past when object-oriented database vendors claimed that the RDBMS would become obsolete once people switched over to object-oriented programming languages. Deja vu all over again!
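To make the row-versus-column argument concrete, here is a toy Java sketch (my illustration, not from the story). An aggregate over a single attribute scans one dense array in a columnar layout, while a row layout drags every other attribute of each record through the cache along with it; that locality, plus better compression of homogeneous columns, is essentially the column-store pitch for warehousing:

    // Toy illustration of row vs. column layout; the Order fields are made up.
    public class LayoutDemo {

        // Row layout: each record keeps all of its attributes together.
        static class OrderRow {
            long id;
            double amount;
            int region;
        }

        // Summing one attribute over a row layout walks every whole record.
        static double totalAmountRowStore(OrderRow[] orders) {
            double total = 0;
            for (OrderRow o : orders) {
                total += o.amount;  // id and region ride along in the cache
            }
            return total;
        }

        // Column layout: each attribute is stored in its own dense array.
        static class OrderColumns {
            long[] ids;
            double[] amounts;
            int[] regions;
        }

        // The same aggregate scans a single contiguous array.
        static double totalAmountColumnStore(OrderColumns orders) {
            double total = 0;
            for (double amount : orders.amounts) {
                total += amount;
            }
            return total;
        }
    }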

Tuesday, September 4, 2007

SugarCRM hops onto the multi-instance on-demand architecture bus

SugarCRM announced Sugar 5.0, which has a multi-instance on-demand architecture. This is the opposite of the multi-tenancy model, where many customers, if not all, share a single instance. Both models have their pros and cons, and adding the flexibility of an on-premise option complicates the equation a lot. But the fact is that many customers may not necessarily care what on-demand architecture a product is offered on, and either model can be given a marketing spin to meet customers' needs.

The multi-instance model resonates well with customers that are concerned about the privacy of their data. This model is very close to an on-premise model, except that the instance is managed by the vendor. It has all the upgrade and maintenance issues of any on-premise model, but a vendor can manage the slot more efficiently than a customer can and can also use a utility hardware model and data center virtualization to a certain extent. Customizations are easy to preserve in this kind of deployment, but there is a support downside because each instance is unique.

Multi-tenant architecture has the benefits of easy upgrade and maintenance, since there is only one logical instance to maintain. This instance is deployed using clusters at the database and mid-tier levels for load balancing and high availability. As you can imagine, it is critical that the architecture support "hot upgrade": take the instance down for scheduled or unscheduled downtime and all your customers are affected. The database vendors still struggle to provide a good high-availability solution to support hot upgrades. This also puts pressure on application architects to minimize the upgrade and maintenance windows.
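The data-layer contrast between the two models is easy to sketch in Java (my illustration, not SugarCRM's actual design; the accounts table and method names are made up). A multi-tenant schema scopes every query by a tenant id, while a multi-instance deployment gives each customer a dedicated database, so the query carries no tenant awareness at all:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class TenancyDemo {

        // Multi-tenant: customers share one schema, so every query
        // must be scoped by a tenant id.
        static int countAccounts(Connection sharedDb, long tenantId)
                throws SQLException {
            PreparedStatement ps = sharedDb.prepareStatement(
                    "SELECT COUNT(*) FROM accounts WHERE tenant_id = ?");
            ps.setLong(1, tenantId);
            ResultSet rs = ps.executeQuery();
            rs.next();
            int count = rs.getInt(1);
            ps.close();
            return count;
        }

        // Multi-instance: each customer has a dedicated database, so
        // the same question needs no tenant id at all.
        static int countAccounts(Connection customerDb) throws SQLException {
            PreparedStatement ps = customerDb.prepareStatement(
                    "SELECT COUNT(*) FROM accounts");
            ResultSet rs = ps.executeQuery();
            rs.next();
            int count = rs.getInt(1);
            ps.close();
            return count;
        }
    }

The shared-schema version is what makes hot upgrades and per-tenant data isolation such careful exercises; the dedicated version pushes that cost into per-instance upgrades and support instead.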

And this is just the tip of the iceberg. As you dig deeper into the deployment options, you are basically opening a can of worms.

Tuesday, August 28, 2007

Build, buy, or OpenSource

JP has written an interesting post, Build versus Buy versus Opensource. He argues that these are the three options IT has when it comes to software. I would change the options to build, acquire, or consume, and I would also argue that they are not mutually exclusive. Customers could build a system that runs on open source software, pay for commercial support for that open source software, and integrate it with proprietary, free, but non-open-source software. You get the point: it's intertwined, and most of the time customers combine the options. That's why I would say build when you have to, on top of what you acquired (free or open source), and consume (services) whenever you can to avoid both. There are obviously other factors IT considers when picking software and its deployment model, but I don't see the world as black and white as open source versus non-open-source. I do see plenty of opportunities to structure and sell software to minimize the "build" part on the IT side - personalization over customization.

I really liked what the V.P. and Chief Marketing Officer of GE shared about their China Olympic sponsorship efforts. He said, "Our number-one revelation is that customers don't necessarily organize their buying behavior the way we structure our business." I could not agree more, and this is applicable to software as well.

Saturday, August 11, 2007

SOA Security – A crystal ball?

Well, I hope not. Enterprise architecture should always consider the security aspects of various systems – authentication, authorization, audit trail, and non-repudiation. These fundamentals do not change when extended to SOA, and any SOA implementation should address these concerns. As this article suggests, there are multiple competing standards when it comes to SOA security, and I personally believe that is a good thing (at least in the beginning): competition keeps vendors on their toes to follow a standard that works well and satisfies customers' needs. Loose consensus works better for standards than rigid agreement, and CORBA is a good example of the latter. It took a lot of people many years to come up with that bloated standard, and what they eventually got was a superset of all possible features, one that addressed every OMG member's needs and satisfied their egos. The end result was a comprehensive but useless standard.

In the SOA security world the standards compete, but not all at the same level. If you are using WS-Federation you can still use SAML tokens, and if you are using SAML you can still use Liberty Alliance standards. All these standards will evolve, and eventually the one that works well and is easy to use will win. I understand that organizations have concerns about investing too much in a single identity management standard, but that does not justify not investing in any security standard at all.
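One hedge against betting on the wrong standard (my sketch, not from the article; all class and method names here are hypothetical) is to keep token validation behind a small interface, so a SAML or WS-Federation implementation can be swapped in later without touching the services themselves:

    // Hypothetical abstraction over competing identity standards.
    public interface SecurityTokenValidator {
        // Returns the authenticated subject, or throws if the token
        // is invalid. Each implementation wraps one concrete standard.
        String validate(byte[] rawToken);
    }

    // One implementation per standard; callers never name the standard.
    class SamlTokenValidator implements SecurityTokenValidator {
        public String validate(byte[] rawToken) {
            // ... parse and verify a SAML assertion here ...
            throw new UnsupportedOperationException("sketch only");
        }
    }

    class WsFederationTokenValidator implements SecurityTokenValidator {
        public String validate(byte[] rawToken) {
            // ... verify a token obtained via WS-Federation here ...
            throw new UnsupportedOperationException("sketch only");
        }
    }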

Companies are hard-pressed to open up their services to their partners to stay relevant in this competitive market. Don't listen to your IT department if they use the security card to scare you off your SOA efforts; instead, work with them and prototype a few simple ad-hoc federation solutions before venturing full-throttle into hub-and-spoke or complete identity federation solutions. This is similar to a kid learning how to ride a bike: use the training wheels, get rid of your fear, and once you understand how security works, drop the training wheels and go for a full-fledged solution. SOA security should not be a crystal ball; do your homework, follow your SOA governance and decision-making framework, and most importantly have faith in your decisions – you will be fine.

Tuesday, July 24, 2007

Open standards and closed code

IBM is opening up a small part of its patent portfolio to drive SOA adoption. The article has a quote: "This is telling people go for it and code using these open standards." What exactly is open here? If it is open, why does IBM have patents on it? I haven't seen the patents (you know the lawyers actually ask you not to do a patent search!), but it seems odd to hold patents on open standards, make them available to developers, and take credit for doing so. The article also says, "I'm not sure how much developers worry about this stuff anyway" - right on. This is absolutely true. This is a pure PR play, and it demonstrates some of the issues with our current patent system and the patent-stockpiling tactics that organizations employ. The actual impact of these patents on SOA adoption in general is questionable.

Monday, July 23, 2007

Innovation is like Paris Hilton

Innovation is sort of like Paris Hilton: she's everywhere and nobody really knows why. This is what Krisztina Holly, a serial entrepreneur and engineer, tells us in a plea to preserve the true meaning of innovation. I liked her definition of innovation: "true innovation is the process of translating new ideas into tangible societal impact." I couldn't agree more. Innovation is not just about a product or a process, and it should certainly not be confused with invention. I am not sure whether innovation has become a buzzword yet, but to me it is nothing but common sense.

It will be all about criteria and not results

I haven't seen the search results user interface change much in the past few years, and I don't expect significant changes in the coming years either. Jakob Nielsen talks about what the search results interface might look like in 2010 in a recent interview. I don't like to predict what will happen in 2010, but I do like to spot trends and identify opportunities to improve the user experience in general and search semantics in particular. I firmly believe that search result relevancy will keep getting better, and we will certainly see more heuristics and machine learning used to personalize results based on the user's needs and, more importantly, to understand the user's intentions in the moment. Search engine improvements are likely to shift from pure indexing science to better understanding of the search criteria, and there are plenty of opportunities in this area; the "Did you mean this?" correction is just the beginning, and psycholinguistics can contribute to improving relevancy here. Having said that, I do believe there is plenty of room to improve the search/results interface and interaction model, and companies like ask.com are gearing their efforts in this direction. Semantic search engines haven't been that successful so far, but this is another area for improvement in the overall search world.

The interview also brings up the fact that people are generally lazy, but I believe that, given the right incentive, people might be willing to express themselves better; spelling and grammar correction is a good example. An overly enthusiastic interface that asks a user for a lot of information is less likely to succeed, though. Jakob also talks about perceptual psychology - showing the images that users expect to see and actually like - and how this approach could degenerate into banner blindness. Multimedia search results are a big issue going forward as we see more and more user-generated multimedia content. A picture is worth a thousand words and a video is worth a million; images are easier to look at and comprehend than pure text, but there are challenges in selecting the right images, and the same is true for videos. Video search is an interesting problem, and there are many evolving techniques to address it, such as meta information and audio scanning. We will see a lot of progress in this direction.

Friday, July 6, 2007

Innovation and design

"How can I do Apple"? I liked Cordell Ratzlaff's quotes in this Business Week article. "The most successful products I was a part of at Apple started with only a few people with no formal structure or hierarchy and little corporate oversight." Cordell managed Apple's Human Interface group in 1990 and now he is a director of User-Centered design at Cisco. He also says "Democracy works well for running a country and choosing a prom queen. The best product designs, however, come from someone with a singular strong vision and the fortitude to fend off everything and everyone that would compromise it." Yes, we all know and I agree that Steve Jobs is the king. To "do an Apple" you can either hire Steve Jobs or you ask your C-level executives to do what he does. Apple does not sell products, it sells user experience and apparently they are doing a good job marketing and selling this experience. We all can learn from Apple and understand the connection between innovation and design.

Apple has made mistakes in the past that resulted in some failures. Many people have blamed Apple for causing cognitive dissonance that resulted in bad design, but Apple at least believes in design and gets it. Design-led innovation is not just about interaction, sensory, or information design; it is about design thinking. Apple deserves a lot of credit for giving design a first-class seat in the organization, and it enjoys the halo effect, a cognitive bias, to a certain extent. The Business Week article talks about designers sharing the same philosophy and thinking long after they have left Apple, and this is a good thing as long as those designers don't introduce self-referential design. You want all the people in your organization to believe in and practice design-led innovation, but you don't really want to copy Apple when you "do an Apple".

Tuesday, July 3, 2007

SOA ROI - interoperability and integration

If you are a SOA-enabled enterprise application vendor trying to sell SOA to your customers, you quickly realize that very few customers are interested in buying SOA by itself. Many customers consider the SOA investment non-differentiating and compare it with compliance – you have to have it, and there is no direct ROI. A vendor can offer ROI if it has the right integration and interoperability strategy. For customers it is all about lowering the TCO of the overall IT investment, not the TCO of individual applications. SOA-enabled applications with standardized, flexible, and interoperable interfaces work toward that lower TCO and give customers a sustainable competitive advantage. Generally speaking, customers are not interested in the "integration governance" of the application provider as long as the applications are integrated out of the box and have the necessary services to support inbound and outbound integration with the customer's other software, in support of the customer's vision of a true enterprise SOA.


It has long been debated what makes a good integration strategy for SOA-enabled products. Organizations debate whether to use the same service interfaces for inter-application and intra-application integration. Intra-application integration has major challenges, especially in large organizations: different stakeholders and owners need to work together to make sure that the applications are integrated out of the box. It sounds obvious, but it is not easy. In most cases it is a trade-off between being able to "eat your own dog food" by using the published interfaces and optimizing performance by compromising the abstraction with a different contract than the inter-application one. There are a few hybrid approaches that fall between these two alternatives, but it is always a difficult choice. Most customers do not pay much attention to the intra-application strategy, but it is still in a vendor's best interest to promote, practice, and advocate service-based composition over ad-hoc integration. There are many ways to fine-tune runtime performance if this approach does result in degradation.


The other critical factor for ROI is interoperability. Internal service enablement doesn't necessarily have to be implemented as web services, but there is a lot of value in providing standardized service endpoints that are essentially web services with a published WSDL and WS-I profile compliance. Interoperability helps customers with their integration efforts and establishes trust and credibility in the vendor's offerings. I have also seen customers associate interoperability with transparency. Not all standards in the web services area have matured, which makes it difficult for a vendor to comply with a particular set of standards, but at a minimum vendors can decide to follow the best practices and the standards that have matured.
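As a minimal sketch of what such a standardized endpoint looks like in Java, here is a made-up service using the standard JAX-WS annotations (javax.jws, part of Java EE 5); the runtime generates and publishes the WSDL that customers integrate against:

    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Hypothetical service; @WebService tells the JAX-WS runtime to
    // expose it as a SOAP endpoint with a generated WSDL.
    @WebService
    public class OrderStatusService {

        @WebMethod
        public String getOrderStatus(String orderId) {
            // Real logic would look the order up; this is a sketch.
            return "SHIPPED";
        }

        public static void main(String[] args) {
            // Publishes the endpoint; the WSDL becomes available at
            // http://localhost:8080/orders?wsdl
            Endpoint.publish("http://localhost:8080/orders",
                             new OrderStatusService());
        }
    }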

Sunday, June 17, 2007

SOA Governance - strategic or tactical?

It is both. SOA governance is not much different from any other kind of governance in an organization. Successful SOA governance cannot be achieved without a people framework. Socioeconomic factors such as organizational dynamics (I think that is a good synonym for politics) drive the SOA strategy of an organization. This is especially true for IT organizations that are on the supply side of SOA for their product offerings. Many people miss the fact that governance efforts are not limited to the internal employees of an organization but typically extend to customers and partners. Many organizations co-innovate with customers and partners, and those partners and customers significantly influence the organization's SOA governance policies.

Many architects view SOA governance as a technical challenge, but I beg to differ. Strategic SOA governance is not just a technical problem; it is a business and process problem with socioeconomic implications. I already talked about the people part. As for SOA economics, there is no good way to calculate ROI based on SOA alone. A few people have actually tried, and I am not sure it is the right model: the number of services, the number of reusable services, or any other QoS measure for SOA does not add up to an economic metric. SOA is so intertwined with the business that it is your guess against mine when extracting a monetary value out of it. Having said that, people do work hard on making a business case for their organizations, since SOA is hard to sell.

The strategic-to-tactical transformation of SOA is not easy. This is where people argue over reference architectures, policies, and so on. These are time-consuming, messy efforts that include many technical, domain, and functional discussions. A cross-functional team works well for this kind of governance problem, since it is critical to have a holistic (horizontal) view of SOA with enough help from experts in the various (vertical) areas. SOA architects have to have good people and project management skills since, as I already mentioned, governance is not just a technical problem. If you are a technical architect, you end up with a diagram like this. This diagram does not help anyone, since it mixes a lot of low-level details with high-level ones and the information is difficult to consume. Communicating the architecture is one of the hardest challenges for an architect, and it becomes even harder when you are describing strategic SOA governance.

Monday, June 11, 2007

Visual Design versus Interaction Design

I have seen and participated in this debate many times: which design should we tackle first, visual or interaction? There is no one answer, but here are some thoughts. What we really need is a good framework in place during the design phase, before we work on the details of either design. The actual design cannot be accomplished until we have idioms and metaphors (for interaction design) and brand and visual theme (for visual design) fleshed out and agreed upon.

One designer describes visual design as skinning the wireframes, to prevent the end of wireframes and hence the death of interaction design. This is a bit extreme, and many visual designers won't be thrilled with that opinion. Wireframes are good tools to document interactions and to get quick validation via cognitive walkthroughs. Visual design is horizontal: it should be consistent across all parts of an application so that they have the same visual appeal. Interaction design is vertical and can describe some very specific interaction scenarios for each set of pages.

Users will, however, notice the visual design first and may not even have direct appreciation for good interaction design until they figure out they can accomplish everything in an application without putting in a lot of thought. Visual design is easier to demonstrate than to document, and that's why people jump to Photoshop, since the transformation of visual artifacts from Photoshop to an actual web-based application is not that difficult. Interaction design, on the other hand, deals with data, the user's intents and actions, feedback, and so on. There is no one-to-one relationship between wireframes and the actual screens, but detailed wireframes help developers establish a good understanding of the interaction model and the designer's expectations. We highly encourage designers to co-locate with developers, but when that's not possible, especially for remote teams, documenting the design is critical.

I am not trying to downplay the role of visual design - it is the "look" part of look and feel, and it's not just about skinning wireframes. Visual designs need their frameworks too, such as CSS, typography, symmetry, and balance, and there are many ways to divide a project's time between interaction and visual design - they are not alternatives at the same level. I don't want to stereotype developers, but most developers think the only design they need is visual design, not interaction design, since the discipline of interaction design is less known in the developer community. Alan Cooper's work, especially "The Inmates Are Running the Asylum", describes this conflict in detail. Developers follow the "system model", as described by Don Norman, which is essentially implementation-centric design. Designers should make every possible effort to document and emphasize the interaction design to achieve the overall user experience. Visual design is, well, visual, and developers are more likely to embrace it or even ask for it.

Saturday, June 9, 2007

Apple and Google alliance

A few bloggers have picked up this Wired post speculating that Steve Jobs might announce an alliance between Apple and Google. Interesting - that's all I can say. The post has a quote from Eric Schmidt saying that Apple actually gets the design but does not have the necessary computing infrastructure. I agree. Apple is a company that delivers innovation with a heavy focus on design (of all kinds), whereas Google has brought in simplicity and agility by nailing a few very simple problems with state-of-the-art technology innovation. Google certainly does not have bad design, but it has a long way to go and plenty of opportunities when it comes to interaction or visual (sensory) design. I was told that the person who leads the user experience efforts at Google has "office hours" during which developers can (and do) drop in and expect that person to solve design problems. This was quite a challenge for that person, since the design process does not quite work that way.

Design, on the other hand, is one of the biggest strengths that Apple has, and given the computing resources Apple could do miracles. I don't think it is about, or limited to, .Mac. iTunes could have a great story if offered on Google's cloud, since it could gain great sharing and preview potential. Apple did have issues with iTunes after last Christmas, when people started redeeming their iTunes gift cards to buy music and the store could not handle the concurrent users; it was a true load test, and iTunes did not do well. Imagine the same scenario with iTunes running on Google's cloud and tightly integrated with AdSense. YouTube is a great platform for video syndication with a community angle; Apple does have a community, but its syndication is limited to downloading and has no sharing semantics.

Having said this, despite these product synergies, an alliance cannot be successful unless it offers tangible business value. But hey, this is speculation, isn't it?

Friday, June 1, 2007

Moore's law for software

Software design has a strange relationship with computing resources: when resources are scarce, it is difficult to design for them, and when they are abundant, it is a challenge to utilize them. It sounds odd to ask designers and developers to follow a Moore's law for software, but it is true, and it is happening.

Immense computing resources have opened up a lot of opportunities for designers and developers to build agile, highly interactive web interfaces by tapping into the computing cloud, yet effective resource utilization by software lags far behind the fast-growing supply of resources. Google has successfully demonstrated the link between a humongous cloud infrastructure and applications that use those resources effectively. Gmail and Google Maps are examples of agile, highly interactive interfaces that consume heavy resources, and Google's MapReduce is an example of effective utilization of computing resources by designing search around heavy parallelization. One of the challenges designers face these days is to construct an application, from an interaction perspective, in a way that actually uses the available resources to provide a better user experience. Traditionally, performance tuning has been about making software run faster without adding computing resources; designers and developers now face the opposite challenge of putting abundant resources to work. Cloud computing will become more and more relevant as organizations catch up with Web 2.0 and Enterprise 2.0. Google, Yahoo, Salesforce, and Microsoft are betting on huge infrastructures that can deliver the juice their applications require. Cloud computing is not just about hardware - it is about the scale of computing and the infrastructure required to reach that scale: physical location, energy and cooling requirements, dark fiber, and so on.
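To illustrate the map-and-reduce idea at toy scale (my sketch; Google's actual implementation is nothing this simple), a word count can map input chunks to partial counts in parallel and then merge them serially:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class WordCount {

        // "Map" step: turn one chunk of input into per-word counts.
        static Map<String, Integer> mapChunk(String chunk) {
            Map<String, Integer> counts = new HashMap<>();
            for (String word : chunk.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    counts.merge(word, 1, Integer::sum);
                }
            }
            return counts;
        }

        // "Reduce" step: fold one set of partial counts into the total.
        static void reduce(Map<String, Integer> total, Map<String, Integer> partial) {
            partial.forEach((word, n) -> total.merge(word, n, Integer::sum));
        }

        public static void main(String[] args) throws Exception {
            List<String> chunks = Arrays.asList("the cloud scales", "the cloud computes");

            // Map phase runs in parallel across a thread pool.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<Future<Map<String, Integer>>> partials = new ArrayList<>();
            for (String chunk : chunks) {
                partials.add(pool.submit(() -> mapChunk(chunk)));
            }

            // Reduce phase merges the partial results serially.
            Map<String, Integer> total = new HashMap<>();
            for (Future<Map<String, Integer>> f : partials) {
                reduce(total, f.get());
            }
            pool.shutdown();

            System.out.println(total); // counts: the=2, cloud=2, scales=1, computes=1
        }
    }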

Not every piece of code in software can be parallelized; developers hit serial tasks in the code flow under many dynamic conditions. Semantic search is a classic example that struggles to use parallel computing resources, since the dynamic nature of many semantic search engines and their natural language processing forces certain tasks to be serialized. Cognitive algorithms are not the same as statistical or relevancy algorithms, and they require a radically different design approach to utilize the available resources effectively.
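The post doesn't quantify this limit, but Amdahl's law does: if a fraction p of the work can be parallelized across n processors, the best possible speedup is 1 / ((1 - p) + p/n). As a worked example, with p = 0.9 and n = 16 the speedup is only 1 / (0.1 + 0.9/16) = 6.4, and no number of processors can push it past 1 / (1 - p) = 10. The serial tasks, not the hardware budget, set the ceiling.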

Intel has been pushing the industry to improve performance on its multi-core CPUs. Microsoft recently announced an initiative to redesign the next Windows for multiple cores. The design problem is not just about one, two, or three cores: resources are going to increase at a much faster pace, and software designers and developers have been slow to react to this massive computing. Computing in a cloud requires a completely different approach to software design, and there are some great opportunities to innovate around it.

Sunday, May 13, 2007

Hello World from JavaOne 2007!

Just came back from JavaOne 2007. The experience was as good as I expected it to be - well, sort of. The energy and excitement have been going down at JavaOne year after year; this is my seventh JavaOne and I could feel it. Not too sure what to make of that, but there are a lot of missed opportunities on Sun's side. On the positive side, the sessions were good and the live demos did work! As promised, Sun announced OpenJDK under the GPLv2 license. Finally the open source community will get its hands on Java. Sun is going to maintain the commercial (and free) version of the JDK, which should allow organizations to continue embedding it without worrying about GPL issues around derivative work. I attended a session by Eben Moglen, and he was quite pleased with this announcement. He was also optimistic that GPLv3 will be aligned with the Apache license when it is finalized. I really hope that happens.


Apparently it was a crime if a speaker did not talk about Ajax. The Ajax discussion was hot last year, but this year it was almost mandatory for every session. SOA was not a big hit this year. I liked Sun's approach to embracing the popularity of dynamic languages and scripting: instead of inventing something of its own, Sun pushed jMaki as a solution that wraps all the popular widgets and provides nice abstraction, compatibility, and inter-widget communication. The solution fits well with JSF. It was clear that I am not the only one who hated JSR 168; a couple of speakers expressed their frustration with it and hoped that JSR 286 would solve some of the problems. "Convention over configuration" was a popular message. It took this many years for the vendors to figure out that developers want something up and running when they install a toolkit; they don't want to go through configuration hell just to get a few simple things done. Grails and Seam are good examples of "we get it". There were quite a few developers at JavaOne who also code in .NET and PHP; Zend had a session where they demoed PHP-to-Java integration, and there was a session on .NET interoperability as well. Sun pushed JSF as a flexible yet powerful application development framework. Now that JSF is officially part of Java EE, I hope it gets more tooling support, a scalable runtime, and open source faces.