Tuesday, May 18, 2010

On Model-Based Modeling Builds...

In principle, I think most people agree that builds should be a shared responsibility, i.e., everyone should be equally able to do builds and the effort to do so should be equally distributed. Unfortunately, equal effort can sometimes result in nobody doing anything at all, because hardly anybody wants to! Trying to do the right thing in this context has launched many an accidental career in release engineering. So the best we can do is make the build effort as efficient as possible, giving everyone less reason to complain.

This has become essential for the Modeling project, which now has roughly sixty active sub-projects, many of which are one or two committer efforts with no way to justify a full-time release engineer. With that in mind, the Modeling PMC has recently decided to standardize on one build engine - Buckminster (often affectionately referred to as "Bucky") - for all of its projects, starting with the Helios release.

Why standardize? The obvious reason is to spread the joy of supporting build infrastructure across multiple projects. Less obvious, but no less important, is our not-so-distant goal of having a single build chain that can support true continuous integration for the entire Modeling stack, which should be much simpler if all of the builds are using the same technology.

Why Buckminster? The people and technology were familiar, so that was obviously a factor. But we tried to make as objective a decision as possible. Key considerations were the following:

  • CDO and Teneo, having independently Buckminsterized last year, were enthusiastic supporters and made a strong case for the benefits.
  • Unlike the alternatives, Buckminster is model-driven (it uses EMF). This makes it a no-brainer for us modeling zealots.
  • We wanted to be able to reuse existing metadata, which is an advantage that Bucky has over Maven alternatives.
  • Having a build that runs the same way in a developer workspace as on the server makes it much more efficient to spread build responsibilities across the teams.
  • Adopting Buckminster gets us a step closer to using b3 (Buckminster will soon be supported as a build execution engine for b3), which we think is the future.
  • Last, but not least, someone (i.e., Cloudsmith) stepped up to do the work!

Upon closer inspection, Buckminster had a few holes that needed filling. Support for automated build identifier generation/insertion, CVS tagging, and dependency version range management was non-negotiable for build slackers like Ed Merks (not to mention the rest of us mere mortals), and automated build promotion via Hudson was also highly desirable. So we rushed these changes through in time for Helios.

The effort of migrating from various older build systems (PDE Build, Athena, and variants) was not inconsequential. However, it ended up being relatively painless, mostly because Michal Ruzicka (Buckminster committer and my colleague at Cloudsmith) did pretty much all the work. Michal was able to Buckminsterize most of the key Modeling projects in roughly a month of effort, which was pretty amazing, all things considered. Thanks again, Michal!

The first Buckminster build of EMF went live with M7 two weeks ago, and many of the other Modeling projects will soon follow. We'll be cutting a few key projects over as Helios heads toward completion. A number of others have chosen to postpone switching until just after the Helios release.

We'll send out periodic updates as the individual projects adopt the new build system over the coming weeks, so stay tuned for more details. In the meantime, if you want to hear more about what we're doing (and how), let us know!

Monday, May 10, 2010

On Google I/O...

I'll be out at Google I/O on May 19 and 20, talking up the work we've been doing with EMF on GWT and just generally learning more about all the great Google technologies we depend on.

Regarding EMF support for the Google Web Toolkit, we hope to have a working implementation of full modeling support for GWT applications before too long. Ed has been hard at work on this, and we'll soon have optimized object serialization between client and server, as well as a generic GWT editor for EMF-based models. We think this work will be really useful for GWT development once it's done.

With respect to other Google technologies, Cloudsmith is particularly interested in App Engine and BigTable; we're using them now but still coming up the learning curve. Next after that is Wave, which we'd like to use but doesn't seem quite ready for prime time. We're hoping/expecting to see a renewed Wave commitment and inspirational roadmap from Google at I/O next week.

Planning on being there? Let me know if you'd like to meet up!

Thursday, May 6, 2010

On Where We're Using EMF...

Where are you using the Eclipse Modeling Framework (EMF)? I've blogged recently about how perhaps the "E" in EMF ought to stand for Extensibility. More and more, I wonder whether maybe it should stand for "Everywhere" instead. While many feel a burning need to bring the Web to Eclipse, at Cloudsmith we see things a little differently. We see big potential in leveraging the great technologies at Eclipse in new and interesting ways (and places!), one of which is to bring Eclipse (and, more specifically, EMF) to the Web.

When EMF made its debut at Eclipse some eight years ago, it was a framework for developing IDE-like applications. Then, it followed the lead of the Eclipse platform and expanded its reach to support Rich Client Platform (RCP) applications. Earlier in the Helios release cycle, we added support for the Rich Ajax Platform (RAP), which - thanks to the RAP folks' great work, particularly support for "single sourcing" an application - can almost be treated as a variant of RCP.

With Helios M7, however, EMF moves past the boundaries of the Eclipse platform, and desktop applications in general, by adding support for the Google Web Toolkit (GWT) as a new application runtime. We've done this by formalizing the EMF code generator's notion of a "runtime platform" through an enumeration. Platforms that previously were only implicitly supported - 'IDE', 'RCP', and 'RAP' - are now explicit enumeration literals. And now we've added a new literal for 'GWT'.
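
If you'd rather flip that switch programmatically than through the generator model editor's properties view, here's a rough sketch of what it might look like in Java. It's a sketch under stated assumptions: I'm assuming the new enumeration is exposed on the GenModel API as GenRuntimePlatform, that the code runs inside a plug-in (so the genmodel resource factories are already registered), and the workspace path is just a placeholder.

    import java.io.IOException;

    import org.eclipse.emf.codegen.ecore.genmodel.GenModel;
    import org.eclipse.emf.codegen.ecore.genmodel.GenRuntimePlatform;
    import org.eclipse.emf.common.util.URI;
    import org.eclipse.emf.ecore.resource.Resource;
    import org.eclipse.emf.ecore.resource.ResourceSet;
    import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;

    public class TargetGwtSketch {
      public static void main(String[] args) throws IOException {
        // Load an existing generator model (the path is a placeholder).
        ResourceSet resourceSet = new ResourceSetImpl();
        Resource resource = resourceSet.getResource(
            URI.createPlatformResourceURI("/org.example.library/model/library.genmodel", true), true);
        GenModel genModel = (GenModel) resource.getContents().get(0);

        // Target the new GWT runtime platform instead of the (implicit) IDE default,
        // save, and then regenerate the model and edit code as usual.
        genModel.setRuntimePlatform(GenRuntimePlatform.GWT);
        resource.save(null);
      }
    }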

So, what does this mean? Well, depending on which runtime platform you choose in your generator model (and which platform you're targeting), you'll get a different result when you generate your code. For IDE and RCP, the only difference is in the editor (since RCP comes with certain limiting assumptions). With RAP, your edit and editor code isn't all that different from RCP, except that you'll have the ability to run against alternative versions of EMF's runtime UI plug-ins, which have been customized for RAP.

In the case of GWT, however, when you generate your model and edit code (support for editor and tests will come over the next few months), you'll be targeting an entirely different EMF runtime, tailored to be translatable into JavaScript modules and to leverage the capabilities of GWT (RPC serialization, localized message resources, image bundles, etc.).

Ed and I will have more to say about the technical details of this new runtime over the coming weeks. In the meantime, you can refer to the New and Noteworthy page for Helios to help you get started with developing EMF-based applications for GWT!

Wednesday, May 5, 2010

On Looking Good...

Appearance isn't everything, but it certainly goes a long way, especially for things that are inherently visual. In light of this, the MDT Papyrus project is about to provide some new eye candy for its users. The project already has an impressive logo, but now they've made some fresh new icons for UML model element types. The Papyrus project lead, Sébastien Gérard, created a clever mosaic to show them off - see below. Let us know what you think!

Monday, April 19, 2010

On Noteworthy Pairs...

Phew. I've finally caught up after the frenzied activity of the past few weeks and posted a New and Noteworthy entry for the M6 milestone of EMF. If you attended the EMF tutorial or RAP BoF at EclipseCon, or perhaps saw Ben's blog post (gotta love the title of that one!), you're probably already aware of a great new enhancement that was added to EMF, thanks to generous sponsorship from another one of my clients.


That's right, EMF now supports Rich Ajax Platform (RAP) out of the box. This means that you can now generate a sample "single-sourced" application that can be run against either an RCP (Rich Client Platform) or a RAP runtime target. Details can be found on the New and Noteworthy page for the Helios release of EMF. Thanks again to Ed and Ben for their help in making this happen in time for M6!

Thursday, April 15, 2010

On Architecture...

As you may have already gathered from Kim's blog (yes, we're both from the same province and home town, and no, the tidal bore is not a pig), I've recently been appointed to the Eclipse Architecture Council. It's an honor to be in the company of such great technical leaders, especially my Cloudsmith colleagues, two of whom (Thomas and Ed) are on the council as well.

I've actually been spending a lot of time looking at architecture (or lack thereof) of late, primarily within the Modeling project. As I've mentioned before, vision is one of the key contributors to a successful project, and a guiding architecture is an important part of such a vision. With the number of Modeling projects growing at an alarming rate (60+ and counting), it's going to be increasingly important to "bring order to the chaos", or risk the loss of potential consumers and contributors due to frustration, confusion, or both. Initiatives like Amalgam and the recently proposed Sphinx project certainly help, but there's a lot more that could be done.

Speaking of Sphinx, Stephan Eberle (one of the proposed project leads) and I presented a talk at EclipseCon entitled "The Twenty Modeling Things", the slides for which can be viewed at Slideshare or via the EclipseCon session page. The basic idea of the presentation was to propose a set of essential services that might one day form the basis for an integrated modeling workbench at Eclipse. Which "things" would you have included? Can you think of other services which ought to be on our list?

Tuesday, April 6, 2010

On Looking Up...

It's been a while since I last blogged, and much has happened in the meantime, including the completion of contracts with two different clients (more about those later), the M6 milestone of the Helios release, EclipseCon 2010, a vacation in the Dominican Republic, and, most recently, a case of Scarlet Fever (what a way to put a damper on a vacation!). It's funny, though, how much clearer you can see when your head is in the clouds.

As I've stated previously, I've been taking my time to carefully decide what my next venture would be. Well, I'm happy to say that, as of this week, I'm now working on a full-time basis with Cloudsmith Inc., as lead of product development. If you were at EclipseCon, you'll no doubt have heard of some of the great things Cloudsmith is doing. In case you haven't, you definitely will over the coming weeks.

Tuesday, February 23, 2010

On Those Sexy Models...

This is the moment you all (OK, maybe not all) have been waiting for! Step aside, NetBeans girls! Get ready for the new sensation! It's time for members of the Eclipse Modeling team to take their rightful place among the unforgettable images of EclipseCon 2010. And this is your chance to help make it happen!

We've done the bobble head thing. Some suggested that this year we should do the Barbie thing. But we've got something even better in mind.

The Challenge

Take images of prominent committers (see the attachment to bug 303637) from Modeling projects and transform them into the models we know they're capable of being! For example, you could take their undeniably handsome heads and superimpose them on otherwise "superior" bodies.

The Rules

  • Each entry must be in the form of an attachment to bug 303637 (be sure to choose 'BigFile').

  • Each entry must consist of altered versions of all ten original images (already attached to the bug).

  • The altered images must be in good taste - give your peers the respect they deserve.

  • Entries must be submitted no later than March 17, 2010.

The Reward

Our esteemed judges (the infamous Ed Merks and Chris Aniszczyk) will decide on a winning image for each of the ten "models", to be revealed during the "Modeling Project Runway 2010" talk at EclipseCon 2010. Winners (those who submitted one or more winning images) will be presented with some great prizes (e.g., Eclipse schwag) at the end of the runway session (or we'll mail it to you if you're not there... but we know you will be!).

Come on, Eclipse, let's show everyone how creative we can be!


Monday, February 8, 2010

On the Catwalk...

Yeah, on the catwalk. We'll do our little turn on the catwalk. We've got models, you know what I mean, and we'll do our little turn on the catwalk.

Speaking of evangelism, we're trying something a little different this year to promote modeling at EclipseCon. We're holding one session, "Modeling Project Runway 2010", where we'll be showing off new and noteworthy enhancements from ten of your favorite modeling projects. We've got a great lineup of presenters... lined up:


To add to the fun, we'll also be holding a photo contest over the coming weeks, to see who can best transform these fine gentlemen into visions of beauty befitting a proper modeling runway. Stay tuned for your chance to shape the face of modeling at Eclipse!

Friday, January 8, 2010

On The Future of BPMN (Too)...

The future of BPMN (once Business Process Modeling Notation, now Business Process Model and Notation) is finally here. Or is it? After much politicking, design by committee, and intellectual debate, the OMG (Object Management Group) finally adopted the long-anticipated BPMN 2.0 specification last June (I know, old news). For those that are unfamiliar with the OMG Technology Adoption Process, when a specification is "adopted", it enters a finalization phase, during which vendors are expected to implement the specification and work together to iron out any of its kinks. Having gone through that process with the UML2 project at Eclipse, I have first-hand experience with the challenges of balancing specification finalization against the realities of shipping a product ... But suffice it to say that the age of tooling support for BPMN 2.0 is at hand.

Eclipse has actually had a decent BPMN editor for a few years now, courtesy of the SOA Tools Platform project. It's good enough that I know of at least two vendors that considered scrapping their internal efforts in favor of adopting the open source tooling. However, the BPMN Modeler was never based on a standard metamodel, for various reasons, among them being the fact that, well, the OMG didn't really have one (unless you count BPDM, but that's a whole other ball of wax) - until now. When I proposed the BPMN2 subproject of MDT back in late 2007, I was pleased to receive interest from the BPMN Modeler team in adopting the metamodel implementation, once available. Fast forward two years beyond project creation and six months beyond specification adoption and, unfortunately, as a result of changing priorities among the project's original participating companies (what else is new?) - none of which is participating in the project any more - we still don't have a metamodel implementation. In fact, I was on the verge of contemplating a termination review for the project when, unprompted, Intalio stepped forward with a willingness and ability to contribute the metamodel implementation themselves! So, I'm pleased to say that, in the not too distant future, we'll have an open source implementation of the BPMN 2.0 metamodel at Eclipse!

So, is that the end of the story? Well, not quite. To their credit, the OMG has started looking at the long-standing issue of overlap between UML and BPMN (not to mention its other metamodels) and the general lack of architectural cohesion between its various modeling specifications (often referred to unaffectionately as the "metamuddle"). In fact, the OMG Architecture Board recently chartered the “Architecture Ecosystem AB SIG” (or “AE SIG” for short), which is being chaired by Cory Casanave (Model Driven Solutions) and Jim Amsden (IBM). The mission of the AE SIG is to work with OMG domain and platform task forces, other relevant OMG SIGs (special interest groups), external entities, and related industry groups to facilitate the creation of a common architectural ecosystem (sound familiar?). This ecosystem will support the creation, analysis, integration, and exchange of information between modeling languages across different domains and viewpoints, from differing authorities. In particular, the need for business and enterprise level architectural viewpoints must be better integrated with the technical viewpoints that define systems to address enterprise needs. The AE SIG will focus on the capability to define and integrate languages and models in various viewpoints and support other groups that will focus on the specific viewpoints required for their specific domains. A set of viewpoints, supporting models, and supporting technologies will comprise the ecosystem.

Recently, Cory issued a poll to prospective members of the AE SIG on the topic of integrating BPMN and UML. Details of the poll, reproduced here with permission from Cory (thanks!), are below.

The Question

There has been substantial discussion on the needs and issues with integrating UML and BPMN. Using this as a "test case" we would like to take a poll on what would be the best way for this integration to happen, strategically. In other words, if you could design this from the ground up, what would you do?

The Options

[1] They remain separate standards. There is a BPMN standard with metamodel and a UML standard with metamodel. These standards are separate, intended for separate communities and tools. There is no relationship between these standards. This is, essentially, the current condition.

[2] BPMN is a UML profile with notation. The separate metamodel for BPMN is deprecated and the formal specification of the BPMN notation is as a profile of UML, using the BPMN notation. The result looks and feels like BPMN as it is defined today, but it is defined “on top of” UML. This option would include any adjustments in the UML metamodel required to make such a profile well-formed. Another interpretation of this option could be that the BPMN metamodel is retained and there is also a UML profile for BPMN, presumably with a mapping between the two. However, the profile of BPMN should look the same in either case. (The latter may be the interpretation most people who voted for the option intended, so one should interpret this option to be silent on the question of retaining the separate BPMN metamodel or deprecating it.)

[3] Create a unified model encompassing both. A MOF or MOF-like metamodel is created that is the superset of the capabilities of UML and BPMN as unified conceptual system. This model would have the semantics of process layered in such a way that redundant concepts have identical metaclasses (perhaps with different notations) and similar concepts have like capabilities factored into common superclasses. The nature of this unified model would be much like the UML and BPMN models today, but including the concepts and specifications of both notations.

[4] Semantic models with UML and BPMN viewpoints. This option pre-supposes more advanced meta modeling capabilities where an underlying semantic model (or set of models) is defined and then various “viewpoints” on the semantic model provides the specialization of those semantics for the needs of a particular kind of stakeholder. In this option both UML notations and BPMN notations share the same or related underlying semantic model and have an additional specification that specifies the specific structures required for BPMN and UML viewpoints. The difference between this and the prior option is that the viewpoints and semantic models are more loosely coupled. The models are constructed with the expectation that multiple languages and viewpoints will be constructed out of the semantic building blocks. The semantic building blocks are, likewise, loosely coupled.

[5] BPMN replaces UML activity diagrams. Activity diagrams as currently defined in UML are deprecated and replaced with BPMN notations and semantics. BPMN essentially replaces a portion of UML behaviors.

[6] BPMN grows to make UML not required. BPMN grows to encompass all the capabilities required for business-focused modeling and architecture, thus making any integration with UML redundant. BPMN may, some day, replace UML.

[7] BPMN and UML are separate models, mapped with QVT. BPMN and UML are separate metamodels as they are now. A QVT mapping is specified between them such that a portion of a model in BPMN can be used to create a UML model and a portion of a UML model can be used to create a BPMN model. Since the notations are not the same, notations would not be mapped.

[8] There are ways to make links between them. Both the BPMN and UML metamodels exist in parallel, much as they do now, but there are ways to “link” elements between them. This may require some additions to the OMG's metamodeling capability. The links would, for example, allow a behavior specified in the BPMN model to be the implementation of an operation on the UML side. There are, of course, questions and issues about how this is done and how the context and types on each side reference the other with some precision. This may require changes to both specifications to be less assertive about the types of elements used in associations.

[9] Other. Any option not reflected above.

The Results


From the results, it's clear that (for those that responded, anyway) the most popular option is to create yet another metamodel that encompasses both BPMN and UML (maybe UUML, the Ultra Unified Modeling Language?). How would you have voted? It's perhaps worth noting that the integration options (2, 3, 4, 5), when combined, are by far in the majority, so it seems that anything would be preferable to the status quo. Time will tell, I suppose. In the meantime, the MDT project will focus on providing another de facto reference implementation of an OMG specification. As always, if you're interested in helping, we'd love to hear from you.

Wednesday, January 6, 2010

On the E in EMF...

What do you think the E in EMF really stands for? Of course, officially it stands for Eclipse, but given how useful the framework is, even independently of Eclipse, folks often question whether it should stand for something else. I'm sure many of you have heard suggestions that it should instead be "Ed" (because, after all, it is Ed's framework, isn't it?) or perhaps "Excellent" (despite some beliefs to the contrary).

Given the nature of enhancements that have been made over the past few years, and in light of more recent efforts to port the runtime to other platforms, like GWT and Android, I've come to think EMF should stand for Extensible Modeling Framework. Indeed, as of the recent M4 milestone of the Helios release, EMF is even more extensible, thanks to investments from two of my other clients. I worked with NexJ Systems to add support for delegation of constraint and invariant evaluation and with eXXcellent solutions to introduce similar delegation mechanisms for feature settings and operation invocation.

Validation Delegates

The core validation framework in EMF previously provided a way to declare invariants and constraints on EMF-based implementations using annotated Ecore models. From the perspective of this mechanism, a constraint is a statement that must be valid at some point in time, whereas an invariant is an assertion that must always be true. However, one limitation of this mechanism was that invariants and constraints had to be implemented, by hand, in Java source code; there was no means of specifying the behavior of invariants or constraints in an alternative format (such as expressions in some language), nor was there a way to delegate their execution to an external mechanism (such as an expression evaluation engine). With this enhancement, the core EMF validation framework allows the behavior of invariants and constraints to be defined via additional annotations on Ecore models, and for them to be executed, both from generated code and from dynamic models, via registered external expression engines.

Execution of validation expressions can now be delegated to external expression engines via validation delegates. A validation delegate is a class that implements an interface defining methods that can be called by a validator to perform validation, i.e., evaluate constraints and invariants. Validation delegates can be registered against specific URIs in a registry which can then be queried by validators when performing validation. A global registry of validation delegates, which can be populated via an extension or programmatically via Java code, exists, but it is also possible to create new registries for use in specific contexts, e.g., in cases where it is desirable to override the default (global) validation delegate for a given URI during a particular diagnosis. In order to use a registered validation delegate within a given package, it needs to be declared as a value in an annotation details entry on the package.

An invariant is implemented as a method on a class, defined on the model, and is considered a “stronger” statement about validity than a constraint. The behavior of an invariant can now be defined as a string expression in the details entry value of an annotation on the Ecore operation representing the invariant. In order to delegate evaluation of the expression to a registered validation delegate, the URI for this annotation needs to match one of the values in the details entry of an annotation on the nearest Ecore package. Evaluation of a properly annotated invariant is delegated to the corresponding registered delegate by a validator during a validation operation either statically (via generated code) or dynamically (via reflection) on a model instance.

A constraint is implemented as a method on an external validator class, not on the model itself, and is considered a “weaker” statement about validity than an invariant. The behavior of a constraint can now be defined as a string expression in the details entry value of an annotation on the Ecore class or data type for which the constraint is defined. In order to delegate evaluation of the expression to a registered validation delegate, the URI for this annotation needs to match one of the values in the details entry of an annotation on the nearest Ecore package. Evaluation of a properly annotated constraint is delegated to the corresponding registered delegate by a validator during a validation operation either statically (via generated code) or dynamically (via reflection) on a model instance.
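
To make the mechanics a bit more concrete, here's a minimal sketch of the moving parts for a dynamic model, in plain Java. Everything specific in it is invented for illustration: the delegate URI and the toy expression language are made up, and the "validationDelegates"/"body" detail keys reflect my understanding of the convention rather than a definitive recipe (a real engine, such as OCL, supplies its own URI and actually evaluates the expressions).

    import java.util.Map;

    import org.eclipse.emf.ecore.EAnnotation;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EDataType;
    import org.eclipse.emf.ecore.EModelElement;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EOperation;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EValidator;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class ValidationDelegateSketch {

      // Hypothetical URI identifying our (toy) expression language.
      static final String DELEGATE_URI = "http://www.example.org/simpleExpressions";

      public static void main(String[] args) {
        // 1. Register a validation delegate against the URI in the global registry.
        EValidator.ValidationDelegate.Registry.INSTANCE.put(DELEGATE_URI,
            new EValidator.ValidationDelegate() {
              public boolean validate(EClass eClass, EObject eObject,
                  Map<Object, Object> context, EOperation invariant, String expression) {
                // A real delegate would evaluate 'expression' against 'eObject' here.
                return true;
              }

              public boolean validate(EClass eClass, EObject eObject,
                  Map<Object, Object> context, String constraint, String expression) {
                return true;
              }

              public boolean validate(EDataType eDataType, Object value,
                  Map<Object, Object> context, String constraint, String expression) {
                return true;
              }
            });

        // 2. Declare, on the package, which delegate URI(s) it uses.
        EPackage pkg = EcoreFactory.eINSTANCE.createEPackage();
        pkg.setName("library");
        pkg.setNsPrefix("lib");
        pkg.setNsURI("http://www.example.org/library");
        annotate(pkg, EcorePackage.eNS_URI, "validationDelegates", DELEGATE_URI);

        // 3. Define an invariant whose body is an expression in the delegate's language;
        //    the annotation source matches the delegate URI declared on the package.
        EClass book = EcoreFactory.eINSTANCE.createEClass();
        book.setName("Book");
        pkg.getEClassifiers().add(book);

        EOperation hasTitle = EcoreFactory.eINSTANCE.createEOperation();
        hasTitle.setName("hasTitle");
        hasTitle.setEType(EcorePackage.Literals.EBOOLEAN);
        annotate(hasTitle, DELEGATE_URI, "body", "title <> null");
        book.getEOperations().add(hasTitle);

        // A validator that honors delegation can now route evaluation of 'hasTitle'
        // to the registered delegate, for generated and dynamic instances alike.
      }

      // Helper that attaches a single-entry annotation to a model element.
      static void annotate(EModelElement element, String source, String key, String value) {
        EAnnotation annotation = EcoreFactory.eINSTANCE.createEAnnotation();
        annotation.setSource(source);
        annotation.getDetails().put(key, value);
        element.getEAnnotations().add(annotation);
      }
    }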

Feature Setting Delegates

EMF previously provided a way to declare that features in EMF-based implementations are derived via metadata in Ecore models. From the perspective of EMF, a derived feature is a feature whose value is to be computed from other, related data. However, the computation of derived features had to be implemented, by hand, in Java source code; furthermore, there was no means of specifying the values of features (derived or not) in an alternative format (such as expressions in some language), nor was there a way to delegate their computation to an external mechanism (such as an expression engine). With this enhancement, EMF allows the values of features to be defined via additional annotations on Ecore models, and for them to be computed, both from generated code and from dynamic models, via registered external expression engines.

Computation of features can now be delegated to external expression engines via setting delegates. A setting delegate is a class that implements an interface defining methods that are called by the EMF runtime to access the feature’s value. Setting delegates can be registered against specific URIs in a registry which can then be queried when accessing the values of features. A global registry of setting delegates exists, which can be populated via an extension or programmatically via Java code. In order to use a registered setting delegate within a given package, it needs to be referenced as a value in an annotation details entry on the Ecore package.

The computation of a feature can now be defined via an annotation on the Ecore structural feature. In order to delegate computation of the feature’s value to a registered setting delegate, the URI for this annotation needs to match one of the values in the details entry of an annotation on the containing class’s owning Ecore package. Evaluation of a properly annotated structural feature is delegated to the corresponding registered delegate when the feature’s value is accessed, either statically (via generated code) or dynamically (via reflection) on a model instance.
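
The same annotation pattern applies here. Below is a hedged sketch (again with an invented delegate URI, and with "settingDelegates" and "derivation" as my best guess at the conventional detail keys, since each engine defines its own key for expressions) showing the metadata a delegated, derived feature would carry.

    import org.eclipse.emf.ecore.EAnnotation;
    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class SettingDelegateSketch {

      // Hypothetical URI of the expression engine that will compute the feature's value.
      static final String DELEGATE_URI = "http://www.example.org/simpleExpressions";

      public static void declareDerivedFeature(EPackage pkg, EAttribute age) {
        // On the package: list the setting delegate URI(s) in use.
        EAnnotation pkgAnnotation = EcoreFactory.eINSTANCE.createEAnnotation();
        pkgAnnotation.setSource(EcorePackage.eNS_URI);
        pkgAnnotation.getDetails().put("settingDelegates", DELEGATE_URI);
        pkg.getEAnnotations().add(pkgAnnotation);

        // On the (typically derived, transient, volatile) feature: the expression that
        // computes its value, under an annotation whose source matches the delegate URI.
        age.setDerived(true);
        age.setTransient(true);
        age.setVolatile(true);
        EAnnotation featureAnnotation = EcoreFactory.eINSTANCE.createEAnnotation();
        featureAnnotation.setSource(DELEGATE_URI);
        featureAnnotation.getDetails().put("derivation", "today() - birthDate");
        age.getEAnnotations().add(featureAnnotation);

        // A matching factory, keyed by the same URI, would be registered in
        // EStructuralFeature.Internal.SettingDelegate.Factory.Registry.INSTANCE.
      }
    }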

Operation Invocation Delegates

EMF previously provided a way to declare operations in EMF-based implementations via metadata in Ecore models. From the perspective of EMF, an operation is a behavioral feature whose specification is beyond the scope of the framework. Although a mechanism already existed to specify the bodies of operations, in Java syntax, via annotations, the behaviors of operations generally had to be implemented by hand, in Java source code; there was no means of specifying the behaviors of operations in an alternative format (such as expressions in some language), nor was there a way to delegate their execution to an external mechanism (such as an expression evaluation engine). With this enhancement, EMF allows the behaviors of operations to be defined via additional annotations on Ecore models, and for them to be executed, both from generated code and from dynamic models, via registered external expression engines.

Execution of operations can now be delegated to external expression engines via invocation delegates. An invocation delegate is a class that implements an interface defining a method that is called by the EMF runtime to execute the operation’s behavior. Invocation delegates can be registered against specific URIs in a registry which can then be queried when executing the behaviors of operations. A global registry of invocation delegates exists, which can be populated via an extension or programmatically via Java code. In order to use a registered invocation delegate within a given package, it needs to be referenced as a value in an annotation details entry on the Ecore package.

The behavior of an operation can now be defined via an annotation on the Ecore operation. In order to delegate execution of the operation’s behavior to a registered invocation delegate, the URI for this annotation needs to match one of the values in the details entry of an annotation on the containing class’s owning Ecore package. Execution of a properly annotated operation is delegated to the corresponding registered delegate when the operation’s behavior is invoked, either statically (via generated code) or dynamically (via reflection) on a model instance.
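
One last sketch along the same lines, again with an invented delegate URI and "invocationDelegates"/"body" as my reading of the conventional detail keys. It shows how an operation's behavior would be annotated, and roughly what the delegate itself looks like.

    import java.lang.reflect.InvocationTargetException;

    import org.eclipse.emf.common.util.EList;
    import org.eclipse.emf.ecore.EAnnotation;
    import org.eclipse.emf.ecore.EOperation;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;
    import org.eclipse.emf.ecore.InternalEObject;

    public class InvocationDelegateSketch {

      // Hypothetical URI of the engine that will execute the operation's behavior.
      static final String DELEGATE_URI = "http://www.example.org/simpleExpressions";

      public static void declareDelegatedOperation(EPackage pkg, EOperation operation) {
        // On the package: list the invocation delegate URI(s) in use.
        EAnnotation pkgAnnotation = EcoreFactory.eINSTANCE.createEAnnotation();
        pkgAnnotation.setSource(EcorePackage.eNS_URI);
        pkgAnnotation.getDetails().put("invocationDelegates", DELEGATE_URI);
        pkg.getEAnnotations().add(pkgAnnotation);

        // On the operation: its behavior as an expression, under an annotation whose
        // source matches the delegate URI declared on the package.
        EAnnotation opAnnotation = EcoreFactory.eINSTANCE.createEAnnotation();
        opAnnotation.setSource(DELEGATE_URI);
        opAnnotation.getDetails().put("body", "books->size()");
        operation.getEAnnotations().add(opAnnotation);
      }

      // Sketch of the delegate itself; a factory producing these would be registered,
      // keyed by DELEGATE_URI, in EOperation.Internal.InvocationDelegate.Factory.Registry.INSTANCE.
      static class NoOpInvocationDelegate implements EOperation.Internal.InvocationDelegate {
        public Object dynamicInvoke(InternalEObject target, EList<?> arguments)
            throws InvocationTargetException {
          // A real delegate would evaluate the operation's annotated expression against 'target'.
          return null;
        }
      }
    }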


Details of these and other enhancements being made to EMF for the Helios release can be found on the project's New and Noteworthy page. I'll also be covering the new extensibility mechanisms described above as part of a proposed tutorial at EclipseCon 2010 (assuming the submission is accepted). Hope to see you there!

Tuesday, January 5, 2010

On What I've Been Doing...

At the Eclipse DemoCamp in Ottawa a few weeks ago, someone mentioned to me that it's not entirely obvious from my blog what I've been up to lately. So, in the spirit of blogging more about what I do than what I think, I figured I ought to rectify the situation, perception being reality and all.

When suddenly faced with freedom last June, I thought I'd take some time to carefully decide what my next venture would be. In the meantime, I started my own consulting company, focused on making my clients successful with open source. Weeks turned into months and, well, I'm still "deciding". To date, I've been involved in several client projects, some related to Eclipse and some in other areas (but still using Eclipse tooling!). I'll blog about each of them in my "spare" time over the coming days, in no particular order, starting with Zeligsoft.

I worked with Zeligsoft to prepare some project proposals for one of their clients and then represented them at Eclipse Summit Europe in Ludwigsburg. They are taking a serious look at the feasibility of building an open source, industrial strength model based engineering environment using Eclipse technology. While at the Summit, I presented a long talk with Raphael Faodou and Patrick Tessier entitled "Papyrus: Advent of an Open Source IME at Eclipse". We were stuck in a room with only 36 chairs and ended up with nearly 60 people in attendance. Our message seemed to resonate very well and many people seemed quite interested in, and impressed with, what's being done in Papyrus.

Of particular note in Ludwigsburg was evidence of a growing interest in an open source modeling workbench at Eclipse. The Birds of a Feather (BoF) session I held on that subject on the Wednesday night was also very well attended; it was scheduled for only one hour, but after over two hours in a stuffy room, nobody had left. We had a good discussion about the various efforts that are either already underway or in the works, followed by some disagreement about how best to proceed, i.e., this project vs. that project vs. a working group vs. an external consortium. Finally, Martin Fluegge, from the Dawn project, gave a demo of some really cool technology for collaborating on diagrams over the Web (without using Google Wave).

One initiative that I became aware of during the Summit was Sphinx, an emerging project proposal to create a generic DSL workbench at Eclipse. There was much overlap between what was being proposed in Sphinx and what the backbone in Papyrus is intended to be. As a result, we've started looking at extracting the Papyrus backbone and combining it with what is being contributed in Sphinx, working together as one team. The proposed project lead, Stephan Eberle, is looking for feedback and is keenly interested in collaborating with other parties. Might you be interested in participating?

In November, I delivered an updated version of the Papyrus talk at the Eclipse Modeling Day in Toronto (which was also quite successful), and have submitted two proposals for related talks at EclipseCon 2010 in March. The first would be another update to the talk I presented at the Summit and the Modeling Day. For the second, I'm collaborating with Stephan Eberle to take a look at "The Twenty Modeling Things", i.e., essential services that might make up a modeling workbench at Eclipse. If either of these is of interest to you, why not express your support by adding a comment to the submission(s)?

Monday, January 4, 2010

On Grievances...

A year ago, I started an annual tradition of creating a wordle of my blog, so here is this year's visual.



Comparing it with the one from last year, not a lot appears to have changed, at least on the surface - still a lot of Eclipse and modeling. By contrast, this New Year will no doubt bring a lot of change.

Last week I tweeted about ten things from 2009 that I hope to do without in 2010. I thought it would be fitting to start the year by posting them here so that I can reflect on each item in more detail.

10. Corporate Politics

I've never been a fan of office politics, but having worked closely with executive teams over the past few years, I've had more than my fill for a while. So far, working independently has been a welcome breath of fresh air.

9. OS Upgrades

Somehow I allowed myself to get sucked into the hype of Snow Leopard and jumped the gun. The outcome of my various installation attempts was probably best summed up by Mosspuppet in his video review. Luckily, I had the foresight to back everything up with my Time Capsule ahead of time, so all was not lost. I was amused, though, upon taking the media back to the Apple store for a refund, at the salesperson's suggestion that I try another copy, implying that somehow my copy may have been defective. Huh? Oh, and Windows 7? I don't think so.

8. Piracy

I had to laugh when more than one person replied to this one with a suggestion that I avoid boating in Somalia. I had to clarify that I was referring to things more torrent-related. Yes, I believe in open source and I do feel that the concept of ownership is evolving rapidly in response to the new economy, but I refuse to use my altruistic beliefs as a justification for pirating content (movies, music, games, software, etc.). I guess that means I'll be waiting to see the second season of True Blood until it's finally (if ever!) released on DVD. Assuming I can convince my wife to wait. ;)

7. Overdue Invoices

One of the hardest things to get used to about being a freelancer (for me) is cash flow (or lack thereof). On more than one occasion I found myself waiting longer than I should have for invoices to be paid. In the future, I'll consider front-loading my engagements or building interest charges into the contract terms.

6. Protracted Renovations

We hired a contractor for what we were told would be a three week project which ended up taking over three months. The irony was that we went with a contractor in the interest of expediency, thinking that it would take us much longer to do it ourselves. Next time, we'll think twice.

5. Airline Status

I achieved airline status for the second year in a row. To me, that's an indication that I've been traveling too much. Luckily, I've had far fewer reasons to travel since becoming an independent, so I doubt I'll achieve status next year.

4. Censorship

This is my blog. The thoughts and opinions expressed on it are, and always have been, my own, and I intend to keep it that way. I'll not again consider changing the content of any of my posts to placate any of its readers.

3. Staycations

Our plan for summer vacation last year was to spend a few weeks at a fractional ownership cottage we were buying. But, the economy took its inevitable toll on that venture, and we wound up staying around home for a summer "stay"cation. Unfortunately, though, I was too busy dealing with my new employment situation to relax, so it wasn't much like a vacation at all. The past couple of weeks home with the kids have reminded me what's most important in life, so we'll be taking that vacation this coming summer whether we can afford to or not!

2. Legal Fees

Lawyers are there to protect you when you need them, but ultimately they're out to make a living too. One of the tidbits from the Lead to Win program that really resonated with me was the notion that the law is often argued on the basis of principal, not principle. In any case, I plan to keep situations (real estate, business, or otherwise) where I need a lawyer to a minimum.

1. Cancer

Some are shocked by my openness about the details of my personal life, and in particular my mother's recent battle with breast cancer. Personally, I've been surprised by the number of people I know that have since shared details with me about encounters with the disease in their lives. I'm not sure why we don't talk more openly about things like this, but we should. I'm happy to say that my mother's treatments (chemotherapy and radiation) were a success (as far as we can tell) and she's well on her way to getting back on top of her life again (not that she ever really faltered). As of today, I start training again for the Weekend to End Women's Cancers in Ottawa this coming June. Donations are, of course, welcome and appreciated!

Here's to 2010 and all the changes (for the better) that it will bring!