First of all: UML is not about modelling object-oriented software.
But maybe we should go back to what object-orientation is. OO (shorthand for object-orientation) was invented around 1970. Xerox had a group called the Software Research Group, part of a think tank created to research the possible threat the modern computer posed to Xerox’s prime business: copying machines. In just a few years this group invented almost everything we now associate with the modern computer: displays with bit-mapped overlapping windows, a keyboard with a mouse to manipulate the objects on the display, icons to represent various types of information, and even Ethernet, the network to link all those computers together.
To create the complex software needed to run those personal computers, an object-oriented programming language, and indeed an object-oriented operating system, were deemed necessary. Alan Kay originally coined the term “object-orientation”, although he later stressed that a better term would have been “message-oriented”, since he envisioned a complex system of interacting elements creating complex behaviour by sending messages to each other. For more on this original vision, read the August 1981 issue of Byte magazine, which was devoted to Smalltalk.
The assumption was that we needed a powerful new way of thinking about problems, to enable creating software multiple orders of magnitude more complex. But this was not just about software. It was about a paradigm that helps in managing complexity. OO was just that, and it still is.
When the UML effort started, it merely tried to merge a multitude of approaches to visually representing those OO programs. So UML is not so different from OO. It is just a view on the same thing: a complex system.
UML did introduce something new, however: the metamodel. Aimed mainly at tool developers, the metamodel describes the OO modelling paradigm itself. It defines classes and metaclasses, properties and associations (as access paths for message passing).
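The class/metaclass distinction is not unique to the UML metamodel; most OO languages expose it at runtime. As a minimal sketch (the names here are illustrative, not taken from UML), here is how Python lets a metaclass define behaviour for classes themselves:

```python
# A class is itself an object: an instance of a metaclass.
class Counted(type):
    """Metaclass that counts how many instances each of its classes creates."""
    def __call__(cls, *args, **kwargs):
        cls.instances = getattr(cls, "instances", 0) + 1
        return super().__call__(*args, **kwargs)

class Account(metaclass=Counted):
    """An ordinary class, but described (and instrumented) by its metaclass."""
    def __init__(self, owner):
        self.owner = owner

a = Account("alice")
b = Account("bob")
print(type(Account).__name__)  # Counted: the class Account is an instance of it
print(Account.instances)       # 2
```

The point is structural: just as the UML metamodel defines what a “class” in a model is, a metaclass here defines what the class `Account` is and does.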
The UML metamodel is extensible: with Profiles you can effectively create a specific set of language elements with tightly defined semantics for a specific problem domain. This should not be confused with Domain-Specific Languages (DSLs): a DSL specifies a set of elements or building blocks in a domain, for example the financial domain, whereas a UML Profile contains the semantic definitions of the syntax used to describe such domains. For example, you might create a Profile for Entity-Relationship modelling, or a Profile for functional languages.
One of my first endeavours when I learned object-oriented programming was to create a planetarium. Astronomy has been a hobby of mine all my life. To simulate the movements of bodies in the solar system, a mathematical model is used. The orbit of, say, the Moon can be described by an equation with a lot of variables (an approximation, since there is no analytical solution of the many-body problem in physics, at least until recently). My first thought was: well, this is mathematics; I will probably have a hard time moulding the mathematical equations into objects and methods and messages. But to my delight I found this was not the case at all. Once I realised that my problem domain was mathematics, and specifically equations, everything fell into place perfectly. I had Equation objects, CelestialBody objects using those to tell their location, and time nicely proceeding, helping the celestial bodies move.
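A minimal sketch of how those planetarium objects might look (my original was in Smalltalk; this is Python, and the coefficients are made up for illustration, not real lunar theory):

```python
import math

class Equation:
    """A periodic approximation of an orbital element: a mean value plus sine terms."""
    def __init__(self, mean, terms):
        self.mean = mean      # constant part
        self.terms = terms    # list of (amplitude, angular_speed, phase) tuples

    def value_at(self, t):
        return self.mean + sum(a * math.sin(w * t + p) for a, w, p in self.terms)

class CelestialBody:
    """A body that asks its equations where it is at a given time."""
    def __init__(self, name, longitude_eq, distance_eq):
        self.name = name
        self.longitude_eq = longitude_eq
        self.distance_eq = distance_eq

    def position_at(self, t):
        return self.longitude_eq.value_at(t), self.distance_eq.value_at(t)

# A toy 'Moon' with invented coefficients: the object structure, not the astronomy,
# is the point of the example.
moon = CelestialBody(
    "Moon",
    longitude_eq=Equation(0.0, [(6.29, 2 * math.pi / 27.3, 0.0)]),
    distance_eq=Equation(385000.0, [(-20905.0, 2 * math.pi / 27.3, 0.0)]),
)
for day in (0, 7, 14):
    lon, dist = moon.position_at(day)
    print(f"day {day}: longitude {lon:.2f} deg, distance {dist:.0f} km")
```

The mathematics stays mathematics; the objects merely give each equation and each body a place to live and a message to answer.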
To summarise: object-orientation, and UML as a domain-independent language, can be used to describe any problem domain efficiently and to help with the complexities of those domains. And you are free to implement your solution in an object-oriented language like Smalltalk, or a functional language like Haskell. OO is domain-agnostic, and implementation-agnostic.
(Americans often refer to a method with the term “methodology”, which is not entirely correct semantically, as it would literally mean “the science of methods”.)
Examples of methods are ORM, RUP, and one could argue Scrum or agile approaches like DAD.
Inspired by the book The OPEN Process Specification by Ian Graham et al., I share the following UML model of a method with you. It shows what a (proper) method consists of. To illustrate: UML itself is a Modelling Language, consisting of a notation and a metamodel. ER diagrams, by contrast, are not much more than a notation (with a rudimentary and inconsistent metamodel).
I like the model, simplistic as it is, because it gives a nice overview of all the aspects that are (or should be) present in any framework that wants to call itself a method, including the main relationships. As you may notice, this UML diagram is mostly a structural model and says nothing about responsibilities (which is what you would normally want to see in a UML diagram).
This post is still evolving. If you have questions or criticisms, please feel free to share them and I will attempt to incorporate them into the article. A living article, you might say.
Recent responses to my article on Mirrors prompted me to attempt to clarify the approach in that article. I think there is a fundamental philosophical discovery to be made in what I try to say, and indeed some of those responses confirm that. But it is as much a path to discovery to me as it is to my readers.
Architects create models. Indeed they do.
People, human beings, use language. Certainly.
The two things are related. In fact they are the same cognitive function, with the same effect: creating a map of the world, and, through that faculty called language, being able to transfer that map to others without requiring them to have had the same sensory input that prompted you to create it.
It is also called symbolic representation. What we human beings also do, one might say beyond creating those maps, is manipulate those maps, play with them, morph them. Sometimes so much so that in effect the result is a new thing, unrelated to any original sensory input, a map of a world that can only be called imaginary. This is called symbolic manipulation.
Symbolic manipulation is tremendously powerful. Those imaginary worlds are evoked with those maps. This is what we humans did for countless centuries, roaming the savannahs, sitting by the fireplace in the dusk, and telling stories. The power of storytelling — or should I say story-listening? One listens to a story, and immediately the images spring to life. You have no problems experiencing the story as vividly as you experience your “real” life.
We cycled through a series of powerful inventions that created ever more powerful maps, after that first creative leap of genius that created language: drawing, writing, the printing press. Until we stumbled on an invention that was the same, yet in some fundamental way totally different: the modern computer.
I think that, in order to realise the tremendous hidden promise of this invention, we should learn to see what is different from the previous inventions, not what is the same. What is different is that within a computer’s vast memory those maps are beginning, tentatively and for now awfully handicapped, to hatch and spring to a life of their own. Until now those maps only lived in our minds, and the maps we created were static, representations frozen in time. That is changing, and that change is a fundamental one; of that I am growing more and more convinced.
Some of the “worlds” we have created are almost entirely “imaginary”: you cannot really say they are related to sensory inputs or physical experience. The entire financial system is an example of such a world. Yet what we usually build in our computer systems of that “world” are static representations. Our accounts are not really “alive”. We do not let them be. Why can we not let our credit account figure out how to manage itself? Why not create a new insurance product that is alive, tries to sell itself, and computes the best business model based on knowledge about risks that it accumulates itself during its deployment? It may sound somewhat convoluted, but the system I envision is not much different from what is usually created. The devil is in the details, and with the approach I advocate only a few things need to change (see the Active-Passive Pattern for an explanation of one of the characteristics of such a system).
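To make the idea of an account that is “alive” a little more concrete, here is a minimal sketch in Python. It is my own illustration, not the Active-Passive Pattern from the linked article: an account that runs its own watching behaviour on a thread and reacts to its state, instead of waiting passively to be queried.

```python
import threading
import time

class ActiveAccount:
    """An account that monitors itself on its own thread, rather than
    being a passive record that other code must inspect."""
    def __init__(self, balance, threshold):
        self.balance = balance
        self.threshold = threshold
        self.alerts = []
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._live, daemon=True)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

    def withdraw(self, amount):
        self.balance -= amount

    def _live(self):
        # The account watches its own balance and raises an alert itself.
        while not self._stop.is_set():
            if self.balance < self.threshold:
                self.alerts.append(f"balance low: {self.balance}")
                self._stop.set()  # one alert is enough for this sketch
            time.sleep(0.01)

acct = ActiveAccount(balance=100, threshold=50)
acct.start()
acct.withdraw(80)           # drops the balance below the threshold
time.sleep(0.1)             # give the account's own thread time to notice
acct.stop()
print(acct.alerts)
```

The structural difference from the usual design is small, which is the point made above: the account object is the same, except that its behaviour runs continuously instead of only when invoked.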
For more, you will probably want to read these related articles: