Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer.





Enterprise Architecture pioneer John Zachman on gaining synergies among the major EA frameworks

Posted By Dana L Gardner, Thursday, April 16, 2015

What is the relationship between the major Enterprise Architecture (EA) frameworks? Do they overlap, compete, or support each other? How? And what should organizations do as they seek the best approach to operating with multiple EA frameworks?

These questions were addressed during a February panel discussion at The Open Group San Diego 2015 conference. Moderated by Allen Brown, President and Chief Executive Officer of The Open Group, the discussion featured main speaker John Zachman, Chairman and CEO of Zachman International and originator of the Zachman Framework, who examined the role and benefits of how EA frameworks can co-exist well. He was joined by Steve Nunn, vice president and chief operating officer of The Open Group.

Download a copy of the full transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Zachman: A friend of mine recently did a survey of 108 CEOs, mostly around North America. I was shocked when I saw that the survey said that the biggest problem facing the enterprise is change.


And when I heard that, my reaction was, well, if the CEO thinks the biggest problem facing the enterprise is change, where is the "executive vice president in charge of change management"? If nobody is in charge, the probability is low to zero that you are going to be able to accommodate change.

There are two reasons why I do architecture. One is complexity, and the other one is change.

Create the architecture

If you want to change something that already exists, you are going to have to have architecture -- one way or another. And you have to create that architecture.


Now, the reason that I am saying this is, if 108 out of 108 CEOs -- of course, those were high-visibility CEOs -- said the biggest problem facing the enterprise is change, who ought to own the Enterprise Architecture? I would submit it has to be a general management responsibility.

It needs to be an executive vice president. If the CEO thinks the biggest problem facing the enterprise is change, the CEO ought to be in charge of Enterprise Architecture. If he or she is not going to be in charge, then it ought to be the person they see every morning when they come into the office ... "Hey, Ralph, how is it going on the architecture?" So it probably should be a general management responsibility. That’s where I would take it.


I put TOGAF® together with my Zachman Framework and, in fact, I kind of integrate them. I have known Allen Brown for a number of years; he was in Johannesburg a number of years ago and introduced me at a TOGAF conference. He said, "For a lot of years, I thought it was either Zachman or TOGAF." He said that’s incorrect. "Actually, it’s Zachman and TOGAF."

That basically is where I am going to take this: It’s Zachman and TOGAF. How, then, would you integrate TOGAF and the Zachman Framework? That is obviously where I think we want to go.

The first question turns out to be: what is architecture? Some people think the Roman Colosseum is architecture.

Now, notice that this is a common misconception. The Colosseum is not architecture. This same misconception about the enterprise is what leads people to misconstrue Enterprise Architecture as being big, monolithic, static, inflexible, and unachievable -- it takes too long and costs too much.

If you think that is architecture, I am going to tell you: that's big, monolithic, and static. It took a long time and it cost a lot of money. How long do you think it took them to build that thing? Not a day, not a week, not a year, not a decade. It took a couple of decades to build it.

In fact, the architecture had to be done long before they ever created the Roman Colosseum. They couldn't even have ordered up the stones to stack on top of each other until somebody did the architecture.

Result of architecture

Now, that is the result of architecture. In the result, you can see the architect’s architecture. The result is an implementation, an instance. That is one instance of the architecture. Now, they could have built a hundred of these things, but they only built one.

I was in New Zealand a few years ago and I said that they could have built a hundred of these things, but they only built one. Some guy in the back of the room said they actually built three. I said I didn’t know that. He even knew where they were. I was really impressed.


I was in Rome last June, talking to these guys in Rome. I said, "You guys could have built a hundred of these things; you only built three." And the guys in Rome said, "We built three? I thought we only built one." Actually, I felt a lot better. I mean, you can build as many as you want, but this just happens to be one instantiation. And in fact, that is not architecture. That’s just the result of architecture.

Architecture is a set -- not one thing, but a set -- of descriptive representations relevant for describing a complex object (actually, any object), such that an instance of the object can be created, and such that the descriptive representations serve as the baseline for changing an object instance (assuming that the descriptive representations are maintained consistent with the instantiation). If you change the instantiation and don't change the descriptive representations, they will no longer serve as a baseline for ensuing change of that instantiation. In any case, architecture is a set of descriptive representations.


Now, you can classify those descriptive representations in two dimensions. One dimension is what I call Abstractions. I don't want to digress and say why I happened to choose that word. But if you look at architecture for airplanes, buildings, locomotives, battleships, computers, tables or chairs, or XYZ, they are all going to have Bills of Materials that describe what the thing is made of.

You have the Functional Specs that describe how the thing works. You have the Drawings or the Geometry that describe where the components are relative to one another. You have the Operating Instructions that describe who is responsible for doing what. You have the Timing Diagrams that describe when things happen, and the Design Objectives that describe why they happen.

So it doesn't make any difference what object you are looking at. They are all going to have Bills of Materials, Functional Specs, Drawings or Geometry, Operating Instructions, and so on. You are going to have that set of descriptive representations.

Now, they didn't happen to stumble across that by accident. Basically, they are answering six primitive interrogatives: what, how, where, who, when and why. That's been known since the origins of language about 7,000 years ago. And the linguists would observe for you that's the total set of questions you need to answer to have a complete description of whatever you want to describe; a subject, an object, or whatever.

It's okay if you don't answer them all, but for any one of those questions you don't answer, you are authorizing anybody and everybody to make assumptions about the answers you don't make explicit. So if you don't make those answers explicit, people are going to make assumptions.

The good news is, if the assumptions are correct, it saves you time and money. On the other hand, if the assumptions are incorrect, that could be horrendous, because incorrect assumptions are the source of defects. That's where defects, miscommunications, and discontinuities come from. So you don't have to answer all the questions, but there is a risk associated with not answering them.
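Zachman's completeness argument lends itself to a small sketch. This is purely illustrative (the `unanswered` helper and the sample spec are hypothetical, not part of any framework tooling): any interrogative a description leaves blank is a question somebody else will answer by assumption.

```python
# Illustrative sketch: flag which of the six primitive interrogatives
# a description leaves unanswered -- the gaps others will fill with assumptions.
INTERROGATIVES = ("what", "how", "where", "who", "when", "why")

def unanswered(description: dict) -> list:
    """Return the interrogatives the description does not make explicit."""
    return [q for q in INTERROGATIVES if not description.get(q)]

spec = {"what": "order entry", "how": "batch process", "who": "ops team"}
print(unanswered(spec))  # ['where', 'when', 'why']
```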

And I did not invent that, by the way. That's a classification that humanity has used for 7,000 years. Actually, it's the origins of language basically. I did not invent that. I just happened to see the pattern.

Parts and part structures

Now, there is one other thing I have to tell you: in a Bill of Materials, you have descriptions of parts and part structures. There is no expression of Functional Specs in the Bill of Materials, no expression of Drawings, no expression of Operating Instructions, no expression of Timing or Design Objectives. There are parts and part structures.

In the Functional Specs, there are Functional Specs. There is no expression of parts or part structures, no expression of Geometry or Drawings, no expression of operating responsibility, timing, or Design Objectives. There are Functional Specs.

In the Geometry, there is no expression of parts and part structures, no expression of Functional Specs, operating responsibility, timing, or Design Objectives. There are the Drawings or the Geometry.

I am not going to do it anymore; you get the idea. If you are trying to do engineering kinds of work, you want one, and only one, kind of thing in the picture. Start putting more and more kinds of things in the picture, and that picture is going to get so complicated that you will never get your brain around it.


And if you are going to do engineering work, you want to normalize everything. You want to minimize any potential discontinuity, any kind of disorder. In order to normalize everything, you have to see all the parts relative to the whole object. You have to see them all, so that you can get rid of any recurrence, any kind of redundancy.

You want to normalize -- get the minimum possible set. You want to look at only the Functional Specs, but you want to look at them for the whole object. Get it down to the minimum; minimize the complexity, minimize the redundancy. You don’t want any redundancy showing up. You want the minimum possible set of components to create the object.

You don't want anything extraneous in that object, whatever that object happens to be -- airplane, building, or whatever. So I just made that observation.

Now I am going to digress. I am going to leap forward into the logic a little bit for you. There is the engineering view. If you want to do engineering work, you want to see the whole. You only want to see one type of fact, but you want to see the whole set for the whole object, for the whole product.

So when you are doing engineering work, you want to see the whole thing, because you have to see how all the parts fit together basically.

Now, if you are doing manufacturing work, however, that's not what you need. You want to take one part -- you want to decompose the object down into as small parts as possible -- and then you want to see the characteristics. You take one part, and you need to know the part and part structure. You have to know the functionality for that part, the Geometry for that part, the operating responsibility for that part, the Timing Diagram for that part, and the Design Objective for that part. So if you are doing manufacturing, you want to see all the variables relative to one part.

Different models

There are two different kinds of models that are required here. You want the engineering models, which are, in fact, a normalized set: you want to see one kind of fact for the whole object. And in the manufacturing model, you want to see all the variables for one part. So there are two different kinds of descriptive representations that are relevant for describing the object.

Now, I would just observe this, engineering versus manufacturing. Engineering work requires single-variable, ontologically defined descriptions of the whole of the object, which I would call a primitive.

In contrast, manufacturing work requires multi-variable, holistic descriptions of parts of the object, what I would call a composite. That’s the implementation; that’s the composite.
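The primitive/composite distinction above can be sketched as two slices of the same fact base. This is a hypothetical toy model; the part names and values are invented for illustration.

```python
# Toy fact base: (part, variable, value) triples describing one object.
facts = [
    ("wing",   "what", "aluminum spar"),
    ("wing",   "how",  "generates lift"),
    ("engine", "what", "turbofan core"),
    ("engine", "how",  "produces thrust"),
]

def engineering_view(variable: str) -> dict:
    """Primitive: one kind of fact, across the whole object."""
    return {part: value for part, var, value in facts if var == variable}

def manufacturing_view(part: str) -> dict:
    """Composite: every kind of fact, for one part."""
    return {var: value for p, var, value in facts if p == part}

print(engineering_view("what"))   # {'wing': 'aluminum spar', 'engine': 'turbofan core'}
print(manufacturing_view("wing")) # {'what': 'aluminum spar', 'how': 'generates lift'}
```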

The interesting phenomenon is -- and somebody talked about this yesterday too -- that manufacturing is analysis. You break it down into smaller and smaller pieces. In fact, it's a good approach: if you want to deal with complexity, you classify. The way humanity deals with complexity is through classification.


A one-dimensional classification for manufacturing is to decompose the object down into very small parts. The reason that becomes useful is that it’s cheaper and faster to manufacture the part. The smaller the part, the faster and cheaper it is to manufacture.

So basically, if you go back to The Wealth of Nations by Adam Smith, the idea was to break the work down into smaller parts so you can manage the parts. But in doing that, you are disintegrating the object.

In contrast, in engineering work, you need to look at synthesis. If you take a one-dimensional classification, you are disintegrating the object; the same content can show up in more than one category at the bottom of the tree. If you want to do engineering work, you want to see how all the parts fit together. That’s a synthesis idea.

So if you just do analysis, and you are doing manufacturing or implementation work, you are going to get disintegration. If you are doing engineering work, you want to deal with the issue of synthesis.

So it’s not an either-or thing; it’s an "and" kind of thing. And the significant issue is that these are radically different. In fact, it was Fred Brooks who said programming is manufacturing, not engineering. So those of us who come from the IT community have been doing manufacturing for the last 65 or 70 years, basically. In contrast, this is different; this stuff is radically different.

So the reason we build implementations and get frustration on the part of the enterprise is that the implementations are not integrated, not flexible, not interoperable, not reusable, and not aligned. They are not meeting expectations. Fundamentally, if you use a one-dimensional classification, you're going to end up disintegrating the thing. It’s not engineering. It’s implemented, but not engineered.

Two-dimensional classification

If you want the thing to be engineered, you have to have a two-dimensional classification -- a schema -- because you have to have two dimensions in order to normalize things.

I don’t want to digress into that, but Ted Codd came along with the relational model. Before Ted Codd and the relational model, we didn’t even have the word normalization. But to manage the asset you are trying to manage, you have to have a normalized structure.
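Normalization, in the database sense Zachman is alluding to, can be sketched in a few lines. This is a hypothetical example; the customer and order data are invented.

```python
# Denormalized: the customer's city is repeated in every order row.
orders = [
    {"order": 1, "customer": "Acme",   "city": "Boston"},
    {"order": 2, "customer": "Acme",   "city": "Boston"},
    {"order": 3, "customer": "Zenith", "city": "Tulsa"},
]

# Normalized: the fact lives in exactly one place; orders reference the key.
customers = {row["customer"]: row["city"] for row in orders}
normalized_orders = [{"order": r["order"], "customer": r["customer"]} for r in orders]

# A change is now made once, with no redundant copies to hunt down.
customers["Acme"] = "Chicago"
print(customers)  # {'Acme': 'Chicago', 'Zenith': 'Tulsa'}
```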

If you want to manage money, you have to have chartered accountants. If you want to manage an organization, you have to have allocations of responsibility. If you want to manage whatever you want to manage, you have to have a normalized structure.

So if you want the thing to be engineered, integrated, flexible, interoperable, reusable, and so on, then you have to do the engineering work. Those are engineering derived characteristics.

You don't get flexibility, integration, and so on from implementation. Implementation is what you get, and that's really good -- I am not arguing; that’s really good. But if you need integration, flexibility, and so on, then you have to do engineering work. It takes you beyond merely the manufacturing and the implementation.

I gave you one dimension of the classification of descriptive representations, which I call Abstractions; the other dimension I call Perspectives. Typically, I would take a few minutes to describe this for you, but I'm just going to kind of net this out for you.


Back in the late 1960s timeframe, we had methodologically defined how to transcribe the strategy of the enterprise. We knew at the time we had to transcribe it in such a fashion that you could do engineering work with it.

It's not adequate to transcribe the strategy as "make money" or "save money" or "do good" or "feel good" or "go west" or "go east." Those are all relevant, but you have to take them apart to create descriptive representations in such a fashion that we can do engineering work with them.

This was in the late 1960s timeframe. We knew how to transcribe the strategy in such a fashion that we could do engineering work. What we didn't know was how to transform that strategy into an instantiation such that the instantiation bears any resemblance to the strategy.

So the problem is that in those days, we tended to describe this in a somewhat abstract fashion: make money or save money, whatever. But down here, you're telling a person how to put a piece of metal in a lathe and how to turn it to get whatever you're trying to create. Or it could be telling a machine what to do, in which case you're going to have a descriptive representation like 1100 and 11000. So it's a long way from "make money" to "11000." We didn’t know how to make the transformation from the strategy to the instantiation, such that the instantiation bears any resemblance to the strategy.

We knew architecture had something to do with this, but, if you go back to the late 1960s time frame, we didn’t know what architecture was.

Radical idea

I had a radical idea one day. I said, "What you ought to do is ask somebody who does architecture for something else, like a building, an airplane, a computer, an automobile, or XYZ. Ask them what they think architecture is." If we could figure out what they think architecture is, maybe we can figure out what architecture is for enterprises. That was my radical idea back in those days.

A friend of mine was an architect who built buildings actually. So I went to see my friend Gary, Gary Larson, the architect, and I said, "Gary, talk to me about architecture." He said, "What do you want to know?" I said, "I don't know what I want to know; just talk to me and maybe I'll learn something."

He said, "Somebody came into my office and said, 'I want to build a building.' I said, 'Well, what kind of building do you have in mind? Do you want to work in it? Do you want to sell things in it? Do you want to sleep in it? Is it a house? What are you going to do with it? Are you going to manufacture things in it? What’s the structure of it: steel, wood, stucco, glass, or whatever?'"

I have to know something about the footprint. Where are you going to put this thing? What’s the footprint? I have got to know what the workflow is. If you're going to eat in the thing and sleep in the thing, you put the eating and the cooking near each other, you put the sleeping someplace else. You have to know something about the workflows.


I have to know something about the Timing Diagrams. Am I going to design a building that has elevators? It has an up button -- you go up -- and a down button -- you go down. I have to know something about the Design Objectives.

Do you want to change this building? You want flexibility. If you want to change this building after I get it built, then don't hard-bind the wall to the floor. Separate the independent variables. If you want flexibility, you separate the independent variables.

By the way, we learned about that a long time ago, those of us who are in IT; separate the independent variables. I haven’t heard this for 30 or 40 years, but it’s like binding. You don’t want to bind anything together.

You don't want to bind independent variables together, because as soon as you fix two independent things together, if you want to change one, you have to change them all -- throw the whole thing away and start over again.

So if you want to change things, you separate the independent variables. How do you like this for an idea, by the way: you have a Data Division and a Procedure Division. That’s pretty interesting. You can change one data element without changing all the instructions. So you separate the independent variables if you want to change them.


Now, for manufacturing purpose, you want to hard bind them together. That’s the implementation.
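The Data Division/Procedure Division point (separate the independent variables so a data change never forces a code change) can be sketched like this; the rate table and function names are hypothetical.

```python
# The "data division": rates live apart from the logic that uses them.
RATES = {"standard": 0.08, "reduced": 0.05}

# The "procedure division": the instructions never embed the rate values.
def price_with_tax(amount: float, category: str) -> float:
    return round(amount * (1 + RATES[category]), 2)

print(price_with_tax(100.0, "standard"))  # 108.0
RATES["standard"] = 0.09                  # change one data element...
print(price_with_tax(100.0, "standard"))  # 109.0  ...no instructions changed
```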

So Gary says, "I have to know whether they want flexibility or whatever. I have to know the Design Objectives. I sketch up my bubble charts. I have to understand what the boundaries are here, so I don't get blindsided, in effect."

"If I'm going to build a 100-story building, a huge building, then I'll live with the owners for a couple of years, so I find their aesthetic values, what they're thinking about, what their constraints are, what they really want to do, how much money they have, what their purpose is. I have to understand what the concept of that building is."

"I transcribe the concepts of the building. And this is really important. I can take this down to an excruciating level of detail. Actually, I have to build the scale model. It has light bulbs that go on or off. I have water that runs through the pipes. I can build a scale model, so that the owners can look at this and say, 'That is really great; it’s exactly what I had in mind', or 'Whoa, it’s not what I had in mind.'"

"It's really important, because if this is what the owner has in mind, they say, 'Okay, chief, sign here, press hard on the dotted line, you have got to go through three copies.'"


"I have an architect friend right now, who's in the middle of a massive lawsuit. The owners of the building did not want to sit down and define these models up here. They said, 'You know what we have in mind so go ahead and define it. We don’t have the time to think about this or whatever.'"

"So the architect defined these models, then transformed them into the instantiation. He built the building, but it’s not what the owner had in mind. And it’s a massive lawsuit."

I said to my architect friend, "I went out to your website and I found out why you're having this lawsuit. They were not involved in defining what the concepts are."

Now, Gary would say, "Once I get the concepts, I have to transform those concepts into design logic for the building, because I haven’t got the building design; I only have the concepts transcribed. Now I have to deal with pressure per square inch, metallurgical strength, the weight of water to move the water around. I have to deal with earthquakes. I have to deal with a whole bunch of other stuff."

"I may have some engineering specialization to help me transform the requirement concepts into the engineering design logic." In manufacturing, they call this the as-designed representation. Gary called these the architect’s Drawings, the architect’s plans.

"Now, I have the architect’s plans. I have to negotiate with the general contractor, because the general contractor may or may not have the technology to build what I have designed. So I have to transform the logic of the design into the physical design. I have the schematics here, but I have to have the blueprints."

Making transformations

"So we have to negotiate and make the transformations, and have some manufacturing engineers help me make that transformation. In manufacturing, they would call these as-designed and as-planned."

"I make the transformation to the implementation; they have the technology to implement the design. Then, the contractor goes to the subcontractors who have the tooling, and they have to configure the tools or express precisely what they want somebody to do in order to create it. And then you build the building."

That’s pretty interesting. You notice, by the way, there are some familiar words here: concepts, logic, and physics in effect. So you have the owner’s view thinking about the concept; the designer’s view thinking about the logic; and the builder’s view thinking about the physics in effect. You have the concepts, the schematics, and the blueprints. Then you have the configuration and the instantiation. That's the other dimension of a classification.

Now, there is a two-dimensional classification structure. That’s an important idea -- a really important idea. If you want to normalize anything, you have to be looking at one fact at a time. You want to normalize every fact. You don’t want anything in there that’s extraneous.


So it’s a two-dimensional schema, not a one-dimensional schema, not a taxonomy or a hierarchy or a decomposition; this is a two-dimensional schema.

If you go back to the origins of the IT community, the original databases typically were either flat files or hierarchical databases. They're not any good for managing data; they're good for building systems. You break it down, decompose it into small parts -- good for building systems, not for managing data.

So then you had to have a two-dimensional classification and normalization; Ted Codd showed up, and so on. I don’t want to digress into that, but you get the idea: it’s a two-dimensional classification.

And I was in Houston at one time, talking about the other dimension of classification. Some guy in the back of the room said, "Oh, that’s reification." I had never heard the word before. It turns out it comes out of philosophy.

Aristotle, Plato, and those guys knew that the ideas you can think about are one thing, but the instantiation of the ideas is a completely different thing. If you want the instantiation to bear any resemblance to the idea, the idea has to go through a well-known set of transformations.

You have to identify it and name it, so you can dialogue about it. Then you define it, and you have the semantic structures. Then you have the representations -- all the interior designs are done with representations -- and then you specify it based upon the implementation technology. Then you configure it based upon the tooling, and then you instantiate it. If it goes through that set of well-known transformations, the end result will bear some resemblance to the idea at the outset.
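The ordered transformations Zachman lists can be sketched as a pipeline. This is purely illustrative; the stage names follow his list, and the `reify` function is a hypothetical device, not framework tooling.

```python
# Reification as an ordered set of transformations from idea to thing.
STAGES = ["identify", "define", "represent", "specify", "configure", "instantiate"]

def reify(idea: str) -> str:
    """Carry an idea through every transformation, in order."""
    artifact = idea
    for stage in STAGES:
        artifact = f"{stage}({artifact})"
    return artifact

print(reify("idea"))
# instantiate(configure(specify(represent(define(identify(idea))))))
```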

Set of transformations

If you don’t go through that, you may or may not luck out -- even a blind thing finds a solution every now and then. That’s pretty good, but on the other hand, you won’t have any degree of assurance that whatever you end up with bears any resemblance to what you had in mind at the outset. It has to go through that set of transformations.

By the way, I didn't define those; they came out a couple of thousand years ago as reification. The etymology is Latin: "res" means thing. So you’re taking an idea and transforming it into a thing. That’s the other dimension of classification, in any case.

This is the framework for Anything Architecture, okay? There are going to be Bills of Materials, Functional Specs, Geometry or Drawings, Operating Instructions, Timing Diagrams, and Design Objectives. That’s one dimension. For the other dimension, you have the scoping representations (the boundaries), the requirement concepts, the design logic, the plan physics, the tooling configurations, and then you get the instantiations. So that’s the framework for Anything Architecture.

And I don’t care whether you’re talking about airplanes, buildings, locomotives, battleships, tables, chairs, or whatever. It’s anything in effect. That's just a framework for Anything Architecture.
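The two dimensions together form a grid, which can be sketched as a matrix of cells. This is a hypothetical rendering; the row labels follow the perspectives named above, and the cell contents are examples only.

```python
# The two classification dimensions as a grid: abstractions x perspectives.
ABSTRACTIONS = ["what", "how", "where", "who", "when", "why"]
PERSPECTIVES = ["scope", "concepts", "logic", "physics", "configuration", "instantiation"]

# One cell per (perspective, abstraction) pair; each cell holds exactly one
# kind of descriptive representation.
grid = {(p, a): None for p in PERSPECTIVES for a in ABSTRACTIONS}

grid[("concepts", "what")] = "Inventory Model"
grid[("logic", "what")] = "Logical Entity Model"
print(len(grid))  # 36
```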


Now, all I did was put enterprise names on the same descriptive representations relevant for describing anything.

Okay, we produce a Bill of Materials too. We would call these the Inventory Models -- actually, that's the business name for them; the technical name would be Entity Models. Now, what's an entity? What's a set? What's important about sets? How many members are in the set? Are they all present or not? The fact is, the business cares less about entities. They don't care about entities; they care about inventories.

So let's call them by their business name: the Inventory Model. The technical name would be Entity Model, but the business name is Inventory Model. The system-level Entity Model would be the logical entity -- in fact, we would call it a Logical Model -- but that would be sitting right there. The Bill of Materials we would call the Inventory Model.

The Functional Specs we would call the Process Models; those are processes -- input, process, output.

The Drawings or the Geometry we would call the Geography, the Distribution Models: the locations where you store things and transport things around. Those would be the Distribution Models, or the Geometry of the enterprise -- maybe Geography would be our name.

The Operating Instructions we would call the Responsibility Models, the workflow models. You know what responsibilities are going to be assigned to various roles within the enterprise: responsibility, or workflow.

The Timing Diagrams we would call Timing Models; some people say Dynamics Models. Jay Forrester at MIT basically wrote the book Industrial Dynamics in 1959. They were tracing resource flows in the enterprise -- using manufacturing concepts in human systems -- so they called them Dynamics Models, but a lot of times we call them Timing Models.

Motivation models

The Design Objectives we might call Motivation Models. So all I was doing was putting enterprise names on the same concepts. By the same token, the Scope Contexts we would call Scope Lists. We are just scoping it out: give me a list of inventories, give me a list of processes.

The Requirements Concepts we would call Business Models; those are models of the business. And the Design Logic we call System Models. Those are the Logic Models; they are what we call systems.

The as-planned physics we call Technology Models -- the technology constraints. The part configurations we call Tooling Models, and the product instances we call the enterprise implementation.

I calculated 176 different plausible definitions for business architecture . . . So you have to get definitive about it, or else you are like freight trains passing in the night.

The enterprise is sitting down here. Actually all this is architecture, but the instantiation is down here.

Allen Brown made some really good observations about business architecture. I have a whole other observation about business architecture. Now the question is when you say business architecture, what do you mean?

I was talking at a big business architecture conference. They were having animated discussions and they were getting real passionate about it, but the fact of the matter is they weren’t defining business architecture the same way; they were all over the board.

I said this yesterday. I calculated 176 different plausible definitions for business architecture. For those guys, you could be talking about any one of those, but if you don’t define which one you are talking about, whoever you're talking to may be hearing any one of the other 175. So you have to get definitive about it, or else you are like freight trains passing in the night.

I will tell you, there are various combinations of these models up here that somebody can articulate as business architecture. Which one are you talking about when you say business architecture? Are you talking about the business processes? Are you talking about the objectives and strategies? Or are you talking about the infrastructure, the distribution structure?

Or are you talking about some combination? You have to talk about the inventories and the processes and see those together. You can put together whatever combinations you want. There are 176 possibilities basically.

I would have what I would call the primitive components defined and then, depending upon what you want to talk about, I would construct whatever definition you want to construct.
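The combinatorial point can be sketched in code. This is a hypothetical illustration: the primitive-model names are invented for the example, and this simple enumeration of six primitives yields 63 combinations rather than Zachman's 176 (his count is taken over the framework's cells), but it shows how quickly plausible definitions multiply:

```python
from itertools import combinations

# Hypothetical primitive models (illustrative names, not an official list)
primitives = ["Inventory", "Process", "Distribution",
              "Responsibility", "Timing", "Motivation"]

# Every non-empty combination of primitives is one plausible
# "business architecture" definition somebody could be using.
definitions = [set(c)
               for r in range(1, len(primitives) + 1)
               for c in combinations(primitives, r)]

print(len(definitions))  # 2**6 - 1 = 63 plausible definitions
```

Two people can each pick a different set from this list and both call it "business architecture" -- which is exactly the freight-trains-passing problem.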

Enterprise names

Now, I just put the enterprise names on it again. So here is The Framework for Enterprise Architecture, and I populated it. Here is the Bill of Materials, here are the Functional Specs, here is the Geometry or the Geography, here are the Operating Responsibilities, here are the Timing Diagrams, and here are the Design Objectives. Here are the Scoping Representations, here are the Concept Models, the Requirements Concepts, here is the Design Logic, here is the Building Physics, in effect the as-planned, here is the Tooling Configuration, and there is the Instantiation. So that's The Framework for Enterprise Architecture.

I just put the enterprise names on it; you obviously saw what I was doing. You can read it as The Framework for Enterprise, but I was really showing you The Framework for Anything. So it's all basically the same thing. This is Enterprise Architecture.

Now, I have some of these framework graphics. For anybody who wants to go to the workshop this afternoon, we will make sure you have a copy of it, and anybody who doesn't go to the workshop, we will have them out at the table. So you can pick up a copy.

I wrote a little article on the back of that, "John Zachman's Concise Definition of The Zachman Framework."

Actually, somebody asked me if I had ever read what Wikipedia said about my Framework. I said no, I had never read it. I don't need to read Wikipedia to find out the definition of The Zachman Framework. So they said, "You better read it, because whoever wrote it has no idea what you're talking about."

It’s architecture for every other object known to humankind. It’s architecture for airplanes, buildings, locomotives, computers, for XYZ. It doesn't make any difference.

So I read it, and they were right. They had no idea what I was talking about. So I fixed it. I wrote the article and put it out there. A couple of months later some friend of mine said, "Did you ever read what they wrote on Wikipedia about your Framework?" I said I wrote it. He said, "What? You wrote it? I don't believe it. It’s not what you talk about."

So I read it, and some yo-yo had changed it back. So I changed it back. And a couple of months later, guess what? They changed it again. So I said, forget these people. The fact is, I wrote my own definition of The Zachman Framework, so that's on the back there, with the little audio.

Now, you understand what I am telling you. This is Enterprise Architecture. It’s architecture for every other object known to humankind. It’s architecture for airplanes, buildings, locomotives, computers, for XYZ. It doesn't make any difference. I just put enterprise names on it.

By the way, for those of you technical aficionados, the meta-entity names are at the bottom of every cell, and there are only two meta-entities in every cell. But the names are very carefully selected to make sure they are precisely unique and single-variable. You have one and only one thing -- one type of fact -- in any one of these cells. So, in any case, this is Enterprise Architecture.
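The single-variable cell rule can be illustrated with a small data structure. This is a hypothetical sketch, not Zachman's own notation -- the row, column, and meta-entity names are invented for the example; it just enforces "exactly two meta-entities per cell":

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    """One cell of the framework: one perspective (row), one abstraction
    (column), and exactly two meta-entities -- one type of fact."""
    perspective: str      # e.g. "Business"
    abstraction: str      # e.g. "Inventory (What)"
    meta_entities: tuple  # exactly two names

    def __post_init__(self):
        if len(self.meta_entities) != 2:
            raise ValueError("every cell has exactly two meta-entities")

# A single-variable cell: one type of thing and its relationship
cell = Cell("Business", "Inventory (What)",
            ("Business Entity", "Business Relationship"))
print(cell.meta_entities)
```

Trying to cram a third variable into a cell raises an error, which is the point: a model with more than one variable is a composite, not a primitive.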

Friends of mine wanted me to change the name of this to the Zachman Ontology, because if you recognize this, this is not a methodology; this is an ontology. This does not say anything about how you do Enterprise Architecture -- top-down, bottom-up, left to right, right to left, where you start. It says nothing about how you create it. This just says this is the total set of descriptive representations that are relevant for describing a complex object. I happen to have enterprise names on them, but it doesn't tell you anything about how to do this.

Not either/or

For a lot of years, people didn't know what to do with this. They were saying, "I don't know what to do with it. How do you do Enterprise Architecture?" Now you understand where I am going to take you with this. This is an ontology, and you need a methodology. It is not a methodology versus an ontology; it's an ontology and a methodology. It's not either/or.

However, this is an ontology. It's a classification. It has unique categories for every set of facts that are relevant for describing a complex object.

Now, by the way, there is another graphic in this, and the reason I put it in is that my name is on a number of websites, but I am excluded from those websites. I have nothing to do with them, even though they have my name on them. There is only one website that I have any access to. That's why I put that slide in there, and there's some other stuff in there.

Now, you understand what I basically am saying here. Architecture is architecture is architecture. I simply put enterprise names on the same descriptive representations relevant for describing everything. Why would anyone think that the descriptions of an enterprise are going to be any different from the descriptions of anything else humankind has ever described? I don't believe it.

I don't think Enterprise Architecture is arbitrary… and it is not negotiable.

Now, you could argue enterprises are different. Hey, airplanes are different from buildings too, and buildings are different from computers, and computers are different from tables, and tables are different from chairs. Everything is different; they are all different. But they all have Bills of Materials, Functional Specs, Geometry. They all have Concepts, Logic, Physics. So basically, architecture is architecture is architecture. That's my observation.

I am trying to do this in a very short period of time and I haven’t had half a day or a day to soften all you guys up, but get ready, here you go. I don't think Enterprise Architecture is arbitrary… and it is not negotiable. My opinion is, we ought to accept the definitions of architecture that the older disciplines of architecture, construction, engineering, and manufacturing have already established and focus our energy on learning how to use them to actually engineer enterprises. I think that’s what we ought to be doing.

So I don’t think it’s debatable. Architecture is architecture is architecture.

I have to tell you another thing: depth and width. For every cell, you could have a model that's enterprise-wide at an excruciating level of detail. That would be the whole cell basically.

Or you could have a model that is enterprise-wide at only a medium level of detail. That would be half of it. Or you could have a model that's enterprise-wide at a high level of detail. So there is nothing that says that you have to have an excruciating level of detail. That's just another variable.

By the way, you could also have a model that's less than enterprise-wide at an excruciating level of detail -- half of the enterprise at an excruciating level of detail, or the whole enterprise at an excruciating level of detail. So you have those two other variables, and you have to be able to represent them in some fashion.

The implication is that anything that is white space here, if you don't make it explicit, is implicit, which basically says that you're allowing anybody and everybody to make assumptions.

Risk of defects

It may be fine. You may be willing to accept the risk of erroneous assumptions. You're going to accept the risk of defects. In fact, in manufacturing airplanes they will accept some degree of risk of defects. But when the parts don't fit together, the scrap and rework cost starts to go up. Then they will say, wait a minute, you can't complete the implementation until you have a complete engineering design release.

So you have to read that other variable into this as well. Now, on to ontology. I didn't even know what an ontology was until fairly recently.

I'm going to give you my John Zachman layman's definition of ontology. Some of you guys may be ontological wizards. I don’t know, but the probability in a group this big is that somebody really is familiar with ontology.

The Zachman Framework schema technically is an ontology. An ontology is a theory of existence -- it has to do with what exists, a theory of the existence of a structured set. That means a classification, a schema, that is rational, logical, and structured -- it's not arbitrary -- of the essential components of an object. Essential means the object is dependent for its existence on the components, and the components exist as well.

A structure is not a process, and a process is not a structure. You have two different things going on here.

So you have the essential components of the object, for which explicit expression is necessary -- probably mandatory -- for designing, operating, and changing the object. The object might be an enterprise, a department of an enterprise, a value chain, many enterprises, a sliver, a solution, a project, an airplane, a building, a bathtub, or whatever. It doesn't make too much difference what it is.

A framework is a structure. A structure defines something. In contrast, a methodology is a process, a process to transform something. And a structure is not a process, and a process is not a structure. You have two different things going on here.

Now, this is really an important idea too. Here is a comparison between ontology and methodology. An ontology is the classification of the total set of primitive elemental components that exist and are relevant to the existence of an object. A methodology produces composite compound implementations of the primitives.

All the implementations, the instantiations, are derivative of the methodology. The methodology produces the implementation. The implementations are compounds. The primitives -- the elements -- are timeless, and the compounds are temporal.

Now, that’s an important point, and I'll try to give you an illustration of that.

Here is an ontology. I learned a lot from this metaphor, by the way. This is a classification of all the elements in the universe, actually. It's a two-dimensional schema. It's normalized: one fact in one place. You are classifying the elements of the universe in terms of the number of neutrons and protons by the number of electrons. That is not a process.

This tells you nothing about how you do this -- top-down, bottom-up, left to right, right to left -- or what compound you might want to create out of it. This just says here is the total set of elements from which you can create whatever you want to create.

And once again, I didn’t say this yet, but until an ontology exists, nothing is repeatable and nothing is predictable. There is no discipline.

Best practices

Before Mendeleev published the periodic table, there were chemists. They weren't chemists actually; they were alchemists, and they were very clever, by the way -- really competent, very clever. They could produce implementations, produce compounds, but it was based upon their life experience. It was a best-practice kind of thing, not based upon a theoretical construct.

And these elements are timeless. If you have an element that has six protons, six neutrons, and six electrons, that's carbon. The rest of the world calls it carbon. Do yourself a favor and call it carbon. You can call it whatever you want to, but if you want to communicate with anybody else, just call it by the name that is recognizable by the rest of the universe.

Now, in any case, those are the elements and they are timeless. They are just forever.
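The point that a classification is not a process can be sketched in a few lines. This is a hypothetical illustration: a tiny slice of the periodic table, keyed by proton count, names elements but says nothing about which compounds to make or in what order.

```python
# A classification (ontology), not a process: proton count -> element name.
elements = {1: "hydrogen", 6: "carbon", 8: "oxygen",
            11: "sodium", 17: "chlorine"}

def name_of(protons):
    # Same proton count, same name -- "call it carbon" so others understand.
    return elements.get(protons, "unknown")

print(name_of(6))  # carbon
```

The table only classifies; any recipe for combining elements is a separate process layered on top of it.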

Here are compounds. This is a process. A process transforms, creates something. Take a bowl of acid and add it to a bowl of alkali, and it gets transformed into salt and water. This is not an ontology; this is a process. Take this, add it to that, and it's going to produce whatever you want to produce.

We could not have written this down like that until Mendeleev published the periodic table. We didn’t have any notation to produce that.

Now, the compounds are temporal. You produce saltwater for some reason, something good for some whatever, whatever it happens to be that you are trying to create.

Here are some examples of other compounds. This is an acid and a base -- an alkali -- and again, sodium chloride and water. It's a balanced equation. Here is hydrogen, there is hydrogen, there are the two hydrogens. Here is chlorine, there is chlorine. Here is the sodium, there is the sodium. Here is the oxygen, there is the oxygen.

We could not have written this down like that until Mendeleev published the periodic table. We didn’t have any notation to produce that.

So here are some other compounds: here is salt, that's sodium chloride; here is aspirin, C9H8O4; Vicodin is C18H21NO3; Naproxen is C14H14O3; Ibuprofen, Viagra, sulphuric acid, and water, and so on.

How many of these can you create out of the periodic table? The answer is infinite. I don't want to take the time to elaborate, but it's infinite. And these are temporal. These are specifically defined to do specific things.

Here is an ontology. How many different enterprises could you create out of this ontology? And the answer again is going to be infinite. Until an ontology exists, nothing is repeatable, nothing is predictable. There is no discipline. Everything is basically best practice. The primitives are timeless.

Now, here are some compounds. The elements are what I would call primitive components. The compounds are implementations, instantiations: COBOL programs -- you can read Java 2 or Smalltalk or whatever you want to read -- Objects, BPMN, Swimlanes, Business Architecture, Capabilities, Mobility, Applications, Data Models, Security Architecture, Services, COTS, Technology Architecture, Big Data, Missions/Visions, Agile Code, Business Processes, DoDAF Models, Balanced Scorecard, Clouds, IBM Watson, TOGAF Artifacts, and so on. How many of these are there? It's infinite.

Specific reasons

How long will it be until we can add one to the list? What time is it? People get really creative. They create a lot of these things. And these are temporal. They are for specific reasons at a specific point in time.

Here is alchemy. It's a practice. It's a methodology without an ontology. The process is down in the basement with a chemistry set, trying things out. If it works and it doesn't blow the house up, write that one down; that's a good one. If it blows up, you probably have to write that one down too: don't do that one again.

So a process with no ontological structure is ad hoc, fixed, and dependent on practitioner skills. It’s not a science; it is alchemy; it’s a practice.

I've got to tell you, the alchemists were really clever. Man, they figured out how to create gunpowder long before they ever had the periodic table. So these people were really creative. However, a few hundred years later, Mendeleev published the periodic table.

I don't know whether you guys realize this or not, but we tend to think the periodic table has been around forever, because the elements have been around forever -- basically, we learned it in chemistry class. But the periodic table was only published in the 1880-1890 time frame.

If you just built them to get the code to run, they're not going to be integrated, not flexible, not interoperable, not reusable. They are not aligned; they are not meeting expectations.

If you think about this, within 50 years of the publication of the periodic table, the physicists and chemists basically were splitting atoms. Think about this. Once you have order, now research actually works. Things become predictable and repeatable. We don’t have to learn everything by experience. We can hypothetically define other possibilities and get really creative.

Like I say, in a very short period of time, friction goes to zero, and you can get really creative and really sophisticated in very short periods of time. So I just throw that out there.

So ontology versus process, engineering versus manufacturing, architecture versus implementation. It's not "either/or;" it is "and." And the question is, how did you get your composite manufacturing implementation? Did you reuse components of primitive, ontological, engineering constructs, or did you just manufacture the composite ad hoc to some problem or some system requirement?

Actually the enterprise is the total aggregate sum of composite implementations.

Now, the question is, how did you get your composite? Were you just building systems or did you have the periodic table of primitive components from which you assembled the implementation?

If you just built them to get the code to run, they're not going to be integrated, not flexible, not interoperable, not reusable. They are not aligned; they are not meeting expectations.

So the question is, how did you get the composite, the compounds? Did you have the periodic table? Now, obviously I am taking it to a point where I am saying, it’s not an "or;" it’s an "and."

Allen and I were talking about this yesterday. I don't want to take a lot of time to develop this, but it came from Roger Greer, who was the Dean of the School of Library and Information Management at USC years ago. I just happened to run across some notes I had taken at an IBM GUIDE Conference in 1991.

Professional vs. trade

Roger was talking about the difference between a profession and a trade. He basically made the differentiation this way. This is the Professional Service Cycle. The professional starts with a diagnosis -- an analysis of need -- and diagnoses the problem. Then you prescribe the solution. Then the technician applies the solution, evaluates the application and, depending upon the evaluation, enters into the cycle again.

So what differentiates the professional from the trade or labor is the diagnosis and a prescription, where the trade or labor is involved with the implementation and any evaluation.

My observation is that this is where the engineering takes place. That's where you need the ontology: to do the diagnosis and the prescription. And then you need the methodology to do the implementation -- the manufacturing. The engineering work is going on over here; the manufacturing work is going on over there.

So what differentiates the professional from the trade? Well, if you start with the diagnosis of the problem and the prescription, that’s what the doctor does. The x-ray technician shoots the x-ray, takes the picture, and then evaluates whatever the result is.

Those of us who come from the architecture domain, need to begin to develop the characteristics of a profession. This is a profession.

Leon Kappelman is a friend of mine. He's an academic guy. He has traced the CEO surveys for years and years -- 20, 30 years. For 20 or 30 years, one of the top-ten issues that the CEOs of the world say those of us who come from the information community need to deal with has been alignment.

They're basically saying, "I don’t know what you guys are doing. You're spending a lot of money down there in IT. Whatever you're doing with it does not align with what I think the enterprise is about."

So there's an alignment problem. I would submit to you, if you are starting over here, you are going to always be a solution in search of a problem.

So we want to change it. Allen and I feel really strongly about this. Those of us who come from the architecture domain need to begin to develop the characteristics of a profession. This is a profession. Well, that presumes a discipline, and the implication is that we need to change our whole concept to diagnosing the enterprise problem. In fact, that's the last slide I would use.

The end object is not to build the system. The end object is to diagnose the enterprise problem. Then, you can prescribe. The enterprise is really complicated: you can probably prescribe three, four, or a dozen different possible solutions that they could pursue. Okay, chief, here is a set of things that you can do.

Somebody -- I think it was in the Steve Jobs book -- said that you had to go in to Steve Jobs with two recommendations, but have a third one in your pocket, because he would tear them up. So you have to go in and have a third one.

How many do you want, chief? We can construct however many you want, and you can evaluate them or analyze them for whatever the implications are. What are the capital-expense implications, or the cultural ones? You can analyze them and let them understand what the alternatives are and what the implications of the alternatives are. Then they can pick one, you can do the implementation, and then you evaluate, and so on.

Lessons to be learned

This is what differentiates the profession from the trade. This is important. The more I think about it, there really are lessons to be learned here.

Here are the research lessons that we've learned. It is possible to solve general management problems very quickly with a small subset of primitive components -- simply lists and their interdependencies, short of the complete primitive models.

You don’t have to do a lot of architecture to begin. You have enough that you can do the diagnosis of the problem. Then, different complex, composite constructs can be created dynamically, virtually cost-free, from the inventory of primitive lists for addressing subsequent general management problems.

Once you begin to populate the inventory of the primitive components, they are reusable to analyze or diagnose other problems. This is not just limited to whatever precipitated the first population of the primitives.

There is a TOGAF development strategy, and I would evolve TOGAF to become an engineering methodology as well as a manufacturing methodology.

And many scenarios can be evaluated to test strategy alternatives before making commitments. You can analyze the implications and make recommendations around those implications before you actually spend money or actually create infrastructure kinds of things.

These are really important issues. So here are my conclusions.

Here is what I would propose to TOGAF. There is a TOGAF development strategy, and I would evolve TOGAF to become an engineering methodology as well as a manufacturing methodology.

Those of us who come from the IT community, for the last 75 years we've been building and running systems. Basically, that's what people say: if you ask somebody from IT what they do for a living, they would say we build and run systems.

So all of us are very manufacturing-dominant. That's the way we tend to think. We tend to think in terms of composite constructs. Every artifact, if it has more than one variable in it, is a manufacturing artifact; it's not an engineering artifact. I can tell pretty quickly by looking at the descriptive representation, looking at the model.

If you have more than one variable in that model, I know you're using it for manufacturing -- for implementation purposes.

I would just broaden TOGAF to dig in and deal with the engineering issues. Here's the way I would do that in Phase I; I'll tell you what I think Phases II and III might be as well. I'm just getting creative here. Allen may say, "That's interesting, Zachman, but you're out of here. We don't need a lot of help. We've already got enough things to do here."

Existing process

But, first of all, I would use the existing data-gathering process to populate the inventory of single-variable, primitive models. We're already doing the gathering; I would just factor out the primitive components and begin to populate the inventory.

We have a little workshop this afternoon to show you how to do that. It is not that hard. Anybody who goes to the workshop will see. The workshop is created in order to just show you that it's not too complicated. You can do this.

I would just use the existing data-gathering process of the methodology and begin to populate. Then I would reuse the primitive components in the creation of the present TOGAF artifacts. You've got to create the artifacts anyway; you might as well just reuse the primitive components.

Now, that presumes another thing for those of you who are into the tooling domain: you'd have to map the primitive metamodel against the TOGAF metamodel. So there is a metamodel issue here.

You have to look at the metamodel, the TOGAF artifacts, and see if there is a composite construct in the Metamodel and just factor out what the primitive components are.

But what that tells you is that you have to look at the metamodel of the TOGAF artifacts, see whether there is a composite construct in the metamodel, and factor out what the primitive components are.

That's the way you would map the composite you're trying to create from the primitives. That's what I would do, just looking at right where we are today. So here is the set of primitives, and here is the methodology. Let's just use the methodology to do engineering work, and it will still end up creating the same implementation composites.

Then I was getting creative; here is what I would do, and I encourage you to do this, by the way. I would extend the methodology for enterprise problem diagnosis and solution prescription: single-variable primitive components, binary relationships, and impact analysis.

What you need in order to do the diagnosis is the single-variable, primitive construct, and only binary models, because what you're going to do with this is impact analysis. You touch one thing, and here are all the other things that are impacted by it.

That application has been around for a long time. I'm not telling you something that nobody knows how to do. But these are single-variable models and binary models. Building a binary model is asking: is this related to that? Is this related to this? It's two things at a time. The models are pretty simple. I'm not going to take more time on that, but then I would segue to the current TOGAF methodology.
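The impact analysis over binary models can be sketched as a simple graph traversal. This is a hypothetical illustration -- the component names and pairings are invented, not drawn from any framework artifact: each binary model is just a pair saying "this is related to that," and touching one component surfaces everything reachable from it.

```python
from collections import deque

# Hypothetical binary models: (thing, related thing) pairs.
binary_models = [
    ("Order", "Order Process"),       # inventory-to-process
    ("Order Process", "Sales Role"),  # process-to-responsibility
    ("Order", "Warehouse"),           # inventory-to-distribution
]

def impacted(component, pairs):
    """Everything reachable from `component` through the binary models."""
    neighbors = {}
    for a, b in pairs:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    seen, queue = {component}, deque([component])
    while queue:
        for nxt in neighbors.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {component}

print(sorted(impacted("Order", binary_models)))
# ['Order Process', 'Sales Role', 'Warehouse']
```

Because each relationship is recorded once as a pair, the same inventory of primitives answers "what does touching X impact?" for any X, which is why the primitives are reusable across problems.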

I would come out of here and go into the current methodology, making enhancements incrementally as practical. You have been improving TOGAF forever, from the very beginning. I would just start to begin to improve it based upon what we've learned with the diagnostic and the prescription enhancement.

The transformation

Then, in Phase III, I would orchestrate the transformation from the TOGAF artifacts to the implementation -- the lower rows. So you have the transformation from the strategy up here through the concepts, the logic, and the physics, to the tooling configuration.

I would orchestrate that, and I would extend the TOGAF governance process. TOGAF's governance process is really strong; I would just take a hard look at it and elaborate it to manage the entire set of reification transformations. That's where I would take it.

Some of you may say it will take too long or cost too much, or whatever the argument might be, but I think that's where I would go -- principally because of the implication of changing the fundamental concept of Enterprise Architecture. The profound significance is this: it alters the concept of Enterprise Architecture from one of building models to one of solving general management problems.

Man, that would be really interesting. It buys the time for the experts to build out the complete Enterprise Architecture -- the thing-relationship-thing primitive models -- iteratively and incrementally. You don't have to do it all at once. General management problem by general management problem, iteratively and incrementally, you start building out a little more, adding to the primitives over time.

If we could change the perception of Enterprise Architecture to be one of solving general management problems, we would have no problem getting the resources and the time to do it.

Then, it builds significant credibility for the information-technology community, and I would submit we need all the help we can get. If we begin to be perceived as the enterprise doctors, we would be perceived as direct, not indirect. It wouldn't be an optional but a mandatory kind of responsibility. Most importantly, it would position Enterprise Architecture to become a general management operational process, not simply an IT exercise. I think that's where you have to go.

If we could change the perception of Enterprise Architecture to be one of solving general management problems, we would have no problem getting the resources and the time to do whatever Enterprise Architecture we want to do. That valuation issue will tend to go away. I saw a presentation yesterday about the valuation. It was talking about the Internet of Things, and it was really a creative presentation. I really appreciated it a lot.

But if we can solve general management problems, you don't have to worry about valuation. I will say one more thing about valuation. The fundamental problem with architecture is that it doesn't save money in the current accounting period. It's not an expense; you don't make money or save money in the current accounting period. You are building an inventory of assets. What makes an asset different from an expense? How many times you use it. Use it more than once, and it's an asset. So you build the inventory of assets.

The assets don't save you money in the current accounting period, but they enable you to make money or save money in many accounting periods in the future. The problem with asset valuation is that accounting people in the US are precluded from putting values on assets. That's probably because there's no absolute way to value an asset: its value is derived from how many times you use it, or from what the market will pay for it at some point in time.

It's difficult to value assets, and it's really difficult to value intellectual assets. I would submit that Enterprise Architecture is an intellectual asset, and we're just beginning to learn some things about that. But that issue turns out to go away. If you just solve general management problems, you don't have to worry about the value proposition.
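The asset-versus-expense distinction Zachman draws can be made concrete with a little arithmetic. The sketch below is purely illustrative -- the build cost and reuse counts are invented -- but it shows why the cost attributable to each use falls as an architectural asset is reused across periods:

```python
def cost_per_use(total_cost: float, uses: int) -> float:
    """Cost attributable to each use of an asset.

    An expense is consumed once; an asset is used many times,
    so its effective cost per use drops with every reuse.
    """
    if uses < 1:
        raise ValueError("an asset must be used at least once")
    return total_cost / uses

# Hypothetical primitive model built once, then reused in later periods
build_cost = 100_000.0
print(cost_per_use(build_cost, 1))  # expensed in one period: 100000.0
print(cost_per_use(build_cost, 5))  # reused as an asset: 20000.0
```

The point of the sketch is only the shape of the curve: the value proposition of architecture shows up across many accounting periods, never in the one where the cost lands.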

I tried to make it in an hour, but it was actually an hour and three minutes. I owe Allen three minutes now, and that's not too bad on my part. There will be a panel, where we will have some discussion and answer any questions, and then there is a workshop for anybody who cares about trying to work with some of these things. I would be glad to do it. Thank you, Allen, and thank you guys for taking the time to listen. I appreciate it a lot.


Brown: We have some questions. We talked about professionalizing Enterprise Architecture. We both feel passionately about it, and about having these professionals as the enterprise doctors, as you say. The person to ask the questions is actually the CEO of the AEA, Steve Nunn. The AEA now has 44,000 members, and they are active as well, which is great. So, Steve, what have you got?

Steve Nunn: Unsurprisingly no shortage of questions and compliments on the presentation, John.

Here's a long question, but bear with it. Given a composite of an enterprise, a methodology existed for its construction. Today, I have a million assets with individual change records associated with them. The EA methodology did not govern the maintenance after construction. What do you suggest to bring the EA back in line with the ontology?

Zachman: This is not atypical, by the way. I think that's where most of us live. I normally make the case that those of us who come from IT have been manufacturing the enterprise for the last 60 to 70 years. The enterprise was never engineered. We are manufacturers, and we manufacture parts. We don't manufacture the enterprise, and the parts don't fit together.

So what do you do if you manufacture parts that don't fit together? You scrap and rework. There is no way to fix that problem after the fact. If you want the parts to fit together, you have to engineer them to fit together before you manufacture them. If you manufacture them first and then try to fit them together, you can't get them to fit.

We're all sitting in kind of the same position. Somebody has to break the pattern. If you just keep on writing code, you're just going to get more parts. You're not going to change anything. You have to have a different paradigm. I was describing for you a different paradigm, and I was describing for you an engineering paradigm.

I would do just exactly what I said. I'd start with TOGAF. We already have this methodology -- very widely respected, very widely used. I would take the data-gathering portion of the methodology and begin to populate the inventory of primitive assets. You don't have to have them all, but you have to begin. So you salvage whatever you can out of the TOGAF activity that you have at your disposal.

Once you do that, you populate them with the primitives that are required to create the TOGAF composites right now, so you can produce whatever you are producing out of TOGAF right now. I would just start with something I know, something I have my hands on. I can start with TOGAF. I would populate the primitive artifacts and then create the composites by reusing the primitives.

So I would start there, and then begin to enhance that over time. I have to enhance the methodology to elaborate, and I gave you some thoughts about how I would do that. But in the meantime, once you start creating the architectural constructs, you have to orchestrate your way out of where you're at.

Migrating what exists

We don't have a blank sheet of paper when we start with our Enterprise Architecture. We already have an enterprise out there. You have to figure out a way to migrate out of the existing environment into the architectural environment. I am not just going to tell you what I think the solution is without elaborating on how I would use it. I would use the data warehouse kind of concept. I create the architecture, then extract and transform the database out of the existing applications to populate the architectural environment.

I didn't learn this. I didn't figure out this all myself. People from Codec and I were sitting in the Codec Theater one time. They were saying, once we have the architected data, we know what the data is, and now we're going to rebuild all the transaction processing systems to update the data warehouse. Then, after we update the data warehouse, we're going to turn off the legacy system.

Over some period of time, however long you want to take, you just move it out little by little, transaction by transaction, to the architectural environment. If you're going to rebuild the transaction-processing systems to populate the data warehouse, then I would add the process specification. I would add the distribution. I would add the other characteristics of architecture. That's the way I would orchestrate my migration out of the existing environment.

Brown: There is also a sense that came out of that question that the architecture, once it was done, was done, and then things changed afterwards. There was no concept that the architecture should be referenced any time you make a change to the instantiation, and updated accordingly.

Zachman: Allen, that's really an important point. I'll tell you where I learned how to manage change. I saw it at Boeing, and it took me about 10 years to figure this out. How do you manage the changes in the Boeing 747s? Very carefully, right?

Brown: Yes.

Zachman: You don't walk around a Boeing 747 with a hammer and screwdriver making changes. Forget that. You have an engineering administration. What are they managing? The drawings, the functional specs, the bills of materials, and so on. You pull out the one you want to change. You change the artifact, and then you figure out the impact on the adjoining artifacts.

When you figure out the impact, you make changes to all those other artifacts. You don't throw away the old version; you keep it. It's regulated in airplanes: you have to be able to trace the artifacts back to the last time the airplane flew successfully.

But once you change the artifact, you go to the shop for a particular change kit. You take the change kit out to the Boeing 747 and you put the change into the Boeing 747. If you manage change in that fashion, you will minimize the time, disruption, and cost of change to the Boeing 747.

Every artifact precisely represents the Boeing 747 as it exists at this moment. And one thing people tend not to know: every Boeing 747 is unique. They are all different, and there is a set of these artifacts for every Boeing 747. You can trace each one back to its origin, whatever was changed since the last time it flew. And as if the Boeing 747 weren't complicated enough, these artifacts were on paper.

The first electronically designed airplane was the 777. Now you understand the reason I'm telling you this. If you really want to change the Enterprise Architecture before you change the enterprise, and if you have a general management responsibility for Enterprise Architecture, this is a piece of cake.

Making changes

So, by the way, Ms. Vice President of Marketing, before you change your allocation responsibility, your organization responsibility, come up and see me. We'll change the repository first, and then you can change the allocation responsibility.

Oh, by the way, Ms. Programmer, before you change a line of code in that program, you come up and make the changes in the repository first. Then you can change the line of code in the program. Before you change anything, you change the architecture first, and then you change the enterprise.

And, by the way, it's dynamic, because as you continuously solve problems, you can keep populating, putting more primitive components into the architecture. That's why this becomes really important. It becomes an operating responsibility for general management.

If they really understood what they have -- a knowledge base of everything they can possibly know about the enterprise -- they could change anything, and they'd have a great deal of creativity to do lots of things they haven't even dreamed about doing before. That's a really important idea: it becomes a general management responsibility integrated into the enterprise operation.

Nunn: Next question. Have you considered what changes might be required to your framework to accommodate an ecosystem of multiple enterprises?

Zachman: That's what I would call federated architecture. There are some things you want in common across more than one enterprise and some things you want to leave provincial, if you will. Some things you make federal, some things you leave provincial. The problem we have now is that when you try to make things common or federal when they are not common, that's where you get the hate and discontent. The framework is really helpful for thinking through what you want to make common and what you want to leave as a provincial artifact.

That's the way you need to deal with that, and that would apply to any complex environment. In most every enterprise these days, there is more than one framework that you might possibly want to populate, but then you have to understand what you want to be common and what you want to leave provincial. That's the way we would handle that.

Brown: So if you are an architect, you pull out the drawing for the entire urban landscape, you pull out the drawing for specific buildings, and you pull out the drawing for functions within a building -- a different framework for each.

Zachman: Actually, this was implemented. I learned it from the Province of Ontario. There was a premier about 20 years ago who was pretty creative. They sorted all of the departments in the Province of Ontario into categories. You can call them clusters.

There was the social services cluster, the land cluster, the finance cluster. Then he put a super minister in charge of each cluster, and their role was to integrate -- get rid of all the redundancy and make it as integrated as possible.

That was the process we were using. You have a federation within each cluster, and then a second level of federation up at the province level as well.

Common connector

Nunn: Do you envision a common connector from a given architecture development method, like TOGAF, DoDAF, FEA to the Zachman Framework?

Zachman: When we talk in the workshop, we'll get into that a little bit. If you have the primitive components, you say which composite you want. You want the TOGAF components? Click, there is TOGAF. Oh, you want DoDAF? No problem; click, there is DoDAF. You want a balanced scorecard? No problem; click, there is a balanced scorecard. You want your profit-and-loss statement? Click, there is the profit and loss.
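The click-a-composite idea can be sketched in a few lines: store each primitive once, keyed by its framework cell, and assemble composite views on demand by reusing those cells. The cell names and view definitions below are invented for illustration and are not drawn from any real framework content:

```python
# Toy sketch: primitives stored once, composites assembled dynamically.
# Keys are (abstraction, perspective) cells; contents are invented.
primitives = {
    ("what", "business"): ["customer", "order", "product"],
    ("how", "business"): ["take-order", "ship-order"],
    ("where", "business"): ["warehouse", "store"],
}

# A composite view is just a selection of primitive cells
composite_views = {
    "inventory-view": [("what", "business")],
    "logistics-view": [("how", "business"), ("where", "business")],
}

def compose(view_name: str) -> list:
    """Assemble a composite on the fly by reusing primitive cells."""
    return [item
            for cell in composite_views[view_name]
            for item in primitives[cell]]

print(compose("logistics-view"))
# ['take-order', 'ship-order', 'warehouse', 'store']
```

Because nothing is bound together until `compose` runs, changing one primitive cell changes every composite that reuses it -- which is the flexibility argument Zachman is making.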

What you're doing is creating a composite dynamically -- whatever composite you want to take a look at. I was really impressed with Don's presentation yesterday. I was at Raytheon last week, and there was a presentation I had seen recently about hardware -- the price-performance improvements and the capabilities in hardware. What it was basically saying is that you'll be able to put big data -- all the structured data, all this data -- on a chip, and that chip will go into your processor. The guys at Raytheon said that it's not a question of when you can do it; you can do it now.

So if you have big data on a chip, you get dynamically identified threats and opportunities. What do you think that's going to do to the decision cycle? It's going to make that decision cycle very short.

Because you have big data on a chip, and you can analyze that big data and find a threat or an opportunity, something external or even internal, the immediate question becomes: what are you going to change in the enterprise? Are you going to increase or decrease the inventory, increase or decrease the process transformation, increase or decrease the storage capacity of the node? What are you going to do to your enterprise?

So you have to make up your mind about what you are going to do, real quickly. You'd like several alternatives? Okay, chief, here are the three or four alternatives. Which one do you want to pick?

It's going to shorten the decision cycle dramatically. Dawn was frightening me yesterday. We're not talking about the sweet by and by. She was talking about stuff that is here now, and that's what the guys at Raytheon were telling me. This is here now.

I talked about big data before, and the fundamental question is, once you figure out something external or even internal, what are you going to do to your enterprise? Where is your Enterprise Architecture? What are you going to change?

The next question is, who is working on your architecture? Somebody might be working on this. I'll tell you about that. I don't think too many people have an idea of the sense of urgency we have here. You're not going to do this today. You have to start working on this, and you’ve got to eat this elephant -- bite, bite, bite. It's not going to happen overnight.

Nunn: How can the Zachman Framework be applied to create an architecture description that can be implemented later on, without falling into a complex design that could be difficult to construct and maintain? Following on from that, how do you avoid those descriptions becoming out of date, since organizations change so quickly?

Manufacturing perspective

Zachman: Whoever posed that question is thinking about this from a manufacturing perspective. They're thinking about it as a composite construct. If you separate the independent variables and populate the primitive components, and don't bind anything together until you click the mouse, then you can change any primitive component anytime you want to.

You're not locked into anything. You can change with minimum time; you can change one variable without changing everything. The question is couched in terms of our classic understanding of Enterprise Architecture -- the big, monolithic, static thing that takes a long time and costs a lot of money. That's the wrong idea. Basically, you build this iteratively and incrementally, primitive by primitive, and then you can create the composites on the fly.

Basically, that is the approach I would take. You're not creating fixed, extremely complex implementations. That's probably not the way you want to do it.

Nunn: Short question. Is business architecture part of Enterprise Architecture or something different?

Zachman: Well, in the context of my framework, some people say the business processes are the architecture -- that would be column 2, row 2. Some people say, no, it's actually column 6, row 1. Some people say it's actually the composite of column 1 and column 2 at row 2.

The Chief Operating Officer of a utility I worked with -- this is years ago now -- basically said, "My DP people want to talk to me about data, process, and network, and I don't care about data, process, and network. I want to talk about allocation of responsibility, business cycles, and strategy. So I don't want to talk about columns 1, 2, and 3. I only care about columns 4, 5, and 6."

I couldn't believe this guy said that. I knew the guy. You don’t care about the inventories, the process transformations, and the distribution structure? Are you kidding me -- in a utility? Come on. It is just unfathomable.

At some point in time, you're probably going to wish you had more and more of these primitives. Build them up iteratively and incrementally over some long period of time. There's not one way to do it; there are n different ways to do it, and some work better than others. If you've got a tested methodology, why not use it?

Brown: I think it depends on which one of the 176 different definitions of business architecture you use.

Zachman: Yes.

Business architects

Brown: In my definition -- the people I spoke to in Australia and New Zealand had the title of business architect, and they quite clearly felt that they were part of Enterprise Architecture. But on the other side of things, some of the greatest business architects would be Bill Gates, Michael Dell, Steve Jobs, Jack Roush.

Zachman: I was pontificating around the architectural idea and lost sight of the business architecture question. The question turns out to be which primitives you want to consider. If you say you want to open up new markets, then we have to figure out what inventory you are going to need, what process, what location, and that creates the composite you need for addressing whatever issues you have --

Brown: And it is too tough, Enterprise Architecture.

Zachman: Yeah, right exactly.

Nunn: TOGAF's primary short- and long-term guidance is achieved through principles. How would you propose to reconcile that with the idea of extending TOGAF's framework and method with the Zachman Framework?

Zachman: The principles don't go away. One thing is that when you define principles, they have a lifetime. Somebody was making that case at a presentation yesterday. There is a set of architectural principles. If you want flexibility, separate the independent variables -- that's a good principle. Have a single point of control; have a single source of truth. Those tend to be principles that people establish. Take whatever principles anybody has in mind, figure out how they manifest themselves in the architectural structure, in the framework structure and, in fact, in the ontological construct, and manage that. The governance system has to enforce the principles.

Here is another principle: I would not change the enterprise without changing the artifact first. I would change the architecture before I change the enterprise. Here is another one: I wouldn't allow any programmer to just spuriously create any line of code, data element, or any kind of technology aggregation. You reuse what's in the primitive. If you need something that's not in that primitive, then fix the primitive. Don't create something spurious just to get the code to run. That's another principle.

There's probably an array of things; off the top of my head, those are a couple that I would be interested in, but they tend to derive from these ideas about architecture. I didn't invent any of this. I learned it by looking at other people, and I saw the patterns. All I did was put enterprise names on the same architectural constructs you see in any other object. Then I learned about migration, about federation, about all these other things, by managing change and by looking at what other people did.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion on cybersecurity standards for safer supply chains. And another earlier discussion from the event focused on synergies among major Enterprise Architecture frameworks.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

You may also be interested in:

Tags:  Allen Brown  BriefingsDirect  Dana Gardner  enterprise architect  enterprise architecture  Interarbor Solutions  John Zachman  Steve Nunn  The Open Group  TOGAF  Zachman Framework 

Share |
PermalinkComments (0)

GoodData analytics developers on what they look for in a big data platform

Posted By Dana L Gardner, Tuesday, April 14, 2015

This BriefingsDirect big data innovation discussion examines how GoodData created a business intelligence (BI)-as-a-service capability across multiple industries that enables users to take advantage of both big-data performance as well as cloud delivery efficiencies. They had to choose the best data warehouse infrastructure to fit their scale and cloud requirements, and they ended up partnering with HP Vertica.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about their choice process for best big data in the cloud platform, we're joined by Tomas Jirotka, Product Manager of GoodData; Eamon O'Neill, the Director of Product Management at HP Vertica, and Karel Jakubec, Software Engineer at GoodData. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a bit about GoodData and why you've decided that the cloud model, data warehouses, and BI as a service are the right fit for this marketplace?

Jirotka: GoodData was founded eight years ago, and from the beginning, it's been developed as a cloud company. We provide software as a service (SaaS). We allow our customers to leverage their data and not worry about hardware/software installations. We just provide them a great service. Their experience is seamless, and our customers can simply enjoy the product.


We provide a platform -- and the platform is very flexible. So it's possible to have any type of data, and create insights. You can analyze data coming from marketing, sales, or manufacturing divisions -- no matter in which industry you are.

Gardner: If I'm an enterprise and I want to do BI, why should I use your services rather than build my own data center? What's the advantage?

Cheaper solution

Jirotka: First of all, our solution is cheaper. We have a multi-tenant environment. So the customers effectively share the resources we provide them. And, of course, we have experience and knowledge of the industry. This is very helpful when you're a beginner in BI.

Gardner: What have been some of the top requirements you’ve had as you've gone about creating your BI services in the cloud?


Jakubec: The priority was to be able to scale, as our customers are coming in with bigger and bigger datasets. That's the reason we need technologies like HP Vertica, which scales very well by just adding nodes to the cluster. Without this ability, you can't implement solutions for the biggest customers. Even if you're running the biggest machines on the market, they're still not able to finish the computation in a reasonable time.

Gardner: In addition to scale and cost, you need to also be very adept at a variety of different connection capabilities, APIs, different data sets, native data, and that sort of thing.

Jirotka: Exactly. Agility, in this sense, is really crucial.

Gardner: How long have you been using Vertica, and how long have you been delivering BI through Vertica for a variety of these platform services?

Working with Vertica

Gardner: What were some of the driving requirements for changing from where you were before?

Become a member of MyVertica
Register now
And gain access to the Free HP Vertica Community Edition.

Jirotka: We began moving some of our customers with the largest data marts to Vertica in 2013. The most important factor was performance. It's no secret that we also have Postgres in our platform. Postgres simply doesn’t support big data. So we chose Vertica to have a solution that is scalable up to terabytes of data.

Gardner: What else is creating excitement about Vertica?


O'Neill: Far and away, the most exciting is real-time, personalized analytics. This allows GoodData to show a new kind of BI in the cloud. A new feature we released in our 7.1 release is called Live Aggregate Projections. It's for telling you what's going on in your electric smart meter, the Fitbit you're wearing on your wrist, or even your cell-phone plan or personal finances.

A few years ago, Vertica was blazing fast, telling you what a million people are doing right now and looking for patterns in the data, but it wasn’t as fast in telling you about my data. So we've changed that.

With this new feature, Live Aggregate Projections, you can actually get blazing fast analytics on discrete data. That discrete data is data about one individual or one device. It could be that a cell phone company wants to do analytics on one particular cell phone tower or one meter.

That’s very new and is going to open up a whole new kind of dashboarding for GoodData in the cloud. People are going to now get the sub-second response to see changes in their power consumption, what was the longest phone call they made this week, the shortest phone call they made today, or how often do they go over their data roaming charges. They'll get real-time alerts about these kinds of things.
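The idea behind an aggregate projection can be sketched outside the database: maintain running per-device aggregates as rows are loaded, so a per-device query becomes a constant-time lookup instead of a scan. This is a toy Python model of that concept only -- Vertica's actual feature is defined declaratively in SQL, and none of the names below come from its API:

```python
from collections import defaultdict

class RunningAggregates:
    """Keep per-device aggregates updated at load time, so a query for
    one device is an O(1) lookup rather than a scan over all readings
    (the concept behind an aggregate projection, modeled in plain Python)."""

    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)
        self.maximum = defaultdict(lambda: float("-inf"))

    def load(self, device_id, value):
        """Update the stored aggregates as each new reading arrives."""
        self.count[device_id] += 1
        self.total[device_id] += value
        self.maximum[device_id] = max(self.maximum[device_id], value)

    def summary(self, device_id):
        """Answer a per-device query without touching raw readings."""
        return (self.count[device_id],
                self.total[device_id],
                self.maximum[device_id])

agg = RunningAggregates()
for reading in (3.0, 7.5, 5.0):        # hypothetical smart-meter readings
    agg.load("meter-42", reading)
print(agg.summary("meter-42"))  # (3, 15.5, 7.5)
```

The trade-off is the usual one: slightly more work per load in exchange for sub-second answers to discrete, per-device questions.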

When that feature was introduced at the conference, it was standing room only. They were showing some great stats from power meters in houses in Europe. The readings were fed into Vertica, and queries that last year took Vertica one and a half seconds now take 0.2 seconds. They were looking at 25 million meters in the space of a few minutes. This is going to open up a whole new kind of dashboard for GoodData and new kinds of customers.

Gardner: Tomas, does this sound like something your customers are interested in, maybe retail? The Internet of Things is also becoming prominent, machine to machine, data interactions. How do you view what we've just heard Eamon describe, how interesting is it?

More important

Jirotka: It sounds really good. Real-time, or near real-time, analytics is becoming a more and more important topic. We hear it from our customers as well. So we should definitely think about how to integrate this feature into the platform.

Jakubec: Once we introduce Vertica 7.1 into our platform, it will definitely be one of the features we focus on. We have developed a fairly complex caching mechanism for intermediate results, and it works like a charm for PostgreSQL, but unfortunately it doesn't perform as well for Vertica. We believe that features like Live Aggregate Projections will improve this performance.
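A caching mechanism for intermediate results, of the kind Jakubec describes, typically keys cached result sets by a normalized form of the query text. The sketch below is a generic illustration of that pattern under invented names; it is not GoodData's actual implementation:

```python
import hashlib

class ResultCache:
    """Cache intermediate query results, keyed by normalized query text,
    so that trivially different spellings of the same query hit the cache."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(sql: str) -> str:
        # Lowercase and collapse whitespace before hashing, so
        # "SELECT 1" and "select  1" map to the same cache entry.
        normalized = " ".join(sql.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get_or_compute(self, sql, compute):
        """Return a cached result, or run `compute` and cache its result."""
        k = self._key(sql)
        if k in self._store:
            self.hits += 1
            return self._store[k]
        self.misses += 1
        result = compute()
        self._store[k] = result
        return result

cache = ResultCache()
cache.get_or_compute("SELECT 1", lambda: [1])   # miss: runs the query
cache.get_or_compute("select  1", lambda: [1])  # hit: normalized match
print(cache.hits, cache.misses)  # 1 1
```

In a real deployment the cache would also need invalidation when the underlying data changes, which is exactly why load-time features like aggregate projections are attractive: the database keeps the "cache" consistent itself.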

Gardner: So it's interesting. As HP Vertica comes out with new features, that’s something that you can productize, take out to the market, and then find new needs that you could then take back to Vertica. Is there a feedback loop? Do you feel like this is a partnership where you're displaying your knowledge from the market that helps them technically create new requirements?

Jakubec: Definitely, it's a partnership, and I would say a complete circle. A new feature is released, we provide feedback, and you have direction to do another feature or improve the current one. It works very similarly with some of our customers.

O’Neill: It happens at a deeper level too. Karel’s coworkers flew over from Brno last year, to our office in Cambridge, Massachusetts and hung out for a couple of days, exchanging design ideas. So we learned from them as well.

They had done some things around multi-tenancy where they were ahead of us, and they were able to tell us how Vertica performed when they put extra schemas on a catalog. We learned from that, and we could give them advice about it. Engineer-to-engineer exchanges happen pretty often in the conference rooms.

Gardner: Eamon, were there any other specific features that are popping out in terms of interest?

O'Neill: Definitely our SQL on Hadoop enhancements. For a couple of years now, we've been enabling people to do BI on top of Hadoop. We had various connectors, but we have made it even faster and cheaper now. In this most recent 7.1 release, you can install Vertica on your Hadoop cluster. So you no longer have to maintain dedicated hardware for Vertica, and you don't have to make copies of the data.

The message is that you can now analyze your data where it is and as it is, without converting it from the Hadoop format or duplicating it. That's going to save companies a lot of money. We've brought the most sophisticated SQL on Hadoop to people without duplication of data.

Using Hadoop

Jirotka: We employ Hadoop in our platform, too. There are some ETL scripts, and we've used it in the traditional form of MapReduce jobs for a long time. This is a really costly and inefficient approach, because it takes a lot of time to develop and debug. So we may think about using Vertica directly with Hadoop. This would dramatically decrease the time to deliver to the customer and also the running time of the scripts.

Gardner: Eamon, any other issues that come to mind in terms of prominence among developers?

O’Neill: Last year, we had our Customer Advisory Board, where I got to ask them about those things. Security came to the forefront again and again. Our new release has new features around data-access control.

We now make it easy for them to say, for example, that Karel can access all the columns in a table, but I can only access a subset of them. Previously, developers could do this with Vertica, but they had to maintain SQL views, and they didn't like that. Now it's done centrally.
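Centralized column-level access control replaces per-user views with one grant table consulted when results are returned. This toy sketch illustrates the pattern only; the user names and columns are invented, and Vertica's real mechanism is configured in SQL, not in application code like this:

```python
# Toy sketch of centralized column-level access control: each user is
# granted a set of visible columns, and rows are filtered in one place
# instead of through hand-maintained per-user views. Names are invented.
column_grants = {
    "karel": {"id", "name", "email", "salary"},  # full access
    "eamon": {"id", "name"},                     # subset only
}

def filter_row(user: str, row: dict) -> dict:
    """Return only the columns this user is allowed to see."""
    allowed = column_grants.get(user, set())  # no grants -> see nothing
    return {col: val for col, val in row.items() if col in allowed}

row = {"id": 1, "name": "Ada", "email": "ada@example.com", "salary": 90000}
print(filter_row("eamon", row))  # {'id': 1, 'name': 'Ada'}
```

The maintenance win O'Neill describes is visible even in the toy: changing who sees what means editing one grant table, not fifteen views scattered across the schema.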

They like the data-access control improvements, and they're telling us to keep it up. They want more encryption at rest, and they want more integration. In particular, they stress that they want integration with the security policies in their other applications outside the database. They don't want to have to maintain security in 15 places; they'd like Vertica to help them pull that together.

Gardner: Any thoughts about security, governance and granularity of access control?

Jakubec: Any simplification of security and access controls is great news. Restricting some users' access to just a subset of values or columns is a very common use case for many customers. We already have a mechanism to do it, but as Eamon said, it involves maintaining views or complex filtering. If it's supported by Vertica directly, that's great. I didn't know that before, and I hope we can use it.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

Tags:  big data  BriefingsDirect  Dana Gardner  data analytics  Eamon O'Neill  GoodData  HP  HP Big Data  HPDiscover  Interarbor Solutions  Karel Jakubec  Tomas Jirotka 

Share |
PermalinkComments (0)

Source Refrigeration selects agile mobile platform Kony for its large in-field workforce

Posted By Dana L Gardner, Friday, April 10, 2015

The next BriefingsDirect enterprise mobile strategy discussion comes to you directly from the recent Kony World 2015 Conference in Orlando.

This series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Our next innovator interview focuses on how Source Refrigeration and HVAC has been extending the productivity of its workforce, much of it in the field, through the use of innovative mobile applications and services.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

We'll delve in to how Source Refrigeration has created a boundaryless enterprise and reaped the rewards of Agile processes and the ability to extend data and intelligence to where it’s needed most.

To learn how their successful mobile journey has unfolded, we welcome Hal Kolp, Vice President of Information Technology at Source Refrigeration and HVAC in Anaheim, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It’s my understanding that you have several hundred field-based service and installation experts serving the needs of 2,500 or more customers nationwide. Tell us a little bit about why mobility is essential for you and how it has created better efficiency and innovation.

Kolp: Source started to explore mobility back in 2006. I was tasked with a project to figure out if it made sense to take our service organization, which was driven by paper, and convert it to an electronic form of a service ticket.


After looking at the market itself and at the technology for cellular telephones back in 2006, as well as data plans and what was available, we came to the conclusion that it did make sense. So we started a project to make life easier for our service technicians and our back office billers, so that we would have information in real time and we'd speed up our billing process.

At that time, the goals were pretty simple. They were to eliminate the paper in the field, shorten our billing cycle from 28 days to 3 days, and take all of the material, labor, and asset information and put it into the system as quickly as possible, so we could give our customers better information about the equipment, how they are performing, and total cost of ownership (TCO).

But over time, things change. In our service organization then, we had 275 guys. Today, we have 600. So we've grown substantially, and our data is quite a bit better. We also use mobility on the construction side of our business, where we're installing new refrigeration equipment or HVAC equipment into large supermarket chains around the country.

Our construction managers and foremen live their lives on their tablets. They know the status of their job, they know their cost, they're looking at labor, they're doing safety reports and daily turnover reports. Anyone in our office can see pictures from any job site. They can look at the current status of a job, and this is all done over the cellular network. The business has really evolved.

Gardner: It’s interesting that you had the foresight to get your systems of record into paperless mode and were ready to extend that information to the edge, but then also be able to accept data and information from the edge to then augment and improve on the systems of record. One benefits the other, or there is a symbiosis or virtuous adoption cycle. What have been some of the business benefits of doing it that way?

Kolp: There are simple benefits on the service side. First of all, the billing cycle changed dramatically, and that generated a huge amount of cash. It’s a one-time win: whatever you would have billed between day 3 and day 28 all came in at once, and there was this huge influx of cash in the beginning. That actually paid for the entire project. The generation of that cash was enough to more than compensate for all the software development and all the devices. So that was a big win.
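The one-time cash win Kolp describes can be approximated as the receivables released when the billing cycle shrinks. A back-of-the-envelope sketch, with an entirely assumed daily billings figure (Source Refrigeration's actual numbers are not given):

```python
# Rough model of the one-time cash influx from shortening the billing
# cycle: revenue previously "in flight" for 28 days is now collected
# after 3, releasing about 25 days' worth of billings as cash.

daily_billings = 100_000          # assumed average daily billings (USD)
old_cycle_days = 28
new_cycle_days = 3

one_time_cash_release = daily_billings * (old_cycle_days - new_cycle_days)
print(one_time_cash_release)      # 2500000 under these assumptions
```

Under these assumptions, 25 days of receivables arriving at once is a multi-million-dollar influx, which is why it could fund the whole project.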

But then we streamlined billing. Instead of a biller looking at a piece of paper and entering a time ticket, it was done automatically. Instead of looking at a piece of paper and then doing an inventory transfer to put it on a job, that was eliminated. Previously, technicians' comments never made it into our system off the paper; they just sent a photocopy of the document to the customer.

Today, within 30 seconds of the person completing a work order, it’s uploaded to the system. It’s been generated into PDF documents where necessary. All the purchase order and work order information has entered into the system automatically, and an acknowledgement of the work order is sent to our customer without any human intervention. It just happens, just part of our daily business.

That’s a huge win for the business. It also gives you data for things that you can start to measure yourself on. We have a whole series of key performance indicators (KPIs) and dashboards built to help our service managers and regional directors understand what’s going on in their business.

Technician efficiency

Do we have customers where we're spending a lot of time in their stores servicing them? That means there is something wrong. Let’s see if we can solve our customer’s problems. We look at the efficiency of our technicians.

We look at the efficiency of drive times. That electronic data even led us into automatic dispatching systems. We have computers that look at the urgency of the call, the location of the call, and the skills necessary to do that service request. It automatically decides which technician to send and when to send them. It takes a work order and dispatches a specific technician on it.
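A dispatcher like the one described typically filters candidates by required skills and then optimizes on urgency and location. Here is a hypothetical Python sketch of that idea; the scoring rules and data shapes are invented for illustration and are not Source Refrigeration's actual system:

```python
# Hypothetical sketch of auto-dispatch: urgent calls are queued first,
# and each call goes to the nearest technician holding every required
# skill. Invented logic, not Source Refrigeration's real algorithm.

def choose_technician(work_order, technicians):
    """Return the nearest technician who has all required skills."""
    required = set(work_order["skills"])
    qualified = [t for t in technicians if required <= set(t["skills"])]
    if not qualified:
        return None  # no one can take the call
    return min(qualified, key=lambda t: t["distance_km"])["name"]

def dispatch_order(work_orders):
    """Serve urgent calls first, then the oldest requests."""
    return sorted(work_orders, key=lambda w: (not w["urgent"], -w["age_hours"]))

techs = [
    {"name": "Ana", "skills": {"hvac"}, "distance_km": 5},
    {"name": "Ben", "skills": {"hvac", "refrigeration"}, "distance_km": 12},
]
call = {"skills": ["refrigeration"], "urgent": True}
print(choose_technician(call, techs))  # Ben is the only qualified tech
```

A production system would of course weigh more factors (drive time, schedule, parts on the truck), but the shape of the decision is the same.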

Gardner: So you've become data-driven, and then far more intelligent, responsive, and agile as a result. Tell me how you've been able to achieve that without getting bogged down in an application development cycle that takes a long time, or finding yourself in a waterfall-type affair, where the requirements shift rapidly and, by the time you finish a product, it’s obsolete.

How have you been able to change your application development for your mobile applications in a way that keeps up with these business requirements?

Kolp: We've worked on three different mobile platforms. The claim in the beginning was to develop once and just update and move forward. That didn’t really work out so well on the first couple of platforms. The platforms became obsolete, and we essentially had to rewrite the application on to a new platform for which the claim was that it was going to survive.

This last year, we converted to the Kony platform, and all indications so far are that that platform is going to be great for us, because we've done a whole bunch of upgrades in the last 12 months on the platform. We're moving, and our application is migrating very quickly.

So things are very good on that side and in our development process. When we were building our new application initially, we were doing two builds a week. So every couple of days, we'd do a little sprint. We don’t really call them sprints, but essentially it was a sprint to add functionality. We'd go into a quick testing cycle, and while we were testing, we'd have folks adding new functionality and fixing bugs. Then we'd do another release.

At the current stage, how often we release into production really depends on the needs of the business. Last week, we had a new release, and this week we're having another as we fix small bugs or make enhancements that came up during our initial rollout. It’s not that difficult to roll out a new version.

We send an alert. The text says that they have got a new version. They complete the work order that they're on, they perform an update, and they're back in business again. So it's pretty simple.

Field input

Gardner: So it's a very agile, iterative, easily adaptive type of development infrastructure. What about the input from those people in the field? Another aspect of agile development isn’t just the processes for the development itself, but being able to get more people involved with deciding features and functions, and not necessarily forcing the developers to read minds.

Has that crept into your process? Are you able to take either a business analyst or practitioner in the field and allow them to have the input that then creates better apps and better processes?

Kolp: In our latest-generation application, we made tremendous changes in the user interface to make it easier for the technicians to do their job, so they don't have to think about anything. If they need to do something, they know what they have to do. It’s kind of in their face, in other words. We use color cues on the screens: if something is required for them to do, it’s always red; if an input field is optional, it’s in blue.

We also built a little mini application, a web app, that's used by technicians for frequently asked questions (FAQs). If they have questions about how the application works, they can look at the FAQs. They can also submit requests for enhancements directly from the page. So we're getting requests from the field.

If they have a question about the application, we can take that question and turn it into a new FAQ page, response, or question that people can click on and learn from. We're trying to make the application more driven by the field and less by managers in the back office.

Gardner: Are there any metrics yet that would indicate an improvement in the use of the apps, based on this improved user interface and user experience? Is there any way to say the better we make it, the more they use it; the more they use it, the better the business results?

Kolp: We're in early stages of our rollout. In a couple of weeks we'll have about 200 of our 600 guys on the new application, and the guys noticed a few things. Number one, they believe the application is much more responsive to them. It’s just fast. Our application happens to be on iOS. Things happen quickly because of the processor and memory. So that’s really good for them.

The other thing they notice is that if they're looking at assets and need to find something in an asset, look up a part, or do anything else, we've added search capability that makes it brain-dead simple to find what they're looking for. They can use their camera as a barcode scanner within our application. It’s easy to attach pictures.

What they find is that we've made it easier for them to add information and document their call. They have a much greater tendency to add information than they did before. For example, if they're in their work order notes, which for us is a summary, they can just talk. We use voice to text, and that will convert it. If they choose to type, they can type, but many of the guys really like the voice to text, because they have big fingers and typing on the screen is a little bit harder for them.

What's of interest?

Gardner: We are here at Kony World, Hal. Did anything jump out at you that’s particularly interesting? We've heard about solution ecosystems and vertical industries, the Visualizer update, and some cloud interactions for developers. Is there anything that might be of special interest for the coming year?

Kolp: I'm very interested in Visualizer 2.0. It appears to be a huge improvement over the original version. We use third-party development. In our case, we used somebody else’s front-end design tool for our project, but I really like the ability to be able to take our project and then use it with Visualizer 2.0, so that we can develop the screens and the flow that we want and hand it off to the developers. They can hook it up to the back end and go.

I just like having that control, now that we've done the heavy lifting. For the most part, understanding your data and the flow of the application is where you spend the most time. For us to be able to do that ourselves is much better than writing on napkins or using PowerPoint or Visio to generate screens or some other application.

It’s nice because ultimately we will be able to go use Visualizer, push it into the application, take the application, push it back into Visualizer, make more changes, and go back and forth. I see that as a huge advantage. That’s one thing I took from the show.

Gardner: With this journey that you've been on since 2006, you’ve gone quite a way. Is there anything you could advise others who are perhaps just beginning in extending their enterprise to that mobile edge, finding the ways to engage with the people in the field that will get them to be adding information, taking more intelligence back from the apps into their work? What might you do with 20-20 hindsight and then relate that to people just starting?

Kolp: There are a couple of things that I’ll point out. There was a large reluctance among people to believe that this would actually work. When your business says that you can't mobilize some process, it's probably not true. There's a resistance to change that's natural to everyone.

Our technicians today, who have been on mobile applications, hate to be on paper. They don't want to have anything to do with paper, because it's harder for them. They have more work to do. They have to collect the paper, shove the paper in an envelope, or hand it off to someone to do things. So they don’t like it.

The other thing you should consider is what happens when that device breaks? All devices will break at some point for some reason. Look at how those devices are going to get replaced. We operate in 20 states. You can't depend upon the home office to be able to rush out a replacement device for your suppliers in real time. We looked pretty hard at using all kinds of different methods to reduce the downtime for guys in the field.

You should look at that. That’s really important if the device is being used all day, every day for a field worker. That’s their primary communication method.

Simpler is better

The other thing I could say is, “simpler is better.” Don't make an application where you have to type in a tremendous amount of data. Make data entry as easy as possible via taps or predefined fields.

Think about your entire process front to back, and don't hesitate to change the way you gather information today, as opposed to the way you want to in the future. Don't just take a paper form and automate it, because that isn't the way your field worker thinks. You need to design the new flow of information so that it fits on whatever size screen you're targeting. It can't be a spreadsheet or a bunch of checkboxes, because that doesn't necessarily suit the tool you're using to drive the information gathering.

Spend a lot of time upfront designing screens and figuring out how the process should work. If you do that, you'll meet with very little pushback from the field once they get it and actually use it. I would communicate with the field regularly if you're developing and tell them what's going on, so that they are not blind-sided by something new.

I'd work closely with the field in designing the application. I'd also involve anybody who touches that data. In our case, that's service managers. We worked with billers, inventory control, purchasing people, and timecards. All of those were pieces that our applications touch. People from the business were involved, even people from finance, because we're making financial transactions in the enterprise resource planning (ERP) system.

So get all those people involved and make sure that they're in agreement with what you're doing. Make sure that you test thoroughly and that everybody signs off together at the end. The simpler you can make your application, the faster you can roll it out, and then just enhance, enhance, enhance.

It's easier to add features incrementally if you're starting something new. If you're replacing an existing application, it's much harder to do that: you'll have to recreate all of the functionality, because the business typically doesn't want to lose functionality.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Kony, Inc.

Tags:  BriefingsDirect  Dana Gardner  Hal Kolp  Interarbor Solutions  Kony  Kony World  Source Refrigeration 


Ariba elevates business user experience with improved mobile apps and interfaces

Posted By Dana L Gardner, Thursday, April 09, 2015

LAS VEGAS -- Ariba today announced innovations to its cloud-based applications and global business network designed to deliver a personal, contextual, and increasingly mobile-first user experience for business application consumers.

Among the announcements are a newly designed user interface (UI) and a mobile app based on the SAP Fiori approach, both due to be available this summer, for Ariba's core procurement, spend management, and other line of business apps. Ariba, an SAP company, made the announcements here at the Ariba LIVE conference in Las Vegas.

"This gives us a cleaner approach, more tiles, and a Fiori-like user experience," said Chris Haydon, Ariba Senior Vice President of Product Management. "Users see all the business network elements in one dashboard, with a native UI for a common, simple experience for Ariba apps upstream and down."

The new UI offers an intuitive experience that simplifies task and process execution for all users – from frontline requisitioners to strategic sourcing professionals, said Ariba. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

The goal is to make the business users want to conduct their B2B processes and transactions using these new interfaces and experiences, in effect having them demand them of their employers. User-driven productivity is therefore a new go-to market for Ariba, with simplicity and everywhere-enabled interactions with processes a growing differentiator.

"We want to make it easier to use the central procurement department, rather than to go around it," said Cirque du Soleil Supply Chain Senior Director Nadia Malek. Ariba Sourcing, she said, helps Montreal-based Cirque standardize to keep all of its many shows going around the world. And by having all the Ariba apps funnel into SAP, they can keep costs managed, on a pay-as-you-go basis and still keep all the end users' needs met and controlled.

In another nod to consumer-level ease in business, B2B enterprise buying on Ariba increasingly looks like an Amazon consumer online shopping experience, with one-click to procure from approved catalogs of items from approved suppliers. Search-driven navigation also makes the experience powerful while simple, said Haydon.

Behind the pretty pictures is an increasingly powerful network and platform, built on SAP HANA. Network-centric and activity-context-aware apps allow Ariba to simplify processes across the cloud, on-premises, and partner ecosystem continuum for its global users. The interface serves up only what the users need at that time, and is context aware to the user's role and needs.

"The future of commerce means processes across multiple business networks, on any device, with shared insights by all," said Tim Minahan, Senior Vice President at Ariba.

Scale and reach

The Ariba Network supports these apps and services in new terms of scale and reach as well. Every minute the Ariba Network adds a new supplier, and every day, it fields 1.5 million catalog searches while generating $1.8 billion in commerce, said Ariba.

"Business strong, consumer simple" is the concept guiding Ariba's user experience development, said new incoming Ariba President Alex Atzberger. And SAP is increasingly focusing on far-reaching processes among and between companies to better control total spend and innovate around the analytics of such activities, he said.

Some 1.7 million companies are now connected by Ariba, more than for Alibaba, eBay and Amazon combined, said Atzberger.

Ariba is also in the process of integrating with SAP HANA and other recent SAP acquisitions, including Concur and Fieldglass. Haydon emphasized that Ariba is striving to make those integrations invisible to users.

"Ariba is leveraging SAP HANA to integrate Concur, S4HANA, and Fieldglass, with no middleware needed," said Haydon.

The scale and sophistication of the Network and its data is also enabling new levels of automated risk management to Ariba and SAP users. The Hackett Group says more than half of the companies surveyed recently see reducing supply risk as a key initiative.

Data-driven risk management is increasingly being embedded across supply chain and procurement apps and processes, says Ariba. Now, the risk of businesses having forced labor or other illegal labor practices inside their supply chains is being uncovered.

Made in a Free World

Also at the conference here, Ariba announced an alliance with San Francisco-based non-profit Made in a Free World to help eliminate modern-day slavery from the supply chain. Ariba has donated part of the Ariba LIVE registration fees to help fund further development of Made in a Free World's supply chain risk analysis tool, which improves a business's visibility into its supply chain.

Also at Ariba LIVE, the company presented T-Mobile with its 2015 Network Innovation Award. T-Mobile was an early adopter of Ariba Pay, a cloud-based solution designed to digitize the "pay" in "procure-to-pay." Prior to Ariba Pay, the payment process was largely paper-based, making it costly and inefficient.

“Emerging payment technologies are certainly disruptive,” said Atzberger. “But as T-Mobile proves, disruption fuels innovation. And innovation drives advantage. And for this, we are pleased to recognize the company with the 2015 Network Innovation Award.”

But the mobile enablement and so-called "total user experience" improvements seemed the most welcomed by the audience. The experience factor may well prove to be where Ariba and SAP can outpace their competitors and fundamentally change how enterprises use business services.

For example, for all Ariba users -- from technicians on the plant floor to sales people in client meetings -- the company now provides a mobile app, Ariba Mobile, which is available from the Apple App Store or Google Play. With the app, users can quickly and easily view, track, and act on requisitions from their mobile devices, transforming the speed and efficiency with which work gets done. Ariba’s approach is not just about replicating existing user interactions onto mobile devices, but providing continuity between desktop interactions and working outside the office.

Easy access

“The Ariba Mobile app provides our requisition approvers with easy access to requisitions anywhere, anytime and gives them the information they need to make spend decisions while keeping up with global demand,” said Alice Hillary, P2P Enablement Lead at John Wiley and Sons. “Using the app, approvers can view their requests in a sleek interface and access details in an easier-to-read format than general emails. The feature has also helped us to address security concerns with approvals done through email.” 

And as part of planned innovations, Ariba will provide access to templates, best practices, videos, and more within many of its applications, beginning with its Seller solutions. Users will also be able to tap directly into the Ariba Exchange Community, enabling collaboration within the context of their apps with professionals who have the expertise, experience, and knowledge to help them execute a process such as responding to an RFP or creating an auction.

Tags:  Ariba  Ariba LIVE  Ariba Mobile  Ariba Network  BriefingsDirect  Dana Gardner  Interarbor Solutions  SAP  SAP Fiori 


ITIL-ITSM tagteam boosts Mexican ISP INFOTEC's operations quality

Posted By Dana L Gardner, Tuesday, April 07, 2015

The next BriefingsDirect IT systems performance innovation case study interview highlights how INFOTEC in Mexico City improves its service desk and monitoring operations and enjoys impressive results -- an incident reduction of more than 20 percent -- from those efforts.

INFOTEC needed to react better to systems failures, to significantly reduce the time to repair, and to learn from those failures to prevent future ones. Now, by deploying advanced IT service management (ITSM) tools, the ISP's users have a much higher quality of dependable service. 

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how they obtained such improvements, we're joined by Victor Hugo Piña García, the Service Desk and Monitoring Manager at INFOTEC. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Piña: INFOTEC is a Government Research Center. We have many activities. The principal ones are teaching, innovation technology, and IT consulting. The goal is to provide IT services. We have many IT services like data centers, telecommunications, service desk, monitoring, and manpower.

Gardner: This is across Mexico, the entire country?

Piña: Yes, it covers the entire national territory. We have two locations: the principal one is in Mexico City, in San Fernando, and Aguascalientes City is the other point from which we offer the services.

Gardner: Explain your role as the Service Desk and Monitoring Manager. What are you responsible for?

Three areas

Piña: My responsibility covers three areas. The first is monitoring, to review all of the services and IT components for the clients.


The second is the service desk, the management of incidents and problems. The third is the generation of deliverables for all of INFOTEC's services. We produce deliverables for IT service management and service delivery.

Gardner: So it's important for organizations to know their internal operations, all the devices, and all the assets and resources in order to create these libraries. One of the great paybacks is that you can reduce time to resolution and you can monitor and have much greater support.

Give us a sense of what was going on before you got involved with ITIL and IT service management (ITSM), so that we can then better understand what you got as a benefit from it. What was it like before you were able to improve on your systems and operations?

Piña: We support the services with HP tools and products. We have many types of assets for adaptation and for solutions. Then we created a better process and aligned it with the HP tools and products. Within two years, we began to see benefits in serving customers.

We attained a better service level in two ways. First is the technical reporting of failures. Second, the moment a failure is reported, we send specialists to attend to it. That considerably reduces the time to repair, and as a consequence, users have a better level of service. The value we deliver with the service has changed.

Gardner: I see that you have had cost reductions of up to one third in some areas, a 40 percent reduction in time to compliance, with service desk requests going from seven or eight minutes previously down to five minutes. It’s a big deal, an incident reduction of more than 20 percent. How is this possible? How were these benefits generated? Is it the technology, people, process, all the above?

Piña: Yes, we consider four things. The first is the people, with their service attitude. The second is the process, approached with an innovative mindset. The third is the technology, fully aligned with the previous two points. And the fourth is consistent, integrated work across the above three points.

Gardner: It sounds to me as if together these can add up to quite a bit of cost savings, a significant reduction in the total cost of operations.

Piña: Yes, that’s correct.

Gardner: Is there anything in particular that you're interested in and looking for next from HP? How could they help you do even more?

New concept and model

Piña: I've discovered many things. First, we need to understand them better and think about how we take them to generate a new concept, a new model, and a new process to operate and offer services.

There have been so many ideas. We need to process and understand them, and we need the support of HP Mexico to know how to deal with these new things.

Gardner: Are there any particular products that you might turn to, now that you've attained this level of success? What might come next: more ITIL, more configuration management, automation, business service management? Do you have any thoughts about your next steps?

Piña: Yes. We use ITIL methodology to make changes. When we present a new idea, we're looking for the impact -- economic, social, and political -- when the committee has a meeting to decide.

This is a good idea. This has a good impact. It's possible and proven, and then right there, we make it the new model of business for delivering our new service. We're thinking about the cloud, about big data, and about security. I don’t want to promise anything.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

Tags:  BriefingsDirect  Dana Gardner  HP  HPDiscover  INFOTEC  Interarbor Solutions  ITIL  ITSM  service desk  Victor Hugo Piña García 


Novel consumer retail behavior analysis from InfoScout relies on HP Vertica big data chops

Posted By Dana L Gardner, Tuesday, March 31, 2015

The next BriefingsDirect big data innovation case study interview highlights how InfoScout in San Francisco gleans new levels of accurate insights into retail buyer behavior by collecting data directly from consumers’ sales receipts.

In order to better analyze actual retail behaviors and patterns, InfoScout provides incentives for buyers to share their receipts, but InfoScout is then faced with the daunting task of managing and cleansing that essential data to provide actionable and understandable insights.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript.

To learn more about how big -- and even messy -- data can be harnessed for near real time business analysis benefits, please join me in welcoming our guests, Tibor Mozes, Senior Vice President of Data Engineering, and Jared Schrieber, the Co-founder and CEO, both at InfoScout, based in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: In your business you've been able to uniquely capture strong data, but you need to treat it a lot to use it and you also need a lot of that data in order to get good trend analysis. So the payback is that you get far better information on essential buyer behaviors, but you need a lot of technology to accomplish that.

Tell us why you wanted to get to this specific kind of data, and then describe your novel way of acquiring it.

Schrieber: A quick history lesson is in order. In the market research industry, consumer purchase panels have been around for about 50 years. They started with diaries in people’s homes, where they had to write down exactly every single product that they bought, day-in day-out, in this paper diary and mail it in once a month.


About 20 years ago, with the advent of modems in people’s homes, leading research firms like Nielsen would send a custom barcode scanner into people’s homes and ask them to scan each product they bought and then thumb into the custom scanner the regular price, the sales price, any coupons or deals that they got, and details about the overall shopping trip, and then transfer that electronically. That approach has not changed in the last 20 years.

With the advent of smartphones and mobile apps, we saw a totally new way to capture this information from consumers that would revolutionize how and why somebody would be willing to share their purchase information with a market research company.

Gardner: Interesting. What is it about mobile that is so different from the past, and why does that provide more quality data for your purposes?

Schrieber: There are two reasons in particular. The first is, instead of having consumers scan the barcode of each and every item they purchase and thumb in the pricing details, we're able to simply have them snap a picture of their shopping receipt. So instead of spending 20 minutes after a grocery shopping trip scanning every item and thumbing in the details, it now takes 15 seconds to simply open the app, snap a picture of the shopping receipt, and be done.

The second reason is why somebody would be willing to participate. Using smartphone apps we can create different experiences for different kinds of people with different reward structures that will incentivize them to do this activity.

For example, our Shoparoo app is a next-generation school fundraiser akin to Box Tops for Education. It allows people to shop anywhere, buy anything, take a picture of their receipt, and then we make an instant donation to their kid’s school every time.

Another app is more of a Tamagotchi-style game called Receipt Hog. When you download the app, you adopt a virtual runt. You feed it pictures of your receipts and it levels up into a fat and happy hog, earning coins in a piggy bank along the way that you can cash out at the end of the day.

Become a member of myVertica
Register now

Gain access to the HP Vertica Community Edition

These kinds of experiences are a lot more intrinsically and extrinsically rewarding to the panelists and have allowed us to grow a panel that’s many times larger than the next largest panel ever seen in the world, tracking consumer purchases on a day-in day-out basis.

Gardner: What is it that you can get from these new input approaches and incentivization through an app interface? Can you provide some measure of the improved or increased participation rates? How has this worked out?

Leaps and bounds

Schrieber: It's been phenomenal. In fact, our panel is still growing by leaps and bounds. We now have 200,000 people sharing with us their purchases on a day-in day-out basis. We capture 150,000 shopping trips a day. The next largest panel in America captures just 10,000 shopping trips a day.

In addition to the shopping trip data, we're capturing geolocation information, Facebook likes and interests from these people, demographic information, and more and more data associated with their mobile device and the email accounts that are connected to it.

Gardner: So yet another unanticipated consequence of the mobility trend that’s so important today.

Tibor, let’s go to you. The good news is that Jared has acquired this trove of information for you. The bad news is that now you have to make sense of it. It’s coming in, in some interesting ways -- almost as a picture or an image in some cases -- and at great volume. So you have velocity, variability, and volume. What does that mean for you as the Senior Vice President of Data Engineering?

Mozes: Obviously this is a growing panel. It’s creating a growing volume of data that has created a massive data-pipeline challenge for us over the years, and we had to engineer the pipeline so that it is capable of processing this incoming data as quickly as possible.


As you can imagine, our data pipeline has gone through an evolution. We started out with a simple solution built on MySQL, and then we evolved it using Amazon Elastic MapReduce and Hive.

But we felt that we wanted to create a data pipeline that’s much faster, so we can bring data to our customers much faster. That’s how we arrived at Vertica. We looked at different solutions and found Vertica a very suitable product for us, and that’s what we're using today.

Gardner: Walk me through the process, Tibor. How does this information come in, how do you gather it, and where does the data go? I understand you're using the HP Vertica platform as a cloud solution in the Amazon Web Services Cloud. Walk me through the process for the data lifecycle, if you will.

Mozes: We use AWS for all of our production infrastructure. Our users, as Jared mentioned, typically download one of our several apps, and after they complete a receipt scan from their grocery purchases, that receipt is immediately uploaded to our back-end infrastructure.


We first try to OCR that image of the receipt, and if we can’t, we use Amazon Mechanical Turk to make sense of the image and turn it into text. At the end of the day, when an image is processed, we have a fairly clean version of that receipt in text format.

In the next phase, we have to process the text and try to attribute various items on the receipt and make the data available in our Vertica data warehouse.
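The flow Mozes describes amounts to a simple fallback pipeline: automated OCR first, human transcription via Mechanical Turk when OCR fails, then attribution of the line items before loading. A toy sketch of that flow in Python -- all function names and the receipt format here are hypothetical illustrations, not InfoScout's actual code:

```python
def try_ocr(image_bytes):
    # Stand-in for a real OCR engine: it "succeeds" only on clean images,
    # which we mark here with an OK: prefix.
    return image_bytes.decode() if image_bytes.startswith(b"OK:") else None

def send_to_mechanical_turk(image_bytes):
    # Stand-in for the human-transcription fallback described above.
    return image_bytes.decode(errors="replace")

def attribute_line_items(text):
    # Turn receipt text into (description, price) pairs.
    items = []
    for line in text.splitlines():
        if " " in line:
            desc, price = line.rsplit(" ", 1)
            try:
                items.append((desc.strip(), float(price)))
            except ValueError:
                pass  # not an item line (store header, totals, etc.)
    return items

def process_receipt(image_bytes):
    text = try_ocr(image_bytes)
    if text is None:                      # OCR failed: route to humans
        text = send_to_mechanical_turk(image_bytes)
    return attribute_line_items(text)
```

In the real system, the attributed items would then be bulk-loaded into the Vertica warehouse rather than returned in memory.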

Then, our customers, using a business intelligence (BI) platform that we built especially for them, can analyze the data. The BI platform connects to Vertica, so our customers can analyze various metrics of our users and their shopping behavior.

Gardner: Jared, back to you. There's an awful lot of information on a receipt. It can be very complex, given not just the date, the place, and the type of retail organization, but all the different SKUs -- every item that could possibly be bought. How do you attack that sort of data problem in terms of schema, cleansing, and extract, transform, load (ETL), and then make it useful?

Schrieber: It’s actually a huge challenge for us. It's quite complex, because every retailer’s receipt is different: the way they structure the receipt, the level of specificity about the items on it, and the existence of product codes -- whether they're public product codes, like the kind you see on a barcode for a soda product, versus an internal product code that the retailer uses as a stock-keeping unit, versus just a short description on the receipt.

One of our challenges as a company is to figure out the algorithmic methods that allow us to identify what each one of those codes and short descriptions actually represent in terms of a real world product or category, so that we can make sense of that data on behalf of our client. That’s one of the real challenges associated with taking this receipt-based approach and turning that into useful data for our clients on a daily basis.


Gardner: I imagine this would be of interest to a lot of different types of information and data gathering. Not only are pure data formats and text formats being brought into the mix, as has been the case for many years, but this image-based approach, the non-structured approach.

Any lessons learned here in the retail space that you think will extend to other industries? Are we going to be seeing more and more of this image-based approach to analysis gathering?

Schrieber: We certainly are. As an example, just take Google Maps and Google Street View, where they're driving around in cars, capturing images of house and building numbers, and then associating that to the actual map data. That’s a very simple example.

A lot of the techniques that we're trying to apply in terms of making sense of short descriptions for products on receipts are akin to those being used to understand and perform social-media analytics. When somebody makes a tweet, you try to figure out what that tweet is actually about and means, with those abbreviated words and shortened character sets. It’s very, very similar types of natural language processing and regular expression algorithms that help us understand what these short descriptions for products actually mean on a receipt.
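As a concrete illustration of the regular-expression side of this, terse receipt abbreviations can be expanded against a dictionary. Everything below -- the abbreviation table and the function name -- is an invented illustration; a production system would learn these mappings from millions of receipts rather than hard-code a handful:

```python
import re

# Hypothetical abbreviation table mapping receipt shorthand to product words.
ABBREVIATIONS = {
    "GV": "GREAT VALUE",
    "CHKN": "CHICKEN",
    "BRST": "BREAST",
    "WHL": "WHOLE",
    "MLK": "MILK",
}

def normalize_description(raw):
    """Expand a terse receipt description into readable product words."""
    cleaned = re.sub(r"[^A-Z0-9 ]", " ", raw.upper())  # drop punctuation noise
    return " ".join(ABBREVIATIONS.get(w, w) for w in cleaned.split())
```

For example, a line like "GV CHKN BRST" expands to "GREAT VALUE CHICKEN BREAST", which can then be matched to a product category.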

Gardner: So we've had some very substantial data complexity hurdles to overcome. Now we have also the basic blocking and tackling of data transport, warehouse, and processing platform.

Going back to Tibor, once you've applied your algorithms, sliced and diced this information, and made it into something you can apply to a typical data warehouse and BI environment, how did you overcome these issues about the volume and the complexity, especially now that we're dealing with a cloud infrastructure?

Compression algorithms

Mozes: One of the benefits of Vertica, as we went through the discovery process, was the compression algorithms that Vertica uses. Since we have a large volume of data to deal with and build analytics from, it has turned out to be beneficial for us that Vertica compresses data extremely well. As a result, some of the core queries that our BI solution requires can be optimized to run super fast.

You also asked about the cloud solution -- why we went to the cloud and what the benefit of doing that is. We really like running our entire data pipeline in AWS because it’s super easy to scale it up and down.

It’s easy for us to build a new Vertica cluster, if we need to evaluate something that’s not in production yet, and if the idea doesn’t work, then we can just pull it down. We can scale Vertica up, if we need to, in the cloud without having to deal with any sort of contractual issues.


Schrieber: To put this in context, we're now capturing three times as much data every day as we were six months ago. The queries that we're running against this have probably gone up 50X to 100X in that time period as well. So when we talk about needing to scale up quickly, that’s a prime example as to why.

Gardner: What has happened in just the last six months that’s required that ramp-up? Is it the popularity of your model -- the impact and effectiveness of the mobile-app acquisition model -- or is something else at work here?

Schrieber: It’s twofold. Our mobile apps have gotten more and more popular and we've had more and more consumers adopt them as a way to raise money for their kid’s school or earn money for themselves in a gamified way by submitting pictures of their receipts. So that’s driven massive growth in terms of the data we capture.

Also, our client base has more than tripled in that time period as well. These additional clients have greater demands on how to use and leverage this data. As those demands increase, our efforts to answer their business questions multiply the number of queries that we're running against it.

Gardner: That, to me, is a real proof point of this whole architectural approach. You've been able to grow by a factor of three in your client base in six months, but you haven’t gone back to them and said, "You'll have to wait for six months while we put in a warehouse, test it, and debug it." You've been able to just take that volume and ramp up. That’s very impressive.

Schrieber: I was just going to say, this is a core differentiator for us in the marketplace. The market research industry has to keep up with the pace of marketing, and that pace of marketing has shifted from months of lead time for TV and print advertising down to literally hours of lead time to be able to make a change to a digital advertising campaign, a social media campaign, or a search engine campaign.

So the pace of marketing has changed and the pace of market research has to keep up. Clients aren’t willing to wait for weeks, or even a week, for a data update anymore. They want to know today what happened yesterday in order to make changes on-the-fly.

Reports and visualization

Gardner: We've spoken about your novel approach to acquiring this data. We've talked about the importance of having the right platform and the right cloud architecture to both handle the volume as well as scale to a dynamic rapidly growing marketplace.

Let’s talk now about what you're able to do for your clients in terms of reports, visualization, frequency, and customization. What can you now do with this cloud-based Vertica engine and this incredibly valuable retail data in a near real-time environment for your clients?

Schrieber: A few things on the client side. Traditional market research providers of panel data have to put very tight guardrails on how clients can access and run reports against the data. These queries are very complex. The numerators and denominators for every single record of the reports are different and can be changed on-the-fly.

If, all of a sudden, I want to look at anyone who shopped at Walmart in the last 12 months that has bought cat food in the last month and did so at a store other than Walmart, and I want to see their purchase behavior and how they shop across multiple retailers and categories, and I want to do that on-the-fly, that gets really complex. Traditional data warehousing and BI technologies don't support allowing general business-analyst users to be able to run those kinds of queries and reports on-demand, yet that’s exactly what they want.

They want to be able to ask those business questions and get answers. That’s been key to our strategy, which is to allow them to do so themselves, as opposed to coming back to them and saying, "That’s going to be a pretty big project. It will require a few of our engineers. We'll come back to you in a few weeks and see what we can do." Instead, we can hand them the tools directly in a guided workflow to allow them to do that literally on-the-fly and have answers in minutes versus weeks.
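The Walmart-plus-cat-food question Schrieber describes reduces to intersecting two filtered views of the trip history. A toy in-memory version in Python -- the record layout, sample data, and function name are invented for illustration, not InfoScout's actual schema:

```python
from datetime import date, timedelta

# Invented trip records purely for illustration.
trips = [
    {"user": "u1", "retailer": "Walmart", "date": date(2015, 2, 1),  "cats": {"snacks"}},
    {"user": "u1", "retailer": "Target",  "date": date(2015, 3, 10), "cats": {"cat food"}},
    {"user": "u2", "retailer": "Kroger",  "date": date(2015, 3, 12), "cats": {"cat food"}},
]

def walmart_cat_food_defectors(trips, today):
    """Users who shopped at Walmart in the last 12 months but bought
    cat food somewhere other than Walmart in the last month."""
    year_ago = today - timedelta(days=365)
    month_ago = today - timedelta(days=30)
    walmart_shoppers = {t["user"] for t in trips
                        if t["retailer"] == "Walmart" and t["date"] >= year_ago}
    cat_food_elsewhere = {t["user"] for t in trips
                          if t["retailer"] != "Walmart"
                          and "cat food" in t["cats"]
                          and t["date"] >= month_ago}
    return walmart_shoppers & cat_food_elsewhere
```

At InfoScout's scale, the equivalent set intersection runs as a self-join inside the warehouse rather than in application memory, which is why query optimization matters so much.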


Gardner: Tibor, how does that translate into the platform underneath? If you're allowing for a business analyst type of skill set to come in and apply their tools, rather than deep SQL queries or other more complex querying tools, what is it that you need from your platform in order to accommodate that type of report, that type of visualization, and the ability to bring a larger set of individuals into this analysis capability?

Mozes: Imagine that our BI platform can generate very complex SQL queries. Under the hood, our BI platform is essentially using a query engine that runs queries against Vertica. Because, as Jared mentioned, the questions are so complex, some of the queries we run against Vertica are very different from your typical BI use cases. They're very specialized and very specific.

One of the reasons we went with Vertica is its ability to compute very complex queries at a very high speed. We look at Vertica not as simply another SQL database that scales very well and that’s very fast, but we also look at it as a compute engine.

So as part of our query engine, we are running certain queries and certain data transformations that would be very complicated to run outside Vertica.

We take advantage of the fact that you can create and run custom user-defined functions (UDFs) that are not part of ANSI SQL-99. We also take advantage of some of the special functions built into Vertica that allow data to be sessionized very easily.

Analyzing behavior

Jared can talk about some of the use cases where we like to analyze users' entire shopping trips. In order to do that, we have to stitch together the different points in time at which the user has shopped at various locations. Using some of the built-in functions in Vertica that are not standard SQL, we can look at shopping journeys -- we call them trip circuits -- and analyze user behavior along the trip.
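Sessionization of this kind -- whether done with a database's built-in functions or in application code -- boils down to starting a new group whenever the gap between consecutive events exceeds a threshold. A minimal Python sketch of grouping one shopper's store visits into trip circuits (the two-hour gap and the function name are assumptions for illustration):

```python
from datetime import datetime, timedelta

def trip_circuits(visits, max_gap=timedelta(hours=2)):
    """Group (timestamp, store) visits into 'trip circuits': consecutive
    visits separated by less than max_gap belong to the same circuit."""
    circuits = []
    for ts, store in sorted(visits):
        if circuits and ts - circuits[-1][-1][0] < max_gap:
            circuits[-1].append((ts, store))   # continue the current circuit
        else:
            circuits.append([(ts, store)])     # gap too large: new circuit
    return [[store for _, store in c] for c in circuits]
```

A morning run to two stores an hour apart forms one circuit, while an evening trip hours later starts a new one, which is how "first stop" versus "last stop" analyses become possible.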

Gardner: Tibor, what other ways can you be using and exploiting the Vertica capabilities in the deliverables for your clients?

Mozes: Another reason we decided to go with Vertica is its ability to optimize very complex queries. As I mentioned, our BI platform is using a query engine under the hood. So if a user asks a very complicated business question, our BI platform turns that question into a very complicated query.

One of the big benefits of using Vertica is being able to optimize these queries on the fly. It’s easy to do this by running the database optimizer to build custom projections, making queries run much faster than we could before.


Gardner: I always think it's more impactful for us to learn through an example rather than just hear you describe this. Do you have any specific InfoScout retail-client use cases where you can describe how they've leveraged your solution and how some of these technical and feature attributes have benefited them -- an example of someone using InfoScout and what it's done for them?

Schrieber: We worked with a major retailer this holiday season to track in real time what was happening for them on Thanksgiving Day and Black Friday. They wanted to understand their core shoppers, versus less loyal shoppers, versus non-core shoppers, how these people were shopping across retailers on Thanksgiving Day and Black Friday, so that the retailer could try to respond in more real time to the dynamics happening in the marketplace.

You have to look at what it takes to do that, for us to be able to get those receipts, process them, get them transcribed, get that data in, get the algorithms run to be able to map it to the brands and categories and then to calculate all kinds of metrics. The simplest ones are market share; the most complex ones have to do with what Tibor had mentioned: the shopper journey or the trip circuit.

We tried to understand, when this retailer was the shopper's first stop, what were they most likely to buy at that retailer, how much were they likely to spend, and how is that different than what they ended up buying and spending at other retailers that followed? How does that contrast to situations where that retailer was the second stop or the last stop of the day in that pivotal shopping day that is Black Friday?

For them to be able to understand where they were winning and losing among what kinds of shoppers who were looking for what kinds of products and deals was an immense advantage to them -- the likes of which they never had before.

Decision point

Gardner: This must be a very sizable decision point for them, right? This is going to help you decide where to build new retail outlets, for example, or how to structure the experience of the consumer walking through that particular brick-and-mortar environment.

When we bring this sort of analysis to bear, this isn’t refining at a modest level. This could be a major benefit to them in terms of how they strategize and grow. This could be something that really deeply impacts their bottom line. Is that not the case?

Schrieber: It has implications as to what kinds of categories they feature in their television, display advertising campaigns, and their circulars. It can influence how much space they give in their store to each one of the departments. It has enormous strategic implications, not just tactical day-to-day pricing decisions.

Gardner: Now, that was a retail example. I understand you also have clients that are interested in seeing how a brand works across a variety of outlets or channels. Is there another example you can provide of somebody who is looking to understand a brand's impact at a wider level -- across a geography, for example?


Schrieber: I'll give you another example that relates to this. A retailer and a brand were working together to understand why the brand sales were down at this particular retailer during the summer time. To make it clear for you, this is a brand of ice-cream. Ice cream sales should go up during the summer, during the warmer months, and the retailer couldn’t understand why their sales were underperforming for this brand during the summer.

To figure this out, we had to piece together the shopper journey over time -- not only in the weeks during the summer months, but year-round -- to understand this dynamic of how they were shopping. What we were able to help the client quickly discover was that during the summer months people eat more ice cream. If they eat more ice cream, they're going to want larger pack sizes when they go and buy it. This particular retailer tended to carry smaller pack sizes.

So when the summer months came around, even though people had been buying their ice cream at this retailer in the winter and spring, they now wanted larger pack sizes, and they were finding them at other retailers and switching their spend over to those retailers.

So for the brand, the opportunity was a selling story to the retailer to give the brand more freezer space and to carry an additional assortment of products to help drive greater sales for that brand, but also to help the retailer grow their ice cream category sales as well.

Idea of architecture

Gardner: So just that insight could really help them figure that out. They probably wouldn’t have been able to do it any other way.

We've seen some examples of how impactful this can be and how much a business can benefit from it. But let’s go back to the idea of the architecture. For me, one of my favorite truths in IT is that architecture is destiny. That seems to be the case with you, using the combination of AWS and HP Vertica.

It seems to me that you don’t have to suffer the cost of a large capital outlay for your own data center and facilities. You're able to acquire these very advanced capabilities at a price point that requires significantly less capital outlay and that is perhaps predictable and adjustable to demand.

Is that something you then can pass along? Tell me a little bit about the economics of how this architectural approach works for you?

Mozes: One of the benefits of using AWS is that it’s very easy for us to adjust our infrastructure on demand, as we see fit. Jared referred to some of the examples earlier. We did a major analysis for a large retailer on Black Friday, and we had some special promotions for our mobile-app users running at that point. Our data volume grew tremendously from one day to the next couple of days, and then, after the promotion and the big shopping season were over, our volume came down somewhat.


When you run an infrastructure in the cloud in combination with online data storage and data engine, it's very easy to scale it up and down. It’s very cost efficient to run an operation where you can just add additional computing power as you need, and then when you don’t need that anymore, you can scale it down.

We did this during a time period when we had to bring a lot of fresh data online quickly. We could just add additional nodes, and we saw very close to linear scalability as we increased our cluster size.

Schrieber: On the business side, the other advantage is we can manage our cash flows quite nicely. If you think about running a startup, cash is king, and not having to do large capital outlays in advance, but being able to adjust up and down with the fluctuations in our businesses, is also valuable.

Gardner: We're getting close to the end of our time. I wonder if you have any other insights into the business benefits from an analytics perspective of doing it this way. That is to say, incentivizing consumers, getting better data, being able to move that data and then analyze it at an on-demand infrastructure basis, and then deliver queries in whole new ways to a wider audience within your client-base.

I guess I'm looking for how this stands up both to the competitive landscape and to the past. How new and how innovative is this in marketing? Then we'll talk about where we go next. Let’s try to get a level set as to how new and how refreshing this is, given what the technology enables on a cloud basis and a mobility basis, and then on the core underlying analytics platform.

Product launch

Schrieber: We have an example that's going on right now around a major new product launch for a very large consumer goods company. They chose us to help monitor this launch, because they were tired of waiting for six months for any insight in terms of who is buying it, how they were discovering it, how they came about choosing it over the competition, how their experience was with the product, and what it meant for their business.

So they chose to work with us for this major new brand launch, because we could offer them visibility within days or weeks of launching that new product in the market to help them understand who was buying: Was it the target audience they thought it would be, or a different demographic or lifestyle profile than they were expecting? If so, they might need to change their positioning, marketing tactics, and targeting accordingly.

How are these people discovering the products? We're able to trigger surveys to them in the moment, right after they've made that purchase, and then flow that data back through to our clients to help them understand how these people are discovering it. Was it a TV advertisement? Was it discovered on the shelf or display in the store? Did a friend tell them about it? Was their social media marketing campaign working?


We're also able to figure out what these people were buying before. Were they new to this category of product? Or did they not use this kind of product before and were just giving it a try? Were they buying a different brand and have now switched over from that competitor? And, if so, how did they like it by comparison, and will they repeat purchase? Is this brand going to be successful? Is this meeting needs?

These are enormous decisions. Hundreds of millions of dollars are often spent by major consumer goods companies on new brand launches, so getting this quick feedback on what’s working and what’s not, who to target with what kind of messaging, and what it’s doing to the marketplace in terms of stealing share from competitors is invaluable.

Driving new people to the product category can influence major investment decisions: do we need to build a new manufacturing facility, do we need to change our marketing campaigns, or should we go ahead and invest in that Super Bowl TV ad, because this really has a chance to go big?

These are massive decisions that these companies can now make in a timely manner, based on this new approach of capturing and making use of the data, instead of waiting six months on a new product launch. They're now waiting just weeks and are able to make the same kinds of decisions as a result.

Gardner: So, in a word it’s unprecedented. You really just haven’t been able to do this before.

Schrieber: It’s not been possible before at all, and I think that’s really what’s fueling the growth in our business.

Look to the future

Gardner: Let’s look to the future quickly. We hear a lot about the Internet of Things. We know that mobile is only partway through its evolution. We're going to see more smartphones in more hands doing more types of transactions around the globe. People will be using their phones for more of what we have thought of as traditional business and commerce. So that opens up a lot more information being generated that therefore needs to be gathered and analyzed.

So where do we go next? How does this generate additional novel capabilities, and then where do we go perhaps in terms of verticals? We haven’t even talked about food or groceries, hospitality, or even health care.

So without going too far -- this could be another hour conversation in itself -- maybe we could just tease the listener and the reader with where the potential for this going forward is.

Schrieber: If you think about Internet of Things as it relates to our business, there are a couple of exciting developments. One is the use of things like beacons inside of stores. Now we can know exactly which aisle people have walked down and what shelf they’ve stood in front of, and what product they've interacted with. That beacon is communicating with their smartphone and that smartphone is tied to our user account in a way that we're surveying these individuals or triggering surveys to them, in-the-moment, as they shop.


That’s not something that’s been doable before. It’s something that the Internet of Things, and very specifically beacons linking with smartphones, will allow us to do going forward. That will open up entirely new fields of research and consumer understanding about how people shop and make decisions at the shelf.

The same is true inside the home. We talk about the Internet of Things as it relates to smart refrigerators or smart laundry machines, etc. Understanding daily lifestyle activities and how people make the choice of which product to use and how to use them inside their home is a field of research that is under-served today. The Internet of Things is really going to open up in the years to come.

Gardner: Just quickly, what are other retail sectors or vertical industries where this would make a great deal of sense?


Schrieber: I have a friend who runs an amazing business called Wavemark, which is basically an Internet of Things for medical devices and medical consumables inside of hospitals and care facilities, with the ability to track inventory in real time, tying it to patients and procedures, tying it back to billing and consumption.

Making all of that data available to the medical device manufacturers, so that they can understand how and when their products are being used in the real world in practice, is revolutionizing that industry. We're seeing it in healthcare, and I think we're going to see it across every industry.

Engineering perspective

Gardner: Last word to you, Tibor. Given what Jared just told us about the greater applicability, the model, the architecture, comes back to mind for me: the cloud, the mobile device, the data, the engine, the ability to deal with that velocity, volume, and variability at a cost point that is doable and scales up and down. Are there any thoughts about this from an engineering perspective and where we go next?

Mozes: We see that with all these opportunities bubbling up, the amount of data that we have to process on a daily basis is just going to continually grow at an exponential rate. We continue to get additional information on shopping behavior and more data from external data sources. Our data is just going to grow. We will need to engineer everything to be as scalable as possible.

Listen to the podcast. Find it on iTunes. Read a full transcript. Get the mobile app. Sponsor: HP.

You may also be interested in:

Tags:  big data  BriefingsDirect  consumer panels  Dana Gardner  data analytics  HP  HP DISCOVER  HP Vertica  InfoScout  Interarbor Solutions  Jared Schrieber  Tibor Mozes 


IT operations modernization helps energy powerhouse Exelon acquire businesses

Posted By Dana L Gardner, Wednesday, March 25, 2015

This next BriefingsDirect IT innovation discussion examines how Exelon Corporation, based in Chicago, employs technology and process improvements not only to optimize its IT operations, but also to help manage a merger and acquisition transition and to bring outsourced IT operations back in-house.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how this leading energy provider in the US, with a family of companies having $23.5 billion in annual revenue, accomplishes these goals, we're joined by Jason Thomas, Manager of Service, Asset and Release Management at Exelon. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I gave a brief overview of Exelon, but tell us a little bit more. It's quite a large organization that you're involved with.

Thomas: We are vast and expansive. We have a large nuclear fleet of some 40-odd nuclear power plants, and three utilities: ComEd in Chicago, in the Illinois space; PECO out of Philadelphia; and BGE in Baltimore.

So we have large urban utility centers. We also have a large retail presence with the Constellation brand and the sale of power both to corporations and to users. So there's a lot that we do, obviously, in the utility space, and there's some element of the commodity trading side as well, trading power in these markets.

Gardner: I imagine it must be quite a large IT organization to support all that?

Thomas: There are 1,200 to 1,300 IT employees across the company.


Gardner: Tell us about some of the challenges that you've been facing in managing your IT operations and making them more efficient. And, of course, we'd like to hear more about the merger between Constellation and Exelon back in 2012.

Merger is a challenge

Thomas: The biggest challenge is the merger. Obviously, our scale and the number of, for lack of a better word, things that we had to monitor, be aware of, and know about vastly increased. So we had to address that.


A lot of our efforts around the merger and post-merger were around bringing everything into one standard monitoring platform, extending that monitoring out, leveraging the Business Service Management (BSM) suite of products, leveraging Universal Configuration Management Database (UCMDB).

Then there was a lot around consolidating asset management. In early 2013, we moved to Asset Manager as our asset management platform of choice, consolidating data from Exelon's existing tool, the Cergus CA Argis tool, into Asset Manager, in support of moving to new IT billing driven out of the data in Asset Manager, and leveraging some of the executive scorecard and financial manager pieces to make that happen.

There was also a large effort through 2013 to move the company to a standardized platform to support our service desk, incident management, and also our service catalog for end-users. But a lot of this was driven last year around the in-sourcing of our relationship with Computer Sciences Corporation for our IT operations.

This was to basically realize a savings to the company of $12 to $15 million annually from the management of that contract, and also to move both the management and the expertise in house and leverage a lot of the processes that we built up and that had grown through the company as a whole.

Gardner: So knowing yourself well, in terms of your IT infrastructure and all the elements of that, is super important, and then bringing an in-sourcing transition into the picture involves quite a bit of complexity.

What do you get when you do this well? Is there a sense of better control, better security, or culture? What is it that rises to the top of your mind when you know that you have your IT service management (ITSM) in order, when you have your asset and configuration management data in order? Is it sleeping better at night? Is it a sense of destiny you have fulfilled -- or what?

Thomas: Sleeping better at night. There is an element of that, but there's also sometimes the aspect of, "Now what's next?" So, part of it is that there's an evolutionary aspect too. We've gotten everything in one place. We're leveraging some of the integrations, but then what’s next?

It's more restful. It's now deciding how we better position ourselves to show the value of these platforms. Obviously, there's a clear monetary value of what we did to in-source, but now how do we show the business the value that we have done? Moving to a common set of tools helps to get there. You've leveled the playing field and you have that common set of tools that you're going to drive to take you to the next level.

Gardner: What might that next level be? Is it a cloud transition? Is it more of a hybrid sourcing for IT? Is this enabling you to take advantage of the different devices in terms of mobile? Where does it go?

Automation and cloud

Thomas: A lot of it is really around automation, the intermediate step around cloud. We've looked at cloud. We do have areas where the company has leveraged it. IT is still trying to wrap their heads around how we do it, and then also how we expose that to the rest of the organization.

But the steps we've taken around automation are very key to making IT operations leaner, and to being able to do things in an automated fashion, as opposed to requiring the manual elements that, in some cases, we had relied on prior to the merger.

Gardner: Any examples? You mentioned $15 million in savings, but are there any other metrics of success or key performance indicator (KPI)-level paybacks that you can point to in terms of having all this in place for managing and understanding your IT?

Thomas: We're still going through what it is we're going to measure and present. There's been a standard set of things that we've measured around our availability and our incidents and whether these incidents are caused by IT, by infrastructure.

We've done a lot better operationally. Now it's taking some of those operational aspects and making them a little bit more business-centric. So for the KPIs, we're going through that process of determining what we're going to measure ourselves against.

Gardner: Jason, having gone through quite a big and complex undertaking in getting your ITSM and Application Lifecycle Management (ALM) activities in order, what comes next? Maybe a merger and acquisition is going to push you in a new direction.

Thomas: We recently announced the intent to acquire Pepco Holdings, the regional utility in the Washington, DC area, which further widens our footprint in the mid-Atlantic. So yeah, we get to do it all over again with a new partner, bringing Pepco in and doing some elements of this again.

Gardner: Having gone through this and anticipating yet another wave, what words of wisdom might you provide in hindsight for those who are embarking on a more automated, streamlined, and modern approach to IT operations?


Thomas: One of the key things is how you're changing and how you do IT operations. Moving towards automation, tools aside, there's a lot of organizational change if you're changing how people do what they do or changing people's jobs or the perception of that.

You need to be clear. You need to clearly communicate, but you also need to make sure that you have the appropriate support and backing from leadership and that the top-down communication is the same message. We certainly had that, and it was great, but there's always going to be that challenge of making sure everybody is getting that communication, getting the message, and getting constant reinforcement of it.

Organizational changes resulting from a large merger or acquisition are huge. It's key to show the benefits, even to the people who are obviously going to reap some of the immediate benefits -- those in IT. You know the business is going to see some. It's a matter of couching that value in the means or method appropriate for all of those actors, all of those stakeholders.

Full circle

Gardner: Of course, you have mentioned working through a KPI definition and working the executive scorecard. That makes it full circle, doesn't it?

Thomas: Defining those KPIs, but also having one place where those KPIs can be viewed, seen easily, and drilled into is big. To date, it's been a challenge to provide some of that historiography around that data. Now, you have something where you can even more readily drill into it to see that data -- and that’s huge.

Presenting that, being able to show it, and being able to show it in a way that people can see it easily, is huge, as opposed to just saying, "Well, here's the spreadsheet with some graphs" or "Here’s a whiz-bang PowerPoint doc."

Gardner: And, Jason, I suppose this points to the fact that IT is really maturing. Compared to other business services and functions in corporations, things that had been evolving for 80 or 100 years, IT is, in a sense, catching up.

Thomas: It's catching up, but I also think it's more of a reflection. It's a reflection of a lot of the themes of the new style of IT. A lot of that is the consumerization aspect. In fact, if you look at the last 10 years, the wide presence of all of these smart devices and smartphones is huge.

We have brought to most people something that was never easily accessible. And having to take that same aspect and make it part of how you present what you do in IT is huge. You see it in how you're manifesting it in your various service catalogs and some of the efforts that we're undertaking to refine and better the processes that underlie our technical service catalog to have a better presentation layer.

That technical service catalog will refer to what we've seen with Propel. It's an easier, nicer, friendlier way to interact, and people expect that. Why can't this be more like my app store, or why can't this be more like X?

Is IT catching up, or has IT become more reachable -- more warm and fuzzy, as opposed to something that's cold, hard, and stored away somewhere, which you kind of know about while the guys in the basement do all the heavy lifting? It's more tangible now.

Gardner: Humanization of IT, perhaps.

Thomas: Absolutely.

Gardner: All right, one last area I want to get into before we sign off. We've heard quite a bit about The Machine, with HP unveiling more detail from its labs activities. It's not necessarily on a product roadmap yet, but it's described as having a lower footprint, a much more rapid ability to join compute and memory, and the potential to reduce the data center down to the size of a refrigerator.

I know that it's on the horizon, but how does that strike you, and how interesting is that for you?

Ramp up/ramp down

Thomas: It's interesting, because it allows you a bit more ability to ramp up or ramp down based on what you need, as opposed to having x amount of servers and x amount of storage that's always somewhere. It gives you a lot more flexibility and, to some extent, a bit more tunability. It's directly applicable to certain aspects of the business, where you need that capability to ramp up and ramp down much more easily.


I had a conversation with one of my peers about that. We were talking about both that and the Moonshot aspect, and the ability to have that for a lot of the customer-facing websites -- in particular, the utility customer-facing websites, whose utilization tends to spike during weather events.

While they don't all spike at the same time, there is the potential in the Mid-Atlantic for all the utilities to spike at once around a hurricane or a Sandy-esque event. There's obviously a need to be able to respond to that kind of demand, and that technology positions you with the flexibility to do that rather quickly and easily.
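
The ramp-up/ramp-down idea Thomas describes amounts to a simple capacity rule: size the fleet to current demand plus headroom, so a storm-driven spike doesn't outrun the ramp. The sketch below is a hypothetical illustration with made-up numbers, not Exelon's or HP's actual tooling.

```python
import math

def target_instances(requests_per_sec: float,
                     capacity_per_instance: float = 500.0,
                     headroom: float = 1.5) -> int:
    """How many instances should be running for this load, with headroom
    so a Sandy-esque traffic spike doesn't outrun the scale-up."""
    needed = requests_per_sec * headroom / capacity_per_instance
    return max(1, math.ceil(needed))  # never scale below one instance
```

In a weather event, a monitoring loop would feed the observed request rate into a rule like this and provision or release capacity accordingly; the headroom factor is what buys time while new instances spin up.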

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  Asset Manager  BriefingsDirect  Dana Gardner  Exelon  HP  HP BSM  HPDiscover  Interarbor Solutions  ITSM  Jason Thomas 


Axeda's machine cloud produces on-demand IoT analysis services

Posted By Dana L Gardner, Friday, March 20, 2015

This BriefingsDirect big data innovation discussion examines how Axeda, based in Foxboro, Mass., has created a machine-to-machine (M2M) capability for analysis -- in other words, an Axeda Machine Cloud for the Internet of Things (IoT).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how Axeda produces streams of massive data to multiple consumer dashboards that analyze business issues in near-real-time, we're joined by Kevin Holbrook, Senior Director of Advanced Development at Axeda. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We have the whole Internet of Things (IoT) phenomenon. People are accepting more and more devices, endpoints, sensors, even things within the human body, delivering data out to applications and data pools. What do you do in terms of helping organizations start to come to grips with this M2M and IoT data demand?


Holbrook: It starts with the connectivity space. Our focus has largely been in OEMs, equipment manufacturers. These are people who have the "M" in the M2M or the "T" in the Internet of Things. They are manufacturing things.

The initial drivers to have a handle on those things are basic questions, such as, "Is this device on?" There are multi-million dollar machines that are currently deployed in the world where that question can’t be answered without a phone call.

Initial driver

That was the initial driver, the seed, if you will. We entered into that space from the remote-service angle. We deployed small-agent software to the edge to get the first measurements from those systems and get them pushed up to the cloud, so that users can interact with it.


That grew into remote access -- telnet sessions or remote desktop -- being able to physically get down there, debug, tweak, and look at the devices that are operating. From there, we grew into software distribution, or content distribution. That could be anything from firmware updates to physically distributing configuration and calibration files for the instrument. We're recently seeing an uptick in content distribution for things like digital signage or in-situ ads being displayed on consumer goods.

From there, we started aggregating data. We have about 1.5 million assets connected to our cloud now globally, and there's all kinds of data coming in. Some of it's very, very basic from a resource standpoint, looking at CPU consumption, disk space, available memory, things of that nature.

It goes all the way through to usage and diagnostics, so that you can get a very granular impression of how this machine is operating. As you begin to aggregate this data, all sorts of challenges come out of it. HP has proven to be a great partner for starting to extract value.
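
The basic resource readings Holbrook mentions -- CPU, disk space, available memory -- might be packaged by an edge agent roughly as below. This is a hedged sketch: the field names and message shape are assumptions for illustration, not Axeda's actual wire format.

```python
import json
import time

def resource_snapshot(asset_id: str, cpu_pct: float,
                      disk_free_mb: int, mem_free_mb: int) -> str:
    """Package one basic health reading as a JSON message for upload
    from the edge agent to the cloud."""
    return json.dumps({
        "asset": asset_id,
        "ts": int(time.time()),   # when the reading was taken
        "metrics": {
            "cpu_pct": cpu_pct,
            "disk_free_mb": disk_free_mb,
            "mem_free_mb": mem_free_mb,
        },
    })
```

At 1.5 million connected assets, even a payload this small adds up quickly, which is why the aggregation and analytics challenges discussed next matter.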

We can certainly get to the data, we can connect the device, and we can aggregate that data to our partners or to the customer directly. Getting value from that data is a completely different proposition. Data for data’s sake is not high value.

Gardner: What is it that you're using Vertica for to do that? Are you creating applications? Are you giving analysis as a service? How is this going to market for you?

Holbrook: From our perspective, Vertica represents an endpoint. We've carried the data, cared for the data, and made sure that the device was online, generating the right information and getting it into Vertica.

When we approach customers, we're approaching it from a joint-sale perspective. We're the connectivity layer, the instrumentation, the business automation layer, and we're getting it into Vertica, so that it can be the seed for applications, for business intelligence (BI), and for analytics.

So, we are the lowest component in the stack when we walk into one of these engagements with Vertica. Then, it's up to them, on a customer-by-customer basis, to determine what applications to bring to the table. A lot of that is defined by the group within the organization that actually manages connectivity.

We find that there's a big difference between a service organization, which is focused primarily on keeping things up and running, versus a business unit that’s driving utilization metrics, trying to determine not only how things are used, but how it can influence their billing.

Business use

We've found that that's a place where Vertica has actually been quite a pop for us in talking to customers. They want to know not just the simple metrics of the machines' operation, but how that reflects the business use of it.

The entire market has shifted and continues to shift. I was somewhat taken aback only a couple of weeks ago when I found out that you can no longer buy a jet engine. I thought this was a piece of hardware you purchased, as opposed to something that you rent and pay for per use. And so [the model changes to leasing] as the machines get bigger and bigger. We have GE and the Bureau of Engraving and Printing as customers.

We certainly have some very large machines connected to our cloud and we're finding that these folks are shifting away from the notion that one owns a machine and consumes it until it breaks or dies. Instead, one engages in an ongoing service model, in which you're paying for the use of that machine.

While we can generate that data and provide some degree of visibility and insight into that data, it takes a massive analytics platform to really get the granular patterns that would drive business decisions.

Gardner: It sounds like many of your customers have used this for some basic blocking and tackling around inventory, access, and control, then moved up to business metrics: how it's being used, how they're billing, audit trails, and that sort of thing. Now we're starting to look at a whole new type of economy. It's a services economy, based on cloud interactivity, where you can give granular insights, and they can manage their business very, very tightly.

Any thoughts about what's going to be required of your organization to maintain scale? The more use cases and the more success, of course, the more demand for larger data and even better analytics. How do you make sure that you don't run out of runway on this?

Holbrook: There are a couple of strategies we've taken, but before I dive into that, I'll say that the problem is further complicated by the issue of data homing. There's not only a ton of data being generated, but also regulatory and compliance requirements that dictate where you can even leave that data at rest. Just moving it around is one problem, and where it sits on a disk is a totally different problem. So we're trying to tackle all of these.

The first way to address the scale for us from an architectural perspective was to try to distribute the connectivity. In order for you to know that something's running, you need to hear from it. You might be able to reach out, what we call contactability, to say, "Tell me if you're still running." But, by and large, you know of a machine's existence and its operation by virtue of it telling you something. So even if a message is nothing more than "Hello, I'm here," you need to hear from this device.
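
The "Hello, I'm here" principle -- a device is presumed online only if it has told you something recently -- can be sketched in a few lines. This is a hypothetical monitor, not Axeda's implementation; the class name and the 60-second window are arbitrary choices for illustration.

```python
class HeartbeatMonitor:
    """Track device liveness from inbound messages alone."""

    def __init__(self, timeout_s: float = 60.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # asset_id -> time of last inbound message

    def heard_from(self, asset_id: str, now: float) -> None:
        """Record any inbound message -- even a bare heartbeat counts."""
        self.last_seen[asset_id] = now

    def is_online(self, asset_id: str, now: float) -> bool:
        """True only if the device has told us something within the window."""
        seen = self.last_seen.get(asset_id)
        return seen is not None and (now - seen) <= self.timeout_s
```

Note the asymmetry Holbrook points out: "contactability" (reaching out and asking) is a separate, optional mechanism; by and large the cloud knows about a machine only by virtue of the machine telling it something.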

From the connectivity standpoint, our goal is not to try to funnel all of this into a single pipe, but rather to find where to get a point of presence that is closest and that is reasonable. We’ve been doing this on our remote-access technology for years, trying to find the appropriate geographically distributed location to route data through, to provide as easy and seamless an experience as possible.

So that’s the first, as opposed to just ruthlessly federating all incoming data, distributing the connectivity infrastructure, as well as trying to get that data routed to its end consumer as quickly as possible.

We break down data from our perspective into three basic temporal categories. There's the current data, which is the value you would see reading a dial on the machine. There's recent data, which would tell you whether something is trending in a negative direction, say pressure going up. Then, there is the longer-term historical data. While we focus on the first two, we’d deliberately, to handle the scale problem, don't focus on the long-term historical data.

Recent data

I'll treat recent data as being anywhere from 7 to 120 days and beyond, depending on the data aggregation rates. We focus primarily on that. When you start to scale beyond that, where the real long tail of this is, we try to make sure that we have our partner in place to receive the data.

We don't want to be diving into two years of data to determine seasonal trending when we're attempting to collect data from 1.5 million assets and acting as quickly as possible to respond to error conditions at the edge.
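
The three temporal categories above can be expressed as a simple routing rule. The cutoffs below are illustrative assumptions loosely based on the 7-to-120-day range Holbrook mentions; the labels and the one-day boundary for "current" are invented for the sketch.

```python
DAY_S = 86400  # seconds per day

def classify_reading(reading_ts: float, now: float,
                     recent_window_days: int = 120) -> str:
    """Route one data point to current, recent, or historical handling."""
    age_days = (now - reading_ts) / DAY_S
    if age_days < 1:
        return "current"      # what you'd see reading a dial right now
    if age_days <= recent_window_days:
        return "recent"       # enough to spot, say, pressure trending up
    return "historical"       # hand off to the long-term analytics store
```

The design choice this encodes is the one described in the interview: the connectivity platform stays focused on the first two buckets and deliberately hands the long tail to a partner built for large-scale analytics.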

Gardner: Kevin, what about the issue of latency? I imagine some of your customers have a very dire need to get analysis very rapidly on an ongoing streamed basis. Others might be more willing to wait and do it in a batch approach in terms of their analytics. How do you manage that, and what are some of the speeds and feeds about the best latency outcomes?

Holbrook: That’s a fantastic question. Everybody comes in and says we need a zero-latency solution. Of course, it took them about two-and-a-half seconds to say that.


There's no such thing as real-time, certainly on the Internet. Just negotiating up the TCP stack and tearing it down to send one byte is going to take you time. Then, we send it over wires under the ocean, bounce it off a satellite, you name it. That's going to take time.

There are two components to it. One is accepting that near-real-time, which is effectively the transport latency, is the smallest amount of time it can take to physically go from point A to point B, absent having a dedicated fiber line from one location to the other. We can assume that on the Internet that's domestically somewhere in the one- to two-second range. Internationally, it's in the two- to three-second or beyond range, depending on the connectivity of the destination.

What we provide is an ability to produce real-time streams of data outbound. You could take from one asset, break up the information it generates, and stream it to multiple consumers in near-real-time in order to get the dashboard in the control center to properly reflect the state of the business. Or you can push it to a data warehouse in the back end, where it then can be chunked and ETLd into some other analytics tool.
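
The outbound fan-out described here -- one asset's readings split across multiple consumers, such as a control-center dashboard that wants one metric and a back-end warehouse that takes everything -- can be sketched as below. The class name and the predicate-based filtering are assumptions for illustration, not Axeda's API.

```python
class StreamFanOut:
    """Deliver one asset's readings to multiple consumers in near-real-time."""

    def __init__(self):
        self.consumers = []  # list of (name, predicate, sink) triples

    def subscribe(self, name, predicate, sink):
        """Register a consumer with a filter over incoming readings."""
        self.consumers.append((name, predicate, sink))

    def publish(self, reading: dict) -> None:
        """Deliver one reading to every consumer whose filter matches."""
        for _name, predicate, sink in self.consumers:
            if predicate(reading):
                sink.append(reading)
```

A dashboard subscriber might filter for a single metric while a warehouse subscriber accepts everything for later chunking and ETL, which mirrors the two destinations Holbrook names.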

For us, we try not to do the batch ETLing. We'd rather make sure that we handle what we're good at. We're fantastic at remote service, at automating responses, at connectivity and at expanding what we do. But we're never going to be a massive ETL, transforming and converting into somebody’s data model or trying to get deep analytics as a result of that.

Gardner: Was it part of this need for latency, familiarity, and agility that led into Vertica? What were some of the decisions that led to picking Vertica as a partner?

Several reasons

Holbrook: There were a few reasons, and that was one of them. Also the fact that there's a massive set of offerings already on top of it. A lot of the others we looked at when we considered this -- and I won't mention the competitors -- were more just a piece of the stack, as opposed to a place that solutions grew out of.

It wasn't just Vertica, but the ecosystem built on top of Vertica. Some of the vendors we looked at are currently in the partner zone, because they're now building their solutions on top of Vertica.

We looked at it as an entry point into an ecosystem and certainly the in-memory component, the fact that you're getting no disk reads for massive datasets was very attractive for us. We don’t want to go through that process. We've dealt with the struggles internally of trying to have a relational data model scale. That’s something that Vertica has absolutely solved.

Gardner: Now your platform includes application services, integration framework, and data management. Let’s hone in on the application services. How are developers interested in getting access to this? What are their demands in terms of being able to use analysis outcomes, outputs, and then bring that into an application environment that they need to fulfill their requirements to their users?

Holbrook: It breaks them down into two basic categories. The first is the aggregation and the collection of data, and the second is physical interaction with the device. So we focus on both about equally. When we look at what developers are doing, almost always it’s transforming the data coming in and reaching out to things like a customer relationship management (CRM) system. It's opening a ticket when a device has thrown a certain error code or integrating with a backend drop-ship distribution system in the event that some consumable has begun to run low.

In terms of interaction, it's been significant. On the data side, we primarily see that they're extracting subsets of data for deeper analysis. Sometimes this comes up in discrete data points; frequently, it comes up in the transfer of files. So there is a certain granularity that you can survive. Coming down the fire hose are discrete data points that you can react to, and there's a whole other order of magnitude of data that you can handle when it's shipped up in a bulk chunk.

A good example is one of the use cases we have with GE in their oil and gas division, where they have a certain flow of data that's always ongoing and giving key performance indicators (KPIs). But this is nowhere near the level of data that they're actually collecting. They have database servers that are co-resident with these massive gas pipeline generators.

So we provide them the vehicle for that granular data. Then, when a problem is detected automatically, they can say, "Give me far more granular data for the problem area." It could be five minutes before or five minutes after. This is then uploaded, and we hand off to somewhere else.

So when we find developers doing integration around the data in particular, it's usually when they're diving in more deeply based on some sort of threshold or trigger that has been encountered in the field.
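
The threshold-and-window pattern in the GE example -- watch the coarse KPI stream, and when a problem is detected request the granular data around it (five minutes before and after) -- can be sketched as follows. The function name, threshold, and window size are assumptions for illustration, not GE's or Axeda's actual API.

```python
from typing import Optional, Tuple

def granular_window(kpi_value: float, event_ts: float,
                    threshold: float = 100.0,
                    window_s: float = 300.0) -> Optional[Tuple[float, float]]:
    """Return the (start, end) window of granular data to request from the
    co-resident edge database, or None if the KPI is within normal range."""
    if kpi_value <= threshold:
        return None  # coarse stream is enough; nothing to upload
    return (event_ts - window_s, event_ts + window_s)
```

The point of the pattern is economy: the always-on stream stays coarse, and the expensive bulk upload happens only for the short window around a detected problem.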


Gardner: And lastly, Kevin, for other organizations that are looking to create data services and something like your Axeda Machine Cloud, are there any lessons learned that you could share when it comes to managing such complexity, scale, and the need for speed? What have you learned at a high level that you could share?

All about strategy

Holbrook: It’s all going to be about the data-collection strategy. You're going to walk into a customer or potential customer, and their default response is going to be, "Collect everything." That’s not inherently valuable. Just because you've collected it, doesn’t mean that you are going to get value from it. We find that, oftentimes, 90-95 percent of the data collected in the initial deployment is not used in any constructive way.

I would say focus on the data-collection strategy. Scale of bad data is scale for scale's sake; it doesn't drive business value. Make sure that the folks who are actually going to be doing the analytics are in the room when you're defining your data-collection strategy, when you're talking to the folks who are going to wire up the sensors, and when you're talking to the folks who are building the device.

Unfortunately, these are frequently, within a larger business in particular, completely different groups of people, who might report to completely different vice presidents. So you go to one group, and they have the connectivity guys. You talk about it and you wire everything up.

Then, six to eight months later, you walk into another room. They’ll say "What the heck is this? I can’t do anything with this. All I ever needed to know was the following metric." It wasn’t collected because the two hadn't stayed in touch. The success of deployed solutions and the reaction to scale challenges is going to be driven directly by that data-collection strategy. Invest the time upfront and then you'll have a much better experience in the back.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  Axeda  BriefingsDirect  cloud computing  Dana Gardner  HP  HPDiscover  Interarbor Solutions  Internet of things  Kevin Holbrook  M2M  machine to machine  Vertica 


Health Shared Services BC harnesses a healthcare ecosystem using IT asset management

Posted By Dana L Gardner, Tuesday, March 17, 2015

The next BriefingsDirect innovation panel discussion examines how Health Shared Services BC in Vancouver improves process efficiency and standardization through better integration across health authorities in British Columbia, Canada.

We'll explore how HSSBC has successfully implemented one of the healthcare industry’s first Service Asset and Configuration Management Systems to help them optimize performance of their IT systems and applications.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how HSSBC gains up-to-date single views of IT assets across a shared-services environment, please join me in welcoming our guests, Daniel Lamb, Project Manager for the ITSM Program, and Cam Haley, Program Manager for the ITSM Program, both at HSSBC. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gentlemen, tell me first about the context of your challenge. You're an organization that's trying to bring efficiency and process improvements across health authorities in British Columbia. What is it about that task that made better IT service management (ITSM) an imperative?

Haley: If you look at where the healthcare space is right now within British Columbia, we have the opportunity to use our healthcare funding more efficiently and to focus specifically on delivering better clinical outcomes for consumers of those services.


That was one of the main drivers behind the formation of HSSBC, to consolidate some of the key supporting and enabling services into an organization that could deliver a standardized set of service offerings across our health authority clients, so that they can focus on clinical delivery.

That was the key business driver around why we're here and why we're doing some of these things. To deliver effectively on that mandate, we need the tools and the process capabilities to deliver more consistent service outcomes, and to reduce costs over the long term, so that those costs can be shifted into clinical delivery and really enable those outcomes.

Necessary system

Gardner: Daniel, why was a Service Asset and Configuration Management System something that was important to accomplish this?

Lamb: We have been running a large data-center migration project over the past three years, moving a lot of the assets out of Vancouver and into a new data center. We standardized on HP infrastructure up in Kamloops, and once we bring in all of our health authorities' assets, it's going to be upwards of 6,500 to 7,000 servers to manage.


As we merged into the larger organization, manual processes just weren't feasible anymore. To keep those assets up to date, we needed an automated system. The reason we went for these products, covering both asset management and configuration and service management, is that this really is our business. We're going to be managing all of these assets and configuration items for the organization, and we're providing these services. That's where the toolset really fit our goals.

Gardner: So other than scale, size, and the migration, were there any other requirements or problems that you needed to solve that moving into this more modern ITSM capability delivered?

Haley: Just to build on what Daniel said, one of the key drivers in terms of identifying the toolset and the capabilities was to support the migration of infrastructure into the data center.

But along with that, we provide a set of services that go beyond the data center. The tool capability delivered in support of that outcome lets us focus on optimizing our processes and getting a better view into what's happening in our own environment: having the configuration items (CIs) in the configuration management database (CMDB), and having the relationships mapped not only at the infrastructure level, but all the way up to the application or business-service level.

Now we have a view up and down the stack of what's going on. We get better analytics and better data, and we can make better decisions about where we want to focus. What are the pain points we need to target? We're able to mine that data and really look at opportunities to optimize.

The tool allows us to standardize our processes and roll out the capabilities. Automation is built into the tool, which is fantastic for us in terms of taking that manual overhead out and allowing us to focus on other things. So it's been great.
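The "view up and down the stack" that a populated CMDB gives can be pictured as a small dependency graph over CIs. Here is a toy sketch; the CI names and the child-to-dependents mapping are invented for illustration, not drawn from HSSBC's actual CMDB:

```python
# Toy CMDB: each CI maps to the CIs that depend directly on it.
# All names below are invented for illustration.
DEPENDENTS = {
    "server-db01": ["oracle-cluster"],
    "oracle-cluster": ["patient-records-app"],
    "patient-records-app": ["clinical-portal-service"],
    "clinical-portal-service": [],
}

def impacted_services(ci: str) -> set:
    """Walk upward from an infrastructure CI to everything built on top of it."""
    seen = set()
    stack = [ci]
    while stack:
        current = stack.pop()
        for dep in DEPENDENTS.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Which services could a change to server-db01 affect?
print(sorted(impacted_services("server-db01")))
# ['clinical-portal-service', 'oracle-cluster', 'patient-records-app']
```

This kind of traversal is what change-management tooling does behind the scenes when it warns that touching one server puts a business service at risk.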

Gardner: Any unexpected benefits, ancillary benefits, that come from the standardization with this visibility, knowing your organization better that maybe you didn't anticipate?

Up-to-date information

Lamb: We've been able to track down everything that’s out there. That’s one thing. We just didn’t know where everything was or what we had. So in terms of being able to forecast to the health authorities, "This is how much you need to part with for maintenance, that sort of thing," that was always a guess in the past. We now have that up-to-date information available.

This has also laid the foundation for us to take better advantage of the new technologies that are coming in. Some of what HP is talking about at the moment we can't take advantage of yet, but we now have the base platform in place. It's going to allow us to take advantage of a lot of the new stuff that's coming out.

Gardner: So in order to get the efficiency and cost benefits of new infrastructure and converged systems and data center efficiencies, having your ducks lined up and understood is a crucial first step.

Lamb: Definitely.

Gardner: Looking down the road, what’s piquing your interest in terms of what HP is doing or new developments, or does this now allow you to then progress into other areas that you are interested in?

Lamb: Personally, I'm obviously looking at the new versions of the product sets we have at the moment. We've also been speaking to other customers about the success that we've had and sharing some lessons learned on how things worked.

Then, we're looking at some of the other products we could build onto this: PPM, which is the project management toolset, and BSM, which is unified monitoring and that sort of thing. Putting those products in is where we'll start seeing even more value, in terms of reducing the number of tickets, support costs, and that sort of thing. So we're looking at that.

Then, out of ad-hoc interest, there are the things around big data and that sort of thing. I'm just trying to get my head around how that works for us, because we have a lot of data. So we're watching some of those new technologies coming out as well.

Gardner: Cam, given what you've already done, what has it gotten for you? What are some of the benefits and results that you have seen? Are there any metrics of success that you can share with us?

Haley: The first thing is that we're still pretty early in our journey out of the gate, if I just talk about what we've already achieved. One of the things that we have been able to do is enable our staff to be more effective at what they're doing.

We've implemented change management in particular within the toolset, and that's giving us a more robust set of controls around what's actually happening and what's actually going into the environment. That's been really important, not only for the staff, although there is a bit of a learning curve there, but in terms of the outcomes for our clients.

Comfort level

They have a higher comfort level that we have more insight and oversight into what's actually happening in that space, and that we're protecting the services they need to deliver by putting those kinds of capabilities in. So from a process perspective, we've certainly been able to get some benefits in that area in particular.

From a client perspective, putting the toolset in helps us develop the level of trust that we really need in order to have an effective partnering relationship with our clients. That's something that hasn't always been there in the past.

I'm not saying that we're all the way there yet, but we're starting to show that we can deliver the services that the health authorities expect us to deliver, and we are using the toolset to help enable that. That’s also an important aspect.

The other thing is that, through the work we've done consolidating some of our contracts, maintenance agreements, and so on into our asset management system, we have a better view of what we're paying for. We've already found opportunities to consolidate some contracts and show some savings as well.
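Even a simple aggregation over an asset register surfaces the kind of consolidation opportunities Haley describes: vendors with several separate maintenance contracts are candidates for one negotiated agreement. The field names and figures in this sketch are invented, not HSSBC data:

```python
from collections import defaultdict

# Invented contract records, as might come from an asset-management export.
contracts = [
    {"vendor": "HP", "asset": "server-a", "annual_cost": 1200},
    {"vendor": "HP", "asset": "server-b", "annual_cost": 1100},
    {"vendor": "Acme", "asset": "switch-1", "annual_cost": 400},
]

by_vendor = defaultdict(list)
for c in contracts:
    by_vendor[c["vendor"]].append(c)

# Vendors with more than one separate contract are consolidation candidates,
# shown with the total annual spend that would be up for renegotiation.
candidates = {v: sum(c["annual_cost"] for c in cs)
              for v, cs in by_vendor.items() if len(cs) > 1}
print(candidates)
```

Before the asset data was consolidated in one system, this kind of query simply wasn't possible, which is why "what are we paying for?" used to be a guess.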

Those are just a few of the areas where we're already seeing benefits. As we roll out more of the tool's capabilities in the coming year and beyond, we expect to get the standard metrics that you would typically get out of it. Of course, we'll continue to drive out the ROI value as well. We're already a good way down that path, and we'll just continue along it.

Gardner: Any words of wisdom, based on your journey so far, for other organizations that might be struggling with spreadsheets to track all of their assets and devices, and even the processes around IT support? What have you learned? What could you share with someone who is just starting out?

Lamb: We had a few key lessons that we spoke about. One was to set the guiding principles that you're going to do the implementation by. We very much took the approach of keeping things as out-of-the-box as possible, trusting that HP, in its new releases, would pick up the functionality we were looking for. So we didn't do a lot of tailoring.

And we did the project in short cycles. These projects can sometimes go on for years, with a lot of money sunk in and no value gained. We said, "Let's do these as shorter, sprint-style projects. We'll get something in, we'll start showing value to the organization, and then we'll move on to the next thing." That's the cycle we're working in, and it has worked really well.

The other thing is that we had a great consulting partner that we worked with, and that was key. We were feeling a little lost when we came here last year, and that was one of the things we did: we went to a good consulting partner, Effectual Systems from San Francisco, and that helped us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  BriefingsDirect  Cam Haley  Dana Gardner  Daniel Lamb  HP  HPDiscover  HSSBC  Interarbor Solutions  ITSM 

Hackathon model plus big data equals big innovation for Thomson Reuters

Posted By Dana L Gardner, Thursday, March 12, 2015

The next BriefingsDirect innovation interview explores the use of a hackathon approach to unlock creativity in the search for better use of big data for analytics. We will hear how Thomson Reuters in London sought to foster innovation and derive more value from its vast trove of business and market information.

The result: A worldwide virtual hackathon that brought together developers and data scientists to uncover new applications, visualizations, and services to make all data actionable and impactful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about getting developers on board the big-data analysis train, BriefingsDirect sat down with Chris Blatchford, Director of Platform Technology in the IT organization at Thomson Reuters in London. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Blatchford: Thomson Reuters is the world's leading source of intelligent information. We provide data across the finance, legal, news, IP, and science, tax, and accounting industries through product and service offerings, combining industry expertise with innovative technology.

Gardner: It’s hard to think of an organization where data and analysis is more important. It’s so core to your very mission.


Blatchford: Absolutely. We take data from a variety of sources. We have our own original data, third-party sources, open-data sources, and augmented information, as well as all of the original content we generate on a daily basis. For example, our journalists in the field provide original news content to us directly from all over the globe. We also have third-party licensed data that we further enrich and distribute to our clients through a variety of tools and services.

Gardner: And therein lies the next trick, what to do with the data once you have it. About this hackathon, how did you come up upon that as an idea to foster innovation?

Big, Open, Linked Data

Blatchford: One of our big programs of work currently is, as with everyone else, big data. We have an initiative called BOLD (Big, Open, Linked Data), headed up by Dan Bennett. The idea behind the project is to take all of the data that we ingest and host within Thomson Reuters, from all of those various sources I just described, stream it into a central repository, cleanse and centralize it, extract meaningful information, and then expose it to the rest of the business for use in industry-specific applications.
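As a rough illustration of the ingest, cleanse, extract, and expose flow Blatchford describes, here is a toy pipeline. The record shape and the "meaningful information" extracted are invented for illustration; the real BOLD platform is of course far more involved:

```python
def ingest(sources):
    # Stream records from every source into one flow.
    for source in sources:
        yield from source

def cleanse(records):
    # Drop records missing required fields; normalize whitespace in the text.
    for r in records:
        if r.get("id") and r.get("text"):
            yield {"id": r["id"], "text": " ".join(r["text"].split())}

def extract(records):
    # Toy "meaningful information": a word count per record.
    for r in records:
        yield {"id": r["id"], "word_count": len(r["text"].split())}

sources = [
    [{"id": "n1", "text": "Markets  rallied today"}, {"id": None, "text": "bad row"}],
    [{"id": "p7", "text": "A method for X"}],
]
exposed = list(extract(cleanse(ingest(sources))))
print(exposed)
```

Chaining generators like this mirrors the streaming shape of the described pipeline: each stage consumes the previous one lazily rather than materializing the whole data lake in memory.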

As well as creating a central data lake of content, we also needed to provide the tools and services that allow businesses to access the content; here we have both developed our own software and licensed existing tools.

So, we had demonstrated that we could build big-data tools using our internal expertise, and that we could plug in specific third-party applications to perform analysis on that data. What we hadn't proved was that we could plug in a third-party enterprise technology platform to leverage our data and innovate across it, and that's where HP came in.

HP was already engaged with us in a number of areas, and I got to speaking with their Big Data Group about their big-data solutions. IDOL OnDemand came up; this is now part of the Haven OnDemand platform. We saw synergies between what we were doing with the big-data platform and what they could offer us in terms of the IDOL OnDemand APIs. That's where the good stuff started.

Gardner: Software developers, from the very beginning, have had a challenge of knowing their craft, but not knowing necessarily what their end users want them to do with that craft. So the challenge -- whether it’s in a data environment, a transactional environment or interface, or gaming -- has often been how to get the requirements of what you're up to into the minds of the developers in a way that they can work with. How did the hackathon contribute to solving that?

Blatchford: That's a really good question. That's actually one of the biggest challenges big data has in general. We approach big data in one of two ways. First, you have very specific use cases. For example, consider a lawyer working on a particular case for a client: it would be useful for them to analyze prior cases with similar elements. If they're able to extract entities and relevant attributes, they may be able to understand the case's final decision, or glean information relevant to their current case.

Then you have the other approach, which is much more about exploration: discovering new insights, trends, and patterns. That's the approach we wanted to take with the hackathon: provide the data and the tools to our developers and let them just go and play with the data.

We didn't necessarily want to give them any requirements around specific products or services. It was just: "Look, here is a cool platform with some really cool APIs and capabilities. Here is some nice, juicy data. Tell us what we should be doing. What can you come up with from your perspective on the world?"

A lot of the time, these engineers are overlooked. By the nature of what they do, they're not necessarily the most extroverted of people, and so they miss chances and opportunities. That's something we really wanted to change.

Gardner: It’s fascinating the way to get developers to do what you want them to do is to give them no requirements.

Interesting end products

Blatchford: Indeed. That can result in some interesting end-products. But, by and large, our engineers are more commercially savvy than most, hence we can generally rely on them to produce something that will be compelling to the business. Many of our developers have side projects and personal development projects they work on outside of the realms of their job requirement. We should be encouraging this sort of behavior.

Gardner: So what did you get when you gave them no requirements? What happened?

Blatchford: We had 25 teams that submitted their ideas. We boiled that down to seven finalists based on a set of preliminary criteria, and out of those seven, we decided on our first-, second-, and third-place winners. Those three end results are currently going through a product review, to potentially be implemented into our product lines.

The overall winner was an innovative UI design for mobile devices, allowing users to better navigate our content on tablets and phones. There was also a sentiment analysis tool that allowed users to paste in news stories, or any news content from the web, and extract sentiment from the story.

And the other was a more internally focused administrative exploration tool that allowed us to navigate our own data more intuitively, which perhaps doesn't initially seem as exciting as the other two, but is actually a hugely useful application for us.
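To give a flavor of what a sentiment tool like the one described does at its very simplest, here is a crude lexicon-based scorer. This is a generic sketch, not the hackathon entry or IDOL OnDemand's implementation, and the word lists are invented:

```python
# Invented, deliberately tiny sentiment lexicons for financial news.
POSITIVE = {"gain", "growth", "profit", "record", "surge", "beat"}
NEGATIVE = {"loss", "decline", "lawsuit", "fraud", "miss", "cut"}

def sentiment_score(text: str) -> float:
    """Crude sentiment: (positive - negative) / total sentiment-bearing words.

    Returns a value in [-1.0, 1.0], or 0.0 if no lexicon words appear.
    """
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Quarterly profit hit a record as revenue growth beat forecasts"))
# 1.0
```

Production services replace the hand-built lexicons with trained models, but the input/output shape, text in, score out, is the same idea the hackathon teams were composing into applications.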

Gardner: Now, how does IDOL OnDemand come to play in this? IDOL is the ability to take any kind of information, for the most part, apply a variety of different services to it, and then create analysis as a service. How did that play into the hackathon? How did the developers use that?

Blatchford: Initially, the developers looked at the original 50-plus APIs that IDOL OnDemand provides. You have everything in there, from facial recognition, to OCR, to text analytics, to indexing: all sorts of cool stuff. Those, in themselves, provided sufficient capabilities to produce some compelling applications, but our developers also used Thomson Reuters APIs and resources to further augment the IDOL platform.

This was very important, as it demonstrated not only that we could plug an enterprise analytics tool into our data, but also that it would fit well with our own capabilities.

Gardner: And HP Big Data also had a role in this. How did that provide value?

Five-day effort

Blatchford: The expertise. Remember, we stood this hackathon up from inception to completion in a little over one month, and that, I think, is pretty impressive by any measure.

The actual hackathon lasted for five days. We gave the participants a week to get familiar with the APIs, but they really didn't need that long, because the documentation behind the IDOL OnDemand APIs, and the "try it now" functionality, was amazing. That's what the engineers and developers were telling me; those aren't my own words.

The Big Data Group was able to stand this whole thing up within a month, with a huge amount of effort on HP's side that we never really saw. That ultimately resulted in a hugely successful virtual global hackathon. This wasn't a physical event; it was purely virtual, the world over.

Gardner: HP has been very close to developers for many years, with many tools, leading tools in the market for developers. They're familiar with the hackathon approach. It sounds like HP might have a business in hackathons as a service. You're proving the point here.

For the benefit of our listeners, if someone else out there was interested in applying the same approach, a hackathon as a way of creating innovation, of sparking new thoughts, light bulbs going off in people's heads, or bringing together cultures that perhaps hadn't meshed well in the past, what would you advise them?

Blatchford: That's a big one. First and foremost, the reason we were successful is that we had a motivated, willing partner in HP. They were able to put the full might of their resources and technology capabilities behind this event, and that, alongside our own efforts, ultimately resulted in the event's success.

That aside, you absolutely need to get the buy-in of the senior executives within an organization, get them to invest into the idea of something as open as a hackathon. A lot of hackathons are quite focused on a specific requirement. We took the opposite approach. We said, "Look, developers, engineers, go out there and do whatever you want. Try to be as innovative in your approach as possible."

Typically, that approach is not seen as cost-effective; businesses like to have defined use cases. But sometimes that can strangle innovation, and we need to loosen the reins a little.

There are also a lot of logistical checks that can help. Ensure you have clear criteria around hackathon team size and membership, event objectives, rules, time frames, and so on. Having these defined up front makes the whole event run much more smoothly.

We ran the organization of the event a little like an Agile project, with regular stand-ups and check-ins. We also stood up a dedicated internal intranet site with all of the information above. Finally, we set up user accounts on the IDOL platform early on, so the participants could familiarize themselves with the technology.

Winning combination

Gardner: Yeah, it really sounds like a winning combination: the hackathon model, big data as the resource to innovate on, and then IDOL OnDemand with 50 tools to apply to that. It’s a very rich combination.

Blatchford: That's exactly right. The richness of the data was definitely a big part of this. You don't need millions of rows of data: we provided 60,000 records of legal documents, and about the same in patents and news content. You don't need vast amounts of data, but you do need quality data.

Then you also need a quality platform, in this case IDOL OnDemand. The third piece is what's in their heads. That really was the successful formula.

Gardner: I have to ask. Of course, the pride in doing a good job goes a long way, but were there any other incentives; a new car, for example, for the winning hackathon application of the day?

Blatchford: Yeah, we offered a 1960s Mini Cooper to the winners. No, we didn't. We did offer other incentives, though; there were three main ones. The first, and the most important in my view, and I think in everyone's view, was exposure to senior executives within the organization: not just face time, but promotion of the individual within the organization. We wanted this to be about personal growth as much as about producing new applications.

Going back to making the most of your people and giving them opportunities to shine, that's really important. That's one of the things the hackathon really fostered: exposing our talented engineers and product managers, and ensuring they are appreciated for the work they do.

We also provided Amazon voucher incentives, and HP offered some of their tablets to the winners. So it was quite a strong set of prizes.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tags:  big data  BriefingsDirect  Chris Blatchford  Dana Gardner  hackathon  HP  HP Big Data  Interarbor Solutions  Thomson Reuters 
