Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture. He is a prolific blogger, podcaster, and Twitterer.



Enterprises opting for converged infrastructure as stepping stone to hybrid cloud

Posted By Dana L Gardner, Thursday, May 21, 2015

In speaking with a lot of IT users, it has become clear to me that a large swath of the enterprise IT market – particularly the mid-market – falls in between two major technology trends.

The trends are server virtualization and hybrid cloud. IT buyers are in between, with one foot planted firmly in virtualization but not yet willing to put the other foot down and commit to full cloud adoption.

IT organizations are enamored of virtualization. They are so into the trend that many have more than 80 percent of their server workloads virtualized. They like hybrid cloud conceptually, but are by no means adopting it enterprise-wide. We're talking less than 30 percent of all workloads for typical companies, and a lot of that is via shadow IT and software as a service (SaaS).

In effect, virtualization has spoiled IT. They have grown accustomed to what server virtualization can do for them – including reducing IT total costs – and they want more. But they do not necessarily want to wait for those payoffs until a lengthy and mysterious company-wide cloud strategy is implemented.


They want to modernize and simplify how they support existing applications. They want those virtualization benefits to extend to storage, backup and recovery, and be ready to implement and consume some cloud services. They want the benefits of software-defined data centers (SDDC), but they don’t want to invest huge amounts of time, money, and risk in a horizontal, pan-IT modernization approach. And they're not sure how they'll support their new, generation 3 apps. At least not yet.

So while IT and business leaders both like the vision and logic of hybrid cloud, they have a hard time convincing all IT consumers across their enterprise to standardize deployment of existing generation 2 workloads that span private and public cloud offerings.

But they're not sitting on their hands, waiting for an all-encompassing cloud solution miracle covered in pixie dust, being towed into town by a unicorn, either.

Benefits first, strategy second

I've long been an advocate of cloud models, and I fully expect hybrid cloud architectures to become dominant. Practically, however, IT leaders are right now less inclined to wait for the promised benefits of hybrid cloud. They want many of the major attributes of what the cloud models offer – common management, fewer entities to procure IT from, simplicity and speed of deployment, flexibility, automation and increased integration across apps, storage, and networking. They want those, but they're not willing to wait for a pan-enterprise hybrid cloud solution that would involve a commitment to a top-down cloud dictate.

Instead, we’re seeing an organic, bottom-up adoption of modern IT infrastructure in the form of islands of hyper-converged infrastructure appliances (HCIA). By making what amounts to mini-clouds based on the workloads and use cases, IT can quickly deliver the benefits of modern IT architectures without biting off the whole cloud model.

If the hyper-scale data centers that power the likes of Google, Amazon, Facebook, and Microsoft are the generation 3 apps architectures of the future, the path those organizations took is not the path an enterprise can – or should – take.

Your typical Fortune 2000 enterprise is not going to build a $3 billion state-of-the-art data center, designed from soup to nuts to support their specific existing apps, and then place all their IT eggs into that one data center basket. It just doesn’t work that way.


There are remote offices with unique requirements to support, users that form power blocks around certain applications, bean counters that won’t commit big dollars. In a word, there are “political” issues that favor a stepping-stone approach to IT infrastructure modernization. Few IT organizations can just tell everyone else how they will do IT.

The constraints of such IT buyers must be considered as we try to predict cloud adoption patterns over the next few years. For example, I recently chatted with IT leaders in the public sector, at the California Department of Water Resources. They show that what drives their buying is as much about what they don’t have as what they do.

"Our procurement is much harder. Getting people to hire is much harder. We live within a lot of constraints that the private sector doesn’t realize. We have a hard time adjusting our work levels. Can we get more people now? No. It takes forever to get more people, if you can ever get them,” said Tony Morshed, Chief Technology Officer for the California Resources Data Center.

“We’re constantly doing more with less. Part of this virtualization is survivability. We would never be able to survive or give our business the tools they need to do their business without it. We would just be a sinking ship,” he said. “[Converged infrastructure like VMware’s] EVO:RAIL looks pretty nice. I see it as something that we might be able to use for some of our outlying offices, where we have around 100 to 150 people.

"We can drop something like that in, put virtual desktop infrastructure (VDI) on it, and deliver VDI services to them locally, so they don't have to worry about that traffic going over the wide area network (WAN).” [Disclosure: VMware is a sponsor of my BriefingsDirect podcasts].

The California Department of Water Resources has deployed VDI for 800 desktops. Not only is it helping them save money, it's also serving as a remote-access strategy. They're in between virtualization and cloud, but they're heralding the less-noticed trend of tactical modernization through hyper-converged infrastructure appliances.

Indeed, VDI deployments that support as many as 250 desktops on a single VSPEX BLUE appliance at a remote office or agency, for example, allow for ease in administration and deployment on a small footprint while keeping costs clear and predictable. And, if the enterprise wants to scale up and out to hybrid cloud, they can do so with ease and low risk.

Stepping stone to cloud

At Columbia Sportswear, there is a similar mentality of moving to cloud gradually while seeking on-premises efficiency and agility.

"With our business changing and growing as quickly as it is, and with us doing business and selling directly to consumers in over a hundred countries around the world, our data centers have to be adaptable. Our data and our applications have to be secure and available, no matter where we are in the world, whether you're on network or off-premises,” said Tim Melvin, Director of Global Technology Infrastructure at Columbia Sportswear.

"The software-defined data center has been a game-changer for us. It’s allowed us to take those technologies, host them where we need them, and with whatever cost configuration makes sense, whether it’s in the cloud or on-premises, and deliver the solutions that our business needs,” he said.


Added Melvin: "When you look at infrastructure and the choice between on-premise solutions, hybrid clouds, public and private clouds, I don't think it's a choice necessarily of which answer you choose. There isn't one right answer. What’s important for infrastructure professionals is to understand the whole portfolio and understand where to apply your high-power, on-premises equipment and where to use your lower-cost public cloud, because there are trade-offs in each case."

Columbia strives to present the correct tool for the correct job. For instance, they have completely virtualized their SAP environment to run on on-premises equipment. For software development, they use a public cloud.

And so the stepping stone to cloud flexibility: to be able to run on-premises workloads like enterprise resource planning (ERP) and VDI with speed, agility, and low cost, and to do so in such a way that some day those workloads could migrate to a public cloud, when that makes sense.

"The closer we get to a complete software-defined infrastructure, the more flexibility and power we have to remove the manual components, the things that we all do a little differently and we can't do consistently. We have a chance to automate more. We have the chance to provide integrations into other tools, which is actually a big part of why we chose VMware as our platform. They allow such open integration with partners that, as we start to move our workloads more actively into the cloud, we know that we won't get stuck with a particular product or a particular configuration,” said Melvin.

"The openness will allow us to adapt and change, and that’s just something you don't get with hardware. If it's software-defined, it means that you can control it and you can morph your infrastructure in order to meet your needs, rather than needing to re-buy every time something changes with the business,” he said.


What we're seeing now are more tactical implementations of the best of what cloud models and hyper-scale data center architectures can provide. And we’re seeing these deployments on a use-case basis, like VDI, rather than a centralized IT mandate across all apps and IT resources. These deployments are so tactical that they consist in many cases of a single “box” – an appliance that provides the best of hyper scale and simplicity of virtualization with the cost benefits and deployment ease of a converged infrastructure appliance.

This tactical approach is working because blocks of users, business units, or locations can be satisfied, IT can gain efficiency and retain control, and these implementations can eventually become part of the pan-IT hybrid cloud strategy. Mid-market companies like this model because the hyper-converged appliance box is the data center: it can scale down to their needs affordably, and it won't box them in when the time comes to expand or to move to a hybrid cloud model later.


What newly enables this appealing stepping-stone approach to the hybrid cloud end game? It's the principles of SDDC – but without the data center. It's using virtualization services to augment storage, backup, and disaster recovery (DR) without adopting an entire hybrid cloud model.

The numbers speak to IT's preference for adopting these new architectures in this fashion. According to IDC, the converged infrastructure segment of the IT market will expand to $17.8 billion in 2016 from $1.4 billion in 2013.
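Those IDC figures imply a remarkably steep growth rate. A minimal sketch of the arithmetic (the dollar figures come from the IDC forecast above; the helper function is just illustrative):

```python
# Implied compound annual growth rate (CAGR) from IDC's forecast:
# the converged infrastructure segment growing from $1.4 billion (2013)
# to $17.8 billion (2016). Illustrative arithmetic only.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the compound annual growth rate as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(1.4, 17.8, 2016 - 2013)
print(f"Implied CAGR: {growth:.0%}")  # roughly 133 percent per year
```

In other words, the forecast implies the segment more than doubling every year over that three-year span.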

EVO:RAIL plus EMC's management products

A recent example of these HCIA parts coming together to serve the tactical apps support strategy and segue to the cloud is the EMC VSPEX BLUE appliance, which demonstrates a new degree to which total convergence can be taken.

The Intel x86 Xeon off-the-shelf hardware went on sale in February, and is powered by VMware EVO:RAIL and EMC's VSPEX BLUE Manager, an integrated management layer that brings entirely new levels of simplicity and deployment ease.

This bundle of capabilities extends the capabilities of EVO into a much larger market, and provides the stepping stone to hyper convergence across mid-market IT shops, and within departments or remote offices for larger enterprises. The VSPEX BLUE manager integrates seamlessly into EVO:RAIL, leveraging the same design principles and UI characteristics as EMC is known for.

What’s more, because EVO:RAIL does not restrict integrations, it can be easily extended via the native element manager. The notion of hyper-converged becomes particularly powerful when it’s not a closed system, but rather an extremely powerful set of components that adjust to many environments and infrastructure requirements.

VSPEX BLUE is based on VMware's EVO:RAIL platform, a software-only appliance platform that supports VMware vSphere hypervisors. By integrating all the elements, the HCIA offers the simplicity of virtualization with the power of commodity hardware and cloud services. EMC and VMware have apparently done a lot of mutual work to up the value-add to the COTS hardware, however.

The capabilities of VSPEX BLUE bring much more than a best-of-breed model alone: there is total cost predictability, simplicity of deployment, and a simplified means of expansion. This, for me, is where the software element of hyper-converged infrastructure is so powerful, while the costs stay far below proprietary infrastructure systems and the speed-to-value in actual use is rapid.


For example, VSPEX BLUE can be switched on and begin provisioning virtual machines in less than 15 minutes, says EMC. Plus, EMC integrates its management software with EMC Secure Remote Support, which allows remote system monitoring by EMC to detect and remedy issues before they become failures. So add the best of cloud services to the infrastructure support mix.

Last but not least, the new VSPEX BLUE Market is akin to an "app store," populated with access to products and 24x7 support from a single vendor, EMC. This consumer-like experience of a context-appropriate procurement apparatus for appliances in the cloud is unique at this deep infrastructure level. It forms a responsive and well-populated marketplace for the validated products and services that admins need, and creates a powerful ecosystem for EMC and VMware partners.

EMC and VMware seem to recognize that the market wants to take proven steps, not blind leaps. The mid-market wants to solve their unique problems. To start, VSPEX BLUE offers just three applications: EMC CloudArray Gateway, which helps turn public cloud storage into an extra tier of capacity; EMC RecoverPoint for Virtual Machines, which protects against application outages; and VMware vSphere Data Protection Advanced, which provides disk-based backup and recovery.

Future offerings may include applications such as virus-scanning tools or software for purchasing capacity from public cloud services, and they may come from third parties, but will be validated by EMC.

These HCIA instances give enterprises and mid-market organizations the means to adapt to cloud at their own pace, with ease and simplicity, and to begin exploiting public cloud services that support on-premises workloads with reliability and security features. The vendors are waking up: the best of virtualization and the best of hardware integration are creating the preferred on-ramps to the cloud.

Disclosure: VMware is a sponsor of BriefingsDirect podcasts that I host and moderate. EMC paid for travel and lodging for a recent trip I made to EMCWorld.



Winning the B2B commerce game: What sales organizations should do differently

Posted By Dana L Gardner, Saturday, May 16, 2015

The next BriefingsDirect thought-leader interview focuses on what winning sales organizations do to separate themselves from the competition by creating market advantage through improved user experiences and better information services.

We'll hear from RAIN Group about a recent study on sales that uncovers what sales leaders do differently to foster loyalty and gain repeat business.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

And we'll also hear from National Business Furniture on how they're leveraging online business networks to enable more collaborative and innovative processes that enhance their relationships, improve customer satisfaction, and boost sales.

Please join our guests, Mike Schultz, President of RAIN Group, based in Framingham, Mass., and Brady Seiberlich, IT e-Procurement and Development Manager at National Business Furniture, based in Milwaukee. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: What's changing the B2B sales dynamic? What can we do about it?

Schultz: It's really interesting. In the world of sales, if you fell asleep in 1982, having just read a sales book, and woke up in 2005 and went back to work, you didn't miss anything. It didn't really change that much.


But there are a couple of things that have been happening in the last 10 years or so that have been making sales a lot different. It has changed more in the last 10 years than it did in the previous 40. So let’s look at two of the things.

The first one is that buyers perceive the offerings that different companies bring to them to be somewhat similar, somewhat interchangeable. What that means is that the sellers are no longer competing on saying, "Hey, here is the product, here is the service, and here's the benefit it’s going to get for you," because the other guy has something that the buyer perceives to be the same.

What they're actually competing on now is how to use and how to apply those services and products so the company actually gets the greatest benefit from them. That’s not actually the power of the offering; that is the power of the ideas, the innovation, and the collaboration that the sellers are bringing to the table. So there's one thing.

The other thing is the asymmetry of information has been changing. It used to be very asymmetrical, because the buyer had all the need and all the desire, but the seller had all the knowledge. Now, buyers can hop online and talk to user groups who have bought from you and see what everyone says about your pricing, and they can find your competitors really quickly. They can get a lot more information.

So there has been a leveling of the playing field, which brings us back to point number one. If the sellers want to compete, they have to be smarter than the average bear, smarter than they used to be. They used to be able to just take orders; they can't do that anymore and still win.

Gardner: Brady, is that what you're facing? What do you do differently about this new sales dynamic?


Seiberlich: I definitely agree with Mike. In the last couple of years, buyers are getting smarter. They're trying to challenge us more. With the Internet, they have the ability to easily price compare, shop products, look at product reviews. They're so much more knowledgeable now.

Another thing that we found with our buyers is that they want the ordering process to be as easy as possible, whether it's through the Internet or an e-procurement system. You have to work a lot harder to make sure the buyer finds you as the easiest way to order.

We've really had to work hard at that and we've had to be able to adjust, because every buyer has needs and they all have different needs. We want to make sure we can cover as many different needs without doing a user experience customization for everybody.

The experience is important

Gardner: It sounds as if the experience of buying and procuring is as important as what you're buying.

Schultz: That's actually what we found from our research. I said that sales has changed more in the last 10 years than it did in the previous 40. Yet our industry is very sleepy. Most people do the same thing, while what they profess to be important amounts to a whole bunch of people saying a whole bunch of different things. It goes all the way up to the Harvard Business Review saying that solution sales is at its end.

They published an article, "The End of Solution Sales," and they published an article, "Selling Is Not About Relationships." So is this true? What's actually going on?

We did a study where we looked at 700 business-to-business (B2B) purchases from buyers who represented $3.1 billion of purchasing power. We wanted to find out what the buyer's experience was like, from the seller they awarded the business to down to the seller that almost made it but came in second place. In sales, the person in first place gets the trip to Aruba, and the person in second place gets the trip back to their office.


What we found first of all, is that the sellers that win don’t just sell differently; they sell radically differently than the sellers that come in the closest second place. [Get a free copy of the RAIN Group report, What Sales Winners Do Differently.]

The product and service playing field was perceived by the buyer to be similar, especially by the time they got down to the last two. Maybe they kicked out some lesser providers early, but when they get to the end, both providers provide the technology, they can both engineer the playing field that we're building, and they can both do the thing that we need them to do.

It actually came down to the buyer experience with the seller and how the seller treated the buyer. What they did with the buyer were the tipping points for why they got awarded the business.

Gardner: Brady, what has changed in terms of your creating a better experience, a simple, direct, maybe even informative process for your customers? How do you accommodate what we have been talking about in terms of improved experience?

Flexible as possible

Seiberlich: We try to be as flexible as possible and we try to provide them with as much information as possible.

Information is huge for us. Back in the days when we first started, we mailed catalogs. For each piece of our furniture that we sell, you probably saw in the catalog seven pieces of information: how big it was, how much it weighed, what colors it came in.

Right now, for every piece of furniture we have, we hold over 100 pieces of information on it, and we display a lot of that on the web. It's an ergonomic chair, it's leather, it raises up and down, it comes with or without arms, things like that. We try to provide as much information as possible, because the shopper works harder.

In the days of a catalog, where you had a catalog at your desk and you opened it up, there was no competition there. On the web, there's plenty of competition and everybody is trying to compete for that same dollar.


We want to make the customer as informed as possible. The customer doesn’t want to necessarily have to call us and say, "Is this brown; how dark is this brown?" We want to give them as much information as possible and inform them, because they want to make the decision themselves and be done with it. We're trying to get better at that.

Gardner: I believe you are in your 40th year now at National Business Furniture. Tell us a little about your company: your scale, where you do business, and what it is precisely that you are selling?

Seiberlich: That is correct. This year we are celebrating our 40th anniversary, which is pretty exciting for us. We sell in the US and in Canada. We opened our first office in Canada a couple of years ago.

The main reason we sell mostly in the US market is what we sell. We sell office furniture: desks, chairs, and bookcases. That stuff is too heavy to ship overseas, and we can't compete with some of the vendors that are already selling over there. So we sell here in the US mostly, and the majority of our business obviously comes from there.

We started as 100 percent catalog. In the early '90s we made a website that was just for browsing purposes. You couldn't shop off of it. In the late 1990s we added the ability to buy off of it, and right now we're up to about a 50/50 split in what comes through the catalog and what comes through via e-commerce. And in e-commerce, we include the Internet, the e-procurement system, and stuff like that.

So we've proven that we're still adjusting with it, but the weird thing is that some of our product lines haven’t changed that much. Traditional furniture is still traditional furniture. We are selling some very similar products, just 40 years later.

Different approach

Gardner: Given this change in the environment with the emphasis on experience and data, making good choices with a lot of different possible choices, if you're a buyer, what are you doing differently in order to keep your business healthy?

Is this a matter of having more strategic long-term predictable sales? Do you go about marketing in a different way? Have you changed the actual selling process in some fashion? How are you adjusting?

Seiberlich: Probably all of the above. We're always looking for new markets to sell to. We've just started to move into medical furniture and we're doing some new things there.

The government has different rules for buying. So we're trying to make sure that we can adhere to those and make sure that's an open market for us. And we continue to try to find better ways to do things. That's what separates us from our competitors.


Everyone who sells office furniture is all selling similar products, around the same price. So we have to do something to differentiate ourselves, and we do that. We try to make the process easy, we try to provide the customer with as much information as possible, and we just want to make it a smooth process.

The days of establishing a relationship and just hoping that will carry you for years, like Mike said, have kind of come and gone. So we've got to work harder to keep our existing customers. We're doing that and also trying to find ways to find new customers, too.

Gardner: We are here at Ariba LIVE. We're hearing a lot about business networks, end-to-end processes, using different partners and different suppliers to create a solution within that end-to-end process. What is it about business networks that helps you attain your goals of a smoother data-driven process for sales?

Seiberlich: When you can prove that you can collaborate over these networks, you have a success that you can show to other buyers. You can say, "We've proven we can do this." It shows that you have established yourselves in these different markets.

I'm sure everybody knows that nobody wants to be the guinea pig and try something new with somebody else. But we've proven that we can work on these different markets and different networks and continue to try to find ways to make it easier. That’s what we're really pushing.

Unpacking the term

Schultz: Dana, I wanted to add one quick thing on that. "Network" is one of those interesting words that you can unpack. You can unpack it in the technology sense that things are networked, but there's also the concept of a network that says that on the other side of this technology, there are people.

As a seller, what you do no longer stays where you do it. It goes out through the technology to other people, and the network amplifies whatever you do.

So, if you're doing a pretty bad job, people are going to hear that it’s a pretty bad job a lot faster than they used to. But if you are doing something interesting, if you are doing something worthwhile, if you are doing something like Brady is talking about, saying, "Wow, this process really used to be a pain and now it's a lot better because of the technology," that will get through to more people.

If you're doing the things that I talked about earlier, if you're selling in ways that help buyers get the most use out of what you're selling, get the most benefit out of what you're selling, it's no longer just words in a catalog saying, "This is how you're going to benefit."


In some ways, you're going to benefit from working with us to get it, not just from the thing itself. The technology amplifies the good sellers, and they end up selling a lot more because it spreads faster.

Gardner: I suppose another part of the technology impact is convenience. When you're already in an environment, an application, a cloud, a network, maybe even a mobile interface, and the seller is in that same environment, if you are a buyer, that has some positive implications. Things can be integrated. Things can be repeatable. The data can be collected, shared, and analyzed.

Tell me a little bit, if you would, Brady, about being in a shared environment technically that also provides grease to the sales gears?

Seiberlich: It definitely does. We have some customers that we transact with here on Ariba, and in the first one, two, or three transactions, we had to work through some difficulties, but by transaction 10, 15, or 20, it's just smooth and it goes right through. That's what we're trying to push with other customers that buy from us; we're trying to get them moved over to the network.

We have a proven track record here. We are the highest rated furniture provider here. We are gold from the Ariba standpoint. So we're trying to push customers to continue to buy from us off of these networks, because we've proven how simple it can be and we want to continue to do that. We want to make the ordering process as simple as possible.

Transaction algorithm

Gardner: Mike, looking a bit forward, if all things become equal in terms of the product and the information that's available, and if we take that to its fullest extent, it really becomes about transactional efficiency, even down to compressing the payment schedule and the negotiating into actual transactions on a larger and larger scale. Where do we end up? Do sales go away altogether, and does it simply become a transaction algorithm?

Schultz: There were predictions about 10 years ago, with e-commerce, when the information asymmetry really started to level out and shift toward buyers, that there were going to be far fewer salespeople in the US economy.

US government data said that 1 out of 9 people working in the United States were in sales; that was in 2000. If you fast forward to now, the massive change has been that there are about 1 in 9 people working in sales. So it hasn’t changed; it’s just that they're not order takers anymore.

The other thing is that while things look the same, they still aren't always necessarily the same.

So the new challenge for buyers is to figure out what are the differences.

If you think about it, all this becomes price pressure. If this goes directly to microeconomics, and we are just buying commodity pork bellies, it has to be the exact same price because the elasticity works that way. Any shift is going to make it go to a different provider. That’s really not the case, because we're not all buying pork bellies.

I don't know about you, but I don't think that Brady is looking really well. Maybe he needs some heart surgery. I have a really cheap surgeon. Would you like to go see him? He's board-certified, and he is a really cheap heart surgeon. It's like, oh jeez.

There is a lot of decision process and a lot of mental baggage built up around what cheap-versus-expensive means, especially because, if you're not talking about pork bellies, it's not necessarily the same.

So the new challenge for buyers is to figure out what are the differences. This law firm says they have the same capabilities at that law firm, but in fact, one law firm is better. The question is how. It’s contingent on buyers and sellers to figure that out together.

That's the issue with law firms, consulting firms, accounting firms: I can't sink my teeth into them, bite them, and tell you which one is thicker or stronger, the way I can tell which chair has a 20-year guarantee versus a 10-year guarantee. I'm just trying to figure out who is actually better, who can serve me better, and who is the right fit. So it's not all commodities.

One other challenge, if you think about it from the buying side, is that it's not a big secret that the purchasing department is not necessarily the career path that the top MBAs coming out of the top schools are dying to pursue.

Complicated purchases

There are some great people in purchasing, but a lot of the time, when we're talking to sellers, we're talking to sellers that are doing $5 and $10 million deals on very complicated things, and the purchasing person they're working with doesn't actually understand the business context of what they're trying to get done. So the sellers ask, "How do I actually get to interact with the business stakeholders when the rules say I can't talk to them?" This is $7 million, and it's being bought like roofing shingles.

It's going to require as much sophistication from the buyers -- to figure out what they really need and what the quality levels really are -- as it does from the sellers to bring forth the right ideas, craft the right solutions, and treat the buyers well.

Gardner: So clearly we've hit on reputation: being in an open, visible network where information can be traded gets to reputation, trust, and a track record. But it also sounds like we're talking about some sort of value-add to the buy.


If other things are equal -- but the experience of buying, of making a decision in a complex environment, is not -- something else is needed, perhaps consulting, data, or analysis. So, Brady, what is potentially a value-add in your business to increase your likelihood of making the sale and then keeping that relationship with the buyer?

Seiberlich: We have a couple of things, but one of the most important is that we've been around for 40 years. If you call either inside sales or customer service, you're going to get somebody with, on average, 10-plus years of furniture experience with us, and that's a big thing. They understand our products. Our vendors come into our office weekly and explain their products. Our salespeople know the products and can really help you find a solution that fits you.

And that's one of our biggest selling points -- our people. That's an important thing for us. They have the knowledge that they need, and they're not just order takers. They're much more. Everybody on our side who answers the phone is a furniture expert. That's what they do.

Gardner: Do you find that those salespeople with that track record, with that depth of knowledge, are taking advantage of things like the Ariba Network to get more data, more analysis to help them? Have they made that leap yet to being data driven, rather than just experience driven?

Seiberlich: We're getting better and we're consistently improving.

I agree with Mike's point: one of the hardest things is making sure that we align ourselves with our buyers' needs, figuring out what's important to them and then making sure we're addressing those situations. That's a challenge, and when you figure it out today, it changes tomorrow. That makes it even more challenging.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  Brady Seiberlich  BriefingsDirect  Dana Gardner  Interarbor Solutions  Mike Schultz  National Business Furniture  RAIN Group 


Ariba's product roadmap for 2015 leads to improved business cloud services

Posted By Dana L Gardner, Monday, May 04, 2015

The next BriefingsDirect thought-leader interview focuses on the Ariba product roadmap for 2015 -- and beyond.

Ariba’s product and services roadmap is rapidly evolving, including improved business cloud services, refined user experience features, and the use of increasingly intelligent networks. BriefingsDirect had an opportunity to learn first-hand how at the recent 2015 Ariba LIVE Conference in Las Vegas.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about the recent news at Ariba LIVE -- and also what to expect from both Ariba and SAP in the coming months -- we sat down with Chris Haydon, Senior Vice President of Product Management at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we get to the Ariba news, what do you see as having changed, developed, or evolved over the past year or so in the business network market?

Haydon: It’s been a very interesting year with a lot of learning and adoption for sure. There's a growing realization in companies that the networked economy is here to stay. You can no longer remain within the four walls of your business.

It really is about understanding that you are part of multiple business networks, not just a business network. There are business networks for finance, business networks for procurement, and so on. How do you leverage and harness those business networks to make your businesses more effective?

Balancing needs

Gardner: So it’s incumbent upon companies to take advantage of all these different networks and services, the data and analysis that’s driven from them. But at the same time, they need to retain simplicity. They need to make their users comfortable with this technology. They need to move toward a more mobile interface.


How do we balance the need for expansion, amid complexity, with simplicity and direct processes?

Haydon: It's a difficult balance, and there are a couple of ways to think about it. Just to pick up on the point about how businesses are changing: the end-user expectation is certainly changing dramatically. Whether it's the millennials coming into the workforce or the nature of apps -- mobile apps -- in our personal lives, the need, the requirement, the desire to have that same experience in our business lives is there.

From an Ariba perspective, we believe our job is to manage complexity for our customers. That’s the value prop that people sign into. When we talk from a usability perspective about managing the complexity, it’s also about thinking about the individual persona or how the end-user really needs to interact to get the work done, how they can learn, and how they can use their different devices to work where they want and how they want.

Gardner: It seems to me that among the technology leaps we're making to accommodate this balance is a greater reliance on the network and network-centric attributes -- intelligence driven into the network. How do you view the role of the network in this balance?

Haydon: I think it's fundamental, and we're definitely seeing it almost as a tipping point. It's no longer just about the transactions, but about the trusted relationships. It's about visibility into the extended value chain, whether that value chain is the supply chain, the financial payment chain, or the logistics chain. It doesn't matter what that process chain or value chain is. It's insight into that trusted community, so you understand that it's secure, that it's scalable, and also that it's reliable and repeatable.


Gardner: It seems like we can put the word "hybrid" in front of so much these days. Tell us a little bit about why SAP HANA is so important to this network services tipping point. Many people think of HANA as a big data platform, but it’s quite a bit more.

Haydon: Yeah, it is. In Ariba we've made strides in leveraging the HANA technology, first with the Spend Visibility program. The great message about HANA is that it's not HANA for technology's sake; it's how HANA enables different business outcomes. That's the exciting thing. Whether it's on the Ariba Network or in our analytical platform, where we see an average 50X to 80X improvement on some of the reports, that's great.

What was really interesting when we put HANA on to our Spend Visibility was that we got more users doing different types of reports because they could do this, they could iterate, they could change, they could experiment in a more interactive and faster way. We saw upticks in the behavior of how customers use their products, and that's the excitement of HANA.

Taking it to the next step, as we look at HANA across our network and our other applications -- in terms of better and different types of reporting on the network, and having real-time visibility and insights from our trusted community -- it's just going to provide a differential level of value to any of the end-users, whether they're buyers, sellers, or any of our partners.

Wider diversity

Gardner: So we have a wider diversity of network participants. We need to connect them. We’re leveraging the network to do that. We're leveraging the ability of a strong platform like HANA to give us analytics and other strong value adds, but we also need to bring that single platform, that single pane of glass value, to the mobile device.

User experience seems to be super important. Tell us a little bit about where you’re heading with that and introduce us to SAP Fiori.

Haydon: It’s a massive focus for us from an innovation perspective.

When we think about our user experience, it's not just about the user interface, albeit a very important part, but it's also the total experience that an end-user or a company has with the Ariba Suite and Business Network.

Fiori is an excellent user-interface design paradigm that SAP has led, and we have adopted Fiori elements and design paradigms within our applications, mobile applications as well as desktop applications.

You will see a vastly updated user interface, based on Fiori design principles, coming out in the summer, and we'll be announcing that here at Ariba LIVE and taking customers through some really interesting demos. But, as you mentioned earlier, it's not just about the user interface. It's really about end users; we call them personas from a product perspective. You're in accounts payable, or you're a purchasing officer. That's the hat you wear.


It really is about how you link, where you work, work anywhere, embracing modern design principles and learning across the whole user experience. We've got some interesting approaches for our mobile device. Let me talk about the crossover there.

We're launching and showing a new mobile app. We launched our first mobile app early this year for Ariba's Procurement suite. It had some great uptake in the first week, when 20 percent of our customers activated and rolled it out, and some of their end-users are progressively scaling that. Again, that's the power of mobile-app delivery. It shows the untapped demand, the untapped potential, in how end-users do, can, and want to interact with business applications today.

At Ariba LIVE 2015, we're also announcing a brand-new application that enables shopping-cart adding and searching for the casual, ad hoc end-user, so they can do their requisitioning and their ordering of contract items or ad hoc items wherever they are.

To finish off, just as excitingly, we're really looking to leverage the mobile device and its abilities to create new user-experience design paradigms. Let me give you an example of what that means. Let's say you're a very conscientious accounts payable clerk. You're on the bus on the way to work, and you know you've got a lot of invoices to process. You might want to note that you need to process an invoice from ACME Inc. before you do one for the next supplier.

On your mobile device, you can’t process detailed information about an invoice, but you can certainly put it in your queue, and when you get to your desktop, there it is at the top of your to-do queue.

Then, when you finish work, maybe you want to push a report on "How did I do today?" You did x things, you did y things, and you have that on your mobile device on the train on the way home. That's the kind of continuity construct that would bring you in, making the user experience about learning and about working where you are.
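Haydon's continuity construct -- flag an item on one device, have it surface first on another -- boils down to a shared, prioritized to-do queue. As a rough sketch (the class, task names, and priorities here are illustrative, not Ariba's API):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                      # lower number = more urgent
    description: str = field(compare=False)

class ContinuityQueue:
    """A to-do queue shared across devices: items flagged on mobile
    surface at the top of the desktop work list."""
    def __init__(self):
        self._heap = []

    def flag(self, description, priority=10):
        heapq.heappush(self._heap, Task(priority, description))

    def next_task(self):
        return heapq.heappop(self._heap).description

# Flagged from the bus on a mobile device...
queue = ContinuityQueue()
queue.flag("Process invoice from next supplier")
queue.flag("Process ACME Inc. invoice", priority=1)

# ...and at the desktop, the urgent item is first in line.
print(queue.next_task())  # -> Process ACME Inc. invoice
```

The design point is that the queue state, not the invoice detail, is what syncs between devices: the heavy processing still happens at the desktop.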

Behavioral aspect

Gardner: Before we go into the list of things that you're doing for 2015, let's tie this high-level discussion -- the networked economy, the power of the network, intelligence driven into the network, the user interface -- to the all-important behavioral aspect of users wanting to use these technologies.

One of the things that's been interesting for me at Ariba LIVE is learning that user productivity is the go-to-market driver. The pull comes from users who say they want these apps; they don't want the old-fashioned way. They want to be able to do some work on the train ride home and have notifications that allow them to push a business process forward or send it back.

So how do you see the future of the total technology mix coming to bear on that user productivity in such a way that they're actually going to demand these capabilities from their employers?

Haydon: It's interesting. Let's use the example of a Chief Procurement Officer. As Chief Procurement Officer, you may have the classic standard benefits in mind: total cost of ownership (TCO), cost reduction, and price reductions. But more and more, Chief Procurement Officers also realize that they have internal customers, their end-users.


If the end users can't adopt the systems and comply with the systems, what's the point? So, to your point, exactly: we're seeing the pull or the push, depending on your point of view, straight from the end user through to the end-of-line outcome.

From an Ariba perspective, how this all comes together is really a couple of things. User design interactions are foremost in our design-thinking approaches; they make products do different things and work together. This also has a great impact on our platform, and this is where SAP and the HANA Cloud Platform give us a differential way to address these problems.

One of these aspects here is to keep up with these demands not necessarily out of left field, but out of specific market or industry requirements.

We need to make sure that we can expand our ecosystem, from an Ariba perspective, to encompass partners and even customers doing their own things with our platform that we don't even know about. We're making some specific investments with HANA and the HANA Cloud Platform to make our network more open, and we're also looking at some targeted extensibility scenarios for real applications.

Gardner: Let's go to the road map for 2015 Ariba products. Let's start with Spend Management. What's going on there?

A lot of innovations

Haydon: In 2014, we brought more than 330 significant features to market, almost one a day. So we have delivered a lot of innovation.

About 89 percent of those were delivered in toggle mode -- configured on or configured off -- and this is important to our ongoing roadmap because we're cloud: we work with our customers in their own on-demand environments, and they entrust their business processes to us. Toggles let our end users and our customers consume our innovation at their own pace, even though it's intrinsic to the product.

That's one big improvement we made in 2014 and we want to carry through in 2015. In terms of spend management, again, we have some great new investment in Ariba. SAP continues to invest in Ariba, and we continue to turn out more innovation.

We have some innovation from enhancing capabilities to support the public sector. We're adding and extending in globalization capabilities. We're adding specific functionality to improve the security, the encryption, of applications.


Then, there are some more targeted features, whether it's improving demand aggregation for our procurement applications, supporting large line levels and outsourcing and contract management applications, or improving our catalog searching capabilities with type-ahead and improved content and publishing management. It's really end to end.

Gardner: There are four buckets within spend management: indirect, contingent labor, direct, and supply chain management. The big new one is travel and expense, via the Concur acquisition. Anything new to offer on better spend management and better spend visibility across these buckets?

Haydon: Of course. When we work with our customers, we have 16-odd years of transactional history on the Ariba Network, and we look at that in conjunction with them. We see these four major spend segments -- indirect and MRO, as you mentioned; direct and supply chain; services and contingent labor; and travel and expense -- and, of course, the distribution of that spend changes per industry.

But what we're really focused on is making sure that we can get end-to-end outcomes for our customers across the source-to-pay process. I'll touch on all of them in turn.

In indirect MRO we're just continuing to drive deeper. We really want to address specific features in terms of compliance and greater spend categories, specifically with Spot Buy, which is a product we are out there trialing with a number of customers right now.

In contingent labor and services management, we've done some excellent work integrating the Ariba platform with the Fieldglass platform, and made some huge strides in linking purchase orders into the Fieldglass platform. We let Fieldglass do what they do great -- they're the number-one market leader -- and bring the invoices back to the network over the common adapter.

In terms of direct materials, logistics, and supply chain, we brought to market, as we mentioned last year, some direct materials supply-chain capability that we're co-innovating on with a number of customers right now. We added subcontracting purchase orders (POs) for complex scenarios in the summer and have done some great work extending the capability to support consumer packaged goods and retail suppliers.

Interesting strides

We've made some really interesting strides there, expanding the spend categories that we can support.

And last but not least, Concur. It's number one in travel, and we're excited to have it as part of the family. From an SAP perspective, when you look at total spend, there's just an unparalleled capability to manage any spend segment. We're working pretty closely with Concur to ensure we have tight integration, and we're working out how we can leverage their invoicing capability as a complement to Ariba's.

Gardner: Line-of-business applications is one of the things that's intrigued me here. Hearing your story unfold, there is this "no middleware, yet expansive integration" -- end-to-end integration across business processes and data.


So in this line-of-business category, explain to me how you can be so inclusive leveraging the technology. How does that work?

Haydon: Let me unpack that a little bit. A resounding message from our customers, particularly since the acquisition, is that they need seamless, simpler integration between our cloud applications and their current applications, whether those are on-premise or not.

I'll talk about Oracle and other clients in a little bit, but specifically for our SAP ERP systems, we’ve really worked hand in glove with our on-premise business-suite partners to understand how we can move from integrate to activate.

And so what we brought to market pretty significantly with the Business Suite is the ability for any SAP Business Suite customer to download an add-on that basically gives them out-of-the-box connectivity to the Ariba Network. We continue to invest in that with S/4HANA upcoming, where we're planning to have native connectivity to the Ariba Network as a standard feature of S/4HANA.

For our other customers, the Oracle customers and other major ERP systems out there, we continue to invest in open adapters to enable their procurement and finance processes across the network or with any of our cloud applications.

Gardner: There's something that's always important. We leave it to the end, but we probably shouldn't -- risk management. It seems to me that you're building more inherent risk-management features inside these applications and processes. It's another function of the technology. When you have great network-centric capabilities and a solid single platform to work from, you can start to do this. Tell us a little bit more about that.

Emerging area

Haydon: This is a really exciting and emerging area. More and more leading-practice companies are starting to manage their procurement and their supply chains on a risk basis: the risk to continuity of supply, security of supply. What happens if x? What happens if y? Look at your supply chain: if there is, heaven forbid, some contamination or traceability issue somewhere in it, and you're a large company or even a small company, you're now held accountable.

How do we start helping companies understand the risk that exists within their supply chain? We think that the business network is the best way to make sense of the risk that exists in your supply chain. Why?

One, because it's a connected community; and two, because of the premise: we already have the transactions -- $750 billion-plus in spend -- and we already have a million-plus trusted, connected relationships. But that's just the first step.

We also think about where we can get differential, third-party inputs on different dimensions, and we think it's these risk dimensions, or domains of information, that matter -- whether it's safety, performance, innovation, diversity, environment, or financial risk. It could be any of these domains: information from Dun and Bradstreet, or information from Made In A Free World, which has a global slavery index. Whatever these dimensions of information are, we want to bring them into our applications in the context of the transaction and in the context of the end-user.

Imagine when you do a sourcing event if you could be notified of some disruption or some type of risk in your supply chain before you finally award that sourcing event or before you finally sign the contract. That provides a differential level of outcome that can only really be delivered through a business network in a community.
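As a thought experiment, the multi-domain risk idea Haydon describes reduces to a weighted aggregation of per-domain scores feeding an alert before a sourcing award. The domain names, weights, and threshold below are illustrative assumptions, not Ariba's actual model:

```python
def supplier_risk_score(domain_scores, weights=None):
    """Combine per-domain risk scores (0 = no risk, 1 = severe) into one score.

    domain_scores: dict like {"financial": 0.2, "environment": 0.6, ...}
    weights: optional dict of relative domain importance; defaults to equal.
    """
    if weights is None:
        weights = {domain: 1.0 for domain in domain_scores}
    total_weight = sum(weights[d] for d in domain_scores)
    return sum(domain_scores[d] * weights[d] for d in domain_scores) / total_weight

# Hypothetical third-party inputs for one supplier.
scores = {"financial": 0.2, "environment": 0.6, "safety": 0.1, "diversity": 0.3}
risk = supplier_risk_score(scores, weights={"financial": 2.0, "environment": 1.0,
                                            "safety": 1.0, "diversity": 1.0})

ALERT_THRESHOLD = 0.5  # an arbitrary cut-off for illustration
if risk > ALERT_THRESHOLD:
    print("Flag this supplier before awarding the sourcing event")
```

The interesting engineering problem, which the sketch glosses over, is sourcing those domain scores from third parties and keeping them current in the context of each transaction.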

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  BriefingsDirect  business network  Chris Haydon  Dana Gardner  Interarbor Solutions  networked economy 


How Globe Testing helps startups make the leap to cloud- and mobile-first development

Posted By Dana L Gardner, Thursday, April 30, 2015

This latest BriefingsDirect mobile development innovation discussion examines how Globe Testing, based in Madrid, helps startups make the leap to cloud-first and mobile-first software development.

We'll explore how Globe Testing pushes the envelope on Agile development and applications development management using HP tools and platforms.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about modern software testing as a service, we're joined by Jose Aracil, CEO of Globe Testing, based in the company's Berlin office. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Globe Testing. Are you strictly a testing organization? Do you do anything else? And how long have you been in existence?


Aracil: We're a testing organization, and our services are around the Application Development Management (ADM) portfolio for HP Software. We work with tools such as HP LoadRunner, HP Quality Center, HP Diagnostics, and so on. We've been around for four years now, although most of our employees actually come from either HP Software or, back in the day, from Mercury Interactive. So, you could say that we're the real experts in this arena.

Gardner: Jose, what are the big issues facing software developers today? Obviously, speed has always been an issue and working quality into the process from start to finish has always been important, but is there anything particularly new or pressing about today's market when it comes to software development?

Scalability is key

Aracil: Scalability is a big issue. These days, most of the cloud providers would say that they can easily scale your instances, but for startups there are some hidden costs. If you're not coding properly, if your code is not properly optimized, the app might be able to scale -- but that’s going to have a huge impact on your books.

Therefore, the return on investment (ROI) when you're looking at HP Software is very clear. You work with the toolset. You have proper services, such as Globe Testing. You optimize your applications. And that’s going to make them cheaper to run in the long term.


There are also things such as response time. Customers are very impatient. The old rule was that websites shouldn't take more than three seconds to load, but these days it's one second. If it's not instant, you just go and look for a different website. So response time is also something that is very worrying for our customers.

Gardner: So it sounds like cloud-first. We're talking about high scale, availability, and performance, but without being able to anticipate what that scale might be at any given time. Therefore, creating a test environment where you can assume cloud-level performance will be required, and test against it, becomes all the more important.

Aracil: Definitely. You need to look at performance in two ways. The first one is before the app goes into production in your environment. You need to be able to optimize the code there and make sure that your code is working properly and that the performance is up to your standard. Then, you need to run a number of simulations to see how the application is going to scale.

You might not reach the final numbers, and obviously it's very expensive to have those staging environments. You might not want to test with large numbers of users, but at least you need to know how the app behaves whenever you increase the load by 20 percent, 50 percent, and so on.
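That stepped approach -- baseline, then plus 20 percent, then plus 50 percent -- can be sketched with a simple harness. In practice you'd use a purpose-built tool such as HP LoadRunner or Performance Center; this toy version just shows the shape of the idea, with a stand-in request function as the assumption:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def fake_request():
    """Stand-in for an HTTP call; replace with a real request to a staging app."""
    time.sleep(0.01)
    return 0.01  # simulated response time in seconds

def run_step(concurrent_users, request_fn=fake_request):
    """Fire one batch of concurrent requests and report the mean response time."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: request_fn(), range(concurrent_users)))
    return mean(latencies)

# Step the load up from a baseline and watch how response time degrades.
baseline_users = 50
for multiplier in (1.0, 1.2, 1.5, 2.0):  # baseline, +20%, +50%, +100%
    users = int(baseline_users * multiplier)
    avg = run_step(users)
    print(f"{users:4d} users -> mean response {avg * 1000:.1f} ms")
```

The point of stepping rather than jumping straight to peak load is that the shape of the degradation curve, not any single number, tells you how the app will scale.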


The second aspect that you need to be looking at is when the app is in production. You can't just go into production and forget about the app. You need to carry on monitoring that app, make sure that you anticipate problems, and know about those problems before your end users call to tell you that your app is not up and running.

For both situations HP Software has different tools. You can count on HP Performance Center and HP Diagnostics when you're in preproduction in your staging environment. Once you go live, you have different toolsets such as AppPulse, for example, which can monitor your application constantly. It's available as software as a service (SaaS). So it's very well-suited for new startups that are coming out every day with very interesting pricing models.
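Stripped to its essence, the kind of synthetic availability check that a monitoring service like AppPulse runs continuously looks something like this; the URL, timeout, and one-second response budget are assumptions for illustration:

```python
import time
from urllib.request import urlopen
from urllib.error import URLError

def check_endpoint(url, timeout=5, max_seconds=1.0):
    """Return (healthy, detail): healthy only if the app answers 200 in budget."""
    start = time.monotonic()
    try:
        with urlopen(url, timeout=timeout) as response:
            elapsed = time.monotonic() - start
            if response.status != 200:
                return False, f"HTTP {response.status}"
            if elapsed > max_seconds:
                return False, f"slow: {elapsed:.2f}s"
            return True, f"ok in {elapsed:.2f}s"
    except URLError as err:
        return False, f"unreachable: {err.reason}"

# A real monitor would loop on a schedule and page someone on repeated failures.
healthy, detail = check_endpoint("http://localhost:9999/health")  # placeholder URL
print("HEALTHY" if healthy else "ALERT", detail)
```

Knowing about a failure before the end user calls comes down to running checks like this from outside the data center, on the same path real users take.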

Gardner: You're based in Berlin, and that's a hotbed of startup activity in Europe. Tell us what else is important to startups. I have to imagine that mobile and being ready to produce an application that can run in a variety of mobile environments is important, too.

Mobile is hot

Aracil: Definitely. Mobile is very hot right now in Berlin. Most of the startups we talk to are facing the same issue, which is compatibility. They all want to support every single platform available. We're not only talking about mobile and tablet devices, but we're also talking about the smart TVs and the wide array of systems that now should support the different applications that they're developing.

So being able to test on multiple operating systems and platforms, and to automate as much as possible, is very important for them. They need tools that are very flexible and can handle any given protocol. Again, HP Software, with tools such as Unified Functional Testing (UFT), can help them.

Mobile Center, which was just released by HP Software, is also very interesting for startups, and for large enterprises as well, because we're seeing the same need there. Banking, for example, an industry that is usually very stable and very slow-paced, is also adopting mobile very quickly. Everyone wants to check their bank accounts online using their iPad, iPhone, or Android tablets and phones, and it needs to work on all of those.


Gardner: Now, going to those enterprise customers: they're concerned about mobile, of course, but they're also now more and more concerned about DevOps and tightening the relationship between their operating environment and their test and development organizations. How do some of these tools and approaches, particularly testing as a service, come to bear on helping organizations become better at DevOps?

Aracil: DevOps is a very hot word these days, and HP has come a long way, producing lots of innovation, especially with the latest releases. You not only need to take care of the testers, as in the old days, with manual testing, automation, and test management. Now you also need to make sure that whatever assets you develop in pre-production can be reused when you go into production.


Just to give you an example, with HP LoadRunner, the same scripts can be run in production to make sure that the system is still up and running. That also tightens the relationship between your Dev team and your Operations team. They work together much more than they used to.

Gardner: Okay, looking increasingly at performance testing and development in general as a service, how are these organizations, both the startups and the enterprises, adapting to that? Early on, cloud was attractive to developers because they could fire up virtualized environments, use them, shut them down, and be flexible. But what about testing for your organization? Do you rely on the cloud entirely, and how do you see that progressing?

Aracil: To give you an example, customers want their applications tested the same way real users access them, which means over the Internet. So it's not valid to test applications only from inside the data center. You need to use the cloud and access the applications from multiple locations. The old testing strategy isn't valid anymore.

For us, Globe Testing as a Service is very important. Right now, we provide customers with geographically distributed teams. They can build test automation remotely and ship it to the customers so the tests run locally, and they can run performance tests directly from the cloud, the same way real users will access the application.

And you can choose multiple locations, even simulating the kinds of connections these users have. So you can simulate a 3G connection, a Wi-Fi connection, and the like.
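A back-of-envelope model shows why those connection profiles matter. This is a hedged sketch, not any load tool's actual model; the bandwidth and latency figures are invented for illustration:

```python
# Rough model of how a load tool paces requests to emulate network
# profiles such as 3G or Wi-Fi. Figures below are illustrative only.

NETWORK_PROFILES = {
    "3g":   {"bandwidth_kbps": 1_000,  "latency_ms": 300},
    "wifi": {"bandwidth_kbps": 30_000, "latency_ms": 20},
}

def response_time_ms(payload_kb: float, profile: str) -> float:
    """Estimate end-to-end time for one request under a network profile."""
    p = NETWORK_PROFILES[profile]
    transfer_ms = payload_kb * 8 / p["bandwidth_kbps"] * 1000  # KB -> kilobits -> ms
    return p["latency_ms"] + transfer_ms

# The same 500 KB page is dramatically slower over simulated 3G:
print(round(response_time_ms(500, "3g")))   # milliseconds over 3G
print(round(response_time_ms(500, "wifi"))) # milliseconds over Wi-Fi
```

A transaction that passes its response-time targets from a fast data-center network can still fail once replayed under a 3G profile, which is why the connection simulation matters.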

Other trends

Gardner: I suppose other trends we're seeing are rapid iterations and microservices. The use of application programming interfaces (APIs) is increasing. All of these, I think, are conducive to a cloud testing environment, so that you can be rapid and bring in services. How is that working? How do you see your customers working toward cloud-first, mobile-first, and these more rapid innovations, even microservices? Maybe you can provide some examples to illustrate this.

Aracil: In the old days, most testing was done from an end-to-end perspective. You would run a test case that was heavily focused on the front end and exercised the end-to-end flow. These days, for the kinds of customers you mentioned, we focus on the services themselves. We need to be able to develop some of the scripts before the end services are up and running, in which case things such as Service Virtualization from HP Software are very useful as well.

For example, one of our customers is Ticketmaster, a large online retailer that sells tickets for concerts. Whenever there's a big gig happening in town, whenever one of the large bands shows up, tickets run out extremely quickly.

Their platform goes from an average of hundreds of users a day to all of a sudden thousands of users in a very short period of time. They need to be able to scale very quickly to cope with that load. For that, we need to test from the cloud and we need to test constantly on each one of those little microservices to make sure that everything is going to scale properly. For that, HP LoadRunner is the tool that we chose.
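The shape of such a test can be sketched as a simple spike schedule of virtual users. This is a hypothetical illustration, with invented numbers, of the kind of ramp a load tool would execute against those microservices:

```python
# Generate a linear ramp from baseline traffic to a ticket-sale peak.
# Baseline, peak, and step counts here are made up for the example.

def spike_profile(baseline: int, peak: int, ramp_steps: int) -> list[int]:
    """Virtual-user counts stepping linearly from baseline to peak."""
    step = (peak - baseline) / ramp_steps
    return [round(baseline + step * i) for i in range(ramp_steps + 1)]

print(spike_profile(baseline=100, peak=10_000, ramp_steps=5))
```

Running each step against every individual service, rather than only the front end, is what reveals which microservice fails to scale first.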


Gardner: Do you have any examples of companies that are doing Application Development Management (ADM), that is to say more of an inclusive complete application lifecycle approach? Are they thinking about this holistically, making it a core competency for them? How does that help them? Is there an economic benefit, in addition to some of these technical benefits, when you adopt a full lifecycle approach to development, test, and deployment?

Aracil: To give you an example of economic benefit, we did a project for a very large startup whose systems were all cloud-based. We basically used HP LoadRunner and HP Diagnostics to look at the code and optimize it in conjunction with their development team. By optimizing that code, they reduced the number of cloud instances required by one-third, which means a 33 percent savings on their monthly bill. That's straight savings, which is very important.

Another example is a large telecommunications company in Switzerland. Sometimes we focus not only on the benefits for IT, but also on the people who actually use those services: for example, the people who go to retail shops to get a new iPhone or to activate a new contract.


If the systems are not fast enough, you will sometimes see queues of people, which turns into lower sales. If you optimize those systems, the agents can process contracts much more quickly. In this specific case, using Performance Center reduced processing time to one-fifth. That meant that the following Christmas, queues literally disappeared from all those retail shops, which turned into higher sales for the customer.

Gardner: Jose, what about the future? What is of interest to you as an HP partner? You mentioned the mobile test products and services. Is there anything else of particular interest, or anything on the big data side that you can bring to bear on development or that can help developers make better use of analytics?

Big data

Aracil: There are a number of innovations coming out this year that are extremely interesting to us, things such as HP AppPulse Mobile and StormRunner, both new and very innovative tools.

When it comes to big data, I'm very excited to see the next releases in the ALM suite from HP, because I think they will make heavy use of big data. They will try to harness all the information and data that testers enter into the application, starting from requirements. Predictive testing and traceability will be much better handled by this kind of big data system. We will need to wait a few more months, but there are new innovations coming in that area as well.
Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Agile  BriefingsDirect  cloud computing  Dana Gardner  Globe Testing  HP  HPDiscover  Interarbor Solutions  Jose Aracil  mobile app development  mobile computing 


Big data detects and eradicates slavery risks from across B2B supply chains

Posted By Dana L Gardner, Tuesday, April 28, 2015

Business networks and big data are joining to give companies new means to identify and eliminate slavery and other labor risks from across their complex global supply chains.

Data-savvy B2B participants in these networks can now uncover unsavory and illegal labor practices that may be hidden deep inside multi-level supplier ecosystems.

The next BriefingsDirect best business practices panel discussion focuses on how Made in a Free World, a nonprofit group in San Francisco, is partnering with Ariba, an SAP company, to shine more light across the supply chain networks to not only stem these labor practices, but also reduce the risks that companies may unwittingly incur from within their own pool of buying.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To explain practical and effective approaches to forced-labor risk determination and mitigation, we're joined by Tim Minahan, Senior Vice President of Ariba, and Justin Dillon, CEO and Founder at Made In A Free World in San Francisco. The panel is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Dillon: I learned about this slavery issue about 10 years ago, just reading an article in The New York Times, and really wanted to figure out a way to get more people involved. That led to Made In A Free World, a nonprofit organization that was started just a few years ago.


We began with consumers, just everyday individuals getting involved and leveraging their own personal networks. That has now graduated into figuring out how to get businesses to leverage their networks.

Gardner: What is the problem we are talking about? Is this just slavery? Is this breaking the law in terms of other labor regulations?

Dillon: We define slavery as anyone who is being forced to work without pay under threat of violence, being economically exploited and unable to walk away. There are over 30 million people who fall under that definition today. In some cases, these people find themselves in the sex industry, but in most cases, they're in informal sectors, agricultural or service industries, much of which is finding its way into supply chains.

Part of what we believe is that we have to find a way to be able to connect the dots and figure out how we can use the systems that currently exist to protect the world's most vulnerable resource.

Gardner: Tim, why is this becoming such an important issue now for nearly any business?

Minahan: Over the past decade, as companies began to outsource more processes, manufacturing, and assembly to low-cost regions, they've really looked to drive costs down. Unfortunately, what they haven't done is take a close look at their sub-tier supply chains.


So they might have outsourced a process, but they didn't outsource accountability for the fact that there may be forced labor in their suppliers' suppliers. That's a real issue, getting that transparency into that problem, understanding whether there's a potential risk for the threat of forced labor in a sub-tier supply chain.

Gardner: How big of a problem is this in terms of scope, depth and prevalence? Is this something that happens from time to time in rare instances or is this perhaps something that's quite a bit more common than most people think?

Minahan: Dana, this is far more pervasive than most people think. Slavery really has no boundaries. There are incidences of forced labor in all industries, from conflict minerals in the Congo to fishing in Malaysia to, unfortunately, migrant workers right here in the United States.

Palm oil problem

Dillon: It's really not a problem that "exists over there." Just one statistic: by 2020, half of all the products you can purchase in a grocery store will contain a single commodity, palm oil, which has a huge incidence of forced labor, particularly around Malaysia and Indonesia.

This small commodity, mostly because it's so easy to pick and harvest, is now finding its way into a myriad of products -- and that's just in agriculture.

Gardner: Why should companies be concerned? Isn’t that someone else's issue, if it’s below the level of their immediate reach?

Minahan: You can certainly outsource process or manufacturing, but you can't really outsource accountability. Secondly, there is a big movement afoot from regulators, both here in the United States, with federal laws and California state laws, as well as overseas in the UK, to hold companies accountable not just for their first-tier supply chains, but for their sub-tier supply chains.

Gardner: And, Justin, of course visibility being so prominent now -- a camera around every corner, social media -- people can easily react and cause irreparable damage to a company's reputation. Do you have any examples of where this has come to bear, where not knowing what's going on within your supply chain became an issue, economically and otherwise?


Dillon: Well, people love to hate the ones they love. So everyone complains about Apple products while they're using their Apple iPhone. But in a recent article in The New York Times, one of the supply-chain people at Apple said that the days of sloppy globalization are over and that they're taking this quite seriously. I would argue that some of Apple's work is among the most innovative human rights work we've seen.

They've dug deeper into their sub-tier suppliers and, if their reputation as innovators has anything to do with their products, it certainly has to do with their supply chain as well. They're a great example of a company that sees these challenges, sees the connection not only to its brand but also to its products, and is doing remediation work against them.

Gardner: Okay, we understand the depth of the issue and why it's important. Now, how do we get the tools to combat this effectively? What have you done? I understand you have a database, and it's ongoing. Tell me a little bit about the tools you're bringing to bear to solve this problem and help companies take better accountability.

Dillon: The most important thing to do first is to realize that we aren't operating in the 1990s anymore, in the sweatshop era. That was an era where we found problems in supply chains and started to build solutions around them: auditing, monitoring, and all the rest. That was 25 years ago.

Era of big data

We're now in the 21st century, in the era of big data, and we need to use those tools to combat 21st-century problems. For us as an organization, we realized this was a space where we could be helpful to business. Our mission is to use the free market to free people. That's what we do as an organization.

We realized that there is a lot of data missing, and a lot of synthesis of data missing. So we, as an organization, decided to build on the bible of databases when it comes to supply chains, the UNSPSC (the United Nations Standard Products and Services Code). Then, on top of that taxonomy, we built a risk analysis on every single good, service, or commodity that can be bought or sold. That becomes the lingua franca, so to speak, of supply chain risk management.

Gardner: How does it work? When a company comes in and gives you information about who their suppliers are, how do you come up with inference, insight and some sort of a predictive capability that identifies the risk or the probability that they could be in trouble?

Dillon: We use all the best databases that currently exist on the issue: everything from forced-labor databases to child-labor databases to rule-of-law, governance, migration, and trade-flow data. All of that is synthesized into an algorithm that can be applied to any organization's spend data. You get a dashboard on all that spend, which gives you some optics into your sub-tier suppliers, which is where we need the optics. It's not a crystal ball, but it's the next best thing.
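The synthesis Dillon describes can be pictured as a weighted roll-up of indicator scores over a spend file. The weights, categories, and figures below are hypothetical, not Made In A Free World's actual algorithm:

```python
# Combine several public risk indicators per commodity into one score,
# then map each commodity in a company's spend to that score.
# All weights and indicator values are invented for illustration.

WEIGHTS = {"forced_labor": 0.4, "child_labor": 0.3, "rule_of_law": 0.3}

def commodity_risk(indicators: dict[str, float]) -> float:
    """Weighted average of per-source risk indicators (each 0..1)."""
    return sum(WEIGHTS[k] * v for k, v in indicators.items())

def spend_dashboard(spend: dict[str, float], risk: dict[str, dict]) -> dict:
    """Map each commodity in a spend file to a rounded risk score."""
    return {item: round(commodity_risk(risk[item]), 2) for item in spend}

risk_db = {
    "palm_oil":     {"forced_labor": 0.9, "child_labor": 0.7, "rule_of_law": 0.6},
    "office_paper": {"forced_labor": 0.1, "child_labor": 0.1, "rule_of_law": 0.2},
}
print(spend_dashboard({"palm_oil": 2e6, "office_paper": 5e4}, risk_db))
```

The real system draws on hundreds of data sources rather than three, but the principle is the same: many noisy indicators become one comparable score per line of spend.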

That database and analysis are now available to anyone, of any size, in any sector, to leverage their influence to the extent that they can. We've recognized that you can't be everywhere at all times, but you're somewhere at some time, and that's where we feel any company can have the greatest influence.

Gardner: Now, Tim, the labor issues that we’re talking about are certainly a risk issue, but at the same time, we’re looking to find more ways to automate and make visible other types of risk, any kind of risk, in the supply chain in procurement, buy-and-sell environment.


Tell us a little bit about how this particular risk fits into a larger risk category, and then how the databases for each of them, or many of them, come together for a whole greater than the sum of the parts.

Minahan: As Justin said, the information here is shining a light on a particular issue, being able to amass, for the very first time, data points from hundreds of different data sources around the world to predict and project the threat of forced labor.

Extending that further, that's one risk indicator that companies need to manage. Companies are managing a whole host of sustainability issues around social and eco-responsibility, but also financial risk, and threat of disruption risk.

Being able to pull all those together into a common risk factor is what we're attempting to do through the power of business networks. Connecting the world's businesses, connecting their supply chains, and automating their processes was phase one.

Phase two is harnessing all the information from all of those interactions to provide a better level of visibility that raises that transparency for everyone. Then, you can predict future risk. Being able to tap into what Made In A Free World has done in this particular area, pull that into the network, and expose that as another risk factor that companies can evaluate is a very powerful opportunity for us together.

Gardner: As we've seen in other aspects of the networked economy, the better the data, the more influence and the more action result from that data, which encourages more people to see the value and add more data back into the pool, and so on. Tell us about the news you made in April at the Ariba LIVE conference and how that works into this notion of a virtuous adoption cycle.

New partnership

Minahan: Ariba and Made In A Free World announced a new partnership that brings together the power of the Made In A Free World database, with its predictive analytics on forced labor, and the power of the Ariba toolset and the Ariba Network. It helps the Global 2000 and beyond gain the transparency they need to identify potential risk, in this case of forced labor in their supply chains, and to take action and rectify it.

Gardner: Justin, how do you see this announcement benefiting your cause?

Dillon: Well, it moves us past the point of saying there is nothing that you can do as a company. That is no longer an option on the table. What you're going to do is the next question. We have removed the word "if" in this conversation -- and we couldn’t have done that without Ariba.

And we can see how the Ariba Network is going to be helpful, because companies are coming to us and to the federal government saying, "What are we supposed to do? How do we move forward? This seems like a huge problem." They're right, and the solution is right in front of them.

Gardner: Tim, for organizations that are now more aware of this problem, and of the general risk issue across supply chains, how do they start? Where do they go to begin the process for getting better control that allows them to have better accountability?

Minahan: That's exactly what Ariba and Made In A Free World are trying to do: provide the tools necessary for companies to get started. We mentioned the technical solution that we're bringing to bear to allow folks within the Ariba community to access the Made In A Free World data and project potential issues of forced labor in their supply chains.

Together, through that effort, we've developed a playbook that provides suggested guidelines for folks to get started. You can’t fix what you don’t know about. You can’t improve without having some understanding of where you fit in the mix. We believe that data and doing an analysis on the freedom platform is the best way for a company to get started.

We think it's the best synthesis of practices, data, influence, and the network effect that anyone can begin to take advantage of. We suggest that companies start to look at their own risk based on what they are buying and based on their own exposure. With every company that comes to us, we're able to do this risk analysis, and they're getting new insights.

In addition, there is a whole ecosystem of companies that are looking hard at this issue. The thing that's most exciting is that everyone is very transparent and willing to share. So you have companies from Patagonia to Apple and the like sharing their practices freely, because this is an issue we all want to address. The business world has the opportunity to address probably the most serious issue facing us today, and that is modern slavery.

Learning more

We have every expectation that our partnership with Ariba is going to improve this tool, not just for us, but for the planet.

Gardner: It strikes me that this is a game of numbers. If enough companies examine their supply chains sufficiently and then eradicate the areas where there is trouble, the almighty dollar, peso, or whatever currency it may be, comes to bear, and things like corruption don't work and bad labor practices don't get supported. Is this a rolling-thunder sort of thing? If so, what's the timeframe? Is this something that can be solved in a fairly short amount of time if people actually come together and work on it?

Dillon: To me, it's the greatest story ever told. This is the thing that we all appreciate, and frankly we all benefit from it. We're all benefiting from a free market, and the way we look at change in our organization is that change is more about judo than karate: how can we use the force against itself to actually create change? The marketplace is the way to do that.

So yes, this can move quite quickly, in my opinion much more quickly than governments can move. We think governments are going to be followers in this case, and we fully expect them to be.

Gardner: Tim, if companies do this right, if they leverage big data and examine risk properly, what do they get, other than the direct reduction of that particular risk? It strikes me that we're not just tackling risk, but putting in place a systematic ability to manage risk over time, which is far more powerful. Is that what we're going to get?

Looking at risk

Minahan: Absolutely. Companies are looking at risk holistically, and this is one key vital input into that. As they continue to address it and, as Justin said, use free markets as a powerful lever to free people, they can, through their buying patterns and buying policies, begin to adjust the market. They can shine a light not just on their own supply chains but change sub-tier supply chain practices to make sure there is fair labor. It becomes unappealing to use forced labor, and a detriment to a supplier's ability to win new business.

That's really what the power of data and the power of business networks can deliver in this scenario.

Dillon: Businesses are the heroes in this story. It's not the nonprofits, and it's not government. Anytime you associate business with human rights, businesses are often cast as Darth Vader, when they actually think of themselves as Luke Skywalker in the story. For us and Ariba, I don't need to put that on them, but we are just Yoda.

We're giving them the tools that they need to fight the evil empire and we are going to do it together and celebrate it together. But this is one thing that we'll have to do in concert. We all have individual roles to play, and business has a very clear, distinct role to play.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tags:  Ariba  Ariba LIVE  Ariba Network  BriefingsDirect  Dana Gardner  forced labor  Interarbor Solutions  Justin Dillon  Made in a Free World  SAP  slavery  Tim Minahan 


ECommerce portal Avito uses big data to master rapid fraud detection

Posted By Dana L Gardner, Wednesday, April 22, 2015

This BriefingsDirect big data innovation discussion examines how Avito, a Russian eCommerce site and portal, uses big data technology to improve fraud detection, as well as better understand how their users adapt to new advertising approaches.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how big data offers new insights to the eCommerce portal user experience, BriefingsDirect sat down with Nikolay Golov, Chief Data Warehousing Architect at Avito in Moscow. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It sounds like Avito is the Craigslist of Russia. Tell us a little bit about your site and your business.

Golov: Yes, Avito is a Russian Craigslist. It's a big site and also the biggest search engine for some goods. We at Avito see more searches, for example, for iPhones than Google or Yandex do. Yandex is a Russian Google.

Gardner: Does Avito cover all types of goods, services, business-to-business commerce?

Become a member of MyVertica
Register now
And gain access to the Free HP Vertica Community Edition.

Golov: You can sell almost anything that can be bought in the market on Avito. You can sell cars, you can sell houses, or rent them, for example. You can even find boats or business jets. We right now have about three business jets listed.

Size advantage

The main advantage of Avito is, first, its size. Everybody in Russia knows that if you want to buy or sell something, the best place for it is Avito. That's first.


Second is speed. It is very easy to use. We have a very simple interface. So we must keep these two advantages.

But there are also some people who want to use Avito to sell weapons, drugs, and prohibited medicines. It's absolutely critical for Avito to keep it all clean, to prevent such items from appearing in the queries of our visitors.

We're growing very fast, and if we relied on human moderators, we would have to increase our moderation expenses in a linear progression as we grow. So the only way to avoid a linear increase in expenses is automation.
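The arithmetic behind that point is simple. Here is a hypothetical cost model (all numbers invented) showing how automated screening breaks the linear link between listing volume and moderator headcount:

```python
import math

# If every item is reviewed by hand, headcount grows linearly with volume.
# If automation flags only a small fraction for human review, it doesn't.

def moderators_needed(items_per_day: int, per_moderator: int,
                      auto_flag_rate: float = 1.0) -> int:
    """Moderators required when only `auto_flag_rate` of items reach humans."""
    return math.ceil(items_per_day * auto_flag_rate / per_moderator)

print(moderators_needed(500_000, per_moderator=2_000))                      # fully manual
print(moderators_needed(500_000, per_moderator=2_000, auto_flag_rate=0.05)) # 5% flagged
```

With automated screening in front, doubling the listing volume raises only the flagged workload, not the whole review burden, which matches the result Golov reports later in the interview.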

Gardner: In order to rapidly decide what should or should not appear on your site, you've decided to use a data warehouse that provides streaming, near-real-time data automation. What were your requirements for that technology?

Golov: We need to be able to perform fast fraud detection, so the warehouse has to have very little delay.


Second, we have to keep data for long periods of time to train our data mining algorithms, to create reports, and to analyze trends. So our data warehouse has to be very big. It has to store months, possibly years, of data. It has to be fast, or only slightly delayed, and it has to be big.

Third, we're developing very quickly. We're adding some new services, and we're integrating with partners. Not long ago, for example, we added information from Google AdWords to optimize banners. So the warehouse must be very flexible. It must be able to grow in all three ways.

Gardner: How long have you been using HP Vertica and how did you come to choose that particular platform?

Golov: Well over a year now. We chose Vertica for two main advantages. First, data load speed; the I/O speed provided by Vertica is awesome.


Second is its ability to scale out on commodity hardware. If you have new requirements that call for more performance, you can just buy new commodity hardware, and the system's power simply increases.

It’s great and it can be done really fast. Vertica was the winner.

Measuring the impact

Gardner: Do you have a sense of what the performance and characteristics of Vertica and your data warehouse have gotten for you? Do you have a sense of reduced fraud by X percent or better analytics that have given you a business advantage of some sort? Are there ways to measure the true impact?

Golov: Avito grew really fast over the past year. We had a moderation team of about 250 people at the beginning of this process. Now, we have the same moderation team, but the number of items has increased two-fold. I suppose that's one of the best measures that can be used.

Gardner: Fair enough. Now, looking to the future: when your margins and revenue come from the ability to place advertisements, improving the performance and value of the actual distribution of ads, and the costs associated with it, is critical.

In addition to rapid fraud detection and protection, is there a value from your analytics that refines the business algorithms and therefore the retail value to your customers?


Golov: We're creating more products. Their main aim is to build our own tool for optimizing advertising channels. We have banners, marketing campaigns, and SMS. We've achieved some results in our reporting and in fraud prevention. We'll continue to work in that direction, and we're planning to add new types of functionality to our data warehouse.

Gardner: It certainly seems that a data warehouse delivers a tactical benefit but then over time moves to a strategic benefit. The more data, inference, and understanding you have of your processes, the more powerful you can become as a total business.

Golov: Yes. One of my data warehousing teachers explained the role of a data warehouse in an enterprise: it's like the diesel engine inside a ship. It just works, works, and works, and everything around it is hot. You can create various tools around it to enhance it, to make things better.

But there must always be something deep inside that continuously provides all of the associated tools with power and strong data services for all sides of the business.


Gardner: I wonder about others who are listening to you and saying, "We really need that core platform in order to build out these other values over time." Do you have any lessons learned that you might share? That is to say, for someone starting out to develop their own data warehouse and business intelligence and analytics capabilities, what advice would you give?

Be flexible

Golov: First, you have to be flexible. If you ask the business whether its requirements will change, they'll tell you that they won't, absolutely, every time. And within two months, the requirements change anyway. If your data warehouse isn't ready to change to deliver the needed data and analytics, it will be a disaster. That's first.

Second, there will always be errors in data and there will be gaps, so it's absolutely critical to build the data warehouse together with an automated data quality system that automatically controls and monitors the quality of all the data. This will help you see problems when they occur.
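Such a quality system can start as a handful of rules run over every incoming batch. Here is a minimal sketch, with illustrative field names and rules, of the kind of automated check Golov recommends:

```python
# Run simple quality rules over a batch of incoming rows and report
# gaps or errors instead of letting them slip into the warehouse
# silently. Field names and rules are invented for the example.

def check_batch(rows: list[dict]) -> list[str]:
    """Return human-readable descriptions of quality issues in a batch."""
    issues = []
    for i, row in enumerate(rows):
        if row.get("item_id") is None:
            issues.append(f"row {i}: missing item_id")
        if not (0 <= row.get("price", -1)):
            issues.append(f"row {i}: invalid price {row.get('price')}")
    return issues

batch = [
    {"item_id": 1, "price": 100.0},
    {"item_id": None, "price": 50.0},
    {"item_id": 3, "price": -5.0},
]
print(check_batch(batch))  # two issues flagged
```

In practice such rules would run automatically on every load and feed a monitoring dashboard, so the team sees a bad batch the moment it arrives rather than weeks later in a broken report.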

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Avito  big data  BriefingsDirect  Dana Gardner  data warehouse  HP  HP Vertica  HPDiscover  Interarbor Solutions  Nikolay Golov 


Enterprise Architecture pioneer John Zachman on gaining synergies among the major EA frameworks

Posted By Dana L Gardner, Thursday, April 16, 2015

What is the relationship between the major Enterprise Architecture (EA) frameworks? Do they overlap, compete, support each other? How? And what should organizations do as they seek a best approach to operating with multiple EA frameworks?

These questions were addressed during a February panel discussion at The Open Group San Diego 2015 conference. Led by moderator Allen Brown, President and Chief Executive Officer of The Open Group, the main speaker, John Zachman, Chairman and CEO of Zachman International and originator of the Zachman Framework, examined the role and benefits of EA frameworks and how they can co-exist well. He was joined by Steve Nunn, Vice President and Chief Operating Officer of The Open Group.

Download a copy of the full transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Zachman: A friend of mine recently did a survey of 108 CEOs, mostly around North America. I was shocked when I saw that the survey said that the biggest problem facing the enterprise is change.


And when I heard that, my reaction was: well, if the CEO thinks the biggest problem facing the enterprise is change, where is the "executive vice president in charge of change management"? If nobody is in charge, the probability is low to zero that you are going to be able to accommodate change.

There are two reasons why I do architecture. One is complexity, and the other one is change.

Create the architecture

If you want to change something that already exists, you are going to have to have architecture -- one way or another. And you have to create that architecture.


Now, the reason that I am saying this is, if 108 out of 108 CEOs -- of course those were high-visibility CEOs -- said the biggest problem facing the enterprise is change, who ought to own the Enterprise Architecture? I would submit it has to be a general management responsibility.

It needs to be an executive vice president. If the CEO thinks the biggest problem facing the enterprise is change, the CEO ought to be in charge of Enterprise Architecture. If he or she is not going to be in charge, then it ought to be the person they see every morning when they come into the office ... "Hey, Ralph, how is it going on the architecture?" So it probably should be a general management responsibility. That's where I would take it.


I put TOGAF® together with my Zachman Framework and, in fact, I kind of integrate them. I have known Allen Brown for a number of years, and he was in Johannesburg a number of years ago. He introduced me at a TOGAF conference and said, "For a lot of years, I thought it was either Zachman or TOGAF." He said that's incorrect. "Actually, it's Zachman and TOGAF."

That basically is where I am going to take this: It’s Zachman and TOGAF. How then would you integrate TOGAF and The Zachman Framework, which I obviously think is where we want to go to?

The first question turns out to be: what is architecture? What is it? Some people think this is architecture: the Roman Colosseum.

Now, notice that is a common misconception. This is not architecture. This same misconception about enterprise is what leads people to misconstrue Enterprise Architecture as being big, monolithic, static, inflexible, and unachievable and it takes too long and costs too much.

If you think that is architecture, I am going to tell you, that's big and monolithic, and static. It took a long time and it cost a lot of money. How long do you think it took them to build that thing? Not a day, not a week, not a year, not a decade. It took a couple of decades to build.

In fact, the architecture had to be done long before they ever created the Roman Colosseum. They couldn't have even ordered up the stones to stack on top of each other until somebody did the architecture.

Result of architecture

Now, that is the result of architecture. In the result, you can see the architect’s architecture. The result is an implementation and instance. That is one instance of the architecture. Now, they could have built a hundred of these things, but they only built one.

I was in New Zealand a few years ago and I said that they could have built a hundred of these things, but they only built one. Some guy in the back of the room said they actually built three. I said I didn’t know that. He even knew where they were. I was really impressed.


I was in Rome last June, talking to these guys in Rome. I said, "You guys could have built a hundred of these things; you only built three." And the guys in Rome said, "We built three? I thought we only built one." Actually, I felt a lot better. I mean, you can build as many as you want, but this just happens to be one instantiation. And in fact, that is not architecture. That's just the result of architecture.

Architecture is a set, it's not one thing, it's a set of descriptive representations relevant for describing a complex object (actually, any object) such that an instance of the object can be created and such that the descriptive representations serve as the baseline for changing an object instance (assuming that the descriptive representations are maintained consistent with the instantiation). If you change the instantiation and don't change the descriptive representations, they would no longer serve as a baseline for ensuing change of that instantiation. In any case, architecture is a set of descriptive representations.


Now, you can classify those descriptive representations in two dimensions. One dimension is what I call Abstractions. I don't want to digress and say why I happened to choose that word. But if you look at architecture for airplanes, buildings, locomotives, battleships, computers, tables or chairs, or XYZ, they are all going to have Bills of Materials that describe what the thing is made out of.

You have the Functional Specs that describe how the thing works. You have the Drawings or the Geometry that describe where one component is relative to another. You have the Operating Instructions that describe who is responsible for doing what. You have the Timing Diagrams that describe when things happen, and the Design Objectives that describe why they happen.

So it doesn't make any difference what object you are looking at. They are all going to have Bills of Material, Functional Specs, Drawings or Geometry, Operating Instructions, and so on. You are going to have that set of descriptive representations.

Now, they didn't happen to stumble across that by accident. Basically, they are answering six primitive interrogatives: what, how, where, who, when and why. That's been known since the origins of language about 7,000 years ago. And the linguists would observe for you that's the total set of questions you need to answer to have a complete description of whatever you want to describe; a subject, an object, or whatever.

It's okay if you don't answer them all, but for any one of those questions that you don't answer, you are authorizing anybody and everybody to make assumptions about what the answers are. So if you don't make those answers explicit, people are going to make assumptions. You are authorizing everybody to make assumptions.

The good news is, if the assumptions are correct, it saves you time and money. On the other hand, if the assumptions are incorrect, that could be horrendous, because incorrect assumptions are the source of defects, miscommunications, and discontinuity. That's where defects come from. So you don't have to answer all the questions, but there is a risk associated with not answering all of them.

And I did not invent that, by the way. That's a classification that humanity has used for 7,000 years. Actually, it's the origins of language basically. I did not invent that. I just happened to see the pattern.
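To make that completeness idea concrete, here is a small sketch (the data and names are invented for illustration): a description modeled as answers to the six interrogatives, where any unanswered question surfaces as an implicit assumption someone else is now authorized to make.

```python
# Illustrative sketch only: a description as answers to the six
# primitive interrogatives. Any interrogative left unanswered is an
# implicit assumption -- the risk Zachman describes.
INTERROGATIVES = ("what", "how", "where", "who", "when", "why")

def implicit_assumptions(description):
    """Return the interrogatives this description leaves unanswered."""
    return [q for q in INTERROGATIVES if not description.get(q)]

# A deliberately incomplete description: "when" and "why" are missing.
order = {
    "what": "steel beams",
    "how": "welded frame",
    "where": "plot 14",
    "who": "site crew",
}
print(implicit_assumptions(order))  # ['when', 'why']
```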

Parts and part structures

Now, there is one other thing I have to tell you, in a Bill of Materials, you have descriptions of parts and part structures. There is no expression of functional specification in the Bill of Materials, there is no expression of Drawings in the Bill of Materials, nor expression of Operating Instructions. There is no expression of Time or Design Objectives. There are parts and part structures.

In the Functional Specs, there are Functional Specs. There is no expression of parts or part structures. There is no expression of Geometry or Drawings. There is no expression of operating responsibility, time, or Design Objectives. There are Functional Specs.

In the Geometry, there is no expression of parts and part structures, there is no expression of Functional Specs, operating responsibility, time, or Design Objective. There are the Drawings or the Geometry.

I am not going to do it anymore; you get the idea. If you are trying to do engineering kinds of work, you want one, and only one, kind of thing in the picture. You start putting more and more kinds of things in the picture, and that picture is going to get so complicated that you will never get your brain around it.


And if you are going to do engineering work, what you want to do is normalize everything. You want to minimize any potential discontinuity, any kind of disorder. You want to normalize everything. In order to normalize everything you have to see all the parts relative to the whole object. You have to see them all, so that you can get rid of any re-occurrence or any kind of redundancy.

You want to normalize -- get the minimal possible set. You want to look at only the Functional Specs, but you want to look at them for the whole object. You want to minimize the complexity and minimize the redundancy. You don't want any redundancy showing up, and so on. You want to minimize everything: the minimum possible set of components to create the object.

You don't want extraneous anything in that object, whatever that object happens to be -- airplane, building, or whatever. So I just made that observation.

Now I am going to digress. I am going to leap forward into the logic a little bit for you. There is the engineering view. If you want to do engineering work, you want to see the whole. You only want to see one type of fact, but you want to see the whole set for the whole object, for the whole product.

So when you are doing engineering work, you want to see the whole thing, because you have to see how all the parts fit together basically.

Now, if you are doing manufacturing work, however, that's not what you need. You want to take one part -- you want to decompose the object into parts as small as possible -- and then you want to see the characteristics. You take one part, and you need to know the part and part structure. You have to know the functionality for that part, the Geometry for that part, the operating responsibility for that part, the Timing Diagram for that part, and the Design Objective for that part. So if you are doing manufacturing, you want to see all the variables relative to one part.

Different models

There are two different kinds of models that are required here. You want the engineering models, which are, in fact, a normalized set -- you want to see one kind of fact for the whole object. And in the manufacturing model, you want to see all the variables for one part. So there are two different kinds of descriptive representations that are relevant for describing the object.

Now, I would just observe this, engineering versus manufacturing. Engineering work requires single-variable, ontologically defined descriptions of the whole of the object, which I would call a primitive.

In contrast, manufacturing work requires multi-variable, holistic descriptions of parts of the object, what I would call a composite. That’s the implementation; that’s the composite.
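A hypothetical sketch of the two views being contrasted, with invented model data: if models are keyed by (abstraction, part), an engineering primitive selects one kind of fact across the whole object, while a manufacturing composite selects every kind of fact for one part.

```python
# Invented example data: models keyed by (abstraction, part).
models = {
    ("bill_of_materials", "wing"):   "spar + ribs",
    ("bill_of_materials", "engine"): "turbine + casing",
    ("functional_spec",   "wing"):   "generate lift",
    ("functional_spec",   "engine"): "generate thrust",
}

def primitive(abstraction):
    """Engineering view: one kind of fact, but for the whole object."""
    return {part: m for (a, part), m in models.items() if a == abstraction}

def composite(part):
    """Manufacturing view: every kind of fact, but for one part."""
    return {a: m for (a, p), m in models.items() if p == part}

print(primitive("functional_spec"))
# {'wing': 'generate lift', 'engine': 'generate thrust'}
print(composite("wing"))
# {'bill_of_materials': 'spar + ribs', 'functional_spec': 'generate lift'}
```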

The interesting phenomenon is -- and somebody talked about this yesterday too -- in manufacturing, this is analysis. You break it down into smaller and smaller pieces. In fact, it's a good approach if you want to classify, if you want to deal with complexity. The way humanity deals with complexity is through classification.


A one-dimensional classification for manufacturing is to decompose the object into very small parts. The reason that becomes useful is that it's cheaper and faster to manufacture the parts: the smaller the part, the faster and cheaper it is to manufacture.

So basically, if you go back to The Wealth of Nations by Adam Smith, the idea was to break it down into smaller parts so you can manage the parts. But in doing that, you are disintegrating the object.

In contrast, in engineering work, you need to look at synthesis. If you take a one-dimensional classification, you are disintegrating the object. The same content can show up in more than one category at the bottom of the tree. If you want to do engineering work, you want to see how all the parts fit together. That's a synthesis idea.

So if you just do analysis, and you are doing manufacturing or implementation work, you are going to get disintegration. If you are doing engineering work, you want to deal with the issue of synthesis.

So it's not an either-or thing; it's an "and" kind of thing. And the significant issue is that these are radically different. In fact, it was Fred Brooks who said programming is manufacturing, not engineering. So those of us who come from the IT community have been doing manufacturing for the last 65 or 70 years, basically. In contrast, engineering is different. This stuff is radically different.

So the reason why we build implementations and we get frustration on the part of the enterprise is because the implementations are not integrated, not flexible, not interoperable, not reusable, and not aligned. They are not meeting expectations. Fundamentally, if we use a one-dimensional classification, you're going to end up disintegrating the thing. It's not engineering. It's implemented, but not engineered.

Two-dimensional classification

If you want the thing to be engineered, you have to have a two-dimensional classification. You have to have a schema -- a two-dimensional classification -- because you need two dimensions in order to normalize things.

I don’t want to digress into that, but Ted Codd was floating around with the relational model. Before Ted Codd and the relational model, we didn’t even have the word normalization at that point in time. But to try to manage the asset you are trying to manage, you have to have a normalized structure.

If you want to manage money, you have to have a chart of accounts. If you want to manage an organization, you have to have an allocation of responsibilities. If you want to manage whatever you want to manage, you have to have a normalized structure.

So if you want the thing to be engineered, integrated, flexible, interoperable, reusable, and so on, then you have to do the engineering work. Those are engineering derived characteristics.

You don't get flexibility, integration and so on from implementation. Implementation is what you get, which is really good. I am not arguing; that’s really good, but on the other hand, if you need integration, flexibility and so on, then you have to do engineering work. So it takes you beyond merely the manufacturing and the implementation.

I gave you one dimension of the classification of descriptive representations, which I called abstractions; the other dimension I call perspectives. Typically, I would take a few minutes to describe this for you, but I'm just going to kind of net this out for you.


Back in the late 1960s time frame, we had methodologically defined how to transcribe the strategy of the enterprise, but we had to transcribe it. We knew at the time we had to transcribe it in such a fashion that you can do engineering work with it.

It's not adequate to transcribe the strategy in such a fashion as to say make money or save money, do good or don't, feel good or feel bad, go west or go east. Those are all relevant, but you have to take those apart to create the descriptive representations in such a fashion that we can do engineering work with them.

This was in the late-1960s time frame. We knew how to transcribe the strategy in such a fashion that we could do engineering work. What we didn't know was how to transform that strategy into an instantiation such that the instantiation bears any resemblance to what the strategy was, fundamentally.

So the problem is that, in those days, we tended to describe the strategy in a somewhat abstract fashion: make money or save money, whatever. But down here, you're telling a person how to put a piece of metal in a lathe and how to turn it to get whatever you're trying to create. Or it could be telling a machine what to do, in which case you're going to have a descriptive representation like 1,100 and 11,000. So it's a long way from "make money" to "11,000." We didn't know how to make the transformation from the strategy to the instantiation, such that the instantiation bears any resemblance to the strategy.

We knew architecture had something to do with this, but, if you go back to the late 1960s time frame, we didn’t know what architecture was.

Radical idea

I had a radical idea one day. I said, "What you ought to do is ask somebody who does architecture for something else, like a building, an airplane, a computer, an automobile, or XYZ. Ask them what they think architecture is." If we could figure out what they think architecture is, maybe we can figure out what architecture is for enterprises. That was my radical idea back in those days.

A friend of mine was an architect who built buildings actually. So I went to see my friend Gary, Gary Larson, the architect, and I said, "Gary, talk to me about architecture." He said, "What do you want to know?" I said, "I don't know what I want to know; just talk to me and maybe I'll learn something."

He said, "Somebody comes into my office and says, 'I want to build a building.' I say, 'Well, what kind of building do you have in mind? Do you want to work in it? Do you want to sell things in it? Do you want to sleep in it? Is it a house? What are you going to do with it? Are you going to manufacture things in it? What's the structure of it: steel structure, wood structure, stucco, glass, or whatever?'"

I have to know something about the footprint. Where are you going to put this thing? What’s the footprint? I have got to know what the workflow is. If you're going to eat in the thing and sleep in the thing, you put the eating and the cooking near each other, you put the sleeping someplace else. You have to know something about the workflows.


I have to know something about the Timing Diagrams. Am I going to design a building that has elevators? It has an up button, you go up, and a down button, you go down. I have to know something about the Design Objectives.

Do you want to change this building? You want flexibility. If you want to change this building after I build it, then don't hard-bind the wall to the floor. Separate the independent variables. If you want flexibility, you separate the independent variables.

By the way, we learned about that a long time ago, those of us who are in IT; separate the independent variables. I haven’t heard this for 30 or 40 years, but it’s like binding. You don’t want to bind anything together.

Why does binding matter? As soon as you hard-bind two independent variables together, if you want to change one, you have to change them all -- you throw the whole thing away and you have to start over again.

So if you want to change things, you separate the independent variables. How do you like this for an idea, by the way: you have a data division and a procedure division? That's pretty interesting. You can change one data element without changing all the instructions. So you separate the independent variables if you want to change them.
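The data-division/procedure-division point can be sketched like this (a minimal illustration in Python rather than COBOL; the names are invented): the record layout is declared in one place and the procedure references it by name, so one data element can change without touching the instructions.

```python
# The "data division": record layout declared once, in one place.
LAYOUT = {"customer_id": int, "balance": float}

# The "procedure division": instructions reference the layout by name,
# never by hard-bound position.
def validate(record):
    return all(isinstance(record.get(f), t) for f, t in LAYOUT.items())

rec = {"customer_id": 7, "balance": 12.5}
print(validate(rec))  # True

# Changing a data element touches only the layout; validate() is untouched.
LAYOUT["region"] = str
rec["region"] = "EU"
print(validate(rec))  # True
```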


Now, for manufacturing purposes, you want to hard-bind them together. That's the implementation.

So Gary says, "I have to know whether they want flexibility or whatever. I have to know the Design Objectives. I sketch up my bubble charts. I have to understand what the boundaries are here, so I don't get blindsided, in effect."

"If I'm going to build a 100-story building, a huge building, then I'll live with the owners for a couple of years, so I find their aesthetic values, what they're thinking about, what their constraints are, what they really want to do, how much money they have, what their purpose is. I have to understand what the concept of that building is."

"I transcribe the concepts of the building. And this is really important. I can take this down to an excruciating level of detail. Actually, I have to build the scale model. It has light bulbs that go on or off. I have water that runs through the pipes. I can build a scale model, so that the owners can look at this and say, 'That is really great; it’s exactly what I had in mind', or 'Whoa, it’s not what I had in mind.'"

"It's really important because if the owner. If this is what they have in mind and they say, 'Okay, chief, sign here, press hard on the dotted line, you have got to go through three copies.'"


"I have an architect friend right now, who's in the middle of a massive lawsuit. The owners of the building did not want to sit down and define these models up here. They said, 'You know what we have in mind so go ahead and define it. We don’t have the time to think about this or whatever.'"

"So the architect defined these models, then they transformed it into the instantiation. He built the building, but it’s not what the owner had in mind. And it’s a massive lawsuit.

"I said to my architect friend, 'I went out to your website and I figured out, I found out why you're having this lawsuit. They were not involved in defining what the concepts are."

Now, Gary would say, "Once I get the concepts, I have to transform those concepts into design logic for the building, because I haven't got the building designed; I only have the concepts transcribed. Now I have to deal with pressure per square inch, metallurgical strength, the weight of water to move the water around. I have to deal with earthquakes. I have to deal with a whole bunch of other stuff."

"I may have some engineering specialization to help me transform the requirement concepts into the engineering design logic." In manufacturing, they call this the as-designed representation. Gary called that the architect’s plans. He called these architect’s Drawings. He called these the architect’s plans.

"Now, I have to get the architect’s plans. "I have to negotiate with the general contractor, because the general contractor may or may not have the technology to build what I have designed. So I have to transform the logic of the design into the physical design. I have got the schematics here, but I have to have the blueprints."

Making transformations

"So we have to negotiate and make the transformations, have some manufacturing engineers help me make that transformation. And in manufacturing they would call this as designed and this as planned."

"I make the transformation, so the implementation. They have the technology to implement, the design. Then, this contractor goes to the subcontractors who have the tooling, and they have to configure the tools or express precisely what they want somebody to do in order to create it and then you build the building."

That's pretty interesting. You notice, by the way, there are some familiar words here: concepts, logic, and physics, in effect. So you have the owner's view, thinking about the concepts; the designer's view, thinking about the logic; and the builder's view, thinking about the physics. You have the concepts, the schematics, and the blueprints. Then you have the configuration and the instantiation. That's the other dimension of the classification.

Now, there is a two-dimensional classification structure. That’s an important idea. It’s a really important idea. If you want to normalize anything, you have to be looking at one fact at a time. You want to normalize every fact. You don’t want anything in there that’s extraneous. You want to normalize everything.


So it’s a two-dimensional schema, not a one-dimensional schema, not a taxonomy or a hierarchy or a decomposition; this is a two-dimensional schema.

If you folks go back to the origins in the IT community, the original databases typically were either flat files or hierarchical databases. They're not any good for managing data; they're good for building systems. You break it down, decompose it onto small parts, and they're good for building systems. They're not good for managing data.

So then you had to have a two-dimensional classification and normalization. Ted Codd showed up, and so on. I don’t want to digress into that, but you get the idea here. It’s a two-dimensional classification.
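As a toy illustration of the normalization point (the sample data is invented): in a flat file one fact repeats on every row, while a normalized structure records each fact exactly once, so a change is a single update.

```python
# Flat file: the supplier's city repeats on every row -- one fact
# recorded in many places, so updates can disagree.
flat = [
    {"part": "bolt", "supplier": "Acme", "city": "Turin"},
    {"part": "nut",  "supplier": "Acme", "city": "Turin"},
]

# Normalized: one relation per kind of fact, each fact stated once.
suppliers = {"Acme": {"city": "Turin"}}
parts = [
    {"part": "bolt", "supplier": "Acme"},
    {"part": "nut",  "supplier": "Acme"},
]

# Moving Acme is now a single update instead of one update per row.
suppliers["Acme"]["city"] = "Milan"
print(suppliers["Acme"]["city"])  # Milan
```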

I was in Houston at one time, talking about the other dimension of classification. Some guy in the back of the room said, "Oh, that's reification." I asked what that was. Reification? I had never heard the word before. It turns out it comes out of philosophy.

Aristotle, Plato, and those guys knew that the ideas you can think about are one thing, but the instantiation of the ideas is a completely different thing. If you want the instantiation to bear any resemblance to what the idea is, that idea has to go through a well-known set of transformations.

You have to identify it and name it, so you can have a dialogue about it. Then you define it, and you have the semantic structures. Then you have the representations -- all the interior designs are done with representations -- and then you specify it based upon the implementation technology. Then you configure it based upon the tooling, and then you instantiate it. If it goes through that set of well-known transformations, then the end result will bear some resemblance to what you had at the outset.

Set of transformations

If you don't go through that, you may or may not luck out -- a blind thing finds a solution every now and then. That's pretty good, but on the other hand, you won't have any degree of assurance that whatever you end up with bears any resemblance to what you had in mind at the outset. It has to go through that set of transformations.
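That well-known set of transformations can be sketched as an ordered pipeline (the stage names come from the talk; the string-tagging is just illustrative):

```python
# Reification as an ordered pipeline: an idea must pass through every
# stage, in order, for the instantiation to resemble the idea.
STAGES = ("identify", "define", "represent", "specify", "configure", "instantiate")

def reify(idea):
    artifact = idea
    for stage in STAGES:  # skipping a stage authorizes assumptions
        artifact = f"{stage}({artifact})"
    return artifact

print(reify("idea"))
# instantiate(configure(specify(represent(define(identify(idea))))))
```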

By the way, I didn't define those; they came out a couple of thousand years ago as reification. The etymology is Latin: "res" means thing. So you're taking an idea and transforming it into a thing. That's the other dimension of classification, in any case.

This is the framework for Anything Architecture. There are the Bills of Material, the Functional Specs, the Geometry or Drawings, the Operating Instructions, the Timing Diagrams, and the Design Objectives. That's one dimension. For the other dimension, you have the scoping representations (the boundaries), the requirement concepts, the design logic, the plan physics, the tooling configurations, and then you get the instantiations. So that's the framework for Anything Architecture.

And I don’t care whether you’re talking about airplanes, buildings, locomotives, battleships, tables, chairs, or whatever. It’s anything in effect. That's just a framework for Anything Architecture.


Now, all I did was put enterprise names on the same descriptive representations relevant for describing anything.

Okay, we produce a Bill of Materials, too. We would call these the Inventory Models; that's the business name for them. The technical name would be Entity Models. Now, what's an entity? What's a set? What's important about sets? How many members are in the set? Are they all present or not? Actually, the business couldn't care less about entities. They don't care about entities; they care about inventories.

So let's call them by their business name: Inventory Models. The technical name would be Entity Models. The system Entity Model would be the logical entity model -- in fact, we would call it a Logical Model -- and that would be sitting right there. But the Bills of Materials we would call Inventory Models.

The Functional Specs we would call the Process Models; those are processes. A process takes an input and transforms it into something different: input, process, output.

The Drawings or the Geometry we would call the Geography -- the distribution models, the locations where you store things and transport things around. That would be the distribution models, or the Geometry of the enterprise. Maybe Geography would be our name.

The Operating Instructions we would call the Responsibility Models, the workflow models. Those define what responsibilities are assigned to various roles within the enterprise: responsibility and workflow.

The Timing Diagrams we would call Timing Models. Some people say Dynamics Models. Jay Forrester at MIT wrote the book Industrial Dynamics in 1959. They were tracing resource flows in the enterprise. They were using manufacturing concepts in human systems, and so they called them Dynamics Models, but a lot of times we will call them Timing Models.

Motivation models

The Design Objectives we might call Motivation Models. So all I was doing was putting enterprise names on the same concepts. By the same token, the Scope Contexts we would call Scope Lists. We are just scoping things out: give me a list of inventory, give me a list of processes.

The Requirement Concepts we would call Business Models; those are models of the business. And the Design Logic we would call System Models; those are the Logical Models, the System Models -- what we call systems.

The plan physics we call Technology Models -- the technology constraints. The part configurations we call Tooling Models, and then the product instances we call the enterprise implementation.
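The renaming walked through above can be captured as a simple lookup (the pairs are taken directly from the talk):

```python
# Generic descriptive representations -> enterprise (business) names,
# as Zachman maps them above.
ENTERPRISE_NAMES = {
    "Bill of Materials":      "Inventory Models",
    "Functional Specs":       "Process Models",
    "Drawings / Geometry":    "Distribution Models (Geography)",
    "Operating Instructions": "Responsibility Models (workflow)",
    "Timing Diagrams":        "Timing Models (Dynamics Models)",
    "Design Objectives":      "Motivation Models",
}

print(ENTERPRISE_NAMES["Functional Specs"])  # Process Models
```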


The enterprise is sitting down here. Actually all this is architecture, but the instantiation is down here.

Allen Brown made some really good observations about business architecture. I have a whole other observation about business architecture. Now the question is when you say business architecture, what do you mean?

I was talking at a big business architecture conference. They were having animated discussions and they were getting real passionate about it, but the fact of the matter is they weren’t defining business architecture the same way; they were all over the board.

I said this yesterday. I calculated 176 different plausible definitions for business architecture. For those guys, you could be talking about any one of those, but if you don’t define which one you are talking about, whoever you're talking to may be hearing any one of the other 175. So you have to get definitive about it, or else you are like freight trains passing in the night.

I will tell you, there are various combinations of these models up here that somebody can articulate as business architecture. Which one are you talking about when you say business architecture? Are you talking about the business processes? Are you talking about the objectives and strategies? Or are you talking about the infrastructure distribution structure?

Or are you talking about some combination? You have to talk about the inventories and the processes and see those together. You can put together whatever combinations you want. There are 176 possibilities basically.

I would have what I would call the primitive components defined and then, depending upon what you want to talk about, I would construct whatever definition you want to construct.

Enterprise names

Now, I just put the enterprise names on it again. So here is The Framework for Enterprise Architecture, and I populated this. Here is the Bill of Materials, here are the Functional Specs, here is the Geometry or the Geography, here are the Operating Responsibilities, here are the Timing Diagrams, and here are the Design Objectives. Here are the Scoping Representations, here are the Concept Models, the Requirement Concepts, here is the Design Logic, here is the Building Physics in effect, the as-planned, here is the Tool Configuration, and there is the Instantiation. So that's The Framework for Enterprise Architecture.

I just put the enterprise names on it. You obviously saw what I was doing. You can read it as The Framework for Anything, or you can read it as The Framework for the Enterprise, but I was telling you it's The Framework for Anything. So it's all basically the same thing. This is Enterprise Architecture.

Now, I have some of these framework graphics. For anybody who wants to go to the workshop this afternoon, we will make sure you have a copy of it, and anybody who doesn't go to the workshop, we will have them out at the table. So you can pick up a copy.

I wrote a little article on the back of that: John Zachman's Concise Definition of The Zachman Framework.

Actually somebody asked me if I had ever read what Wikipedia said about my Framework? I said no, I had never read it. I don’t need to read Wikipedia to find out the definition of The Zachman Framework. So they said, "You better read it, because whoever wrote it has no idea what you're talking about."

It’s architecture for every other object known to humankind. It’s architecture for airplanes, buildings, locomotives, computers, for XYZ. It doesn't make any difference.

So I read it, and they were right. They had no idea what I was talking about. So I fixed it. I wrote the article and put it out there. A couple of months later some friend of mine said, "Did you ever read what they wrote on Wikipedia about your Framework?" I said I wrote it. He said, "What? You wrote it? I don't believe it. It’s not what you talk about."

So I read it, and some yo-yo had changed it back. So I changed it back. And a couple of months later, guess what? They changed it. So I said I'd forget these people. The fact is I wrote my own definition of The Zachman Framework, so that's on the back there, the little article.

Now, you understand what I am telling you. This is Enterprise Architecture. It’s architecture for every other object known to humankind. It’s architecture for airplanes, buildings, locomotives, computers, for XYZ. It doesn't make any difference. I just put enterprise names on it.

By the way, for those of you technical aficionados, the meta-entity names are at the bottom of every cell, and there are only two meta-entities in every cell. The names are very carefully selected to make sure they are precisely unique and single-variable. You have one and only one thing -- one type of fact -- in each one of these cells. So, in any case, this is Enterprise Architecture.

Friends of mine wanted me to change the name of this to the Zachman Ontology, because if you recognize this, this is not a methodology; this is an ontology. This does not say anything about how you do Enterprise Architecture -- top-down, bottom-up, left to right, right to left, where it starts. It says nothing about how you create it. This just says this is the total set of descriptive representations that are relevant for describing a complex object. And I happen to have enterprise names on them, but it doesn't tell you anything about how to do this.

Not either/or

For a lot of years, people didn't know what to do with this. They were saying, "I don't know what to do with it. How do you do Enterprise Architecture?" Now you understand where I am going to take you with this. This is an ontology, and you need a methodology. It's not a methodology or an ontology; it's an ontology and a methodology. It's not either/or.

However, this is an ontology. It’s classifying. It has unique categories of every set of facts that are relevant for describing a complex object basically.

Now, by the way, there is another graphic in this, and the reason I put it there is that my name is on a number of websites, but I am excluded from those websites. I have nothing to do with those websites, even though they have my name on them. There is only one website that I have any access to. That's why I put that slide in there, and there's some other stuff in there.

Now, you understand what I basically am saying here. Architecture is architecture is architecture. I simply put enterprise names on the same descriptive representations relevant for describing everything. Why would anyone think that the descriptions of an enterprise are going to be any different from the descriptions of anything else humankind has ever described? I don't believe it.

I don't think Enterprise Architecture is arbitrary… and it is not negotiable.

Now, you could argue enterprises are different. Hey, airplanes are different than buildings too, and buildings are different than computers, and computers are different than tables, and tables are different than chairs, and everything is different, they are all different, but they all have Bills of Material, Functional Specs, Geometry. They all have Concepts, Logic, Physics, so this is basically architecture is architecture is architecture. That’s my observation.

I am trying to do this in a very short period of time and I haven’t had half a day or a day to soften all you guys up, but get ready, here you go. I don't think Enterprise Architecture is arbitrary… and it is not negotiable. My opinion is, we ought to accept the definitions of architecture that the older disciplines of architecture, construction, engineering, and manufacturing have already established and focus our energy on learning how to use them to actually engineer enterprises. I think that’s what we ought to be doing.

So I don’t think it’s debatable. Architecture is architecture is architecture.

I have to tell you another thing: depth and width. For every cell, you could have a model that's enterprise-wide and at an excruciating level of detail. That would be the whole cell basically.

Or you could have a model that is enterprise-wide and only a medium level of detail. That would be half of it. Or you could have a model that's enterprise-wide at a high level of detail. There is nothing that says you have to have an excruciating level of detail; that's just another variable.

By the way, you could also have a model that's less than enterprise-wide at an excruciating level of detail. It could be half of the enterprise at an excruciating level of detail, or it could be the whole enterprise at an excruciating level of detail. So you have those two other variables, and you have to be able to represent them in some fashion.

The implication is that anything that is white space here, if you don't make it explicit, it's implicit, which basically says that you're allowing anybody and everybody to make assumptions.

Risk of defects

It may be fine. You may be willing to accept the risk of making erroneous assumptions. You're going to accept the risk of defects. In fact, in manufacturing airplanes they will accept some degree of risk of defects. But when the parts don't fit together, the scrap and rework cost starts to go up. Then they will say, wait a minute, you can't complete the implementations until you have a complete engineering design release.

So you have to read that other variable into this as well. There are two different things here, an ontology and a methodology. I didn't even know what an ontology was till fairly recently.

I'm going to give you my John Zachman layman's definition of ontology. Some of you guys may be ontological wizards. I don’t know, but the probability in a group this big is that somebody really is familiar with ontology.

The Zachman Framework schema technically is an ontology. Ontologies are a theory of existence -- a theory of the existence of a structured set of essential components of an object. That is, a classification, a schema, that is rational, logical, and structured -- it's not arbitrary. The end object is dependent for its existence on those essential components, and the components exist as well.

A structure is not a process, and a process is not a structure. You have two different things going on here.

So you have a kind of existence of the object -- it just isn't the components -- for which explicit expression is necessary, and probably mandatory, for designing, operating, and changing the object -- the object being an enterprise, a department of an enterprise, a value chain, many enterprises, a sliver, a solution, a project, an airplane, a building, a bathtub, or whatever. It doesn't make too much difference what it is; it's whatever that object is.

A framework is a structure. A structure defines something. In contrast, a methodology is a process, a process to transform something. And a structure is not a process, and a process is not a structure. You have two different things going on here.

Now, this is really an important idea too. Here is a comparison between ontology and methodology. An ontology is the classification of the total set of primitive elemental components that exist and are relevant to the existence of an object. A methodology produces composite compound implementations of the primitives.

All the implementations, the instantiations, are derivative of the methodology. The methodology produces the implementation. The implementations are compounds; the primitives, the elements, are timeless, and the compounds are temporal.

Now, that’s an important point, and I'll try to give you an illustration of that.

Here is an ontology. I learned a lot from this metaphor, by the way. This is a classification of all the elements in the universe, actually. It's a two-dimensional schema. It's normalized; one fact in one place. You are classifying the elements of the universe in terms of the number of neutrons and protons by the number of electrons. That is not a process.

This tells you nothing about how you do this: top-down, bottom-up, left to right, right to left, or what compound you might want to create out of this thing. This just says here is the total set of elements from which you can create whatever you want to create.

And once again, I didn’t say this yet, but until an ontology exists, nothing is repeatable and nothing is predictable. There is no discipline.

Best practices

Before Mendeleev published the periodic table, there were chemists. They weren't chemists actually; they were alchemists, and they were very clever by the way, really competent, very clever. They could produce implementations, produce compounds, but it was based upon their life experience. It was a best-practice kind of a thing, not based upon a theoretical construct.

And these elements are timeless. If you have an element that has six protons and six electrons, that's carbon. The rest of the world calls it carbon. Do yourself a favor and call it carbon. You can call it whatever you want to, but if you want to communicate with anybody else, just call it by the name that is recognizable by the rest of the universe.

Now, in any case, those are the elements and they are timeless. They are just forever.

Here are compounds. This is a process. A process transforms, creates something. Take a bowl of acid and add it to a bowl of alkali, and it gets transformed into saltwater. This is not an ontology; this is a process. Take this, add it to that, and it's going to produce whatever you want to produce.

We could not have written this down like that until Mendeleev published the periodic table. We didn’t have any notation to produce that.

Now, the compounds are temporal. You produce saltwater for some reason, something good for some whatever, whatever it happens to be that you are trying to create.

Here are some examples of other compounds. This is an acid and a base, or an alkali, and again, it yields sodium chloride and water. It's a balanced equation. Here is hydrogen, there is hydrogen -- there are the two hydrogens. Here is chlorine, there is chlorine; here is the sodium, there is the sodium; here is the oxygen, there is the oxygen.
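For reference, the balanced neutralization being walked through here appears to be the classic acid-base reaction -- presumably hydrochloric acid and sodium hydroxide, which matches the atoms he names:

```latex
\mathrm{HCl} + \mathrm{NaOH} \;\rightarrow\; \mathrm{NaCl} + \mathrm{H_2O}
```

Two hydrogens, one chlorine, one sodium, and one oxygen appear on each side, which is what makes the equation balanced.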

We could not have written this down like that until Mendeleev published the periodic table. We didn’t have any notation to produce that.

So here are some other compounds: here is salt, that's sodium chloride; here is aspirin, C9H8O4; Vicodin is C18H21NO3; Naproxen is C14H14O3; Ibuprofen; Viagra; sulphuric acid and water; and so on and so on.

How many of these can you create out of the periodic table? The answer is infinite. I don't want to take the time to elaborate, but it's infinite. And these are temporal. They are specifically defined to do specific things.

Here is an ontology. How many different enterprises could you create out of this ontology? The answer again is going to be infinite. Until an ontology exists, nothing is repeatable, nothing is predictable. There is no discipline. Everything is basically best practice. The primitives are timeless.

Now, here are some compounds. The elements are what I would call primitive components. The compounds are implementations, instantiations: COBOL programs -- you can read Java there too, or Smalltalk or whatever you want -- Objects, BPMN, Swimlanes, Business Architecture, Capabilities, Mobility, Applications, Data Models, Security Architecture, Services, COTS, Technology Architecture, Big Data, Missions/Visions, Agile Code, Business Processes, DoDAF Models, Balanced Scorecard, Clouds, IBM Watson, TOGAF Artifacts, and so on. How many of these are there? It's infinite.

Specific reasons

How long will it be until we can add one to the list? What time is it? People get really creative. They create a lot of these things. And these are temporal. They are for specific reasons at a specific point in time.

Here is alchemy. It's a practice. It's a methodology without an ontology. The process is down in the basement with a chemistry set, trying things out. If it works and it doesn't blow the house up, write that one down; that's a good one. If it blows up, you probably have to write that one down too; don't do that one again.

So a process with no ontological structure is ad hoc, fixed, and dependent on practitioner skills. It’s not a science; it is alchemy; it’s a practice.

I've got to tell you, the alchemists were really clever. Man, they figured out how to create gunpowder long before they ever had the periodic table. So these people were really creative. However, a few hundred years later, Mendeleev published the periodic table.

I don't know whether you guys realize this or not, but we tend to think the periodic table has been around forever, because the elements have been around forever. Basically we learn that in chemistry or whatever. But the periodic table was only published in 1869.

If you just built them to get the code to run, they're not going to be integrated, not flexible, not interoperable, not reusable. They are not aligned; they are not meeting expectations.

If you think about this, within 50 years of the publication of the periodic table, the physicists and chemists basically were splitting atoms. Think about this. Once you have order, now research actually works. Things become predictable and repeatable. We don’t have to learn everything by experience. We can hypothetically define other possibilities and get really creative.

Like I say, in a very short period of time, friction goes to zero, and you can get really creative and really sophisticated in very short periods of time, so I just throw that one away.

So ontology versus process, engineering versus manufacturing, architecture versus implementation. It's not "either/or;" it is "and." And the question is, how did you get your composite manufacturing implementation? Did you reuse components of primitive, ontological, engineering constructs, or did you just manufacture the composite ad hoc to some problem or some system requirement?

Actually the enterprise is the total aggregate sum of composite implementations.

Now, the question is, how did you get your composite? Were you just building systems or did you have the periodic table of primitive components from which you assembled the implementation?

If you just built them to get the code to run, they're not going to be integrated, not flexible, not interoperable, not reusable. They are not aligned; they are not meeting expectations.

So the question is, how did you get the composite, the compounds? Did you have the periodic table? Now, obviously I am taking it to a point where I am saying, it’s not an "or;" it’s an "and."

Allen and I were talking about this yesterday. I don't want to take a lot of time to develop this, but this came from Roger Greer, who was the Dean of the School of Library and Information Management at USC years ago. I just happened to run across some notes I had taken at an IBM GUIDE Conference in 1991.

Professional vs. trade

Roger was talking about the difference between a profession and a trade, and he made the differentiation this way. This is the Professional Service Cycle. The professional starts with a diagnosis, an analysis of need, and diagnoses the problem. Then you prescribe the solution. Then the technician applies the solution. He evaluates the application and, depending upon the evaluation, enters into the cycle again.

So what differentiates the professional from the trade or labor is the diagnosis and the prescription, where the trade or labor is involved with the implementation and the evaluation.

My observation is that this is where the engineering has taken place. That’s where you need the ontology to do the diagnosis and the prescription. And then, you need the methodology to do the implementation basically -- the manufacturing. The engineering work is going on over here; the manufacturing work is going on over there.

So what differentiates the professional from the trade? Well, if you start with the diagnosis of the problem and the prescription, that’s what the doctor does. The x-ray technician shoots the x-ray, takes the picture, and then evaluates whatever the result is.

Those of us who come from the architecture domain, need to begin to develop the characteristics of a profession. This is a profession.

Leon Kappelman is a friend of mine. He's an academic guy. He has traced the CEO surveys for years and years -- 20, 30 years. In those 20 or 30 years, one of the top ten issues that the CEOs of the world say those of us who come from the information community need to deal with turns out to be alignment.

They're basically saying, "I don’t know what you guys are doing. You're spending a lot of money down there in IT. Whatever you're doing with it does not align with what I think the enterprise is about."

So there's an alignment problem. I would submit to you, if you are starting over here, you are always going to be a solution in search of a problem.

So we want to change it. Allen and I really feel strongly about this. Those of us who come from the architecture domain need to begin to develop the characteristics of a profession. This is a profession. Well, that presumes a discipline, and the implication is that we need to change our whole concept to diagnosing the enterprise problem. In fact, that's the one last slide I would use.

The end object is not to build the system. The end object is to diagnose the enterprise problem. Then, you can prescribe. The enterprise is really complicated; you can probably prescribe three, four, or a dozen different possible solutions that they could pursue. Okay chief, here are a set of things that you can do.

Somebody said -- I think it was in the Steve Jobs book -- that you had to go in with two recommendations to Steve Jobs, but have a third one in your pocket, because he would tear them up. So, you have to go in and have a third one.

How many do you want, chief? We can construct however many you want, and you can evaluate them or analyze them for whatever the implications are. What are the capital-expense implications, or the cultural ones? You can analyze them and let them understand what the alternatives are and what the implications of the alternatives are. Then they can pick one, you can do the implementation, then you evaluate, and so on.

Lessons to be learned

This is what differentiates the profession from the trade. This is important. The more I think about it, there are really lessons to be learned here.

Here are the research lessons that we've learned. It is possible to solve general management problems very quickly with a small subset of primitive components -- simply lists and their interdependencies, short of the complete primitive models.

You don’t have to do a lot of architecture to begin. You have enough that you can do the diagnosis of the problem. Then, different complex, composite constructs can be created dynamically, virtually cost-free, from the inventory of primitive lists for addressing subsequent general management problems.

Once you begin to populate the inventory of the primitive components, they are reusable to analyze or diagnose other problems. This is not just limited to whatever precipitated the first population of the primitives.
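A toy sketch of that idea -- primitive lists populated once, composites assembled on demand. All the inventory names, list entries, and links here are invented for illustration; the point is only that once the single-variable lists and their binary interdependencies exist, any composite view is a cheap query:

```python
# Hypothetical inventory of single-variable primitive lists, populated once.
inventory = {
    "processes": ["Take Order", "Ship Product", "Bill Customer"],
    "roles": ["Sales", "Logistics"],
}

# Interdependencies between primitives, captured two at a time.
links = {
    ("Take Order", "Sales"),
    ("Ship Product", "Logistics"),
    ("Bill Customer", "Sales"),
}

def composite(kind_a, kind_b):
    """Build a composite view (e.g. process-by-role) dynamically from
    the primitive lists -- virtually cost-free once they exist."""
    return [(a, b) for a in inventory[kind_a]
                   for b in inventory[kind_b]
                   if (a, b) in links or (b, a) in links]

# Assemble a process-to-role composite on demand from the primitives.
print(composite("processes", "roles"))
```

The same inventory can answer a different management question tomorrow -- you add lists and links, not new one-off models.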

There is a TOGAF development strategy, and I would evolve TOGAF to become an engineering methodology as well as a manufacturing methodology.

And many scenarios can be evaluated to test strategy alternatives before making commitments. You can analyze the implications and make recommendations around those implications before you actually spend money or actually create infrastructure kinds of things.

These are really important issues. So here are my conclusions.

Here is what I would propose to TOGAF. There is a TOGAF development strategy, and I would evolve TOGAF to become an engineering methodology as well as a manufacturing methodology.

Those of us who come from the IT community, for the last 75 years, have been building and running systems. Technically, that's what people say. If you ask somebody from IT what they do for a living, we would say we build and run systems.

So all of us are very manufacturing-dominant. That's the way we tend to think. We tend to think in terms of composite constructs. Every artifact, if it has more than one variable in it, is a manufacturing artifact; it's not an engineering artifact. I can tell pretty quickly by looking at the descriptive representation, looking at the model.

If you have more than one variable in that model, I know you're using it for manufacturing -- for implementation purposes.

I would just broaden TOGAF to dig in and deal with the engineering issues. Here is the way I would do that in Phase I, and I'll tell you what I think Phases II and III might be as well. I'm just getting creative here. Allen may say, "That's interesting, Zachman, but you're out of here. We don't need a lot of help. We've already got enough things to do here."

Existing process

But, first of all, I would use the existing data-gathering process to populate the inventory of single-variable, primitive models. We're already doing the gathering; I would just factor out what the primitive components are and begin to populate the inventory.

We have a little workshop this afternoon to show you how to do that. It is not that hard. Anybody who goes to the workshop will see. The workshop is created in order to just show you that it's not too complicated. You can do this.

I would just use the existing data-gathering part of the methodology and begin to populate. Then I would reuse the primitive components in the creation of the present TOGAF artifacts. You've got to create the artifacts anyway; you might as well just reuse the primitive components.

Now, that presumes another thing for those of you who are into the tooling domain: you'd have to map the primitive metamodel against the TOGAF metamodel. So there is a metamodel issue here.

You have to look at the metamodel, the TOGAF artifacts, and see if there is a composite construct in the Metamodel and just factor out what the primitive components are.

But what that would tell you is that you have to look at the metamodel, the TOGAF artifacts, and see if there is a composite construct in the Metamodel and just factor out what the primitive components are.

That's the way you would map the composite you're trying to create from the primitives. That's what I would do. That would be just looking at right where we are today. So, here is the set of primitives and here is the methodology. Let's just use the methodology to do engineering work, and it will still end up creating the same implementation composites.
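Mechanically, "factoring out the primitives" from a composite artifact could look something like this sketch. The variable kinds (process, role, location) and the rows are hypothetical stand-ins, not actual TOGAF metamodel entities:

```python
# A composite artifact: each row mixes several variables -- a
# manufacturing construct, in the talk's terms. Names are hypothetical.
composite_artifact = [
    {"process": "Ship Product", "role": "Logistics", "location": "Warehouse B"},
    {"process": "Bill Customer", "role": "Sales", "location": "Plant A"},
]

def factor_primitives(rows):
    """Split a multi-variable composite into single-variable primitive
    inventories plus the binary relationships that held the rows together."""
    inventories, relations = {}, set()
    for row in rows:
        items = list(row.items())
        for kind, value in items:
            # each value lands in exactly one single-variable inventory
            inventories.setdefault(kind, set()).add(value)
        # record each pairwise (binary) link within the row
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                relations.add((items[i][1], items[j][1]))
    return inventories, relations

inv, rel = factor_primitives(composite_artifact)
print(sorted(inv), ("Ship Product", "Logistics") in rel)
```

The inverse operation -- reassembling composites from those inventories and relations -- is what the existing methodology would keep producing.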

Then, in Phase II -- here is where I was getting creative, and I encourage you to do this, by the way -- I would extend the methodology for enterprise problem diagnosis and solution prescription: single-variable, primitive components, binary relationships, and impact analysis.

What you need in order to do the diagnosis is the single-variable, primitive constructs and only binary models, because what you're going to do with this is impact analysis. You touch one thing, and here are all the other things that are impacted by it.
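A rough illustration of what such a binary-model impact analysis could look like -- the primitive names and relationships here are hypothetical, and the traversal is just an ordinary breadth-first search over the binary links:

```python
from collections import deque

# Binary models: each one relates exactly two primitives at a time.
# The primitive names are made up for illustration.
binary_relations = [
    ("Order", "Ship Product"),
    ("Order", "Bill Customer"),
    ("Customer", "Bill Customer"),
    ("Ship Product", "Warehouse"),
]

def impacted_by(changed, relations):
    """Touch one thing and return every other primitive reachable from
    it through the binary relationships (transitive impact analysis)."""
    adj = {}
    for a, b in relations:  # build an undirected adjacency list
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {changed}, deque([changed])
    while queue:            # breadth-first traversal of the links
        for nxt in adj.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(changed)
    return seen

# Touch "Customer" and list everything transitively impacted.
print(sorted(impacted_by("Customer", binary_relations)))
```

Because every model is binary, the whole analysis reduces to graph reachability; no composite model has to exist in advance.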

That application has been around for a long time; I'm not telling you something that nobody knows how to do. But there are single-variable models and binary models. Building a binary model -- is this related to that? is this related to this? -- takes two things at a time. The models are pretty simple. I'm not going to take more time on that, but then I would segue to the current TOGAF methodology.

I would come out of there and go into the current methodology, making enhancements incrementally as practical. You have been improving TOGAF forever, from the very beginning. I would just start to improve it based upon what we've learned with the diagnostic and prescription enhancements.

The transformation

Then, in Phase III, I would orchestrate the transformation from the TOGAF artifacts to the implementation, the lower rows. I would orchestrate that transformation. So you have the transformation from the strategy up here to the concepts, the logic, the physics, the tooling configuration.

I would orchestrate that, and I would extend the TOGAF governance process. TOGAF's governance process is really strong; I would just take a hard look at it and elaborate it to manage the entire set of reification transformations. That's where I would take it.

Some of you may say, "That will take too long, or cost too much," or whatever. The argument might be against it, but I think that's where I would go -- principally because of the implication of changing the fundamental concept of Enterprise Architecture. The profound significance is this: it alters the concept of Enterprise Architecture from one of building models to one of solving general management problems.

Man, that would be really interesting. It buys the time for the experts to build out the complete Enterprise Architecture -- the thing-relationship-thing primitive models -- iteratively and incrementally. You don't have to do it all at once. General management problem by problem, iteratively and incrementally, you start building out a little more, adding to the primitives over time.

If we could change the perception of Enterprise Architecture to be one of solving general management problems, we would have no problem getting the resources and the time to do it.

Then, it builds significant credibility for the information-technology community, and I would submit we need all the help we can get. If we begin to be perceived to be the enterprise doctors, we would be perceived to be direct, not indirect. It wouldn't be an optional responsibility, but a mandatory one. Most importantly, it would position Enterprise Architecture to become a general management operational process, not simply an IT exercise. I think that's where you have to go.

If we could change the perception of Enterprise Architecture to be one of solving general management problems, we would have no problem getting the resources and the time to do whatever Enterprise Architecture we want to do. That valuation issue will tend to go away. I saw a presentation yesterday about the valuation. It was talking about the Internet of Things, and it was really a creative presentation. I really appreciated it a lot.

But if we can solve general management problems, you don't have to worry about valuation. I will say one more thing about valuation. The fundamental problem with architecture is that it doesn't save money in the current accounting period. It's not an expense; you don't make money or save money in the current accounting period. You are building an inventory of assets. What makes an asset different from an expense? How many times you use it. Use it more than once, and it's an asset. So you build the inventory of assets.

The assets don't save you money in the current accounting period, but they enable you to make money or save money in many accounting periods in the future. The problem with asset valuation is that accounting people in the US are precluded from putting values on assets. That's probably because there's not an absolute way to value the asset, because the value is derived from how many times you use it, or from what the market will pay for it at some point in time.

It's difficult to value assets, and it's really difficult to value intellectual assets. I would submit Enterprise Architecture is an intellectual asset, and we're just beginning to learn some things about that. But that issue turns out to go away. If you just solve general management problems, you don't have to worry about the value proposition.

I think I made it in an hour -- actually an hour and three minutes. I owe Allen three minutes now, and that's not too bad on my part. There will be a panel; we will have some discussion and answer any questions, and then there is also a workshop for anybody who cares to try to work with some of these things. I would be glad to do it. Thank you, Allen, and thank you guys for taking the time to listen. I appreciate it a lot.


Brown: We have some questions. We talked about professionalizing Enterprise Architecture. We both feel passionately about it, and about having these professionals as the enterprise doctors, as you say. The person to ask the questions is actually the CEO of the AEA, Steve Nunn. The AEA now has 44,000 members, and they are actually active as well, which is great. So, Steve, what have you got?

Steve Nunn: Unsurprisingly no shortage of questions and compliments on the presentation, John.

Here’s a long question, but bear with it. Given a composite of an enterprise, a methodology existed for its construction. Today, I have a million assets with individual change records associated with them. The EA methodology did not govern the maintenance after construction. What do you suggest to correct the EA to the Ontology?

Zachman: This is not atypical, by the way. I think that's where most of us live. I normally make the case that those of us who come from IT have been manufacturing the enterprise for the last 60 to 70 years. The enterprise was never engineered. We are manufacturers. We manufacture parts -- we don't manufacture the enterprise -- and the parts don't fit together.

So what do you do if you manufacture parts that don't fit together? You scrap and rework. There is no way to fix that problem after the fact. If you want the parts to fit together, you have to engineer them to fit together before you manufacture them. Once they're manufactured, you can't get them to fit together.


We're all sitting in kind of the same position. Somebody has to break the pattern. If you just keep on writing code, you're just going to get more parts. You're not going to change anything. You have to have a different paradigm. I was describing for you a different paradigm, and I was describing for you an engineering paradigm.

I would do just exactly what I said: I'd start with TOGAF. We already have this methodology -- very widely respected, very widely used. I would take the data-gathering portion of the methodology, and I would begin to populate the inventory of primitive assets. You don't have to have them all, but you have to begin. So you salvage whatever you can out of the TOGAF activity that you have at your disposal.

Once you do that, you're going to populate them with the primitives that are required to create the TOGAF composites right now, so we can produce whatever we are producing out of TOGAF right now. I would just start with something I know, something I have my hands on. I can start with TOGAF. I would start to populate the primitive artifacts and then reuse the primitives to create the composites.

So I would start there, and then begin to enhance that over time. I would have to enhance and elaborate the methodology. I gave you some thoughts about how I would enhance it, but in the meantime, once you start creating the architectural constructs, you have to orchestrate your way out of where you're at.

Migrating what exists

We don't have a blank sheet of paper when we start with our Enterprise Architecture. We already have an enterprise out there. You have to figure out a way to migrate out of the existing environment into the architectural environment. I am not going to tell you what I think the solution is without elaborating on how I would use it, but I would use the data-warehouse kind of concept. I create the architecture, then extract and transform the database out of the existing applications to populate the architectural environment.

I didn't learn this. I didn't figure out this all myself. People from Codec and I were sitting in the Codec Theater one time. They were saying, once we have the architected data, we know what the data is, and now we're going to rebuild all the transaction processing systems to update the data warehouse. Then, after we update the data warehouse, we're going to turn off the legacy system.

Over some period of time -- however long you want to take -- you just move it out little by little, transaction by transaction, into the architectural environment. If you're going to rebuild the transaction-processing systems to populate the data warehouse, then I would add the process specification, the distribution, and the other characteristics of architecture. That's the way I would orchestrate my migration out of the existing environment.

Brown: There is also a sense that came out of that question: the architecture, once it was done, was done, and then things changed afterwards. There was no concept that the architecture should be referenced any time you make a change to the instantiation, and updated accordingly.

Zachman: Allen, that’s a really important point. I’ll tell you where I learned how to manage change. I saw that at Boeing. It took me about 10 years to figure this out. How do you manage changes to Boeing 747s? Very carefully, right?


Brown: Yes.

Zachman: You don’t walk around a Boeing 747 with a hammer and screwdriver making changes. Forget that. You have an engineering administration. Who are they? They're managing the drawings, the functional specs, the bills of materials, and so on. You pull out the one you want to change, you change the artifact, and then you figure out the impact on the adjoining artifacts.

When you figure out the impact, you make changes to all those other artifacts. You don’t throw away the old version; you keep it. That's regulated for airplanes: you have to be able to trace the artifacts back to the last time the airplane flew successfully.

Once you change the artifact, you go to the shop for a particular change kit. You take the change kit out to the Boeing 747 and you put the change into the Boeing 747. If you manage change in that fashion, you minimize the time, disruption, and cost of change in the Boeing 747.
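The change-management discipline described here -- version every artifact, trace the impact across adjoining artifacts, and keep old versions for traceability -- can be sketched in a few lines. This is purely illustrative; the class, artifact names, and adjacency data are invented for the example:

```python
# A hypothetical engineering-administration repository: every artifact is
# versioned, adjacency records which artifacts a change may impact, and
# old versions are kept for traceability.
class ArtifactRepository:
    def __init__(self):
        self.versions = {}   # name -> list of versions (history preserved)
        self.adjacent = {}   # name -> set of adjoining artifact names

    def add(self, name, content, adjacent=()):
        self.versions[name] = [content]
        self.adjacent[name] = set(adjacent)

    def change(self, name, new_content):
        """Record a new version and report which artifacts to re-examine."""
        self.versions[name].append(new_content)  # old version is retained
        return sorted(self.adjacent[name])       # impact on adjoining artifacts

    def history(self, name):
        return list(self.versions[name])

repo = ArtifactRepository()
repo.add("wing-spar", "rev A", adjacent=["fuel-line", "wing-skin"])
repo.add("fuel-line", "rev A")
repo.add("wing-skin", "rev A")
impacted = repo.change("wing-spar", "rev B")
print(impacted)                   # adjoining artifacts to re-examine
print(repo.history("wing-spar"))  # both revisions retained for traceability
```

The point of the sketch is only the discipline: no change lands without a new version being recorded and the impact set being surfaced first.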

Every artifact precisely represents the Boeing 747 as it exists at this moment. And one thing people tend not to know: every Boeing 747 is unique. They are all different, and there is a set of these artifacts for every Boeing 747. You can trace each one back to its origin -- whatever they changed and flew since the last time. And as if the Boeing 747 were not complicated enough, these artifacts were on paper.

The first electronically designed airplane was the 777. Now you understand the reason I'm telling you this. If you really want to change the Enterprise Architecture before you change the enterprise, and you have a general management responsibility for Enterprise Architecture, this is a piece of cake.

Making changes

So, by the way, Ms. Vice President of Marketing, before you change your allocation of responsibility, your organization responsibility, come up and see me. We’ll change the repository first, and then you can change the allocation of responsibility.

Oh, by the way, Ms. Programmer, before you change a line of code in that program, you come up and make the changes in the repository first. Then you can change the line of code in the program. Before you change anything, you change the architecture first, and then you change the enterprise.

And, by the way, it’s dynamic, because as you continuously solve problems, you can keep populating the architecture with more primitive components. That's why this becomes really important: it becomes an operating responsibility for general management.

If they really understood what they have -- a knowledge base of everything they can possibly know about the enterprise -- they could change anything, and they'd have a great deal of freedom to do lots of things they haven’t even dreamed about doing before. That’s the really important idea: this becomes a general management responsibility, integrated into the enterprise operation.


Nunn: Next question. Have you considered what changes might be required to your framework to accommodate an ecosystem of multiple enterprises?

Zachman: That’s what I would call federated architecture. Some things you want in common across more than one enterprise, and some things you want to be provincial, if you will. Some things you make federal; some things you leave provincial. The problem we have now is that when you try to make things common, or federal, when they are not common, that’s where you get the hate and discontent. The framework is really helpful for thinking through what you want to make common and what you want to leave as a provincial artifact.

That’s the way you need to deal with that, and that would apply to any complex environment. In most every enterprise these days, there is more than one framework that you might want to populate, but then you have to understand what you want to be the same and what you want to leave different. That’s the way we would handle that.

Brown: So if you are an architect, you pull out the drawing for the entire urban landscape, you pull out the drawing for specific buildings, and you pull out the drawing for functions within a building -- a different framework for each.

Zachman: Actually this was implemented. I learned it from the Province of Ontario. There was a premier about 20 years ago who was pretty creative. We sorted all of the departments in the Province of Ontario into categories. You can call them clusters.

You had the social services cluster, the land cluster, the finance cluster. Then he put a super minister in charge of each cluster, and their role was to integrate -- to get rid of all the redundancy and make it as integrated as possible.

That was the process we were using. You have a federation within each cluster, and then you have a second level of federation up at the province level as well.
Common connector

Nunn: Do you envision a common connector from a given architecture development method, like TOGAF, DoDAF, FEA to the Zachman Framework?

Zachman: When we talk in the workshop, we'll get into that a little bit. If you have the primitive components, you just say which set you want. You want the TOGAF components? Click -- there is TOGAF. You want DoDAF? No problem; click -- there is DoDAF. You want a balanced scorecard? No problem; click -- there is a balanced scorecard. You want your profit-and-loss statement? Click -- there is the profit and loss.
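The click-a-composite idea can be sketched as a tiny program: a shared inventory of primitives, plus composite definitions that bind primitives together only when a view is requested. The primitive and composite names here are invented for illustration; they are not the actual contents of any framework's cells:

```python
# Illustrative only: a shared inventory of primitive components, from which
# different framework "composites" are assembled on demand.
PRIMITIVES = {
    "inventory": "what the enterprise has",
    "process": "how the enterprise transforms things",
    "distribution": "where the enterprise operates",
    "responsibility": "who is accountable",
    "timing": "when business cycles occur",
    "motivation": "why -- strategy and goals",
}

# Hypothetical composite definitions: each view is just a selection of
# primitives, bound together only when the view is requested ("click").
COMPOSITES = {
    "togaf_business": ["process", "responsibility", "motivation"],
    "dodaf_operational": ["process", "distribution", "timing"],
}

def build_composite(name: str) -> dict:
    """Assemble a composite dynamically by reusing shared primitives."""
    return {key: PRIMITIVES[key] for key in COMPOSITES[name]}

view = build_composite("togaf_business")
print(sorted(view))  # the primitives bound into this composite
```

Because nothing is bound until `build_composite` runs, changing one primitive changes every composite that reuses it -- which is the point of separating the independent variables.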

What you're doing is creating a composite dynamically -- whatever the composite is you want to look at. I was really impressed with Don's presentation yesterday. I was at Raytheon last week, and there was a presentation I had seen recently about hardware -- the price-performance improvements and the capabilities in hardware. What it was basically saying is that you'll be able to put big data -- all the structured data, all this data -- on a chip. And that chip will go into your processor. The guys at Raytheon said it's not a question of when you can do it; you can do it now.

So if you have big data on a chip, you get dynamically identified threats and opportunities. What do you think that's going to do to the decision cycle? It's going to make that decision cycle very short.

Because you have big data on a chip, and you can analyze that big data and find a threat or an opportunity -- something external or even internal -- the immediate question becomes: what are you going to change in the enterprise? Are you going to increase or decrease the inventory, increase or decrease the process transformations, increase or decrease the storage capacity of a node? What are you going to do to your enterprise?


So you have to make up your mind what you are going to do, real quickly. You'd like several alternatives? Okay, chief, here are three or four alternatives. Which one do you want to pick?

It's going to shorten the decision cycle dramatically. Dawn was frightening me yesterday. We're not talking about the sweet by and by. She was talking about stuff that is here now, and that's what the guys at Raytheon were telling me. This is here now.

I talked about big data before, and the fundamental question is, once you figure out something external or even internal, what are you going to do to your enterprise? Where is your Enterprise Architecture? What are you going to change?

The next question is: who is working on your architecture? Somebody had better be working on this, I'll tell you that. I don't think too many people have an idea of the sense of urgency we have here. You're not going to do this today. You have to start working on it, and you’ve got to eat this elephant bite by bite. It's not going to happen overnight.

Nunn: How can the Zachman Framework be applied to create an architecture description that can be implemented later on, without falling into a complex design that could be difficult to construct and maintain? And following on from that, how do you avoid those descriptions becoming out of date, since organizations change so quickly?

Manufacturing perspective

Zachman: Whoever posed that question is thinking about this from a manufacturing perspective -- thinking about it as a composite construct. But if you separate the independent variables and populate the primitive components, you don't bind anything together until you click the mouse, and you can change any primitive component anytime you want to.

You're not locked into anything. You can change with minimum time; you can change one variable without changing everything. The question is couched in terms of our classic understanding of Enterprise Architecture -- the big, monolithic, static thing that takes a long time and costs a lot of money. That's the wrong idea. Basically, you build this iteratively and incrementally, primitive by primitive, and then you can create the composites on the fly.

Basically, that's the approach I would take. You're not creating fixed, extremely complex implementations. That’s probably not the way you want to do it.


Nunn: Short question. Is business architecture part of Enterprise Architecture or something different?

Zachman: Well, in the context of my framework, some people say the business processes are the business architecture -- that would be column 2, row 2. Some people say, no, it’s actually column 6, row 1. Some people say it’s actually the composite of column 1 and column 2 at row 2.

The Chief Operating Officer of a utility I worked with -- this was years ago now -- basically said, "My DP people want to talk to me about data, process, and network, and I don't care about data, process, and network. I want to talk about allocation of responsibility, business cycles, and strategy. So I don’t want to talk about columns 1, 2, and 3. I only care about columns 4, 5, and 6."

I couldn't believe this guy said that. I knew the guy. You don’t care about the inventories, the process transformations, and the distribution structure? Are you kidding me -- in a utility? Come on. It is just unfathomable.

At some point in time, you're probably going to wish you had more and more of these primitives. Build them up iteratively and incrementally over some long period of time. There's not one way to do it; there are n different ways to do it, and some work better than others. Given that you’ve got a tested methodology, why not use it?

Brown: I think it depends on which one of the 176 different definitions of business architecture you use.

Zachman: Yes.

Business architects

Brown: In my definition, the people I spoke to in Australia and New Zealand had the title of business architect, and they quite clearly felt that they were part of Enterprise Architecture. But on the other side of things, some of the greatest business architects would be Bill Gates, Michael Dell, Steve Jobs, and Jack Roush.

Zachman: I was pontificating around the architectural idea and lost sight of the business architecture question. The question turns out to be which primitives you want to consider. If you say you want to open up new markets, then we’ve got to figure out whatever you are going to need -- what process, what location -- and that would create the composite you need for addressing whatever issues you have --

Brown: And that, too, is Enterprise Architecture.

Zachman: Yeah, right exactly.

Nunn: TOGAF’s primary short- and long-term guidance is achieved through principles. How would you propose to reconcile that with the idea of extending TOGAF’s framework and method with the Zachman Framework?


Zachman: The principles don’t go away. One thing is that when you define principles, they have a lifetime. Somebody was making that case yesterday at the presentation. There is a set of architectural principles. If you want flexibility, separate the independent variables -- that’s a good principle. Have a single point of control; have a single source of truth. Those tend to be principles that people would establish. Then take whatever principles anybody has in mind, figure out how they manifest themselves in the architectural structure -- in the framework structure and, in fact, the ontological construct -- and manage that. The governance system has to enforce the principles.

There is another principle: I would not change the enterprise without changing the artifact first. I would change the architecture before I change the enterprise. Here is another one: I wouldn't allow any programmer to spuriously create any line of code or data element or any kind of technology aggregation. You reuse what's in the primitives, and if you need something that’s not in a primitive, then fix the primitive. Don’t create something spurious just to get the code to run. That’s another principle.

There's probably an array of these, and off the top of my head there are a couple that I would be interested in, but they tend to derive from these ideas about architecture. I didn't invent any of this. I learned it by looking at other people, and I saw the patterns. All I do is put enterprise names on the same architectural constructs as any other object. I learned about migration, about federation, and about all these other things by managing change and by looking at what other people did.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion on cybersecurity standards for safer supply chains. And another earlier discussion from the event focused on synergies among major Enterprise Architecture frameworks.

Copyright The Open Group and Interarbor Solutions, LLC, 2005-2015. All rights reserved.

You may also be interested in:

Tags:  Allen Brown  BriefingsDirect  Dana Gardner  enterprise architect  enterprise architecture  Interarbor Solutions  John Zachman  Steve Nunn  The Open Group  TOGAF  Zachman Framework 

Share |
PermalinkComments (0)

GoodData analytics developers on what they look for in a big data platform

Posted By Dana L Gardner, Tuesday, April 14, 2015

This BriefingsDirect big data innovation discussion examines how GoodData created a business intelligence (BI)-as-a-service capability across multiple industries that enables users to take advantage of both big-data performance as well as cloud delivery efficiencies. They had to choose the best data warehouse infrastructure to fit their scale and cloud requirements, and they ended up partnering with HP Vertica.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about their choice process for the best big data in the cloud platform, we're joined by Tomas Jirotka, Product Manager at GoodData; Eamon O'Neill, Director of Product Management at HP Vertica; and Karel Jakubec, Software Engineer at GoodData. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a bit about GoodData and why you've decided that the cloud model, data warehouses, and BI as a service are the right fit for this marketplace?

Jirotka: GoodData was founded eight years ago, and from the beginning, it's been developed as a cloud company. We provide software as a service (SaaS). We allow our customers to leverage their data and not worry about hardware/software installations. We just provide them a great service. Their experience is seamless, and our customers can simply enjoy the product.


We provide a platform -- and the platform is very flexible. So it's possible to have any type of data, and create insights. You can analyze data coming from marketing, sales, or manufacturing divisions -- no matter in which industry you are.

Gardner: If I'm an enterprise and I want to do BI, why should I use your services rather than build my own data center? What's the advantage?

Cheaper solution

Jirotka: First of all, our solution is cheaper. We have a multi-tenant environment. So the customers effectively share the resources we provide them. And, of course, we have experience and knowledge of the industry. This is very helpful when you're a beginner in BI.

Gardner: What have been some of the top requirements you’ve had as you've gone about creating your BI services in the cloud?


Jakubec: The priority was to be able to scale, as our customers are coming in with bigger and bigger datasets. That's the reason we need technologies like HP Vertica, which scales very well by just adding nodes to the cluster. Without this ability, you cannot implement solutions for the biggest customers. Even if you're running the biggest machines on the market, they're still not able to finish the computation in a reasonable time.

Gardner: In addition to scale and cost, you need to also be very adept at a variety of different connection capabilities, APIs, different data sets, native data, and that sort of thing.

Jirotka: Exactly. Agility, in this sense, is really crucial.

Gardner: How long have you been using Vertica, and how long have you been using BI through Vertica for a variety of these platform services?

Working with Vertica

Gardner: What were some of the driving requirements for changing from where you were before?

Become a member of MyVertica
Register now
And gain access to the Free HP Vertica Community Edition.

Jirotka: We began moving some of our customers with the largest data marts to Vertica in 2013. The most important factor was performance. It's no secret that we also have Postgres in our platform. Postgres simply doesn’t support big data. So we chose Vertica to have a solution that is scalable up to terabytes of data.

Gardner: What else is creating excitement about Vertica?


O’Neill: Far and away, the most exciting is about real-time personalized analytics. This allows GoodData to show a new kind of BI in the cloud. A new feature we released in our 7.1 release is called Live Aggregate Projections. It's for telling you about what’s going on in your electric smart meter, that FitBit that you're wearing on your wrist, or even your cell-phone plan or personal finances.

A few years ago, Vertica was blazing fast, telling you what a million people are doing right now and looking for patterns in the data, but it wasn’t as fast in telling you about my data. So we've changed that.

With this new feature, Live Aggregate Projections, you can actually get blazing fast analytics on discrete data. That discrete data is data about one individual or one device. It could be that a cell phone company wants to do analytics on one particular cell phone tower or one meter.

That’s very new and is going to open up a whole new kind of dashboarding for GoodData in the cloud. People are going to now get the sub-second response to see changes in their power consumption, what was the longest phone call they made this week, the shortest phone call they made today, or how often do they go over their data roaming charges. They'll get real-time alerts about these kinds of things.


When that was introduced, it was standing room only. They were showing some great stats from power meters on houses in Europe. The data were fed into Vertica, and queries that last year were taking Vertica one-and-a-half seconds now take 0.2 seconds. They were looking at 25 million meters in the space of a few minutes. This is going to open up a whole new kind of dashboard for GoodData, and new kinds of customers.

Gardner: Tomas, does this sound like something your customers are interested in -- maybe retail? The Internet of Things is also becoming prominent, with machine-to-machine data interactions. How do you view what we've just heard Eamon describe? How interesting is it?

More important

Jirotka: It sounds really good. Real-time, or near-real-time, analytics is becoming a more and more important topic. We hear it from our customers as well. So we should definitely think about how to integrate this feature into the platform.

Jakubec: Once we introduce Vertica 7.1 to our platform, it will definitely be one of the features we focus on. We have developed a quite complex caching mechanism for intermediate results, and it works like a charm for PostgreSQL, but unfortunately it doesn't perform as well for Vertica. We believe that features like Live Aggregate Projections will improve this performance.
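GoodData's actual caching mechanism isn't detailed in the discussion. As a rough sketch of the general idea -- caching intermediate results keyed by normalized query text, with a callable standing in for the database backend -- it might look like this:

```python
# Illustrative sketch only, not GoodData's implementation: a cache for
# intermediate query results, keyed by the normalized query text.
class ResultCache:
    def __init__(self, backend):
        self.backend = backend   # callable that actually runs the query
        self.store = {}
        self.hits = 0

    def query(self, sql):
        key = " ".join(sql.split()).lower()  # normalize whitespace and case
        if key in self.store:
            self.hits += 1                   # served from cache
        else:
            self.store[key] = self.backend(sql)
        return self.store[key]

calls = []
def fake_backend(sql):
    """Stand-in for the database; records how often it is actually hit."""
    calls.append(sql)
    return [("row", 1)]

cache = ResultCache(fake_backend)
cache.query("SELECT 1")
cache.query("select  1")       # normalizes to the same key: cache hit
print(len(calls), cache.hits)  # backend called once, one cache hit
```

The trade-off Jakubec alludes to is that such a cache only pays off when the backend's query latency dominates; when the database itself answers in sub-second time, cache maintenance can stop being worth it.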

Gardner: So it's interesting. As HP Vertica comes out with new features, that’s something that you can productize, take out to the market, and then find new needs that you could then take back to Vertica. Is there a feedback loop? Do you feel like this is a partnership where you're displaying your knowledge from the market that helps them technically create new requirements?

Jakubec: Definitely, it's a partnership -- I would say a complete cycle. A new feature is released, we provide feedback, and they have direction to build another feature or improve the current one. It works very similarly with some of our customers.


O’Neill: It happens at a deeper level too. Karel’s coworkers flew over from Brno last year, to our office in Cambridge, Massachusetts and hung out for a couple of days, exchanging design ideas. So we learned from them as well.

They had done some things around multi-tenancy where they were ahead of us, and they were able to tell us how Vertica performed when they put extra schemas on a catalog. We learned from that, and we could give them advice about it. Engineer-to-engineer exchanges happen pretty often in the conference rooms.

Gardner: Eamon, were there any other specific features that are popping out in terms of interest?

O’Neill: Definitely our SQL on Hadoop enhancements. For a couple of years now, we've been enabling people to do BI on top of Hadoop. We had various connectors, but we have made it even faster and cheaper now. In this most recent 7.1 release, you can install Vertica on your Hadoop cluster. So you no longer have to maintain dedicated hardware for Vertica, and you don’t have to make copies of the data.

The message is that you can now analyze your data, where it is and as it is, without converting from the Hadoop format or a duplication. That’s going to save companies a lot of money. Now, what we've done is brought the most sophisticated SQL on Hadoop to people without duplication of data.

Using Hadoop

Jirotka: We employ Hadoop in our platform, too. There are some ETL scripts, but we've used it in the traditional form of MapReduce jobs for a long time. This is a really costly and inefficient approach, because it takes a lot of time to develop and debug. So we may think about using Vertica directly with Hadoop. This would dramatically decrease both the time to deliver to the customer and the running time of the scripts.


Gardner: Eamon, any other issues that come to mind in terms of prominence among developers?

O’Neill: Last year, we had our Customer Advisory Board, where I got to ask them about those things. Security came to the forefront again and again. Our new release has new features around data-access control.

We now make it easy for them to say that, for example, Karel can access all the columns in a table, but I can only access a subset of them. Previously, the developers could do this with Vertica, but they had to maintain SQL views and they didn’t like that. Now it's done centrally.
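This is not Vertica's actual access-control syntax. As a minimal sketch of the idea -- central column-level grants replacing per-user SQL views -- it could look like this, with user names and columns invented for the example:

```python
# Illustrative only: a central map of column-level grants, applied when
# rows are read, instead of maintaining one SQL view per user.
COLUMN_GRANTS = {
    "karel": {"id", "name", "salary", "ssn"},  # full access
    "eamon": {"id", "name"},                   # subset only
}

def restrict(user, row):
    """Return only the columns this user is allowed to see."""
    allowed = COLUMN_GRANTS.get(user, set())
    return {col: val for col, val in row.items() if col in allowed}

row = {"id": 7, "name": "Alice", "salary": 90000, "ssn": "xxx"}
print(restrict("eamon", row))  # salary and ssn are filtered out
```

The advantage O'Neill describes is that the policy lives in one place: granting or revoking a column means editing one central entry, not rewriting a view for every affected user.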


They like the data-access control improvements, and they're saying to just keep it up. They want more encryption at rest, and they want more integration. They particularly stress that they want integration with the security policies in their other applications outside the database. They don’t want to have to maintain security in 15 places. They'd like Vertica to help them pull that together.

Gardner: Any thoughts about security, governance and granularity of access control?

Jakubec: Any simplification of security and access controls is great news. Restricting some users' access to just a subset of values or columns is a very common use case for many customers. We already have a mechanism to do it, but as Eamon said, it involves maintaining views or complex filtering. If it's supported by Vertica directly, that’s great. I didn’t know that before, and I hope we can use it.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

Tags:  big data  BriefingsDirect  Dana Gardner  data analytics  Eamon O'Neill  GoodData  HP  HP Big Data  HPDiscover  Interarbor Solutions  Karel Jakubec  Tomas Jirotka 

Share |
PermalinkComments (0)

Source Refrigeration selects agile mobile platform Kony for its large in-field workforce

Posted By Dana L Gardner, Friday, April 10, 2015

The next BriefingsDirect enterprise mobile strategy discussion comes to you directly from the recent Kony World 2015 Conference in Orlando.

This series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Our next innovator interview focuses on how Source Refrigeration and HVAC has been extending the productivity of its workforce, much of it in the field, through the use of innovative mobile applications and services.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

We'll delve in to how Source Refrigeration has created a boundaryless enterprise and reaped the rewards of Agile processes and the ability to extend data and intelligence to where it’s needed most.

To learn how their successful mobile journey has unfolded, we welcome Hal Kolp, Vice President of Information Technology at Source Refrigeration and HVAC in Anaheim, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: It’s my understanding that you have something on the order of several hundred field-based service and installation experts serving the needs of 2,500 or more customers nationwide. Tell us a little bit about why mobility is essential for you and how this has created better efficiency and innovation for you?

Kolp: Source started to explore mobility back in 2006. I was tasked with a project to figure out if it made sense to take our service organization, which was driven by paper, and convert it to an electronic form of a service ticket.


After looking at the market itself and at the technology for cellular telephones back in 2006, as well as data plans and what was available, we came to the conclusion that it did make sense. So we started a project to make life easier for our service technicians and our back office billers, so that we would have information in real time and we'd speed up our billing process.

At that time, the goals were pretty simple. They were to eliminate the paper in the field, shorten our billing cycle from 28 days to 3 days, and take all of the material, labor, and asset information and put it into the system as quickly as possible, so we could give our customers better information about the equipment, how they are performing, and total cost of ownership (TCO).

But over time, things change. In our service organization then, we had 275 guys. Today, we have 600. So we've grown substantially, and our data is quite a bit better. We also use mobility on the construction side of our business, where we're installing new refrigeration equipment or HVAC equipment into large supermarket chains around the country.

Our construction managers and foremen live their lives on their tablets. They know the status of their job, they know their cost, they're looking at labor, they're doing safety reports and daily turnover reports. Anyone in our office can see pictures from any job site. They can look at the current status of a job, and this is all done over the cellular network. The business has really evolved.

Gardner: It’s interesting that you had the foresight to get your systems of record into paperless mode and were ready to extend that information to the edge, but then also be able to accept data and information from the edge to then augment and improve on the systems of record. One benefits the other, or there is a symbiosis or virtuous adoption cycle. What have been some of the business benefits of doing it that way?

Kolp: There are simple benefits on the service side. First of all, the billing cycle changed dramatically, and that generated a huge amount of cash. It’s a one-time win, whatever you would bill between 3 days and 28 days. All of that revenue came in, and there was this huge influx of cash in the beginning. That actually paid for the entire project. Just the generation of that cash was enough to more than compensate for all the software development and all the devices. So that was a big win.
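The cash effect Kolp describes is straightforward working-capital arithmetic: shortening the billing cycle from 28 days to 3 days releases roughly 25 days' worth of receivables as a one-time inflow. A minimal sketch, with illustrative figures that are assumptions and not Source Refrigeration's actual numbers:

```python
# Rough sketch of the one-time cash effect of shortening a billing cycle.
# The daily-billings figure below is an assumption for illustration only.

def one_time_cash_freed(daily_billings: float, old_cycle_days: int, new_cycle_days: int) -> float:
    """Receivables released when invoices go out (old - new) days sooner."""
    return daily_billings * (old_cycle_days - new_cycle_days)

# Example: $50,000 billed per day, cycle cut from 28 days to 3 days.
freed = one_time_cash_freed(50_000, 28, 3)
print(f"One-time cash influx: ${freed:,.0f}")  # One-time cash influx: $1,250,000
```

At realistic billing volumes, that one-time release can indeed cover a mobility project's software and device costs, which is the dynamic Kolp credits for paying for the entire rollout.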

But then we streamlined billing. Instead of a biller looking at a piece of paper and entering a time ticket, it was done automatically. Instead of looking at a piece of paper and then doing an inventory transfer to put it on a job, that was eliminated. Technicians’ comments never used to make it into our system; they stayed on paper, and we just sent a photocopy of the document to the customer.

Today, within 30 seconds of a technician completing a work order, it’s uploaded to the system and generated into PDF documents where necessary. All the purchase order and work order information is entered into the system automatically, and an acknowledgement of the work order is sent to our customer without any human intervention. It just happens; it's part of our daily business.

That’s a huge win for the business. It also gives you data for things you can start to measure yourself on. We have a whole series of key performance indicators (KPIs) and dashboards built to help our service managers and regional directors understand what’s going on in their business.

Technician efficiency

Do we have customers where we're spending a lot of time in their store servicing them? That means there is something wrong. Let’s see if we can solve our customer’s problems. We look at the efficiency of our technicians.

We look at the efficiency of drive times. That electronic data even led us into automatic dispatching systems. We have computers that look at the urgency of the call, the location of the call, and the skills necessary to do that service request. It automatically decides which technician to send and when to send them. It takes a work order and dispatches a specific technician on it.
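The automatic dispatching Kolp describes, matching a call's urgency, location, and required skills to a technician, can be sketched as a simple scoring rule. This is a simplified illustration only; the field names, radii, and selection logic are assumptions, not Source Refrigeration's actual system:

```python
# Hypothetical sketch of urgency/skills/distance-based auto-dispatch.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Technician:
    name: str
    skills: set
    distance_miles: float  # distance from the job site

@dataclass
class WorkOrder:
    urgency: int           # 1 (routine) to 5 (emergency)
    required_skills: set

def pick_technician(order: WorkOrder, techs: list) -> Optional[Technician]:
    """Choose the closest qualified technician; urgency tightens the search radius."""
    qualified = [t for t in techs if order.required_skills <= t.skills]
    if not qualified:
        return None
    # Emergencies only consider nearby techs; routine calls can travel farther.
    max_radius = 20 if order.urgency >= 4 else 60
    in_range = [t for t in qualified if t.distance_miles <= max_radius] or qualified
    return min(in_range, key=lambda t: t.distance_miles)

techs = [
    Technician("Ana", {"refrigeration"}, 12.0),
    Technician("Ben", {"refrigeration", "hvac"}, 35.0),
]
order = WorkOrder(urgency=5, required_skills={"refrigeration"})
print(pick_technician(order, techs).name)  # Ana
```

A production dispatcher would fold in drive-time estimates rather than straight-line distance, plus schedules and parts availability, but the skill-filter-then-nearest shape is the core idea.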

Gardner: So you've become data-driven and then far more intelligent, responsive, and agile as a result. Tell me how you've been able to achieve that without getting bogged down in an application development cycle that can take a long time, or finding yourself in a waterfall-type affair, where the requirements shift rapidly, and by the time you finish a product, it’s obsolete.

How have you been able to change your application development for your mobile applications in a way that keeps up with these business requirements?

Kolp: We've worked on three different mobile platforms. The claim in the beginning was to develop once and just update and move forward. That didn’t really work out so well on the first couple of platforms. The platforms became obsolete, and we essentially had to rewrite the application on to a new platform for which the claim was that it was going to survive.

This last year, we converted to the Kony platform, and all indications so far are that that platform is going to be great for us, because we've done a whole bunch of upgrades in the last 12 months on the platform. We're moving, and our application is migrating very quickly.

So things are very good on that side and in our development process. When we were building our new application initially, we were doing two builds a week. Every couple of days we'd do a little sprint. We don’t really call them sprints, but essentially each one was a sprint to add functionality. We'd go into a quick testing cycle, and while we were testing, we had folks adding new functionality and fixing bugs. Then, we'd do another release.

At the current stage, our release cadence in production really depends on the needs of the business. Last week, we had a new release, and this week we're having another as we fix small bugs or make enhancements to the product that came up during our initial rollout. It’s not that difficult to roll out a new version.

We send an alert. The text says that they have got a new version. They complete the work order that they're on, they perform an update, and they're back in business again. So it's pretty simple.

Field input

Gardner: So it's a very agile, iterative, easily adaptive type of development infrastructure. What about the input from the people in the field? Another aspect of agile development isn’t just the processes for the development itself, but being able to get more people involved in deciding features and functions, rather than forcing the developers to read minds.

Has that crept into your process? Are you able to take either a business analyst or practitioner in the field and allow them to have the input that then creates better apps and better processes?

Kolp: In our latest-generation application, we made tremendous changes in the user interface to make it easier for the technicians to do their job, so they don't have to think about anything. If they need to do something, they know what they have to do. It’s right in their face, in other words. We use color cues on screens: if something is required, it’s always red; if an input field is optional, it’s in blue.

We also built a little mini application, a web app, that's used by technicians for frequently asked questions (FAQs). If they have got some questions about how this application works, they can look at the FAQs. They can also submit a request for enhancements directly from the page. So we're getting requests from the field.

If they have a question about the application, we can take that question and turn it into a new FAQ page, response, or new question that people can click on and learn. We're trying to make the application to be more driven by the field and less by managers in the back office.

Gardner: Are there any metrics yet that would indicate an improvement in the use of the apps, based on this improved user interface and user experience? Is there any way to say the better we make it, the more they use it; the more they use it, the better the business results?

Kolp: We're in early stages of our rollout. In a couple of weeks we'll have about 200 of our 600 guys on the new application, and the guys noticed a few things. Number one, they believe the application is much more responsive to them. It’s just fast. Our application happens to be on iOS. Things happen quickly because of the processor and memory. So that’s really good for them.

The other thing they notice is that if they're looking at assets and need to find something in the asset, look up a part, or do anything else, we've added search capability that makes it brain-dead simple to find what they're looking for. They can use their camera as a barcode scanner within our application. It’s easy to attach pictures.

What they find is that we've made it easier for them to add information and document their call. They have a much greater tendency to add information than they did before. For example, if they're in their work order notes, which for us is a summary, they can just talk. We use voice to text, and that will convert it. If they choose to type, they can type, but many of the guys really like the voice to text, because they have big fingers and typing on the screen is a little bit harder for them.

What's of interest?

Gardner: We are here at Kony World, Hal. We've heard about solution ecosystems, vertical industries, the Visualizer update, and some cloud interactions for developers. Did anything jump out at you that might be of particular interest for the coming year?

Kolp: I'm very interested in Visualizer 2.0. It appears to be a huge improvement over the original version. We use third-party development. In our case, we used somebody else’s front-end design tool for our project, but I really like the ability to be able to take our project and then use it with Visualizer 2.0, so that we can develop the screens and the flow that we want and hand it off to the developers. They can hook it up to the back end and go.

I just like having the ability to have that control, and now we've done the heavy lifting. For the most part, understanding your data, data flow or the flow of the application is usually where you spend quite a bit more time. For us to be able to do that ourselves is much better than writing on napkins or using PowerPoint or Visio to generate screens or some other application.

It’s nice because ultimately we will be able to go use Visualizer, push it into the application, take the application, push it back into Visualizer, make more changes, and go back and forth. I see that as a huge advantage. That’s one thing I took from the show.

Gardner: With this journey that you've been on since 2006, you’ve gone quite a way. Is there anything you could advise others who are perhaps just beginning in extending their enterprise to that mobile edge, finding the ways to engage with the people in the field that will get them to be adding information, taking more intelligence back from the apps into their work? What might you do with 20-20 hindsight and then relate that to people just starting?

Kolp: There are a couple of things that I’ll point out. There was a large reluctance for people to say that this would actually work. When your business says that you can't mobilize some process, it's probably not true. There's this resistance to change that's natural to everyone.

Our technicians today, who have been on mobile applications, hate to be on paper. They don't want to have anything to do with paper, because it's harder for them. They have more work to do. They have to collect the paper, shove the paper in an envelope, or hand it off to someone to do things. So they don’t like it.

The other thing you should consider is what happens when a device breaks. All devices will break at some point for some reason. Look at how those devices are going to get replaced. We operate in 20 states. You can't depend upon the home office to be able to rush out a replacement device to your field workers in real time. We looked pretty hard at all kinds of different methods to reduce the downtime for guys in the field.

You should look at that. That’s really important if the device is being used all day, every day for a field worker. That’s their primary communication method.

Simpler is better

The other thing I could say is, “simpler is better.” Don't make an application where you have to type in a tremendous amount of data. Make data entry as easy as possible via taps or predefined fields.

Think about your entire process front to back, and don't hesitate to change the way you gather information today to the way you want to in the future. Don't just take a paper form and automate it, because that isn't the way your field worker thinks. You need to generate the new flow of information so that it fits on whatever size screen you want. It can't be a spreadsheet or a bunch of checkboxes, because that doesn't necessarily suit the tool you're using to drive the information gathering.

Spend a lot of time upfront designing screens and figuring out how the process should work. If you do that, you'll meet with very little pushback from the field once they get it and actually use it. I would communicate with the field regularly if you're developing and tell them what's going on, so that they are not blind-sided by something new.

I'd work closely with the field in designing the application. I'd also involve anybody who touches that data. In our case, that's service managers. We work with billers, inventory control, purchasing people, and timecards. All of those were pieces that our applications touch. So people from the business were involved, even people from finance, because we're making financial transactions in the enterprise resource planning (ERP) system.

So get all those people involved and make sure that they're in agreement with what you're doing. Make sure that you test thoroughly and that everybody signs off together at the end. The simpler you can make your application, the faster you can roll it out, and then just enhance, enhance, enhance.

If you're starting something new, you can roll out a minimal application and then add features. If you're replacing an existing application, it's much harder to do that; you'll have to recreate all of the functionality, because the business typically doesn't want to lose functionality.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: Kony, Inc.

You may also be interested in:

Tags:  BriefingsDirect  Dana Gardner  Hal Kolp  Interarbor Solutions  Kony  Kony World  Source Refrigeration 


Ariba elevates business user experience with improved mobile apps and interfaces

Posted By Dana L Gardner, Thursday, April 09, 2015

LAS VEGAS -- Ariba today announced innovations to its cloud-based applications and global business network designed to deliver a personal, contextual and increasingly mobile-first user experience for business applications consumers.

Among the announcements are a newly designed user interface (UI) and a mobile app based on the SAP Fiori approach, both due to be available this summer, for Ariba's core procurement, spend management, and other line of business apps. Ariba, an SAP company, made the announcements here at the Ariba LIVE conference in Las Vegas.

"This gives us a cleaner approach, more tiles, and a Fiori-like user experience," said Chris Haydon, Ariba Senior Vice President of Product Management. "Users see all the business network elements in one dashboard, with a native UI for a common, simple experience for Ariba apps upstream and down."

The new UI offers an intuitive experience that simplifies task and process execution for all users – from frontline requisitioners to strategic sourcing professionals, said Ariba. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

The goal is to make the business users want to conduct their B2B processes and transactions using these new interfaces and experiences, in effect having them demand them of their employers. User-driven productivity is therefore a new go-to market for Ariba, with simplicity and everywhere-enabled interactions with processes a growing differentiator.

"We want to make it easier to use the central procurement department, rather than to go around it," said Cirque du Soleil Supply Chain Senior Director Nadia Malek. Ariba Sourcing, she said, helps Montreal-based Cirque standardize to keep all of its many shows going around the world. And by having all the Ariba apps funnel into SAP, they can keep costs managed, on a pay-as-you-go basis and still keep all the end users' needs met and controlled.

In another nod to consumer-level ease in business, B2B enterprise buying on Ariba increasingly looks like an Amazon consumer online shopping experience, with one-click to procure from approved catalogs of items from approved suppliers. Search-driven navigation also makes the experience powerful while simple, said Haydon.

Behind the pretty pictures is an increasingly powerful network and platform, built on SAP HANA. Network-centric and activity-context-aware apps allow Ariba to simplify processes across the cloud, on-premises, and partner ecosystem continuum for its global users. The interface serves up only what the users need at that time, and is context aware to the user's role and needs.

"The future of commerce means processes across multiple business networks, on any device, with shared insights by all," said Tim Minahan, Senior Vice President at Ariba.

Scale and reach

The Ariba Network supports these apps and services at new levels of scale and reach as well. Every minute the Ariba Network adds a new supplier, and every day it fields 1.5 million catalog searches while generating $1.8 billion in commerce, said Ariba.

"Business strong, consumer simple" is the concept guiding Ariba's user experience development, said new incoming Ariba President Alex Atzberger. And SAP is increasingly focusing on far-reaching processes among and between companies to better control total spend and innovate around the analytics of such activities, he said.

Some 1.7 million companies are now connected by Ariba, more than for Alibaba, eBay and Amazon combined, said Atzberger.

Ariba is also in the process of integrating with SAP HANA and other recent SAP acquisitions, including Concur and Fieldglass. Haydon emphasized that Ariba is striving to make those integrations invisible to users.

"Ariba is leveraging SAP HANA to integrate Concur, S4HANA, and Fieldglass, with no middleware needed," said Haydon.

The scale and sophistication of the Network and its data are also enabling new levels of automated risk management for Ariba and SAP users. The Hackett Group says more than half of the companies surveyed recently see reducing supply risk as a key initiative.

Data-driven risk management is increasingly being embedded across supply chain and procurement apps and processes, says Ariba. Now, the risk of businesses having forced labor or other illegal labor practices inside their supply chain is being uncovered.

Made in a Free World

Also at the conference here, Ariba announced an alliance with San Francisco-based non-profit Made in a Free World to help eliminate modern-day slavery from the supply chain. Ariba has donated part of the Ariba LIVE registrations to help fund further development of Made in a Free World's supply chain risk analysis tool, which improves a business's visibility into its supply chain.

Also at Ariba LIVE, the company presented T-Mobile with its 2015 Network Innovation Award. T-Mobile was an early adopter of Ariba Pay, a cloud-based solution designed to digitize the "pay" in "procure-to-pay." Prior to Ariba Pay, the payment process was largely paper-based, making it costly and inefficient.

“Emerging payment technologies are certainly disruptive,” said Atzberger. “But as T-Mobile proves, disruption fuels innovation. And innovation drives advantage. And for this, we are pleased to recognize the company with the 2015 Network Innovation Award.”

But the mobile enablement and so-called "total user experience" improvements seemed the most welcomed by the audience. The experience factor may well prove to be where Ariba and SAP can outpace their competitors and fundamentally change how enterprises use business services.

For example, for all Ariba users -- from technicians on the plant floor to sales people in client meetings -- the company now provides a mobile app, Ariba Mobile, which is available from the Apple App Store or Google Play. With the app, users can quickly and easily view, track, and act on requisitions from their mobile devices, transforming the speed and efficiency with which work gets done. Ariba’s approach is not just about replicating existing user interactions onto mobile devices, but providing continuity between desktop interactions and working outside the office.

Easy access

“The Ariba Mobile app provides our requisition approvers with easy access to requisitions anywhere, anytime and gives them the information they need to make spend decisions while keeping up with global demand,” said Alice Hillary, P2P Enablement Lead at John Wiley and Sons. “Using the app, approvers can view their requests in a sleek interface and access details in an easier-to-read format than general emails. The feature has also helped us to address security concerns with approvals done through email.” 

And as part of planned innovations, Ariba will provide access to templates, best practices, videos and more within many of its applications, beginning with its Seller solutions. Users will also be able to tap directly into the Ariba Exchange Community, enabling collaboration within the context of their apps with professionals who have the expertise, experience, and knowledge to help them execute a process such as responding to an RFP or creating an auction.

You may also be interested in:

Tags:  Ariba  Ariba LIVE  Ariba Mobile  Ariba Network  BriefingsDirect  Dana Gardner  Interarbor Solutions  SAP  SAP Fiori 
