Dana Gardner's BriefingsDirect for Connect.
Longtime IT industry analyst Dana Gardner is a creative thought leader on enterprise software, SOA, cloud-based strategies, and IT architecture strategies. He is a prolific blogger, podcaster and Twitterer. Follow him at http://twitter.com/Dana_Gardner.

 


IT operations modernization helps energy powerhouse Exelon acquire businesses

Posted By Dana L Gardner, Wednesday, March 25, 2015

This next BriefingsDirect IT innovation discussion examines how Exelon Corporation, based in Chicago, employs technology and process improvements not only to optimize its IT operations but also to help manage a merger-and-acquisition transition and to bring outsourced IT operations back in-house.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how this leading US energy provider, with a family of companies having $23.5 billion in annual revenue, accomplishes these goals, we're joined by Jason Thomas, Manager of Service, Asset and Release Management at Exelon. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I gave a brief overview of Exelon, but tell us a little bit more. It's quite a large organization that you're involved with.

Thomas: We are vast and expansive. We have a large nuclear fleet, around 40-odd nuclear power plants, and three utilities: ComEd in Chicago and the Illinois area; PECO out of Philadelphia; and BGE in Baltimore.

So we have large urban utilities. We also have a large retail presence with the Constellation brand and the sale of power both to corporations and to end users. There's a lot that we do in the utility space, and there are also elements of commodity trading, trading power in these markets.

Gardner: I imagine it must be quite a large IT organization to support all that?

Thomas: There are 1,200 to 1,300 IT employees across the company.


Gardner: Tell us about some of the challenges that you've been facing in managing your IT operations and making them more efficient. And, of course, we'd like to hear more about the merger between Constellation and Exelon back in 2012.

Merger is a challenge

Thomas: The biggest challenge is the merger. Obviously, our scale and the number of, for lack of a better word, things that we had to monitor, be aware of, and know about vastly increased. So we had to address that.


A lot of our effort during and after the merger went into bringing everything into one standard monitoring platform, extending that monitoring out, and leveraging the Business Service Management (BSM) suite of products and the Universal Configuration Management Database (UCMDB).

Then there was a lot around consolidating asset management. In early 2013, we moved to Asset Manager as our asset-management platform of choice, consolidating data from Exelon's existing tool, Cergus CA Argis, into Asset Manager. That supported a move to new IT billing driven out of the data in Asset Manager, leveraging some of the executive scorecard and financial management pieces to make that happen.

There was also a large effort through 2013 to move the company to a standardized platform to support our service desk, incident management, and our service catalog for end users. But a lot of this was driven last year by the in-sourcing of our IT operations, which had been outsourced to Computer Sciences Corporation.

This was basically to realize savings to the company of $12 million to $15 million annually from the management of that contract, and also to move both the management and the expertise in-house and leverage a lot of the processes that we had built up and that had grown through the company as a whole.

Gardner: So knowing yourself well in terms of your IT infrastructure and all its elements is super important, and bringing an in-sourcing transition into the picture involves quite a bit of complexity.


What do you get when you do this well? Is there a sense of better control, better security, or culture? What is it that rises to the top of your mind when you know that you have your IT service management (ITSM) in order, when you have your assets and configuration management data in order? Is it sleeping better at night? Is it a sense of destiny you have fulfilled -- or what?

Thomas: Sleeping better at night. There is an element of that, but there's also sometimes the aspect of, "Now what's next?" So, part of it is that there's an evolutionary aspect too. We've gotten everything in one place. We're leveraging some of the integrations, but then what’s next?

It's more restful. It's now deciding how we better position ourselves to show the value of these platforms. Obviously, there's a clear monetary value in what we did to in-source, but now how do we show the business the value of what we have done? Moving to a common set of tools helps to get there. You've leveled the playing field and you have that common set of tools that you're going to drive to take you to the next level.

Gardner: What might that next level be? Is it a cloud transition? Is it more of a hybrid sourcing for IT? Is this enabling you to take advantage of the different devices in terms of mobile? Where does it go?

Automation and cloud

Thomas: A lot of it is really around automation, the intermediate step around cloud. We've looked at cloud. We do have areas where the company has leveraged it. IT is still trying to wrap their heads around how we do it, and then also how we expose that to the rest of the organization.

But the steps we've taken around automation are very key to leaner IT operations, and to being able to do things in an automated fashion, as opposed to relying on the manual work that, in some cases, was all we had prior to the merger.

Gardner: Any examples? You mentioned $15 million in savings, but are there any other metrics of success or key performance indicator (KPI)-level paybacks that you can point to in terms of having all this in place for managing and understanding your IT?

Thomas: We're still going through what it is we're going to measure and present. There's been a standard set of things that we've measured around our availability and our incidents and whether these incidents are caused by IT, by infrastructure.


We've done a lot better operationally. Now it's taking some of those operational aspects and making them a little bit more business-centric. So for the KPIs, we're going through that process of determining what we're going to measure ourselves against.

Gardner: Jason, having gone through quite a big and complex undertaking in getting your ITSM and Application Lifecycle Management (ALM) activities in order, what comes next? Maybe a merger and acquisition is going to push you in a new direction.

Thomas: We recently announced the intent to acquire Pepco Holdings, the regional utility in the Washington, DC area, which further widens our footprint in the mid-Atlantic region. So yeah, we get to do it all over again with a new partner, bringing Pepco in and doing some elements of this again.

Gardner: Having gone through this and anticipating yet another wave, what words of wisdom might you provide in hindsight for those who are embarking on a more automated, streamlined, and modern approach to IT operations?


Thomas: One of the key things is how you're changing how you do IT operations. Moving toward automation, tools aside, there's a lot of organizational change if you're changing how people do what they do, or changing people's jobs or the perception of them.

You need to be clear. You need to clearly communicate, but you also need to make sure that you have the appropriate support and backing from leadership and that the top-down communication is the same message. We certainly had that, and it was great, but there's always going to be that challenge of making sure everybody is getting that communication, getting the message, and getting constant reinforcement of that.

Organizational changes resulting from a large merger or acquisition are huge. It's key to show the benefits, even to the people who are obviously going to reap some of the immediate benefits, those in IT. You know the business is going to see some as well. It's about couching that value in the means or method appropriate for each of those stakeholders.

Full circle

Gardner: Of course, you have mentioned working through a KPI definition and working the executive scorecard. That makes it full circle, doesn't it?

Thomas: Defining those KPIs, but also having one place where those KPIs can be viewed, seen easily, and drilled into is big. To date, it's been a challenge to provide some of that historiography around that data. Now, you have something where you can even more readily drill into it to see that data -- and that’s huge.

Presenting that, being able to show it, and being able to show it in a way that people can see it easily, is huge, as opposed to just saying, "Well, here's the spreadsheet with some graphs" or "Here’s a whiz-bang PowerPoint doc."

Gardner: And, Jason, I suppose this points to the fact that IT is really maturing. Compared to other business services and functions in corporations, things that had been evolving for 80 or 100 years, IT is, in a sense, catching up.


Thomas: It's catching up, but I also think it's more of a reflection, a reflection of a lot of the themes of the new style of IT. A lot of that is the consumerization aspect. In fact, if you look at the last 10 years, the wide presence of smart devices and smartphones is huge.

We have brought to most people something that was never easily accessible. And having to take that same aspect and make it part of how you present what you do in IT is huge. You see it in how you're manifesting it in your various service catalogs and some of the efforts that we're undertaking to refine and better the processes that underlie our technical service catalog to have a better presentation layer.

That technical service catalog will draw on what we've seen with Propel. It's an easier, nicer, friendlier way to interact, and people expect that: Why can't this be more like my app store, or why can't this be more like X?

Is IT catching up, or has IT become more reachable, more warm and fuzzy, as opposed to something cold, hard, and stored away somewhere that you only vaguely know about, with the guys in the basement doing all the heavy lifting? IT has become more tangible.

Gardner: Humanization of IT, perhaps.

Thomas: Absolutely.

Gardner: All right, one last area I want to get into before we sign off. We've heard quite a bit about The Machine, with HP unveiling more detail from its labs activities. It's not necessarily on a product roadmap yet, but it's described as having a lower footprint, a much more rapid ability to join compute and memory, and the potential to reduce the size of a data center down to the size of a refrigerator.

I know that it's on the horizon, but how does that strike you, and how interesting is that for you?

Ramp up/ramp down

Thomas: It's interesting, because it gives you a bit more ability to ramp up or ramp down based on what you need, as opposed to having x amount of servers and x amount of storage that's always somewhere. It gives you a lot more flexibility and, to some extent, a bit more tunability. It's directly applicable to certain aspects of the business, where you need that capability to ramp up and ramp down much more easily.


I had a conversation with one of my peers about that. We were talking about both The Machine and the Moonshot aspect, and the ability to apply them to a lot of the customer-facing websites, in particular the utility customer-facing websites, whose utilization tends to spike during weather events.

While they don't all spike at the same time, there is the potential, in the Mid-Atlantic, for all of the utilities to spike at the same time around a hurricane or Sandy-esque event. There's obviously a need to be able to respond to that kind of demand, and that technology positions you with the flexibility to do that rather quickly and easily.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Asset Manager  BriefingsDirect  Dana Gardner  Exelon  HP  HP BSM  HPDiscover  Interarbor Solutions  ITSM  Jason Thomas 

 

Axeda's machine cloud produces on-demand IoT analysis services

Posted By Dana L Gardner, Friday, March 20, 2015

This BriefingsDirect big data innovation discussion examines how Axeda, based in Foxboro, Mass., has created a machine-to-machine (M2M) capability for analysis -- in other words, an Axeda Machine Cloud for the Internet of Things (IoT).

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how Axeda produces streams of massive data to multiple consumer dashboards that analyze business issues in near-real-time, we're joined by Kevin Holbrook, Senior Director of Advanced Development at Axeda. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We have the whole Internet of Things (IoT) phenomenon. People are accepting more and more devices, endpoints, sensors, even things within the human body, delivering data out to applications and data pools. What do you do in terms of helping organizations start to come to grips with this M2M and IoT data demand?


Holbrook: It starts with the connectivity space. Our focus has largely been in OEMs, equipment manufacturers. These are people who have the "M" in the M2M or the "T" in the Internet of Things. They are manufacturing things.

The initial drivers to have a handle on those things are basic questions, such as, "Is this device on?" There are multi-million dollar machines that are currently deployed in the world where that question can’t be answered without a phone call.

Initial driver

That was the initial driver, the seed, if you will. We entered into that space from the remote-service angle. We deployed small-agent software to the edge to get the first measurements from those systems and get them pushed up to the cloud, so that users can interact with it.


That grew into remote access: telnet sessions or remote desktop, being able to physically get down there, debug, tweak, and look at the devices that are operating. From there, we grew into software distribution, or content distribution. That could be anything from firmware updates to physically distributing configuration and calibration files for the instrument. We're recently seeing an uptake in content distribution for things like digital signage or in-situ ads being displayed on consumer goods.

From there, we started aggregating data. We have about 1.5 million assets connected to our cloud now globally, and there is all kinds of data coming in. Some of it's very, very basic from a resource standpoint, looking at CPU consumption, disk space, available memory, things of that nature.

It goes all the way through to usage and diagnostics, so that you can get a very granular impression of how this machine is operating. As you begin to aggregate this data, all sorts of challenges come out of it. HP has proven to be a great partner for starting to extract value.

We can certainly get to the data, we can connect the device, and we can aggregate that data to our partners or to the customer directly. Getting value from that data is a completely different proposition. Data for data’s sake is not high value.


Gardner:  What is it that you're using Vertica for to do that? Are we creating applications, are we giving analysis as a service? How is this going to market for you?

Holbrook: From our perspective, Vertica represents an endpoint. We've carried the data, cared for the data, and made sure that the device was online, generating the right information and getting it into Vertica.

When we approach customers, we're approaching it from a joint-sale perspective. We're the connectivity layer, the instrumentation, the business-automation layer there, and we're getting it into Vertica, so that it can be the seed for applications for business intelligence (BI) and for analytics.

So, we are the lowest component in the stack when we walk into one of these engagements with Vertica. Then, it's up to them, on a customer-by-customer basis, to determine what applications to bring to the table. A lot of that is defined by the group within the organization that actually manages connectivity.

We find that there's a big difference between a service organization, which is focused primarily on keeping things up and running, versus a business unit that’s driving utilization metrics, trying to determine not only how things are used, but how it can influence their billing.

Business use

We've found that that's a place where Vertica has actually been quite a pop for us in talking to customers. They want to know not just the simple metrics of the machines' operation, but how that reflects the business use of it.

The entire market has shifted and continues to shift. I was somewhat taken aback only a couple of weeks ago, when I found out that you can no longer buy a jet engine. I thought this was a piece of hardware you purchased, as opposed to something that you rent and pay for per use. And so [the model changes to leasing] as the machines get bigger and bigger. We have GE and the Bureau of Engraving and Printing as customers.

We certainly have some very large machines connected to our cloud and we're finding that these folks are shifting away from the notion that one owns a machine and consumes it until it breaks or dies. Instead, one engages in an ongoing service model, in which you're paying for the use of that machine.

While we can generate that data and provide some degree of visibility and insight into that data, it takes a massive analytics platform to really get the granular patterns that would drive business decisions.

Gardner: It sounds like many of your customers have used this for some basic blocking and tackling about inventory and access and control, then moved up to a business metrics of how is it being used, how we're billing, audit trails, and that sort of thing. Now, we're starting to look at a whole new type of economy. It's a services economy, based on cloud interactivity, where we can give granular insights, and they can manage their business very, very tightly.


Any thoughts about what's going to be required of your organization to maintain scale? The more use cases and the more success, of course, the more demand for larger data and even better analytics. How do you make sure that you don't run out of runway on this?

Holbrook: There are a couple of strategies we've taken, but before I dive into that, I'll say that the issue is further complicated by the issue of data homing. There's not only a ton of data being generated, but also regulatory and compliance requirements that dictate where you can even leave that data at rest. Just moving it around is one problem, and where it sits on a disk is a totally different problem. So we're trying to tackle all of these.

The first way to address the scale for us from an architectural perspective was to try to distribute the connectivity. In order for you to know that something's running, you need to hear from it. You might be able to reach out, what we call contactability, to say, "Tell me if you're still running." But, by and large, you know of a machine's existence and its operation by virtue of it telling you something. So even if a message is nothing more than "Hello, I'm here," you need to hear from this device.
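That "you need to hear from it" model reduces to tracking a last-heard-from timestamp per asset and treating silence beyond some window as unreachable. Here is a minimal sketch in Python; the field names and the 10-minute timeout are assumptions for illustration, not Axeda's actual message format.

    import time

    # Illustrative only: field names and the timeout are assumptions, not Axeda's schema.
    HEARTBEAT_TIMEOUT_SECONDS = 600  # treat 10 minutes of silence as unreachable

    def is_reachable(last_heard_epoch, now=None):
        """A device 'exists' operationally only if it has told us something recently."""
        now = time.time() if now is None else now
        return (now - last_heard_epoch) <= HEARTBEAT_TIMEOUT_SECONDS

    # The message itself can be nothing more than "Hello, I'm here."
    heartbeat = {"asset_id": "pump-0042", "timestamp": time.time(), "status": "alive"}
    print(is_reachable(heartbeat["timestamp"]))  # True while the device keeps reporting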

From the connectivity standpoint, our goal is not to try to funnel all of this into a single pipe, but rather to find where to get a point of presence that is closest and that is reasonable. We’ve been doing this on our remote-access technology for years, trying to find the appropriate geographically distributed location to route data through, to provide as easy and seamless an experience as possible.

So that's the first strategy: rather than ruthlessly funneling all incoming data into one place, we distribute the connectivity infrastructure and try to get that data routed to its end consumer as quickly as possible.

We break down data from our perspective into three basic temporal categories. There's the current data, which is the value you would see reading a dial on the machine. There's recent data, which would tell you whether something is trending in a negative direction, say pressure going up. Then, there's the longer-term historical data. While we focus on the first two, we deliberately, to handle the scale problem, don't focus on the long-term historical data.

Recent data

I'll treat recent data as being anywhere from 7 to 120 days and beyond, depending on the data aggregation rates. We focus primarily on that. When you start to scale beyond that, where the real long tail of this is, we try to make sure that we have our partner in place to receive the data.

We don't want to be diving into two years of data to determine seasonal trending when we're attempting to collect data from 1.5 million assets and acting as quickly as possible to respond to error conditions at the edge.
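One way to picture that split is as a routing decision applied to each incoming reading: the newest value per asset is "current," anything inside a sliding window is "recent," and everything older is handed off for long-term analytics. The sketch below illustrates that idea only; the 120-day window, field names, and routing labels are assumptions, not Axeda's implementation.

    from datetime import datetime, timedelta, timezone

    # Sketch of the three temporal buckets described above. The 120-day window is one
    # point in the 7-to-120-day range mentioned; names and routing are assumptions.
    RECENT_WINDOW = timedelta(days=120)

    def route_reading(reading, latest_seen, now=None):
        """Decide whether a reading feeds the live view, short-term trending, or the long tail."""
        now = now or datetime.now(timezone.utc)
        asset, ts = reading["asset_id"], reading["timestamp"]
        if ts >= latest_seen.get(asset, datetime.min.replace(tzinfo=timezone.utc)):
            latest_seen[asset] = ts
            return "current"      # the value you would see reading a dial on the machine
        if now - ts <= RECENT_WINDOW:
            return "recent"       # short-term trending, e.g. pressure creeping upward
        return "historical"       # the long tail, handed off to the analytics platform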

Gardner: Kevin, what about the issue of latency? I imagine some of your customers have a very dire need to get analysis very rapidly on an ongoing streamed basis. Others might be more willing to wait and do it in a batch approach in terms of their analytics. How do you manage that, and what are some of the speeds and feeds about the best latency outcomes?

Holbrook: That’s a fantastic question. Everybody comes in and says we need a zero-latency solution. Of course, it took them about two-and-a-half seconds to say that.


There's no such thing as real-time, certainly on the Internet. Just negotiating up the TCP stack and tearing it down to send one byte is going to take you time. Then, we send it over wires under the ocean, bounce it off a satellite, you name it. That's going to take time.

There are two components to it. One is accepting that near-real-time, which is effectively the transport latency, is the smallest amount of time it can take to physically go from point A to point B, absent having a dedicated fiber line from one location to the other. We can assume that on the Internet that's domestically somewhere in the one- to two-second range. Internationally, it's in the two- to three-second or beyond range, depending on the connectivity of the destination.

What we provide is an ability to produce real-time streams of data outbound. You could take from one asset, break up the information it generates, and stream it to multiple consumers in near-real-time in order to get the dashboard in the control center to properly reflect the state of the business. Or you can push it to a data warehouse in the back end, where it then can be chunked and ETLd into some other analytics tool.
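The outbound-stream model described here amounts to a fan-out: each message generated by an asset is copied to every registered consumer, whether that is a control-center dashboard or a warehouse loader whose feed is later chunked and ETL'd. A toy sketch follows; the sink names are invented, and a production system would use a message broker rather than in-process queues.

    import queue

    # Rough fan-out sketch: one inbound asset stream, several near-real-time consumers.
    class StreamFanOut:
        def __init__(self):
            self.sinks = {}

        def register(self, name):
            q = queue.Queue()
            self.sinks[name] = q
            return q

        def publish(self, message):
            # Every registered consumer gets its own copy of the message.
            for q in self.sinks.values():
                q.put(message)

    fanout = StreamFanOut()
    dashboard = fanout.register("control_center_dashboard")
    warehouse = fanout.register("warehouse_loader")  # later chunked and ETL'd elsewhere

    fanout.publish({"asset_id": "turbine-07", "pressure_kpa": 311.5})
    print(dashboard.get(), warehouse.get())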

For us, we try not to do the batch ETLing. We'd rather make sure that we handle what we're good at. We're fantastic at remote service, at automating responses, at connectivity, and at expanding what we do. But we're never going to be a massive ETL engine, transforming and converting data into somebody's data model or trying to derive deep analytics as a result of that.

Gardner: Was it part of this need for latency, familiarity, and agility that led into Vertica? What were some of the decisions that led to picking Vertica as a partner?

Several reasons

Holbrook: There were a few reasons. That was one of them. Also the fact that there's a massive set of offerings already on top of it. A lot of the other vendors we considered -- and I won't mention the competitors we looked at -- were more just a piece of the stack, as opposed to a place that solutions grew out of.

It wasn't just Vertica, but the ecosystem built on top of Vertica. Some of the vendors we looked at are currently in the partner zone, because they're now building their solutions on top of Vertica.

We looked at it as an entry point into an ecosystem and certainly the in-memory component, the fact that you're getting no disk reads for massive datasets was very attractive for us. We don’t want to go through that process. We've dealt with the struggles internally of trying to have a relational data model scale. That’s something that Vertica has absolutely solved.

Gardner: Now your platform includes application services, integration framework, and data management. Let’s hone in on the application services. How are developers interested in getting access to this? What are their demands in terms of being able to use analysis outcomes, outputs, and then bring that into an application environment that they need to fulfill their requirements to their users?


Holbrook: It breaks them down into two basic categories. The first is the aggregation and the collection of data, and the second is physical interaction with the device. So we focus on both about equally. When we look at what developers are doing, almost always it’s transforming the data coming in and reaching out to things like a customer relationship management (CRM) system. It's opening a ticket when a device has thrown a certain error code or integrating with a backend drop-ship distribution system in the event that some consumable has begun to run low.

In terms of interaction, it's been significant. On the data side, we primarily see that they're extracting subsets of data for deeper analysis. Sometimes this comes up in discrete data points; frequently it comes up in the transfer of files. So there's a certain granularity that you can survive. Coming down the fire-hose are discrete data points that you can react to, and there's a whole other order of magnitude of data that you can handle when it's shipped up in a bulk chunk.

A good example is one of the use cases we have with GE in their oil and gas division, where they have a certain flow of data that's always ongoing and giving key performance indicators (KPIs). But this is nowhere near the level of data that they're actually collecting. They have database servers that are co-resident with these massive gas pipeline generators.

So we provide them the vehicle for that granular data. Then, when a problem is detected automatically, they can say, "Give me far more granular data for the problem area," whether that's five minutes before or five minutes after the event. This is then uploaded, and we hand it off to somewhere else.

So when we find developers doing integration around the data in particular, it's usually when they're diving in more deeply based on some sort of threshold or trigger that has been encountered in the field.
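That threshold-and-trigger pattern can be sketched as a rule evaluated against each discrete data point: when a value crosses a limit, request the fine-grained upload around the event and raise a ticket. Everything in the sketch below is hypothetical; the threshold, field names, and the two callback actions are invented for illustration and are not Axeda or GE interfaces.

    # Hypothetical automation rule; names and threshold are invented for illustration.
    PRESSURE_LIMIT_KPA = 900

    def on_reading(reading, request_granular_upload, open_crm_ticket):
        """React to one discrete data point by pulling a deeper slice and raising a ticket."""
        if reading["pressure_kpa"] > PRESSURE_LIMIT_KPA:
            # Ask the edge agent for the fine-grained window around the event,
            # e.g. five minutes before and five minutes after.
            request_granular_upload(asset_id=reading["asset_id"],
                                    around=reading["timestamp"],
                                    window_minutes=5)
            open_crm_ticket(asset_id=reading["asset_id"],
                            summary=f"Pressure {reading['pressure_kpa']} kPa exceeded limit")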


Gardner: And lastly, Kevin, for other organizations that are looking to create data services and something like your Axeda Machine Cloud, are there any lessons learned that you could share when it comes to managing such complexity, scale, and the need for speed? What have you learned at a high level that you could share?

All about strategy

Holbrook: It's all going to be about the data-collection strategy. You're going to walk into a customer or potential customer, and their default response is going to be, "Collect everything." That's not inherently valuable. Just because you've collected it doesn't mean that you're going to get value from it. We find that, oftentimes, 90-95 percent of the data collected in the initial deployment is not used in any constructive way.

I would say focus on the data-collection strategy. Scale of bad data is scale for scale's sake. It doesn't drive business value. Make sure that the folks who are actually going to be doing the analytics are in the room when you're defining your data-collection strategy, when you're talking to the folks who are going to wire up sensors, and when you're talking to the folks who are building the device.

Unfortunately, within a larger business in particular, these are frequently completely different groups of people that might report to completely different vice presidents. So you go to one group, and they have the connectivity guys. You talk it through and you wire everything up.


Then, six to eight months later, you walk into another room. They'll say, "What the heck is this? I can't do anything with this. All I ever needed to know was the following metric." It wasn't collected, because the two groups hadn't stayed in touch. The success of deployed solutions and the reaction to scale challenges are going to be driven directly by that data-collection strategy. Invest the time up front, and you'll have a much better experience on the back end.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  Axeda  BriefingsDirect  cloud computing  Dana Gardner  HP  HPDiscover  Interarbor Solutions  Internet of things  Kevin Holbrook  M2M  machine to machine  Vertica 

 

Health Shared Services BC harnesses a healthcare ecosystem using IT asset management

Posted By Dana L Gardner, Tuesday, March 17, 2015

The next BriefingsDirect innovation panel discussion examines how Health Shared Services BC in Vancouver improves process efficiency and standardization through better integration across health authorities in British Columbia, Canada.

We'll explore how HSSBC has successfully implemented one of the healthcare industry's first Service Asset and Configuration Management Systems to help it optimize the performance of its IT systems and applications.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how HSSBC gains up-to-date single views of IT assets across a shared-services environment, please join me in welcoming our guests, Daniel Lamb, Project Manager for the ITSM Program, and Cam Haley, Program Manager for the ITSM Program, both at HSSBC. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Gentlemen, tell me first about the context of your challenge. You're an organization that's trying to bring efficiency and process improvements across health authorities in British Columbia. What is it about that task that made better IT service management (ITSM) an imperative?

Haley: If you look at the healthcare space, where it is right now within British Columbia, we have the opportunity to look at using our healthcare funding more efficiently and specifically focus on delivering more clinical outcomes for consumers of the services.


That was one of the main drivers behind the formation of HSSBC, to consolidate some of the key supporting and enabling services into an organization that could deliver a standardized set of service offerings across our health authority clients, so that they can focus on clinical delivery.

That was the key business driver behind why we're here and why we're doing some of these things. For us to deliver effectively on that mandate, we need the tools and the process capabilities to deliver more consistent service outcomes, and to look at reducing costs over the long term, so that those costs can be shifted into clinical delivery and really enable those outcomes.

Necessary system

Gardner: Daniel, why was a Service Asset and Configuration Management System something that was important to accomplish this?


Lamb: We've been in the process of a large data-center migration project over the past three years, moving a lot of the assets out of Vancouver and into a new data center. We standardized on HP infrastructure up in Kamloops and, when we put in all of our health authorities' assets, it's going to be upwards of around 6,500 to 7,000 servers to manage.


As we merged into the larger organization, the manual processes just don't exist anymore. To keep those assets up to date, we needed an automated system. The reason we went for those products, which included the asset side and the configuration and service management, is that that's really our business. We're going to be managing all of these assets and configuration items for the organization, and we're providing these services. So this is where the toolset really fit our goals.

Gardner: So other than scale, size, and the migration, were there any other requirements or problems that you needed to solve that moving into this more modern ITSM capability delivered?

Haley: Just to build on what Daniel said, one of the key drivers in terms of identifying the toolset and the capabilities was to support the migration of infrastructure into the data center.

But along with that, we provide a set of services that go beyond the data center. The tool capability that has been delivered in supporting that outcome enables us to focus on optimizing our processes and getting a better view into what's happening in our own environment. That means having the configuration items (CIs) in the configuration management database (CMDB), and having the relationships developed not just at the infrastructure level, but all the way up to the application or business-service level.

Now we have a view up and down the stack of what's going on. We get better analytics and better data, and we can make some better decisions as well around where we want to focus. What are the pain points that we need to target? We're able to mine that stuff and really look at opportunities to optimize.
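A view "up and down the stack" is essentially a dependency graph over configuration items that can be queried for impact: given a failing host, which applications and business services sit above it? The toy example below illustrates the idea only; the CI names and dependency map are invented, not HSSBC's CMDB.

    # Toy CMDB sketch: which services sit on top of a given piece of infrastructure?
    depends_on = {
        "clinical-portal (business service)": ["app-server-01"],
        "lab-results-api (application)": ["app-server-01", "db-server-03"],
        "app-server-01": ["esx-host-12"],
        "db-server-03": ["esx-host-12"],
    }

    def impacted_by(ci, graph):
        """Return every CI that directly or indirectly depends on the given CI."""
        impacted = set()
        for parent, children in graph.items():
            if ci in children:
                impacted.add(parent)
                impacted |= impacted_by(parent, graph)
        return impacted

    print(sorted(impacted_by("esx-host-12", depends_on)))
    # ['app-server-01', 'clinical-portal (business service)', 'db-server-03', 'lab-results-api (application)']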

The tool allows us to standardize our processes and roll out the capabilities. Automation is built into the tool, which is fantastic for us in terms of taking that manual overhead out of that and really just allowing us to focus on other things. So it's been great.

Gardner: Any unexpected benefits, ancillary benefits, that come from the standardization with this visibility, knowing your organization better that maybe you didn't anticipate?

Up-to-date information

Lamb: We've been able to track down everything that’s out there. That’s one thing. We just didn’t know where everything was or what we had. So in terms of being able to forecast to the health authorities, "This is how much you need to part with for maintenance, that sort of thing," that was always a guess in the past. We now have that up-to-date information available.

This has also laid the foundation for us to take better advantage of the new technologies that are coming in. Some of what HP is talking about at the moment we can't really take advantage of yet, but we now have this base platform, and it's going to allow us to take advantage of a lot of the new stuff that's coming out.

Gardner: So in order to get the efficiency and cost benefits of new infrastructure and converged systems and data center efficiencies, having your ducks lined up and understood is a crucial first step.


Lamb: Definitely.

Gardner: Looking down the road, what’s piquing your interest in terms of what HP is doing or new developments, or does this now allow you to then progress into other areas that you are interested in?

Lamb: Personally, I'm looking at obviously the new versions of the product sets we have at the moment. We've also been speaking to other customers on the success that we've had and giving them some lessons learned on how things worked.


Then, we're looking at some of the other products we could build onto this -- PPM, which is the project management toolset, and BSM, which is unified monitoring and that sort of thing. Being able to add those products is where we'll start seeing even more value, in terms of being able to reduce the number of tickets and the support cost. So we're looking at that.

Then, just out of ad-hoc interest, there are the things around big data and that sort of thing; I'm trying to get my head around how that works for us, because we have a lot of data. So we're watching some of those new technologies as they come out as well.

Gardner: Cam, given what you've already done, what has it gotten for you? What are some of the benefits and results that you have seen. Are there any metrics of success that you can share with us?

Haley: The first thing is that we're still pretty early in our journey out of the gate, if I just talk about what we've already achieved. One of the things that we have been able to do is enable our staff to be more effective at what they're doing.

We've implemented change management in particular within the toolset, and that's giving us a more robust set of controls around what's actually happening and what's actually going into the environment. That's been really important, not only for the staff, although there is a bit of a learning curve around that, but in terms of the outcomes for our clients.

Comfort level

They have a higher comfort level that we have more insight and oversight into what's actually happening in that space, and that we're actually protecting the services they need to deliver, by putting those kinds of capabilities in. So from the process perspective, we've certainly been able to get some benefits in that area in particular.

From a client perspective, putting the toolset in helps us develop that level of trust that we really need in order to have an effective partnering relationship with our clients. That's something that hasn't always been there in the past.

I'm not saying that we're all the way there yet, but we're starting to show that we can deliver the services that the health authorities expect us to deliver, and we are using the toolset to help enable that. That’s also an important aspect.

The other thing is that through the work we've done in terms of consolidating some of our contracts, maintenance agreements, and so on into our asset management system, we have a better view of what we're paying for. We've already realized some opportunities to consolidate some contracts and show some savings as well.


That's just a number of areas where we're already seeing some benefits. As we start to roll out more of the capabilities of the tool in the coming year and beyond that, we expect that we will get some of those standard metrics that you would typically get out of it. Of course, we'll continue to drive out the ROI value as well. So we're already a good way down that path, and we'll just continue to do that.

Gardner: Any words of wisdom, based on your journey so far, for other organizations that might be struggling with spreadsheets and tracking all of their assets, all of their devices, and even the processes around IT support? What have you learned? What could you share with someone who is just starting out?


Lamb: We had a few key lessons that we spoke about. One was the guiding principles that you are going to do the implementation by. We were very much of the approach that we would try to keep things as out-of-the-box as possible. HP, as they are doing the new releases, would pick up the functionality that we are looking for. So we didn’t do a lot of tailoring.

And we did the project in short cycles. These projects can sometimes go on for years, and a lot of money can get sunk without much value gained. We said, "Let's do these as shorter sprint projects. We'll get something in, we'll start showing value to the organization, and then we'll get into another thing." That's the cycle that we're working in, and it's worked really well.

The other thing is that we had a great consultant partner that we worked with, and that was key. We were feeling a little lost when we came here last year, and that was one of the things we did. We went to a good consultant partner, Effectual Systems from San Francisco, and that helped us.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Cam Haley  Dana Gardner  Daniel Lamb  HP  HPDiscover  HSSBC  Interarbor Solutions  ITSM 

 

Hackathon model plus big data equals big innovation for Thomson Reuters

Posted By Dana L Gardner, Thursday, March 12, 2015

The next BriefingsDirect innovation interview explores the use of a hackathon approach to unlock creativity in the search for better use of big data for analytics. We will hear how Thomson Reuters in London sought to foster innovation and derive more value from its vast trove of business and market information.

The result: A worldwide virtual hackathon that brought together developers and data scientists to uncover new applications, visualizations, and services to make all data actionable and impactful.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about getting developers on board the big-data analysis train, BriefingsDirect sat down with Chris Blatchford, Director of Platform Technology in the IT organization at Thomson Reuters in London. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Blatchford: Thomson Reuters is the world's leading source of intelligent information. We provide data across the finance, legal, news, IP and science, and tax and accounting industries through product and service offerings, combining industry expertise with innovative technology.

Gardner: It’s hard to think of an organization where data and analysis is more important. It’s so core to your very mission.


Blatchford: Absolutely. We take data from a variety of sources. We have our own original data, third-party sources, open-data sources, and augmented information, as well as all of the original content we generate on a daily basis. For example, our journalists in the field provide original news content to us directly from all over the globe. We also have third-party licensed data that we further enrich and distribute to our clients through a variety of tools and services.

Gardner: And therein lies the next trick, what to do with the data once you have it. About this hackathon, how did you come up upon that as an idea to foster innovation?

Big, Open, Linked Data

Blatchford: One of our big projects or programs of work currently is, as everyone else is doing, big data. We have an initiative called BOLD, which is Big, Open, Linked Data, headed up by Dan Bennett. The idea behind the project is to take all of the data that we ingest and host within Thomson Reuters, all of those various sources that I just explained, stream all of that into a central repository, cleanse the data, centralize it, extract meaningful information, and subsequently expose it to the rest of the businesses for use in their specific industry applications.

As well as creating a central data lake of content, we also needed to provide the tools and services that allow businesses to access the content; here we have both developed our own software and licensed existing tools.

So, we could demonstrate that we could build big-data tools using our internal expertise, and we could demonstrate that we could plug in third-party specific applications that could perform analysis on that data. What we hadn’t proved was that we could plug in third-party technology enterprise platforms in order to leverage our data and to innovate across that data, and that’s where HP came in.

HP was already engaged with us in a number of areas, and I got to speaking with their Big Data Group around their big data solutions. IDOL OnDemand came up. This is now part of the Haven OnDemand platform. We saw some synergies there between what we were doing with the big-data platform and what they could offer us in terms of their IDOL OnDemand API’s. That’s where the good stuff started.


Gardner: Software developers, from the very beginning, have had a challenge of knowing their craft, but not knowing necessarily what their end users want them to do with that craft. So the challenge -- whether it’s in a data environment, a transactional environment or interface, or gaming -- has often been how to get the requirements of what you're up to into the minds of the developers in a way that they can work with. How did the hackathon contribute to solving that?


Blatchford: That's a really good question. That's actually one of the biggest challenges big data has in general. We approach big data in one of two ways. You have very specific use cases. For example, consider a lawyer working on a particular case for a client; it would be useful for them to analyze prior cases with similar elements. If they're able to extract entities and relevant attributes, they may be able to understand a case's final decision, or perhaps glean information that's relevant to their current case.

Then you have the other approach, which is much more about exploration, discovering new insights, trends, and patterns. That's similar to the approach we wanted to take with the hackathon -- provide the data and the tools to our developers for them just to go and play with the data.

We didn’t necessarily want to give them any requirements around specific products or services. It was just, "Look, here is a cool platform with some really cool APIs and some capabilities. Here is some nice juicy data. Tell us what we should be doing? What can we come up with from your perspective on the world?"

A lot of the time, these engineers are overlooked. They're not necessarily the most extroverted of people by the nature of what they do and so they miss chances, they miss opportunities, and that’s something we really wanted to change.

Gardner: It’s fascinating the way to get developers to do what you want them to do is to give them no requirements.

Interesting end products

Blatchford: Indeed. That can result in some interesting end-products. But, by and large, our engineers are more commercially savvy than most, hence we can generally rely on them to produce something that will be compelling to the business. Many of our developers have side projects and personal development projects they work on outside of the realms of their job requirement. We should be encouraging this sort of behavior.

Gardner: So what did you get when you gave them no requirements? What happened?

Blatchford: We had 25 teams that submitted ideas. We boiled that down to seven finalists based on a set of preliminary criteria, and out of those seven we decided on our first-, second-, and third-place winners. Those three end results are now going through a product review, to potentially be implemented into our product lines.

The overall winner was an innovative UI design for mobile devices, allowing users to better navigate our content on tablets and phones. There was also a sentiment-analysis tool that allowed users to paste in news stories, or any news content source from the web, and extract sentiment from that story.

And the other was a more internally focused, administrative exploration tool that allowed us to navigate our own data more intuitively, which perhaps doesn't initially seem as exciting as the other two, but is actually a hugely useful application for us.


Gardner: Now, how does IDOL OnDemand come to play in this? IDOL is the ability to take any kind of information, for the most part, apply a variety of different services to it, and then create analysis as a service. How did that play into the hackathon? How did the developers use that?

Blatchford: Initially, the developers looked at the original 50-plus APIs that IDOL OnDemand provides, and you have everything in there from facial recognition, to OCR, to text analytics, to indexing, all sorts of cool stuff. Those, in themselves, provided sufficient capabilities to produce some compelling applications, but our developers also utilized Thomson Reuters APIs and resources to further augment the IDOL platform.
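To give a sense of how small a "try it now" experiment against the platform could be, a sentiment-analysis call was roughly one HTTP request. The sketch below is an assumption-laden illustration: the endpoint URL and parameter names are drawn from the public IDOL OnDemand documentation of that era (the service has since been retired), the API key is a placeholder, and this is not Thomson Reuters' hackathon code.

    import requests

    # Hedged sketch only: endpoint and parameter names are assumptions based on the
    # public IDOL OnDemand docs of the time; the API key below is a placeholder.
    API_KEY = "YOUR_API_KEY"
    URL = "https://api.idolondemand.com/1/api/sync/analyzesentiment/v1"

    def analyze_sentiment(text):
        """Send a block of news text and return the parsed sentiment response."""
        response = requests.post(URL, data={"apikey": API_KEY, "text": text})
        response.raise_for_status()
        return response.json()  # aggregate sentiment plus positive/negative phrases

    print(analyze_sentiment("Markets rallied after the surprisingly strong earnings report."))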

This was very important, as it demonstrated that not only could we plug in an Enterprise analytics tool into our data, but also that it would fit well with our own capabilities.

Gardner: And HP Big Data also had a role in this. How did that provide value?

Five-day effort

Blatchford: The expertise. We should remember that we stood this hackathon up from inception to completion in a little over one month, and that's, I think, pretty impressive by any measure.

The actual hackathon lasted for five days. We gave the participants a week to get familiar with the APIs, but they really didn’t need that long because the documentation behind the APIs on IDOL OnDemand and the kind of "try it now" functionality it has was amazing. This is what the engineers and the developers were telling me. That’s not my own words.

The Big Data Group was able to stand this whole thing up within a month, a huge amount of effort on HP’s side that we never really saw. That ultimately resulted in a hugely successful virtual global hackathon. This wasn’t a physical hackathon. This was a purely virtual hackathon the world over.

Gardner: HP has been very close to developers for many years, with many tools, leading tools in the market for developers. They're familiar with the hackathon approach. It sounds like HP might have a business in hackathons as a service. You're proving the point here.

For the benefit of our listeners, if someone else out there was interested in applying the same approach, a hackathon as a way of creating innovation, of sparking new thoughts, light bulbs going off in people's heads, or bringing together cultures that perhaps hadn't meshed well in the past, what would you advise them?


Blatchford: That's a big one. First and foremost, the reason we were successful is that we had a motivated, willing partner in HP. They were able to put the full might of their resources and technology capabilities behind this event, and that, alongside our own efforts, ultimately resulted in the event's success.

That aside, you absolutely need to get the buy-in of the senior executives within the organization and get them to invest in the idea of something as open as a hackathon. A lot of hackathons are quite focused on a specific requirement. We took the opposite approach. We said, "Look, developers, engineers, go out there and do whatever you want. Try to be as innovative in your approach as possible."

Typically, that approach is not seen as cost-effective; businesses like to have defined use cases, but sometimes that can strangle innovation. Sometimes we need to loosen the reins a little.

There are also a lot of logistical checks that can help. Ensure you have clear criteria around hackathon team size and members, event objectives, rules, time frames and so on. Having these defined up front makes the whole event run much smoother.

We ran the organization of the event a little like an Agile project, with regular stand-ups and check-ins. We also stood up a dedicated internal intranet site with all of the information above. Finally, we set up user accounts on the IDOL platform early on, so the participants could familiarize themselves with the technology.

Winning combination

Gardner: Yeah, it really sounds like a winning combination: the hackathon model, big data as the resource to innovate on, and then IDOL OnDemand with 50 tools to apply to that. It’s a very rich combination.

Blatchford: That’s exactly right. The richness in the data was definitely a big part of this. You don’t need millions of rows of data. We provided 60,000 records of legal documents and we had about the same in patents and news content. You don’t need vast amounts of data, but you need quality data.

Then you need a quality platform as well, in this case IDOL OnDemand. The third piece is what's in their heads. That really was the successful formula.


Gardner: I have to ask. Of course, the pride in doing a good job goes a long way, but were there any other incentives; a new car, for example, for the winning hackathon application of the day?

Blatchford: Yeah, we offered a 1960s Mini Cooper to the winners. No, we didn't. We did offer other incentives. There were three main incentives. The first one, and the most important one in my view, and I think in everyone’s view, was exposure to senior executives within the organization. Not just face time, but promotion of the individual within the organization. We wanted this to be about personal growth as much as it was about producing new applications.

Going back to trying to leverage your resources and give them opportunities to shine, that’s really important. That’s one of the things the hackathon really fostered -- exposing our talented engineers and product managers, ensuring they are appreciated for the work they do.

We also provided an Amazon voucher incentive, and HP offered some of their tablets to the winners. So it was quite a strong winning set.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  big data  BriefingsDirect  Chris Blatchford  Dana Gardner  hackathon  HP  HP Big Data  Interarbor Solutions  Thomson Reuters 

 

Cybersecurity standards: The Open Group explores security and safer supply chains

Posted By Dana L Gardner, Tuesday, March 10, 2015

Welcome to a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. This follows an earlier discussion from the event last month on synergies among major Enterprise Architecture frameworks with The Open Group.

The latest discussion, examining both the need for and the outlook on cybersecurity standards among supply chains, is moderated by Dave Lounsbury, Chief Technology Officer, The Open Group; with guests Mary Ann Davidson, Chief Security Officer, Oracle; Dr. Ron Ross, Fellow of the National Institute of Standards and Technology (NIST); and Jim Hietala, Vice President of Security for The Open Group. Download a copy of the transcript. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Dave Lounsbury: Mary Ann Davidson is responsible for Oracle Software Security Assurance and represents Oracle on the Board of Directors for the Information Technology Information Sharing and Analysis Center, and on the international Board of the ISSA.

Lounsbury

Dr. Ron Ross leads the Federal Information Security Management Act Implementation Project. It sounds like a big job to fulfill, developing the security standards and guidelines for the federal government.

This session is going to look at the cybersecurity and supply chain landscape from a standards perspective. So Ron and Mary Ann, thank you very much.

Ron Ross: All of us are part of the technology explosion and revolution that we have been experiencing for the last couple of decades.

I would like to have you leave today with a couple of major points, at least from my presentation, things that we have observed in cybersecurity for the last 25 years: where we are today and where I think we might need to go in the future. There is no right or wrong answer to this problem of cybersecurity. It’s probably one of the most difficult and challenging sets of problems we could ever experience.

Ross

In our great country, we work on what I call the essential partnership. It's a combination of government, industry, and academia all working together. We have the greatest technology producers, not just in this country, but around the world, who are producing some fantastic things to which we are all "addicted." I think we have an addiction to the technology.

Some of the problems we're going to experience going forward in cybersecurity aren't just going to be technology problems. They're going to be cultural problems and organizational problems. The key issue is how we organize ourselves, what our risk tolerance is, how we are going to be able to accomplish all of our critical missions and business operations that Dawn talked about this morning, and do so in a world that's fairly dangerous. We have to protect ourselves.

Movie app

I think I can sum it up. I was at a movie. I don't go to movies very often anymore, but about a month ago, I went to a movie. I was sitting there waiting for the main movie to start, and they were going through all the coming attractions. Then they came on the PA and said that there is an app you can download. I'm not sure you have ever seen this before, but it tells you, for that particular movie, the optimal time to go to the restroom during the movie.

I bring this up because that's a metaphor for where we are today. We are consumed. There are great companies out there, producing great technologies. We're buying it up faster than you can shake a stick at it, and we are developing the most complicated IT infrastructure ever.

So when I look at this problem, I look at this from a scientist’s point of view, an engineering point of view. I'm saying to myself, knowing what I know about what it takes  to -- I don't even use the word "secure" anymore, because I don’t think we can ever get there with the current complexity -- build the most secure systems we can and be able to manage risk in the world that we live in.

In the army, we used to have a saying. You go to war with the army that you have, not the army that you want. We’ve heard about all the technology advances, and we're going to be buying stuff, commercial stuff, and we're going to have to put it together into systems. Whether it’s the Internet of Things (IoT) or cyber-physical convergence, it all goes back to some fairly simple things.

Davidson

The IoT and all this stuff that we're talking about today really gets back to computers. That’s the common denominator. They're everywhere. This morning, we talked about your automobile having more compute power than Apollo 11. In your toaster, your refrigerator, your building, the control of the temperature, industrial control systems in power plants, manufacturing plants, financial institutions, the common denominator is the computer, driven by firmware and software.

When you look at the complexity of the things that we're building today, we've gone past the time when we can actually understand what we have and how to secure it.

That's one of the things that we're going to do at NIST this year and beyond. We've been working in the FISMA world forever it seems, and we have a whole set of standards, and that's the theme of today: how can standards help you build a more secure enterprise?

The answer is that we have tons of standards out there and we have lots of stuff, whether it's on the federal side with 800-53 or the Risk Management Framework, or all the great things that are going on in the standards world, with The Open Group, or ISO -- pick your favorite standard.

Hietala

The real question is how we use those standards effectively to change the current outlook and what we are experiencing today because of this complexity. The adversary has a significant advantage in this world, because of complexity. They really can pick the time, the place, and the type of attack, because the attack surface is so large when you talk about not just the individual products.

We have many great companies just in this country and around the world that are doing a lot to make those products more secure. But then they get into the engineering process and put them together in a system, and that really is an unsolved problem. We call it a Composability Problem. I can have a trusted product here and one here, but what is the combination of those two when you put them together in the systems context? We haven't solved that problem yet, and it's getting more complicated every day.

Continuous monitoring

For the hard problems, we in the federal government do a lot of stuff in continuous monitoring. We're going around counting our boxes and we are patching stuff and we are configuring our components. That's loosely called cyber hygiene. It’s very important to be able to do all that and do it quickly and efficiently to make your systems as secure as they need to be.
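To make the "cyber hygiene" idea concrete, here is a small, hypothetical sketch of the kind of automated check being described: comparing the package versions reported by a fleet of hosts against an approved baseline. The inventory, baseline, and version comparison are invented for illustration and are not drawn from any NIST guidance.

```python
# Toy cyber-hygiene check: flag hosts whose installed package versions fall
# behind an approved baseline. The inventory and baseline are invented for
# illustration; a real check would pull from an asset or patch database.
APPROVED_BASELINE = {"openssl": "1.0.2k", "bash": "4.3.48"}

host_inventory = {
    "web-01": {"openssl": "1.0.2k", "bash": "4.3.46"},
    "db-01":  {"openssl": "1.0.1e", "bash": "4.3.48"},
}

def out_of_date(installed: str, required: str) -> bool:
    # Naive string comparison stands in for a real version comparator.
    return installed < required

for host, packages in host_inventory.items():
    for pkg, version in packages.items():
        required = APPROVED_BASELINE.get(pkg)
        if required and out_of_date(version, required):
            print(f"{host}: {pkg} {version} is behind baseline {required}")
```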

But even the security controls in our control catalog, 800-53, when you get into the technical controls -- I'm talking about access control mechanisms, identification, authentication, encryption, and audit -- those things are buried in the hardware, the software, the firmware, and the applications.

Most of our federal customers can’t even see those. So when I ask them if they have all their access controls in place, they can nod their head yes, but they can’t really prove that in a meaningful way.

So we have to rely on industry to make sure those mechanisms, those functions, are employed within the component products that we then will put together using some engineering process.

This is the below-the-waterline problem I talk about. We're in some kind of digital denial today, because below the water line, most consumers are looking at their smartphones, their tablets, and all their apps -- that’s why I used that movie example -- and they're not really thinking about those vulnerabilities, because they can't see them, until it affects them personally.

I had to get three new credit cards last year. I shop at Home Depot and Target, and JPMorgan Chase is our federal credit card. That’s not a pain point for me because I'm indemnified. Even if there are fraudulent charges, I don't get hit for those.

If your identity is stolen, that's a personal pain point. We haven't reached that national pain point yet. We talk a lot about all of the security stuff that we do, and we do a lot of it, but if you really want to effect change, you're going to start to hear more at this conference about assurance, trustworthiness, and resiliency. That's the world that we want to build, and we are not there today.

That's the essence of where I am hoping we are going to go. It's these three areas: software assurance, systems security engineering, and supply-chain risk management.

My colleague Jon Boyens is here today and he is the author, along with a very talented team of coauthors, of the NIST 800-161 document. That's the supply chain risk document.

It's going to work hand-in-hand with another publication that we're still working on, the 800-160 document. We are taking an IEEE and ISO standard, 15288, and we're trying to infuse security into that standard. They are coming out with the update of that standard this year. We're trying to infuse security into every step of the lifecycle.

Wrong reasons

The reason why we are not having a lot of success on the cybersecurity front today is because security ends up being addressed either too late or by the wrong people for the wrong reasons.

I'll give you one example. In the federal government, we have a huge catalog of security controls, and they are allocated into different baselines: low, moderate, and high. So you will pick a baseline, you will tailor, and you'll come to the system owner or the authorizing official and say, "These are all the controls that NIST says we have to do." Well, the mission business owner was never involved in that discussion.

One of the things we are going to do with the new document is focus on the software and systems engineering process, starting with the stakeholders and going all the way through requirements analysis, definition, design, development, implementation, operation, and sustainment, all the way to disposal. Critical things are going to happen at every one of those places in the lifecycle.

The beauty of that process is that you involve the stakeholders early. So when those security controls are actually selected they can be traced back to a specific security requirement, which is part of a larger set of requirements that support that mission or business operation, and now you have the stakeholders involved in the process.

Up to this point in time, security operates in its own vacuum. It’s in the little office down the hall, and we go down there whenever there's a problem. But unless and until security gets integrated and we disappear as being our own discipline, we now are part of the Enterprise Architecture, whether it’s TOGAF® or whatever architecture construct you are following, or the systems engineering process. The system development lifecycle is the third one, and people ask what is acquisition and procurement.

Unless we have our stakeholders at those tables to influence, we are going to continue to deploy systems that are largely indefensible, not against all cyber attacks, but against the high-end attacks.

We have to do a better job getting at the C-Suite and I tried to capture the five essential areas that this discussion has to revolve around. The acronym is TACIT, and it just happens to be a happy coincidence that it fit into an acronym. But it's basically looking at the threat, how you configure your assets, and how you categorize your assets with regard to criticality.

How complex is the system you're building? Are you managing that complexity in trying to reduce it, integrating security across the entire set of business practices within the organization? Then, the last component, which really ties into The Open Group, and the things you're doing here with all the projects that were described in the first session, that is the trustworthiness piece.

Are we building products and systems that are, number one, more penetration-resistant to cyber attacks; and number two, since we know we can't stop all attacks, because we can never reduce complexity to where we thought we could two or three decades ago, are we building the essential resiliency into those systems? Even when the adversary comes to the boundary and the malware starts to work, how far does it spread, and what can it do?

That's the key question. You try to limit the time on target for the adversary, and that can be done very, very easily with good architectural and good engineering solutions. That's my message for 2015 and beyond, at least from a lot of things at NIST. We're going to start focusing on the architecture and the engineering -- how to really affect things at the ground level.

Processes are important

Now we always will have the people, the processes, and the technologies -- this whole ecosystem that we have to deal with -- and you're going to always have to worry about your sys admins who go bad and dump all the stuff that you don't want dumped on the Internet. But that's part of the system process. Processes are very important because they give us structure, discipline, and the ability to communicate with our partners.

I was talking to Rob Martin from Mitre. He's working on a lot of important projects there with the CWEs, CVEs. It gives you the ability to communicate a level of trustworthiness and assurance that other people can have that dialogue, because without that, we're not going to be communicating with each other. We're not going to trust each other, and that's critical, having that common understanding. Frameworks provide that common dialogue of security controls in a common process, how we build things, and what is the level of risk that we are willing to accept in that whole process.

These slides, and they’ll be available, go very briefly into the five areas. Understanding the modern threat today is critical because, even if you don't have access to classified threat data, there's a lot of great data out there with Symantec and Verizon reports, and there's open-source threat information available.

If you haven't had a chance to do that, I know the folks who work on the high-assurance stuff in The Open Group RT&ES look at that stuff a lot, because they're building a capability that is intended to stop some of those types of threats.
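For readers who want to act on the open-source threat information point, a sketch like the following pulls a few recent entries from the public NIST NVD CVE feed. The endpoint and parameter names are given to the best of my knowledge and should be checked against the current NVD API documentation before use.

```python
# Sketch: pull a handful of recent CVE records from the public NVD feed.
# The endpoint, parameters, and response shape should be verified against
# the current NVD API documentation; they are recalled from memory here.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(NVD_URL, params={"resultsPerPage": 5}, timeout=30)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item.get("cve", {})
    descriptions = cve.get("descriptions", [])
    summary = descriptions[0]["value"] if descriptions else ""
    print(cve.get("id"), "-", summary[:80])
```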

The other thing about assets is that we don't do a very good job of criticality analysis. In other words, most of our systems are running, processing, storing, and transmitting data and we’re not segregating the critical data into its own domain where necessary.

I know that's hard to do sometimes. People say, "I've got to have all this stuff ready to go 24×7." But when you look at some of the really bad breaches we have had over the last several years, it argues for establishing a domain for critical data, where that domain can be less complex, which means you can better defend it, and then you can invest more resources into defending those things that are the most critical.

I used a very simple example of a safe deposit box. I can't get all my stuff into the safe deposit box. So I have to make decisions. I put important papers in there, maybe a coin collection, whatever.  I have locks on my house on the front door, but they're not strong enough to stop some of those bad guys out there. So I make those decisions. I put it in the bank, and it goes in a vault. It’s a pain in the butt to go down there and get the stuff out, but it gives me more assurance, greater trustworthiness. That's an example of the things we have to be able to do.

Complexity is something that's going to be very difficult to address because of our penchant for bringing in new technologies. Make no mistake about it, these are great technologies. They are compelling. They are making us more efficient. They are allowing us to do things we never imagined, like finding out the optimal time to go to the restroom during a movie. I mean, who could have imagined we could do that a decade ago?

But as with every one of our customers out there, the kinds of things we're talking about fly below their radar. When you download 100 apps on your smartphone, people in general, even the good folks in cybersecurity, have no idea where those apps are coming from, what the pedigree is, have they been tested at all, have they been evaluated, are they running on a trusted operating system?

Ultimately, that's what this business is all about, and that's what 800-161 is all about. It's about a lifecycle of the entire stack from applications, to middleware, to operating systems, to firmware, to integrated circuits, to include the supply chain.

The adversary is all over that stack. They now figure out how to compromise our firmware so we have to come up with firmware integrity controls in our control catalog, and that's the world we live in today.

Managing complexity

I was smiling this morning when I talked about the DNI, the Director of National Intelligence, building their cloud, and whether that's going to go to the public cloud or not. I think Dawn is probably right; you probably won't see that going to the public cloud anytime soon, but cloud computing gives us an opportunity to manage complexity. You can figure out what you want to send to the public cloud.

They do a good job through the FedRAMP program of deploying controls and they’ve got a business model that's important to make sure they protect their customers’ assets. So that's built into their business model and they do a lot of great things out there to try to protect that information.

Then, for whatever stays behind in your enterprise, you can start to employ some of the architectural constructs that you'll see here at this conference, some of the security engineering constructs that we’re going to talk about in 800-160, and you can better defend what stays behind within your organization.

So cloud is a way to reduce that complexity. Enterprise Architecture, TOGAF, all of those architectural things allow you to provide discipline and structure and thinking about what you're building: how to protect it, how much it's going to cost, and is it worth it? That is the essence of good security. It's not about running around with a barrel full of security controls or ISO 27000 saying, hey, you've got to do all this stuff or the sky is going to fall; those days are over.

Integration we talked about. This is also hard. We are working with stovepipes today. Enterprise Architects typically don't talk to security people. Acquisition folks, in most cases, don't talk to security people.

I see it everyday. You see RFPs go out and there is a whole long list of requirements, and then, when it comes to security, they say the system or the product they are buying must be FISMA compliant. They know that’s a law and they know they have to do that, but they really don't give the industry or the potential contractors any specificity as to what they need to do to bring that product or the system to the state where it needs to be.

And so it's all about expectations. I believe our industry, whether it's here or overseas, wherever these great companies operate, the one thing we can be sure of is that they want to please their customers. So maybe the message I'm going to send every day is that we have to be more informed consumers. We have to ask for things that we know we need.

It's like if you go back to the automobile. When I first started driving a long time ago, 40 years ago, cars just had seatbelts. There were no airbags and no steel-reinforced doors. Then, you could actually buy an airbag as an option at some point. When you fast-forward to today, every car has airbags, seatbelts, and steel-reinforced doors. They come as part of the basic product. We don't have to ask for them, but as consumers we know they're there, and it's important to us.

We have to start to look at the IT business in the same way, just like when we cross a bridge or fly in an airplane. All of you who flew here in airplanes and came across bridges had confidence in those structures. Why? Because they are built with good scientific and engineering practices.

So least functionality, least privilege, those are kind of foundational concepts in our world and cybersecurity. You really can't look at a smartphone or a tablet and talk about least functionality anymore, at least if you are running that movie app, and you want to have all of that capability.

The last point about trustworthiness is that we have four decades of best practices in trusted systems development. It failed 30 years ago because we had the vision back then of trusted operating systems, but the technology and the development far outstripped our ability to actually achieve that.

Increasingly difficult

We talked about a kernel-based operating system having 2,000, 3,000, 4,000, 5,000 lines of code and being highly trusted. Well, those concepts are still in place. It’s just that now the operating systems are 50 million lines of code, and so it becomes increasingly difficult.

And this is the key thing. As a society, we're going to have to figure out, going forward, with all this great technology, what kind of world do we want to have for ourselves and our grandchildren? Because with all this technology, as good as it is, if we can’t provide a basis of security and privacy that customers can feel comfortable with, then at some point this party is going to stop.

I don't know when that time is going to come, but I call it the national pain point in this digital denial. We will come to that steady state. We just haven't had enough time yet to get to that balance point, but I'm sure we will.

I talked about the essential partnership, but I don't think we can solve any problem without a collaborative approach, and that's why I use the essential partnership: government, industry, and academia.

Certainly all of the innovation, or most of the innovation, comes from our great industry. Academia is critical, because the companies like Oracle or Microsoft want to hire students who have been educated in what I call the STEM disciplines: Science, Technology, Engineering -- whether it's "double e" or computer science -- and Mathematics. They need those folks to be able to build the kind of products that have the capabilities, function-wise, and also are trusted.

And government plays some role -- maybe some leadership, maybe a bully pulpit, cheerleading where we can -- bringing things together. But the bottom line is that we have to work together, and I believe that we'll do that. And when that happens I think all of us will be able to sit in that movie and fire up that app about the restroom and feel good that it's secure.

Mary Ann Davidson: I guess I'm preaching to the converted, if I can use a religious example without offending somebody. One of the questions you asked is, why do we even have standards in this area? And of course some of them are for technical reasons. Crypto it turns out is easy for even very smart people to get wrong. Unfortunately, we have reason to find out.

So there is technical correctness. Another reason would be interoperability, to get things to work better in a more secure manner. I've worked in this industry long enough to remember the first SSL implementation, woo-hoo, and then it turns out 40 bits wasn't really 40 bits, because it wasn't random enough, shall we say.

Trustworthiness. ISO has a standard for that -- the Common Criteria. We talk about what it means to have secure software, what type of threats it addresses, how do you prove that it does what you say it does? There are standards for that, which helps. It helps everybody. It certainly helps buyers understand a little bit more about what they're getting.

No best practices

And last, but not least, and the reason it’s in quotes, “best practices,” is because there actually are no best practices. Why do I say that -- and I am seeing furrowed brows back there? First of all, lawyers don't like them in contracts, because then if you are not doing the exact thing, you get sued.

There are good practices and there are worst practices. There typically isn't one thing that everyone can do exactly the same way that's going to be the best practice. So that's why that’s in quotation marks.

Generally speaking, I do think standards can be a force for good in the universe, particularly in cybersecurity, but they are not always a force for good, depending on other factors.

And what is the ecosystem? Well, we have a lot of people. We have standards makers, people who work on them. Some of them are people who review things. NIST, for example, is very good -- which I appreciate -- about putting drafts out and taking comments, as opposed to saying, "Here it is, take it or leave it." That's actually a very constructive dialogue, which I believe a lot of people appreciate. I know that I do.

Sometimes there are mandators. You'll get an RFP that says, "Verily, thou shalt comply with this, lest thee be an infidel in the security realm." And that can be positive. It can be a leading edge of getting people to do something good that, in many cases, they should do anyway.

There are implementers, who have to take this, decipher it, and figure out why they are doing it. And there are people who make sure that you actually did what you said you were going to do.

And last, but not least, there are weaponizers. What do I mean by that? We all know who they are. They are people who will try to develop a standard and then get it mandated. Actually, it isn’t a standard. It’s something they came up with, which might be very good, but it’s handing them regulatory capture.

And we need to be aware of those people. I like the Oracle database. I have to say that, right? There are a lot of other good databases out there. If I went in and said, purely objectively speaking, everybody should standardize on the Oracle database, because it’s the most secure. Well, nice work if I can get it.

Is that in everybody else’s interest? Probably not. You get better products in something that is not a monopoly market. Competition is good.

So I have an MBA, or had one in a prior life, and they used to talk in the marketing class about the three Ps of marketing. Don’t know what they are anymore; it's been a while. So I thought I would come up with Four Ps of a Benevolent Standard, which are Problem Statement, Precise Language, Pragmatic Solutions, and Prescriptive Minimization.

Economic analysis

And the reason I say this is one of the kind of discussions I have to have a lot of times, particularly sometimes with people in the government. I'm not saying this in any pejorative way. So please don't take it that way. It's the importance of economic analysis, because nobody can do everything.

So being able to say that I can't boil the ocean, because you are going to boil everything else in it, but I can do these things. If I could do these things, it’s very clear what I am trying to do. It’s very clear what the benefit is. We've analyzed it, and it's probably something everybody can do. Then, we can get to better.

Better is better than omnibus. Omnibus is something everybody gets thrown under if you make something too big. Sorry, I had to say that.

So Problem Statement: why is this important? You would think it’s obvious, Mary Ann, except that it isn't, because so often the discussions I have with people, tell me what problem you are worried about? What are you trying to accomplish? If you don't tell me that, then we're going to be all over the map. You say potato and I say "potahto," and the chorus of that song is, "let’s call the whole thing off."

I use supply chain as an example, because this one is all over the map. Bad quality? Well, buying a crappy product is a risk of doing business. It's not, per se, a supply chain risk. I'm not saying it's not important, but it's certainly not a cyber-specific supply chain risk.

Bad security: well, that's important, but again, that’s a business risk.

Backdoor bogeyman: this is the popular one. How do I know you didn’t put a backdoor in there? Well, you can't actually, and that’s not a solvable problem.

Assurance, supply chain shutdown: yeah, I would like to know that a critical parts supplier isn’t going to go out of business. So these are all important, but they are all different problems.

So you have to say what you're worried about, and it can't be all of the above. Almost every business has some supplier of some sort, even if it's just healthcare. If you're not careful how you define this, you will be trying to define 100 percent of any entity's business operations. And that's not appropriate.

Use cases are really important, because you may have a Problem Statement. I'll give you one, and this is not to ding NIST in any way, shape, or form, but I just read this. It’s the Cryptographic Key Management System draft. The only reason I cite this as an example is that I couldn't actually find a use case in there.

So whatever the merits of that are, are you trying to develop a super-secret key management system for government, for very sensitive cryptographic things you are building from scratch, or are you trying to define a key management system that we have to use for things like TLS or any encryption that any commercial product does? Because that's way out of scope.

So without that, what are you worried about? And also what’s going to happen is somebody is going to cite this in an RFP and it’s going to be, are you compliant with bladdy-blah? And you have no idea whether that even should apply.

Problem Statement

So that Problem Statement is really important, because without that, you can't have that dialogue in groups like this. Well, what are we trying to accomplish? What are we worried about? What are the worst problems to solve?

Precise Language is also very important. Why? Because it turns out everybody speaks a slightly different language, even if we all speak some dialect of geek, and that is, for example, a vulnerability.

If you say vulnerability to my vulnerability handling team, they think of that as a security vulnerability that’s caused by a defect in software.

But I've seen it used to include, well, you didn't configure the product properly. I don't know what that is, but it's not a vulnerability, at least not to a vendor. You implemented a policy incorrectly. It might lead to a vulnerability, but it isn't one. So you see where I am going with this. If you don't have language that defines the same thing very crisply, you read something, you go off and do it, and you realize you solved the wrong problem.

I am very fortunate. One of my colleagues from Oracle, who works on our hardware, and I also saw a presentation by people in that group at the Cryptographic Conference in November. They talked about how much trouble we got into because if you say, "module" to a hardware person, it’s a very different thing from what it meant to somebody trying to certify it. This is a huge problem because again you say, potato, I say "potahto." It’s not the same thing to everybody. So it needs to be very precisely defined.

Scope is also important. I don't know why I have to say this a lot, and it does get kind of tiresome, I am sure, to the recipients: COTS isn't GOTS. Commercial software is not government software, and it's actually globally developed. That's the only way you get commercial software -- feature-rich and released frequently. We have access to global talent.

It’s not designed for all threat environments. It can certainly be better, and I think most people are moving towards better software, most likely because we're getting beaten up by hackers and then our customers, and it’s good business. But there is no commercial market for high-assurance software or hardware, and that’s really important, because there is only so much that you can do to move the market.

So even a standards developer, or a big customer like the U.S. government, is an important customer in the market for a lot of people, but they're not big enough to move the marketplace on their own, and so you are limited by the business dynamic.

So that's important; you can get to better. I tell people, "Okay, anybody here have a Volkswagen? Okay, is it an MRAP vehicle? No, it's not, is it? You bought a Volkswagen and you got a Volkswagen. You can't take a Volkswagen and drive it around the streets and expect it to perform like an MRAP vehicle. Even a system integrator, a good one, cannot sprinkle pixie dust over that Volkswagen and turn it into an MRAP vehicle." Those are very different threat environments.

Why do you think commercial software and hardware are different? They're not different. It's exactly the same thing. You might have a really good Volkswagen, and it's great for commuting, but it is never going to perform in an IED environment. It wasn't designed for that, and there is nothing you can do to make it perform in that environment.

Pragmatism

Pragmatism: I really wish anybody working on any standard would do some economic analysis, because economics rules the world. Even if it's something really good, a really good idea, time, money, and people -- particularly qualified security people -- are constrained resources.

So if you make people do something that looks good on paper but is really time-consuming, the opportunity cost is too high. That means asking what the value is of something else you could do with those resources that would either cost less or deliver higher benefit. If you don't do that analysis, then you have people saying, "Hey, that's a great idea. Wow, that's great too. I'd like that." It's like asking your kid, "Do you want candy? Do you want new toys? Do you want more footballs?" Instead of saying, "Hey, you have 50 bucks, what are you going to do with it?"

And then there are unintended consequences, because if you make this too complex, you just have fewer suppliers. People will simply say, "I'm just not going to bid, because it's impossible." I'm going to give you three examples, and again I'm trying to be respectful here. This is not to dis anybody who worked on these. In some cases, these things have had subsequent revisions that modified them, which I really appreciate. But they are examples of, when you think about it, what were you asking for in the first place?

I think this was in an early version of NISTIR 7622 and has since been excised. There was a requirement that the purchaser wanted to be notified of personnel changes involving maintenance. Okay, what does that mean?

I know what I think they wanted, which is, if you are outsourcing the human resources for the Defense Department and you move the whole thing to "Hackistan," obviously they would want to be notified. I got that, but that’s not what it said.

So I look at that and say, we have 5,000 products, at least, at Oracle. We have billions and billions of lines of code. Every day, somebody checks out a transaction, gets some code, and does some work on it, and they didn't write it in the first place.

So am I going to tweet all that to somebody? What's that going to do for you? Plus you have things like the German Workers Council. We are going to tell the US Government that Jurgen worked on this line of code? Oh no, that's not going to happen.

So what was it you were worried about? Because that is not sustainable. Tweeting people 10,000 times a day with code changes is just going to consume a lot of resources.

In another one, they had this in an early version of something they were trying to do. They wanted to know, for each phase of development for each project, how many foreigners worked on it. What's a foreigner? Is it a Green Card holder? Is it someone who has a dual passport? What is that going to do for you?

Now again, if you had super-custom code for some intelligence application, I can understand there might be cases in which that would matter. But general-purpose software is not one of them. As I said, I can give you that information. We're a big company and we've got lots of resources. A smaller company probably can't. Again, what will it do for you? Because I am taking resources I could be using on something much more valuable and putting them on something really silly.

Last, but not least, and again, with respect, I think I know why this was in there. It might have been the secure engineering draft standard that you came up with that has many good parts to it.

Root cause analysis

I think vendors will probably understand this pretty quickly: Root Cause Analysis. If you have a vulnerability, one of the first things you should do is Root Cause Analysis. If you're a vendor and you have a CVSS 10 security vulnerability in a product that's being exploited, what do you think the first thing you are going to do is?

Get a patch or a workaround into your customers' hands? Yeah, that's probably the number one priority. But Root Cause Analysis, particularly for really nasty security bugs, is also really important. CVSS 0, who cares? But for a 9 or 10, you should be doing that analysis.

I've got a better one. We have a technology called Java. Maybe you've heard of it. We put a lot of work into fixing Java. One of the things we did is not only Root Cause Analysis for CVSS 9 and higher -- those have to go in front of my boss -- but every Java developer had to sit through that briefing: how did this happen?

Last but not least, we look for other similar instances -- not just the root cause, but how did that get in there, how do we avoid it, and where else does this problem exist? I am not saying this to make us look good; I'm saying it for the analytics. What are you really trying to solve here? Root Cause Analysis is important, but it's important in context. If I have to do it for everything, it's probably not the best use of a scarce resource.
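A minimal sketch of the triage logic described here -- reserving full root-cause and "where else does this exist" analysis for the most severe issues -- might look like the following. The thresholds and record format are illustrative assumptions for the sketch, not Oracle's actual process.

```python
# Illustrative triage: decide which reported vulnerabilities warrant full
# root-cause and variant analysis based on CVSS score. The thresholds and
# record format are assumptions for this sketch, not any vendor's real policy.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss: float
    component: str

def triage(vuln: Vulnerability) -> str:
    if vuln.cvss >= 9.0:
        # Patch first, then full root-cause analysis plus a hunt for
        # similar instances elsewhere in the codebase.
        return "patch + root-cause analysis + variant hunt"
    if vuln.cvss >= 7.0:
        return "patch + lightweight root-cause review"
    return "fix in normal release cycle"

reports = [
    Vulnerability("CVE-0000-0001", 9.8, "auth"),
    Vulnerability("CVE-0000-0002", 5.4, "ui"),
]

for v in reports:
    print(v.cve_id, "->", triage(v))
```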

My last point is to minimize prescriptiveness, within limits. For example, probably some people in here don't know how to bake, or maybe you've made a pie. There is no one right way to bake a cherry pie. Some people go down to Ralphs and get a frozen Marie Callender's out of the freezer, stick it in the oven, and they've got a pretty good cherry pie.

Some people make everything from scratch. Some people use a prepared pie crust and they do something special with the cherries they picked off their tree, but there is no one way to do that that is going to work for everybody.

There can be good practices for some things. For example, I can say truthfully that a development practice of, number one, just start coding; and number two, if it compiles without too many errors on the base platform, ship it -- that is not good development practice.

If you mandate too much, it will stifle innovation and it won't work for people. Plus, as I mentioned, you will have an opportunity cost if I'm doing something that somebody says I have to do when there is a more innovative way of doing it.

We don’t have a single development methodology in Oracle, mostly because of acquisitions. We buy a great company, we don't tell them, "You know, that agile thing you are doing, it’s the last year. You have to do waterfall." That’s not going to work very well, but there are good practices even within those different methodologies.

Allowing for different hows is really important. Static analysis is one of them. I think static analysis is kind of an industry practice now, and people should be doing it. Third-party static analysis, though, is really problematic. I have been opining about this this morning.

Third-party analysis

Let's just say I have a large customer, whom I won't name, that used a third-party static analysis service. They broke their license agreement with us; they're getting a lot of it from us. Worse, they gave us a report that included vulnerabilities from one of our competitors. I don't want to know about those, right? I can't fix them. I did tell my competitor, "You should know this report exists, because I'm sure you want to analyze this."

Here's the worst part. How many of those vulnerabilities that the third party found do you think had any merit? Running the tool is nothing; analyzing the results is everything. That customer and the vendor wasted the time of one of our best security leads trying to make sure there was no there there, and there wasn't.

So again, and last but not least, government can use their purchasing power in lot of very good ways, but realize that regulatory things are probably going to lag actual practice. You could be specifying buggy whip standards and the reality is that nobody uses buggy whips anymore. It's not always about the standard, particularly if you are using resources in a less than optimal way.

One of the things I like about The Open Group is that here we have actual practitioners. This is one of the best forums I have seen, because there are people who have actual subject matter expertise to bring to the table, which is so important in saying what is going to work and can be effective.

The last thing I am going to say is a nice thank you to the people in The Open Group Trusted Technology Forum, because I appreciate the caliber of my colleagues, and also Sally Long. They talk about this type of effort as herding cats, and at least for me, it's probably like herding a snarly cat. I can be very snarly. I'm sure you can pick up on that.

So I truly appreciate the professionalism and the focus and the targeting. Targeting a good slice of making a supply-chain problem better, not boiling the ocean, but very focused and targeted and with very high-caliber participation. So thank you to my colleagues and particularly thank you to Sally, and that’s it, I will turn it over to others.

Jim Hietala: We do have a few questions from the audience. Here's the first one, and both of you should feel free to chime in on it. Something you brought up, Dr. Ross: building security in, looking at software and systems engineering processes. How do you bring industry along in terms of commercial off-the-shelf products and services, especially when you look at things like IoT, where we have IP interfaces grafted onto all sorts of devices?

Ross: As Mary Ann was saying before, the strength of any standard is really its implementability out there. When we talk about, in particular, the engineering standard, the 15288 extension, if we do that correctly every organization out there who's already using -- let's say a security development lifecycle like the 27034, you can pick your favorite standard -- we should be able to reflect those activities in the different lanes of the 15288 processes.

This is a very important point that I got from Mary Ann’s discussion. We have to win the hearts and minds and be able to reflect things in a disciplined and structured process that doesn't take people off their current game. If they're doing good work, we should be able to reflect that good work and say, "I'm doing these activities whether it’s SDL, and this is how it would map to those activities that we are trying to find in the 15288."

And that can apply to the IoT. Again, it goes back to the computer, whether it’s Oracle database or a Microsoft operating system. It’s all about the code and the discipline and structure of building that software and integrating it into a system. This is where we can really bring together industry, academia, and government and actually do something that we all agree on.

Different take

Davidson: I would have a slightly different take on this. I know this is not a voice crying in the wilderness. My concern about the IoT goes back to things I learned in business school in financial market theory, which unfortunately has been borne out in 2008.

There are certain types of risks you can mitigate. If I cross a busy street, I'm worried about getting hit by a car. I can look both ways. I can mitigate that. You can't mitigate systemic risk. It means that you have created a fragile system. That is the problem with the IoT, and that is a problem that no amount of engineering will solve.

If it's not a problem, why aren't we giving nuclear weapons IP addresses? Okay, I am not making this up. The Air Force thought about that at one point. You're laughing. Okay, Armageddon, there is an app for that.

That's the problem. I know this is going to happen anyway, whether or not I approve of it, but I really wish that people could look at this not just in terms of how many of these devices there are and what a great opportunity it is, but in terms of what systemic risk we are creating by doing this.

My house is not connected to the Internet directly and I do not want somebody to shut my appliances off or shut down my refrigerator or lock it so that I can’t get into it or use that for launching an attack, those are the discussions we should be having -- at least as much as how we make sure that people designing these things have a clue.

Hietala: The next question is, how do customers and practitioners value the cost of security? And then a related question: what can global companies do to get C-Suite attention and investment on cybersecurity -- that whole ROI value discussion?

Davidson: I know they value it because nobody calls me up and says, "I am bored this week. Don't you have more security patches for me to apply?" That's actually true. We know what it costs us to produce a lot of these patches, and for the amount of resources we spend on that, I would much rather be putting them into building something new and innovative, where we could charge money for it and provide more value to customers.

So it's cost avoidance, number one. Number two, more people have an IT backbone. They understand the value of having it be reliable. Probably one of the reasons people are moving to clouds is that it's hard to maintain all these systems and hard to find the right people to maintain them. But I also do have more customers asking us now about our security practices, which is a case of be careful what you wish for.

I said this 10 years ago: people should be demanding to know what we're doing. Now I am going to spend a lot of time answering RFPs, but that's good. These people are aware of this. They're running their business on our stuff, and they want to know what kind of care we're taking to make sure we're protecting their data and their mission-critical applications as if they were ours.

Difficult question

Ross: The ROI question is very difficult with regard to security. I think this goes back to what I said earlier. The sooner we get security out of its stovepipe and integrated as just part of the best practices that we do every day -- whether it's in the development work at a company, or in our enterprises as part of mainstream organizational management things like the SDLC, or in any engineering work within the organization, or with the Enterprise Architecture group involved -- the better. That integration makes security less of "hey, I am special" and more of just a part of the way we do business.

So customers are looking for reliability and dependability. They rely on this great bed of IT product systems and services and they're not always focused on the security aspects. They just want to make sure it works and that if there is an attack and the malware goes creeping through their system, they can be as protected as they need to be, and sometimes that flies way below their radar.

So it's got to be a systemic process and an organizational transformation. I think we have to go through it, and we are not quite there just yet.

Davidson: Yeah, and you really do have to bake it in. I have a team of -- I’ve got three more headcount, hoo-hoo -- 45 people, but we have about 1,600 people in development whose jobs are to be security points of contact and security leads. They're the boots on the ground who implement our program, because I don't want to have an organization that peers over everybody’s shoulder to make sure they are writing good code. It's not cost-effective, not a good way to do it. It's cultural.

One of the ways that you do that is seeding those people in the organization, so they become the boots on the ground and they have authority to do things, because you’re not going to succeed otherwise.

Going back to Java, that was the first discussion I had with one of the executives: this is a cultural thing. Everybody needs to feel that he or she is personally responsible for security, not those 10 or 20 people, whoever the security weenie is. It's got to be everybody, and when you can do that, you really start to see change in how things happen. Everybody is not going to be a security expert, but everybody has some responsibility for security.

This has been a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015. Download a copy of the transcript. This follows an earlier discussion from the event on synergies among major Enterprise Architecture frameworks with The Open Group.

You may also be interested in:

Tags:  BriefingsDirect  cybersecurity  Dana Gardner  Dave Lounsbury  Interarbor Solutions  Jim Hietala  Mary Ann Davidson  Ron Ross  supply chain  The Open Group  The Open Group San Diego 2015 

 

Showing value early and often boosts software testing success at Pomeroy

Posted By Dana L Gardner, Friday, March 06, 2015

This next edition of the HP Discover Discussion Series highlights how Pomeroy, a Global IT managed services provider, improves quality for their applications testing, development and packaged applications customization.

By working with a partner, TurnKey Solutions, and HP, Pomeroy improves their overall process for development and thereby achieves far better IT and business outcomes.

 Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn more about how they're improving app testing proficiency, BriefingsDirect sat down with Mary Cathell, Quality Assurance Analyst at Pomeroy in Hebron, Kentucky, and Daniel Gannon, President and CEO at TurnKey Solutions in Denver. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about Pomeroy and then how improved development has boosted software benefits internally, as well as for your end-user customers across your managed-service provider (MSP) offerings.

Cathell: We're a premier provider of IT managed services. We do end user, network, data center, and everything in between. We’re hands on all over the place. We have a global footprint. Quality is absolutely imperative. We have big client companies like Nestle, Goodyear, and Bayer. These are companies that have a certain amount of respect in the business world. They depend upon quality in their products, and we need to deliver quality in our products to them.

Gardner: And you're the sole quality assurance analyst. So you have a big job.

Cathell: I do.

Gardner: What did you find when you got there, and what was the steady state before you started to make some improvements?

Making improvements

Cathell: This was November of 2012. They gave me an opportunity to bring something new that they were unfamiliar with and to teach, which I love to do. They had purchased Oracle E-Business Suite (EBS). Everyone had their own piece of the process, from sales to logistics, and they were all using different applications to do this process.

Cathell

It was a paradigm shift to take one system and bring us together as one company using one product. There was a lot of struggle through that, and they struggled through testing this, because they had no testing background. I was brought in to bring it to steady state.

After we went live, we got to steady state. Now it was like, "Let's not reinvent the wheel. Let's do this right. Let's begin scripting."

Testing is terrible. It's tedious. No one has the time to do it. No one has the patience to do it. So they either don’t do it or they throw buckshot at it. They do ad-hoc testing, or they just let errors come in and out, and they fix them on the back end, which is client facing.

Does Goodyear want to see a real-estate problem on an invoice? No, they don't, and we lose credibility. Goodyear is talking to their clients. They have friends. Their CEO is talking to another company's CEO. Now, you’ve got a word-of-mouth situation off of one mistake. You can't have that.

Gardner: What were some of the hurdles that you needed to overcome to become more automated, to take advantage of technology, to modernize the quality assurance processes? Then, we'll talk about how TurnKey works in that regard. But let's talk about what you had to overcome first?

Cathell: I had to show the value. Value is everything, because people ask, "Why do we need to do this? This is so much work. What value is that going to bring to me?"

Again, it lets your processes work with the business function as a well-oiled machine, because you're not separate anymore. You're not siloed. You need to work together. It's cross-functional. It taught us our data.

Now there's an understanding that this works. We can function better now in just our regular business process, not even testing, but what we do for our customer. That’s value for our internal customers, which ends up being absolute value to our external customers.

Gardner: The solution you went for included HP Quality Center, but you wanted to take that a step further, and that's where TurnKey comes in.

Due diligence

Cathell: I talked to several other companies. You need to. You need to do the due diligence. TurnKey did a wonderful thing. They provided something that no one else was doing.

We didn't have the bandwidth or the talent internally to script automation. It's very difficult and it's a very long process, but they have an accelerator that lets you just drag and drop from out-of-the-box Oracle and make changes, as you need to, for their customizations and your personalization.

Seven best practices for business-ready applications

They also had cFactory, so that when your system changes -- and it will, because your business grows, your process changes -- it tells you the differences. You just click on a form, and it brings back what's there, shows you the comparison on what's changed, and asks if you would like to keep those changes. You don’t have to update your entire test case suite. It does it for you. It takes out that tedious mess of trying to keep updated.

Gardner: Daniel, is this what a lot of your clients go through, and what is it that you're bringing to the table in addition to HP Quality Center that gets their attention and makes this more powerful?

Gannon: Yeah, her story resonates. It’s very, very common for people to have those same issues. If you look at the new style of IT, it's really about two things, the two Vs, volume and velocity. You have a lot more data -- big data -- and it comes at you much faster. The whole notion of agility in business is a real driver, and these are the things that HP is addressing.

Gannon

From the perspective of how we deal with test automation, that’s what our products are designed to do. They enable people to do that quickly, easily, and manage it in a way that doesn't require armies of people, a lot of labor, to make that happen.

If you think about a standard environment like Mary’s at Pomeroy, the typical way people would address that is with a lot of people, a lot of hands, and a lot of manual effort. We think that intelligent software can replace that and help you do things more intelligently, much more quickly, and most importantly, at much, much lower cost.

Gardner: Mary, you've been at this for going on a couple of years. When you do intelligent software well, when you pick your partners well, what sort of results do you get? What’s been the change there?

Cathell: There is a paradigm shift, because now, when they, specifically our sales department, see the tool run, they're wowed. They're working with me to champion the tool to other parts of the business. That's ultimately the biggest reward -- to see people get it and then champion it.

Gardner: Has this translated into your constituents, your users, coming back to you for more customization because they trust this so that they're more interested in working with software, rather than resisting it?

Difficult to automate

Cathell: We absolutely did have that change, again specifically with sales, which is the most difficult process to automate, because it can go in so many different ways. So they're on board. They're leading my fight that we need to do this. This is where this company needs to go. This is where technology is going.

Gardner: And when you bring this mentality of better software quality and giving them the means to do it that’s not too arduous, doesn't that then also extend back into your fuller application development processes? How far back are you going through development and change? Is there a DevOps opportunity here for you to bring this into operations and start to sew this together better?

Cathell: That could happen in the future. Our requirements phase is a lot better, because now they're seeing scenarios with expected results -- pass/fails. Now, when we build something new, they go back and look at what they've written for our test scenarios and say, "Oh, our requirements need to be unambiguous. We need to be more detailed."

I do that liaison work where I speak geek for the developer and English for the business. We marry them together, and that now creates new, quality products.


Gardner: Daniel, Pomeroy uses this to a significant degree with Oracle EBS, but how about some of your other customers? What other applications, activities, and/or products has this been applied to? Do you have any metrics of success across some instances of what people get for this?

Gannon: We find that customers leverage the HP platform as the best-in-class platform for test automation across the broadest portfolio of applications in the industry, which is really powerful. What TurnKey Solutions brings to the table is specialization in conjunction with that platform. Our partnership reaches back well over a decade, where we have developed these solutions together.

We find that people use mission-critical applications, enterprise resource planning (ERP) applications like Oracle EBS, SAP, PeopleSoft and others that run the business. And our solutions address the unique problems of those applications. At the same time, we provide a common set of tools to provide test automation across the entire portfolio of applications within a company.

Many companies will have 600, 700, or thousands of applications that require the same level of due diligence and technology. That's what this kind of combination of technologies provides. 


Gardner: Mary, now that you've done this, with some hindsight -- not just from your current job but from previous jobs -- do you have any words of wisdom for other organizations that know they have quality issues? They don't always necessarily know how to go about it, but probably think they would rather have an intelligent, modern approach. Do you have any words of wisdom that you can give them as they get started?

Break it down

Cathell: Absolutely. Everybody wants to look at the big picture -- and you should look at the big picture -- but you need to break it down. Do that in agile form. Make those into iterations. Start small and build up. Start with the smallest process you have and keep building on it; you're going to see more results than trying to tackle that huge elephant in the room that's just unattainable.

Gardner:  A lot of times with new initiatives, it’s important to establish a victory early to show some returns. How did you do that and how would you suggest others do that in order to keep the ball rolling?

Cathell: Get people excited. Get them onboard. Make sure that they're involved in the decision making and let them know what your plans are. Communication is absolute key, and then you have your champions.

Gardner: Daniel, we're here at HP Discover. This is where they open the kimono, in many ways, on their software, testing, and application lifecycle management businesses. As a longtime HP partner, what are you hoping to see? What interests you? Any thoughts about the show in general?


Gannon: What's exciting is that HP addresses IT problems in general. There's no specificity necessarily. What companies really grapple with is how to put together a portfolio of solutions that addresses entire IT needs, rather than simple, specific, smoke stack kinds of solutions. That’s what’s really exciting. HP brings it all together, really delivers, and then values the customers. That’s what I think is really compelling.

Gardner: Okay, how about the emphasis on big data -- recognizing that applications are more now aligned with data and that analysis is becoming perhaps a requirement for more organizations in more instances? How do you see application customization and big data coming together?

Gannon: For me, big data is both problem and opportunity. The problem is that it's big data. How do you cull and create intelligence from this mass of data? That's where the magic lies. Those people who can tease actionable information from these massive data stores have the ability to act upon that.

There are a number of HP solutions that enable you to do just that. That will propel businesses forward to their next level, because you can use that information -- not just data, but information -- to make business decisions that enable customers going forward.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tags:  BriefingsDirect  Dan Gannon  Dana Gardner  DevOps  ERP  HP  HPDiscover  Interarbor Solutions  Mary Cathell  Pomeroy  quality assurance  Turnkey Solutions 


Kony Visualizer puts mobile apps design control in hands of those closest to the business

Posted By Dana L Gardner, Thursday, March 05, 2015

The next BriefingsDirect enterprise mobile strategy discussion comes to you directly from last month's Kony World 2015 Conference in Orlando.

This five-part series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

For our next innovation interview, we welcome Ed Gross, Kony Vice President of Product Management. Ed is focused on the Kony Visualizer Product, including requirements prototyping, development oversight, release planning, and lifecycle management. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Ed Gross: Here at Kony World, we're educating our customers on our latest releases in our product portfolio. One that I'm most excited about is the 2.0 release of our Visualizer product, which brings a number of next-generation capabilities with it.

Visualizer is a tool by which you can create engaging and dynamic user experiences on all platforms for mobility, including tablet and desktop as well. What it does is present an opportunity for designers to take back control of the development process of both designing applications and creating rich next-generation user experiences.


If you look at how applications are designed typically, it's a very rigid process of creating wireframes and mockups and then throwing those materials over the wall to developers. Designers today, prior to the Visualizer, didn't really have a suite of tools that they could  use to create these applications directly using the technology.

Right now, designers create sort of mockups and proxies of that design to hand over to a developer to implement. We thought it would be great if designers had a tool by which they can directly create that user experience in the native and Web channels using the underlying Kony framework.

With Visualizer you can go in with this what-you-see-is-what-you-get (WYSIWYG) environment. It’s actually called WYSIWYM (what you see is what you mobilize). It’s a term that we coined because it’s a unique approach and something we believe to be a new paradigm in designing applications.

What I can do as a designer is just drag and drop widgets onto my forms. I can create dynamic interactions that really showcase the native capabilities that we have with Visualizer. I can then take that design and publish the actual app to the Kony cloud. Then,  using an app on my phone or tablet, I can then download that design directly, look at all the native interactions, review them, and get a feel for the actual application without having to write any code.

This is a true native experience, not some sort of web-based proxy, mockup, or set of wireframes. I'm actually creating the app product itself within Visualizer with this WYSIWYG canvas.

Native capabilities

We provide access to all the native capabilities. For example, I can use a cover flow widget, a page widget, a calendar, or a camera. I get access to all those rich native capabilities, using what we call actions, without having to go down and write code for all these different platforms.

Fundamentally, what this also represents is a collaboration opportunity with business and IT. If I'm a designer working under the marketing arm of an organization or I'm a designer or a developer in the IT organization, by using what we call app preview, I can take this design, publish it to the Kony cloud, and bring it into the shell application that you could download from any of the app stores.

Then, I can review and  write notes on this design. I can send those notes back to the cloud. Ultimately, the Visualizer user can see those comments that I've left across the entire application. They can act upon them and iterate through that design process by republishing that app back to the cloud so that the business user or the developer, the designer, whoever is actually reviewing this application, can annotate on it.

The fundamental principle here is that you are not just creating a set of assets to hand over to a developer. You’re actually creating the app itself. What’s really fundamental is that we're essentially giving all of the power and all of the control back to the designer, so that the designer can finalize this application and then simply hand it over to the developer using Kony Studio.

The developer can take it from there without having to rewrite any of the front end of the application. The developer doesn't need to be concerned with creating all of the user experience components by writing code or creating views. They focus on what they do best, which is hooking that application into back-end services and systems, such as SAP, Siebel, or any enterprise service bus connectors.


If you want to integrate with a Web service like an XML, SOAP, or JSON service, you do all that in the studio. You don’t worry about writing all the front-end code. You make it production ready, you wire it, and you do the fundamental business logic of the application and the integration with other products.
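As a rough illustration of that division of labor -- the designer's form already exists, and the developer only wires it to a back-end JSON service and maps the response onto widgets -- here is a hedged TypeScript sketch. The service URL, the Order shape, and the setListData hook are hypothetical placeholders, not the actual Kony Studio API.

```typescript
// Hypothetical placeholders -- not the actual Kony Studio API.
interface Order {
  id: string;
  customer: string;
  total: number;
}

// The integration work the developer owns: call the back-end JSON service.
async function fetchOpenOrders(serviceUrl: string): Promise<Order[]> {
  const response = await fetch(`${serviceUrl}/orders?status=open`);
  if (!response.ok) {
    throw new Error(`Order service failed with status ${response.status}`);
  }
  return (await response.json()) as Order[];
}

// The binding step: populate a widget the designer already laid out.
// No front-end layout code is written here.
async function bindOrdersToForm(
  form: { setListData: (rows: Order[]) => void },
  serviceUrl: string
): Promise<void> {
  const orders = await fetchOpenOrders(serviceUrl);
  form.setListData(orders);
}
```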

What the designer has given you is already complete, so it cuts down all those cycles. It also cuts down on defects. What we saw before Visualizer was that most development projects had very large numbers of defects associated with the user experience.

What I mean is that if, today, you take an application that was developed using other technology and break down all the defects by category -- integration defects, user experience defects, or performance defects -- we find that 70 percent to 80 percent of the defects are associated with poor implementation of the user experience.

In that typical waterfall process that I mentioned earlier, there are a lot of gaps. We hand those assets over to a developer, and the developer has to make a lot of assumptions in that process. They have to fill in a lot of the holes that the designer may have left, because the designer is not going to design and spec out every single tiny component of that application.

What winds up happening is that a developer somewhere in that lifecycle will make assumptions and implement something in a way that doesn't satisfy the requirements of the business. So you have to go through that whole process of designing and developing over and over again.

Rapid iteration

With Visualizer, you have the capability to quickly iterate. You publish that app design, you get feedback from the business, as I mentioned earlier, and, even during the development process, you iterate through that design process. The integration between Visualizer and our Studio product is completely bidirectional.

At any point in that development process, you can transfer that application design back into Visualizer, make any adjustments, and then reimport it into Studio. So the product suite is very well-integrated. At Kony, it's something that we believe is a true differentiator.

Our core focus is mobility. So we ensure that the developer and designer experience is world class by tightly integrating the entire design and development process, making sure that those two processes are as close as possible to what we call the metal, the underlying channel, and that they can occur in parallel streams. You no longer have to go through a traditional paper-based design process to move forward with implementing your app design.

Gardner: What is specifically new in Visualizer 2.0 as well as Framework 6.0?

Gross: Historically at Kony, we have supported a broad swath of devices -- from 2008-era Symbian and BlackBerry devices all the way up through iOS, Android, Mobile Web, and even Desktop Web, Windows, and so on. What we did was look at our layout model, where we had previously recognized that we're going to push forward to the next generation of application design.


By doing so, we introduced a different paradigm for laying out your application, using what we call flex layout, which is supported on the next generation of what we call Hero devices. It's focusing on those devices, those smartphones, that can provide that next-generation level of experience that we've become used to.

If you look at Android, iOS, and Windows devices, that’s our core focus as well as Web and Mobile Web. We really up-leveled the entire experience so you can design very engaging experiences using flex layout. We've also introduced a number of capabilities around animation, so that you can get those advanced animation and dynamic interactions that you become used to in consumer grade applications with Kony.

We've also introduced a suite of APIs around this as well. The developer can create very dynamic experiences, or the designer in Visualizer can create these wonderful experiences using what we call the Action Editor to access all of those animation components and a bunch of native components -- the ability to invoke advanced device-level actions like the camera or map widget, or to send an SMS or an e-mail -- all without having to write code.
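For contrast, here is a hedged TypeScript sketch of roughly the device-level plumbing that such a no-code action replaces. The DeviceBridge interface and its methods are hypothetical placeholders for illustration, not Kony's actual framework API.

```typescript
// Hypothetical placeholders -- not Kony's actual framework API.
interface DeviceBridge {
  openCamera(onPhoto: (imagePath: string) => void): void;
  sendSMS(to: string, body: string): void;
  animate(widgetId: string, props: Record<string, number>, durationMs: number): void;
}

// What a "take a photo, then fade in the thumbnail" action looks like when
// written by hand instead of configured in an Action Editor.
function attachPhotoAction(device: DeviceBridge, thumbnailWidgetId: string): void {
  device.openCamera((imagePath) => {
    device.animate(thumbnailWidgetId, { opacity: 1 }, 300);
    console.log(`Photo captured at ${imagePath}`);
  });
}
```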

Gardner: A recurring theme here, and in the industry at large, is the need for speed -- closing the gap between the demand for mobile apps and what the IT organization and the developer corps can produce. Is there anything about Visualizer and the Framework that helps the DevOps process along? Perhaps it's being able to target a cloud or platform-as-a-service (PaaS) type of affair, where you can get that into production rapidly. How does what you've brought to the market now help in terms of speed?

Reducing time

Gross: There are a number of things. The first principle here is that we're significantly and seriously reducing the time it takes to get from design to development through this process. We're seeing a 15x or higher improvement in the time it takes to develop the front end of an application, which is significant, and we believe in that very much. That's probably the most important thing.

There are tools underneath the hood that support that, including the app preview that I’d mentioned that lets you get on the device native without having to go through any of the development cycles. So it’s a drastic improvement.

There's also a huge reduction in the number of errors in the process. It also increases your capability to iterate. That is really core. You can create multiple designs and use those designs to socialize your idea, your business process, or the impact it will have on your users up front.


So I don't have to go through an entire waterfall process to discover that my user experience may not be right and may not be an effective use of my information architecture, for example. I'm able to do all that up front. And all this is supported with the underlying cloud infrastructure at Kony. When I publish my app preview, or if I publish this to a developer, it’s all supported within our cloud infrastructure.

To get down to brass tacks, I as a designer can publish my project to the Kony cloud and share it with a developer, what we call our functional previews of that application. That app preview that I’d mentioned is all supported with the underlying cloud platform.

Then, when you look at Studio, our Studio product is highly integrated with our MobileFabric solution, and we're working in our next release to increase that integration even more. You can invoke our mobile cloud services from our development environment. We're going to be working to merge that entire Studio environment with our Visualizer design components, drastically improving the design and development integration experience.

Gardner: And to tie this into some of the other news and announcements here at Kony World, this is targeted at many of your partners and independent software vendors (ISVs), new ones that were brought in and the burgeoning cloud of supporters. Is this also what you expected, for ISVs to use to create those ready-to-deploy apps like Kony Sales, or are these for custom apps, or all of the above?

Custom app support

Gross: All of the above. Visualizer, if you look at the lowest level, is really built to support custom app design and development. That’s the traditional core of the Kony technology, the Kony platform stack. We're introducing a new product, Kony Modeler, this month, and that product is actually built on the foundation of Visualizer and our underlying developer framework.

When you design in Visualizer, you're essentially designing either custom applications or our model-driven business applications, such as Kony Sales. The configuration of those applications that a business analyst or business user does inside of Modeler is also built on the Visualizer stack. So everything you do is highly visual, and this speaks to the user-centered development methodology that we see now.

User experience-driven applications are the future, and we recognize that at Kony. We put the user experience first, not the data model, not writing other kinds of models. We really focus on driving user expectations, increased performance for B2E applications, increased productivity, and it all relates back to user experience.

Gardner: Give me more insight as to why an ISV should think about Kony when going to mobile markets.


Gross: The first reason is that you’re greatly reducing the time it takes to get from design to the end product, which is key. Number two, you're able to reduce man-hours in the development process of the front-end experience.

I'd also like to reiterate that, because of our fundamental underlying JavaScript platform, you're able to write once for all these different channels. A fourth point that I'd like to bring up on top of these is our service-level agreement (SLA), which is unique in the industry.

At Kony, we have a unique SLA that says that within 30 days of a new operating system release, we will provide support within the Kony platform. Nobody else does that. We guarantee that support across our ISV channels and our direct customers, so that they don’t have to worry about revving up to the next version of the given channel. We really take care of that. We mask our customers from that, so that they can focus on innovation.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Kony, Inc.


Tags:  BriefingsDirect  Dana Gardner  Ed Gross  Interarbor Solutions  Kony  Kony Visualizer  Kony World  Mobile apps 


Explore synergies among major Enterprise Architecture frameworks with The Open Group

Posted By Dana L Gardner, Wednesday, March 04, 2015
Updated: Wednesday, March 04, 2015

Welcome to a special BriefingsDirect presentation and panel discussion from The Open Group San Diego 2015, which ran Feb. 2 through Feb. 5.

The following discussion, which examines the synergy among the major enterprise architecture frameworks, consists of moderator Allen Brown, President and Chief Executive Officer, The Open Group; Iver Band, an Enterprise Architect at Cambia Health Solutions; Dr. Beryl Bellman, Academic Director, FEAC Institute; John Zachman, Chairman and CEO of Zachman International, and originator of the Zachman Framework; and Chris Forde, General Manager, Asia and Pacific Region and Vice President, Enterprise Architecture, The Open Group. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Iver Band: As an enterprise architect at Cambia Health Solutions, I have been working with the ArchiMate Language for over four years now, both working with and on it in the ArchiMate Forum. As soon as I discovered it in late 2010, I could immediately see, as an enterprise architect, how it filled an important gap.


What is the ArchiMate Language? Well, it's a language we use for building understanding across disciplines in an organization and communicating and managing change.  It’s a graphical notation with formal semantics. It’s a language.

It’s a framework that describes and relates the business, application, and technology layers of an enterprise, and it has extensions for modelling motivation, which includes business strategy, external factors affecting the organization, requirements for putting them altogether and for showing them from different stakeholder perspectives.

You can show conflicting stakeholder perspectives, and even politics. I've used it to model organizational politics that were preventing a project from going forward.

It has a rich set of techniques in its viewpoint mechanism for visualizing and analyzing what’s going on in your enterprise. Those viewpoints are tailored to different stakeholders.  And, of course, ArchiMate, like TOGAF, is an open standard managed by The Open Group.

Taste of ArchiMate

This is just a taste of ArchiMate for people who haven’t seen it before. This is actually excerpted from the presentation my colleague Chris McCurdy and I are doing at this conference on Guiding Agile Solution Delivery with the ArchiMate Language.


What this shows is the Business and Application Layers of ArchiMate. It shows a business process at the top. Each process is represented by a symbol. It shows a data model of business objects, then, at the next layer, in yellow.

Below that, it shows a data model actually realized by the application, the actual data that’s being processed.

Below that, it shows an application collaboration, a set of applications working together, that reads and writes that data and realizes the business data model that our business processes use.

All in all, it presents a vision of an integrated project management toolset for a particular SDLC that uses the phases that you see across the top.

We are going to dissect this model, how you would build it, and how you would develop it in an agile environment in our presentation tomorrow.

I have done some analysis of The Zachman Framework, comparing it to the ArchiMate Language. What’s really clear is that ArchiMate supports enterprise architecture with The Zachman Framework. You see a rendering of The Zachman Framework and then you see a rendering of the components of the ArchiMate Language. You see the Business Layer, the Application Layer, the Technology Layer, its ability to express information, behavior, and structure, and then the Motivation and Implementation and Migration extensions.

So how does it support it? Well, there are two key things here. The first is that ArchiMate models answer the questions that are posed by The Zachman Framework columns.

For what: for Inventory. We are basically talking about what is in the organization. There are Business and Data Objects, Products, Contracts, Value, and Meaning.

For how: for process. We can model Business Processes and Functions. We can model Flow and Triggering Relationships between them.

Where: for the Distribution of our assets. We can model Locations, we can model Devices, and we can model Networks, depending on how you define Location within a network or within a geography.

For who: We can model Responsibility, with Business Actors, Collaborations, and Roles.

When: for Timing. We have Business Events, Plateaus of System Evolution, relatively stable systems states, and we have Triggering Relationships.

Why: We have a rich Motivation extension, Stakeholders, Drivers, Assessments, Principles, Goals, Requirements, etc., and we show how those different components influence and realize each other.

Zachman perspectives

Finally, ArchiMate models express The Zachman Row Perspectives. For the contextual or boundary perspective, where Scope Lists are required, we can make catalogs of ArchiMate Concepts. ArchiMate has broad tool support, and in a repository-based tool, while ArchiMate is a graphical language, you can very easily take lists of concepts, as I do regularly, and put them in catalog or matrix form. So it's easy to come up with those Scope Lists.


Secondly, for the Conceptual area, the Business Model, we have a rich set of Business Layer Viewpoints that focus on the top of the diagram that I showed you: Business Processes, Actors, Collaborations, Interfaces, and the Business Services that are brought to market.

Then, at the Logical Layer, the System Model, we have a rich set of Application Layer Viewpoints and Viewpoints that show how Applications use Infrastructure.

For Physical, we have an Infrastructure Layer, which can be used to model any type of Infrastructure: Hosting, Network, Storage, Virtualization, Distribution, and Failover. All those types of things can be modeled.

And for Configuration and Instantiation, the Application and Technology Layer Viewpoints are available, particularly more detailed ones, but are also important is the Mappings to standard design languages such as BPMN, UML and ERD. Those are straightforward for experienced modelers. We also have a white paper on using the ArchiMate language with UML. Thank you.
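To make the earlier point about Scope Lists concrete -- that a repository-based tool can flatten a graphical ArchiMate model into catalogs grouped by concept type -- here is a minimal TypeScript sketch. The element types and names are illustrative only and are not tied to any particular modeling tool or export format.

```typescript
// Illustrative only -- not tied to any particular modeling tool or export format.
interface ModelElement {
  id: string;
  type: string;   // e.g. "BusinessProcess", "ApplicationComponent", "Node"
  name: string;
}

// Group a flat repository export into per-type catalogs (scope lists).
function buildCatalogs(elements: ModelElement[]): Map<string, string[]> {
  const catalogs = new Map<string, string[]>();
  for (const el of elements) {
    const names = catalogs.get(el.type) ?? [];
    names.push(el.name);
    catalogs.set(el.type, names);
  }
  return catalogs;
}

// Example: three elements collapse into two catalogs.
const catalogs = buildCatalogs([
  { id: "1", type: "BusinessProcess", name: "Enroll Member" },
  { id: "2", type: "BusinessProcess", name: "Adjudicate Claim" },
  { id: "3", type: "ApplicationComponent", name: "Claims Engine" },
]);
console.log(catalogs.get("BusinessProcess")); // ["Enroll Member", "Adjudicate Claim"]
```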

Dr. Beryl Bellman: I have been doing enterprise architecture for quite a long time, for what you call pre-enterprise architecture work, probably about 30 years, and I first met John Zachman well over 20 years ago.


In addition to being an enterprise architect, I am also a University Professor at California State University, Los Angeles. My focus there is on Organizational Communications. While being a professor, I have always been involved in doing contract consulting for companies like Digital Equipment Corporation, ASK, AT&T, NCR, and then Ptech.

About 15 years ago, a colleague of mine and I founded the FEAC Institute. The initial name for that was the Federal Enterprise Architecture Certification Institute, and then we changed it to Federated. It actually goes by both names.

The business driver of that was the Clinger-Cohen Act in 1996, when it was mandated by government that all federal agencies must have an enterprise architecture.

And then around 2000, they began to enforce that regulation. My business partner at that time, Felix Rausch, and I felt that we need some certification in how to go about doing and meeting those requirements, both for the federal agencies and the Department of Defense. And so that's when we created the FEAC Institute.
Beginning of FEAC

In our first course, we had the Executive Office of the President, US Department of Fed, which I believe was the first department of the Federal Government that was hit by OMB, which held up their budget for not having an enterprise architecture on file. So they were pretty desperate, and that was the beginning of FEAC.


Since that time, a lot of people have come in from the commercial world and from international areas. And the idea of FEAC was that you start off with learning how to do enterprise architecture. In a lot of programs, including TOGAF, you sort of have to already know a little bit about enterprise architecture, the hermeneutical circle. You have to know what it is to know.

In FEAC we had a position that you want to provide training and educating in how to do enterprise architecture that will get you from a beginning state to be able to take full responsibility for work doing enterprise architecture in a matter of three months. It’s associated with the California State University System, and you can get, if you so desire, 12 graduate academic units in Engineering Management that can be applied toward a degree or you can get continuing education units.

So that’s how we began that. Then, a couple of years ago, my business partner decided he wanted to retire, and fortunately there was this guy named John Zachman, who will never retire. He's a lot younger than all of us in this room, right? So he purchased the FEAC Institute.

I still maintain a relationship with it as Academic Director, in which primarily my responsibilities are as a liaison to the universities. My colleague, Cort Coghill, is sort of the Academic Coordinator of the FEAC Institute.


FEAC is an organization that also incorporates a lot of the training and education programs of Zachman International, which includes managing the FEAC TOGAF courses, as well as the Zachman certified courses, which we will tell you more about.

I'm just a little bit surprised by this idea, the panel, the way we are constructed here, because I didn’t have a presentation. I'm doing it off the top, as you can see. I was told we are supposed to have a panel discussion about the synergies of enterprise architecture. So I prepared in my mind the synergies between the different enterprise architectures that are out there.

For that, I just want to make a strong point. I want to talk about synergy as against a bifurcation, with "TOGAF" and "Zachman" standing on opposite sides, whereas the statement that has been made earlier this morning and throughout the meeting is "TOGAF and."

Likewise, we have Zachman, and it's not "Zachman or," but "Zachman and." Zachman provides that ontology, as John talks about it: his periodic table of basic elements, of primitives, through which we can constitute any enterprise architecture. If you attempt to build an architecture out of composites -- inventing composites and just modeling -- you're just getting a snapshot in time, and you don't really have an enterprise architecture that is able to adapt and change. You might have a picture of it, but that's all you really have.

That’s the power of The Zachman Framework. Hopefully, most of you will attend our demonstration this afternoon and a workshop where we are actually going to have people work with building primitives and looking at the relationship of primitives, the composites with a case study.

Getting lost

On the other side of that, Schekkerman wrote something about the forest of architectural frameworks and getting lost in that. There are a lot of enterprise architectural frameworks out there.

I'm not counting TOGAF, because TOGAF has its own architectural content metamodel, with its own artifacts, but it does not require one to use the artifacts in the architectural content metamodel. They suggest that you can use DoDAF. You can use MODAF. You can use commercial ones like NCR's GITP. You can use any one.

Those are basically the competing models. Some of them are commercial, where organizations put their own proprietary stamp on the artifacts and their names, and others give it their own take.

I'm more familiar nowadays with the governmental sectors. For example, FEAF, the Federal Enterprise Architecture Framework Version 2. Are you familiar with that? Just go on the Internet and type in FEAF v2. Scott Bernard, the Chief Architect for the US Government at OMB, has developed a model of enterprise architecture, what he calls the Architecture Cube Model, which is an iteration off of John's, but he pursues a cube form rather than a triangle form.


Also, for him, enterprise architecture fits into FEAF-II because, at the top level, he has the strategic plans of an organization.

It goes down to different layers, but then, at one point, it drops off and becomes not only a solution, but it gets into the manufacturing of the solution. He has these whole series of artifacts that pertain to these different layers, but at the lower levels, you have a computer wiring closet diagram model, which is a little bit more detailed than what we would consider to be at a level of enterprise architecture.

Then you have the MODAF, the DoDAF, and all of these other ones, where a lot of those compete with each other more on the basis of political choices.

With the MODAF, the British obviously don’t want to use DoDAF, they have their own, but they are very similar to each other. One view, the acquisition view, differs from the project view, but they do the same things. You can define them in terms of each other.

Then there is the Canadian, NAF, and all that, and they are very similar. Now, we're trying to develop the unified MODAF, DoDAF, and NAF architecture, UPDM, which is still in its planning stages. So we are moving toward a more integrated system.

Allen Brown: Let’s move on to some of the questions that folks are interested in. Moving away from what the frameworks are, there is a question here. How does enterprise architecture take advantage of the impact of new emerging technologies like social, mobile, analytics, cloud, and so on?

Bidirectional change

John A. Zachman: The change can take place in the enterprise either from the top, where we change the context of the enterprise, or from the bottom, where we change the technologies.

So technology is expressed in the context of the enterprise, in what I would call Row 4, and that's the physical domain. And it's the same way in any other architecture -- building architecture, airplane architecture, or anything. You can implement the logic, the as-designed logic, in different technologies.

Whatever the technology is, I made the observation that you want to engineer for flexibility. You separate the independent variables. So you separate the logic at Row 3 from the physics of Row 4, and then you can change Row 4 without changing Row 3. Basically, that's the idea, so you can accommodate whatever the emerging technologies are.

Bellman: I would just continue with that. I agree with John. Thinking about the synergy between the different architectures, basically every enterprise architecture contains, or should contain, considerations of those primitives. Then it's a matter of which one a customer wants, which one a customer feels comfortable with. Basically, as long as you have those primitives defined, you can essentially use any architecture. That constitutes the synergy between the architectures.


Band: I agree with what's been said. It's also true that one of the jobs of an enterprise architect is to establish a view of the organization that can be used to promote understanding and to communicate and manage change. Cloud-based systems are generally based on metadata, and the major platforms, like Salesforce.com for example, publish their data models and their APIs.

So I think that there’s going to be a new generation of systems that provide a continuously synchronized, real-time view of what's going on in the enterprise. So the architectural model will model this in the future, where things need to go, and they will do analyses, but we will be using cloud, big data, and even sensor technologies to understand the state of the enterprise.
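As a hedged sketch of what Band describes -- pulling a platform's published data model on a schedule and comparing it with the architecture repository so the model tracks what is actually deployed -- the following TypeScript is illustrative only. The metadata endpoint and entity shape are hypothetical, not a specific vendor's API.

```typescript
// Hypothetical endpoint and shapes -- not a specific vendor's API.
interface EntityMetadata {
  name: string;
  fields: string[];
}

// Pull the platform's published data model.
async function pullPlatformMetadata(baseUrl: string): Promise<EntityMetadata[]> {
  const res = await fetch(`${baseUrl}/metadata/entities`);
  if (!res.ok) {
    throw new Error(`Metadata pull failed with status ${res.status}`);
  }
  return (await res.json()) as EntityMetadata[];
}

// Report entities that exist in the live platform but not in the architecture model.
function findDrift(repository: EntityMetadata[], live: EntityMetadata[]): string[] {
  const known = new Set(repository.map((e) => e.name));
  return live.filter((e) => !known.has(e.name)).map((e) => e.name);
}
```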

Bellman: In DoDAF 2.0, when it initially came out -- I think it was six years ago or so -- they have a services architecture, a services view, and a systems view. And one of the points they make within the context, not as a footnote, is that they expect the systems view to sort of disappear and a cloud view to take its place. So I think you are right on that.

Chris Forde: The way I interpreted the question was, how does EA or architecture approach the things help you manage disruptive things? And if you accept the idea that enterprise architecture actually is a management discipline, it’s going to help you ask the right questions to understand what you are dealing with, where it should be positioned, what the risks and value proposition is around those particular things, whether that’s the Internet of Things, cloud computing, or all of these types of activities.

So going back to the core of what Terry’s presentation was about is a decision making framework with informed questions to help you understand what you should be doing to either mitigate the risk, take advantage of the innovation, and deploy the particular thing in a way that's useful to your business. That’s the way I read the question.

Impact of sensors

Band: Just to reinforce what Chris says, as an enterprise architect in healthcare, one of the things that I am looking at very closely is the evaluation of the impact of health sensor technology. Gartner Group says that by 2020, the average lifespan in a developed country will be increased by six months due to mobile health monitoring.

And so there are vast changes in the whole healthcare delivery system, of which my company is at the center as a major healthcare payer and investor in all sorts of healthcare companies. I use enterprise architecture techniques to begin to understand the impact of that and show the opportunities to our health insurance business.

Brown: If you think about social and mobile and you look at the entire enterprise architecture, now you are starting to expand that beyond the limits of the organization, aren’t you? You're starting to look at, not just the organization and the ecosystem, your business partners, but you are also looking at the impact of bringing mobile devices into the organization, of managers doing things on their own with cloud that wasn't part of the architecture. You have got the relationship with consumers out there that are using social and mobile. How do you capture all of that in enterprise architecture?


Forde: Allen, if I had the answer to that question I would form my own business and I would go sell it.

Back in the day, when I was working in large organizations, we were talking about the extended enterprise, that kind of ecosystem view of things. And at that time the issue was more problematic. We knew we were in an extended ecosystem, but we didn't really have the technologies that effectively supported it.

The types of technologies that are available today, the ones that The Open Group has white papers about -- cloud computing, the Internet of Things, this sort of stuff -- architectures can help you classify those things. And the technologies that are being deployed can help you track them, and they can help you track them not as documents of the instance, but of the thing in real time that is talking to you about what its state is, and what its future state will be, and then you have to manage that information in vast quantities.

So an architecture can help you within your enterprise understand those things and it can help you connect to other enterprises or other information sources to allow you to make sense of all those things. But again, it's a question of scoping, filtering, making sense, and abstracting -- that key phrase that John pointed out earlier, of abstracting this stuff up to a level that is comprehensible and not overwhelming.

Brown: So Iver, at Cambia Health, you must have this kind of problem now, mustn’t you?

Provide value

Band: That's exactly what I am doing. I am figuring out what will be the impact of certain technologies and how our businesses can use them to differentiate and provide value.

In fact, I was just on a call this morning with JeffSTAT, because the whole ecosystem is changing, and we know that healthcare is really changing. The current model is not financially sustainable, and there is also tremendous amount of waste in our healthcare system today. The executives of our company say that about a third of the $2.7 trillion and rising spent on healthcare in the US doesn't do anyone any good.

There's a tremendous amount of IT investment in that, and that requires architecture to tie it altogether. It has to do with all the things ranging from the logic with which we edit claims, to the follow-up we provide people with particularly dangerous and consequently expensive diseases. So there is just a tremendous amount going through an enterprise architecture. It’s necessary to have a coherent narrative of what the organization needs to do.


Bellman: One thing we all need to keep in mind is even more dynamic than that, if you believe even a little bit of Kurzweil's possibilities -- are people familiar with Ray Kurzweil's 'The Singularity Is Near'? -- which is that around 2037 will be the singularity between computers and human beings.

So I think the point is that he argues the amount of change is not linear but exponential, and so in a sense you will never catch up, but you need an architecture to manage that.

Zachman: The way we deal with complexity is through classification. I suggest that there is more than one way to classify things. One is one-dimensional classification -- taxonomy or hierarchy, in effect decompositions -- and that's really helpful for manufacturing. Then there is, from an engineering standpoint, two-dimensional classification, where we have classified things so that they are normalized: one fact in one place.

Then if you have the problems identified, you can postulate several technology changes or several changes and simulate the various implications of it.

The whole reason why I do architecture has to do with change. You deal with extreme complexity and then you have to accommodate extreme change. There is no other way to deal with it. Humanity, for thousands of years, has not been able to figure out a better way to deal with complexity and change other than architecture.

Forde: Maybe we shouldn't apply architecture to some things.

For example, maybe the technology or the opportunity is so new that we need a decision-making framework that says: you know what, let's not try to figure all of this out and control it in advance. Let's let it run and see what happens, and then, when it's at the appropriate point for architecture, let's apply it. This is a more organic view of the way nature and life work than the typical enterprise view.

So what I am saying is that architecture is not irrelevant in that context. It's actually part of the decision-making framework to decide not to architect something at this point in time, because it's inappropriate to do so.

Funding and budgeting

Band: Yeah, I agree with that wholeheartedly. At Cambia Health Solutions, we are a completely agile shop. All the technology development is on the same sprint cycle, and we have three-week sprints, but we also have certain things that are still annual and waterfall, like funding and budgeting.

We live in a tension. People ask, well, what are you going to do, what budget do you need, but at the same time, I haven't figured everything out. So I am constantly living in that gap of what I need to meet a certain milestone to get my project funded and what I need to do to go forward. Obviously, in a fully agile organization, all those things would be fluid. But then there's financial reporting, and that would also have to be fluid. So there are barriers to that.

For instance, the Scaled Agile Framework, which I think is a fascinating thing, has a very clear place for enterprise architecture. As Chris said, you don't want to do too much of it in advance. I am constantly bridging the gap between how I can visualize what's going to happen a year out and how I can give the development teams what they need for the sprint. So I am always living in that paradox.


Bellman: The Gartner Group, not too long ago, came up with the concept of emerging enterprise architecture and what we are dealing with. Enterprises don't exist like buildings. A building is an object, but an enterprise is a group of human beings communicating with one another.

As the very famous organizational psychologist Karl Weick once pointed out, "The effective organization is garrulous, clumsy, superstitious, hypocritical, monstrous, octopoid, wandering, and grouchy." Why? Because an organization is continually adapting, continually changing, and continually adjusting to the changing business and technological landscape.

To expect anything other than that is not to have a realistic view of the enterprise. It is a continually emerging phenomenon. So, in a sense, I would not contest the architecture concept -- architecting is always worthwhile -- but it is an organic phenomenon, and in order to deal with that we can also understand and have an architecture for organic phenomena that change and adapt rapidly.

Brown: Chris, where you were going follows the lines of what great companies do, right?

There is a great book published about 30 years ago called ‘In Search of Excellence.’ If you haven't read it, I suggest you do. It was written by Peters and Waterman, and Tom Peters has tried ever since to recreate something with that magic. One of the lessons learned about what great companies do is something called simultaneous loose-tight properties. You let some things be very tightly controlled and, as you're suggesting, let other things flourish and see where they go before you actually box them in. So that's a good thought.

So what do we think, as a panel, about evolving TOGAF to become an engineering methodology as well as a manufacturing methodology?

Zachman: I really think it’s a good idea.

Brown: Chris, do you have any thoughts on that?

Interesting proposal

Forde: I think it's an interesting proposal, and I think we need to look at it fairly seriously. The Open Group approach to things is: don't lock people into a specific way of thinking, but we also advocate a disciplined approach to doing things. So I would suspect that we are going to be exploring John's proposal pretty seriously.

Brown: You mentioned in your talk that a decision-making process to govern IT investments is a precondition. The question that comes in is: how about other types of investments, including facilities, inventory, and acquisitions?

Forde: The wording of the presentation was very specific. Most organizations have a process or decision-making framework on an annual basis or quarterly whatever the cycles are for allocation of funding to do X, Y or Z. So the implication wasn’t that IT was the only space that it would be applied.


However, the question is how effective is that decision-making framework? In many organizations, or in a lot of organizations, the IT function is essentially an enterprise-wide activity that’s supporting the financial activities, the plant activities, these sorts of things. So you have the P and Ls from those things flowing in some way into the funding that comes to the IT organization.

The question is, when there are multiple complexities in an organization -- multiple departments with independent P and Ls -- they are funding IT activities in a way that may or may not be optimized. For the architects, in my view, one of the avenues for success is inserting yourself into that planning cycle and influencing how that spend goes, because normally the architecture team does not have direct control over the spend.

Over time, you gradually improve the enterprise's ability to optimize and make effective the funding it applies to IT to support the rest of the business.

Zachman: Yeah, I just want to make an observation.

Band: I agree. I think that the battle to control shadow IT has been permanently lost. We are in a technology-obsessed society. Every department wants to control some technology and even develop it to suit its needs. There are some controls that you do have, and we do have some, but we have core health insurance businesses that are nearly 100 years old.

Cambia is constantly investing in and acquiring new companies that are transforming healthcare. Cambia has over 100 million customers all across the country, even though our original business was a set of regional health plans.

Build relationships

You can't possibly rationalize all of everything you want people to pay for. It is incumbent upon the architects, especially the senior ones, to build relationships with the people in these organizations and make sure everything is synergetic.

Many years ago, there was a senior architect. I asked him what he did, and he said, "Well, I'm just the glue. I go to a lot of meetings." There are deliverables and deadlines too, but a big part of it is consistently building the relationships and noticing things, so that when it's time to make a decision or someone needs something, it gets done right.

Zachman: I was in London when Bank of America got bought by NationsBank, and it was touted as the biggest banking merger in the history of the banking industry.


Actually, it wasn't a merger; it was an acquisition. NationsBank acquired Bank of America and then changed its name to Bank of America. There was a London paper observing that the headline you always see is, "The biggest merger in the history of the industry." The headline you never see is, "This merger didn't work."

The cost of integrating the two enterprises exceeded the value of the acquisition. Therefore, we’re going to have to break this thing up in pieces and sell off the pieces as surreptitiously as possible, so nobody will notice that we buried any accounting notes someplace or other. You never see that article. You’ll only see the one about the biggest merger.

If I were the CEO and my strategy was to grow by acquisition, I would get really interested in enterprise architecture, because you have to be able to anticipate the cost of integration if you want to merge two enterprises. In fact, you're changing the scope of the enterprise. I have talked a little bit about the role of models, but you are changing the scope. As soon as you change the scope, you're going to be faced with an integration issue.

Therefore you have to make a choice: scrap and rework. There is no way, after the fact, to integrate parts that don't fit together. So you're going to be faced with a decision about whether you want to scrap and rework or not. I would get really interested in enterprise architecture, because that's what you really want to know before you make the expenditure. Once you acquire, obviously, you've already blown all the money. So now you've got a problem.

Once again, if I was the CEO and I want to grow by acquisition or merger acquisition, I would get really interested in enterprise architecture.

Cultural issues

Beryl Bellman: One of the big problems we are addressing here is also the cultural and political problems of organizations or enterprises. You could have the best-designed system, but if people and politics don't agree, there are going to be these kinds of conflicts.

One of my favorite consulting projects was with NCR, which was dealing with Hyundai and Samsung and trying to get them together on a conjoint project. They kept fighting with each other over knowledge management, technology transfer, and knowledge transfer. My role was to do an architecture of that whole process.

It was called RIAC, the Research Institute in Computer Technology. On one side of the table, you had Hyundai and Samsung. On the other side of the table, you had NCR. They were throwing PowerPoint slides back and forth at each other. The software we used at that time was METIS, and METIS modeled all the processes, everything that was involved.


Samsung said you just hit it with a 2×4. Rather than tossing slides back and forth, I was demonstrating it -- here are the relationships -- and was able to show that it really works. To me that was a real demonstration that you can even overcome some of the politics and cultural differences within enterprises.

Brown: I want to give you one more question. I think this is more of a concern that we have raised in some people's minds today, which is that we are talking about all these different frameworks and ontologies. So there is a first question.

The second one is probably the key one that we are looking at, but it asks what does each of the frameworks lack, what are the key elements that are missing, because that leads on to the second question that says, isn't needing to understand old enterprise architecture frameworks is not a complex exercise for a practitioner?

Band: My job is not about understanding frameworks. I have been doing enterprise solution architecture for quite a while -- at HP, at a diversified financial services company, and now at a health insurance and health solutions company -- and it's really about communicating and understanding in a way that's useful to your stakeholders.

The frameworks are about creating shared understanding of what we have and where are we going to go, and the frameworks are just a set of tools that you have in your toolbox that most people don't understand.

So the idea is not to understand everything, but to get a set of tools, just like a mechanic would, that you carry around and use all the time. For instance, there are certain types of ArchiMate views that I use when I am in a group. I will draw an ArchiMate business process view along with the application services those processes use: What are the business processes you need, and what are the exposed application behaviors that they need to consume?

I have that discussion with people on the business side and in IT, and we drive those diagrams. That's a useful tool. It works for me, it works for the people around me, and it works in my culture, but there is no understanding all the frameworks unless that's your field of study. They are all missing the exact thing you need for a particular interaction, but most likely there is something in there that you can base the next critical interaction on.

Six questions

Zachman: I spent most of my life thinking about my framework. There are six questions you have to answer to have a complete description of whatever it is you want to describe: what, how, where, who, when, and why. So that's complete.

Interestingly enough, the philosophers have also established six transformations in the transfer of an idea into an instantiation, so that's a complete set as well, and I did not invent either one of these -- the six interrogatives or the six stages of transformation. That framework has to, by definition, accommodate any fact that's relevant to the existence of the object, the enterprise. Therefore any fact has to be classifiable in that structure.

My framework is complete in that regard. For many years, I would have been reluctant to make a categorical statement, but we exercised this, and there is no anomaly. I can’t find an anomaly. Therefore I have a high level of confidence that you can classify any fact in that context.

There is one periodic table. There are n different compound manufacturing processes. You can manufacture anything out of the periodic table. That metaphor is really helpful. There's one enterprise architecture framework ontology. I happened to stumble across, by accident, the ontology for classifying all of the facts relevant to an enterprise.

I wish I could tell you that I was so smart and understood all of these things at the beginning, but I knew nothing about this; I just happened to stumble across it. The framework fell on my desk one day and I saw the pattern. All I did was put enterprise names on the same pattern for the descriptive representation of anything. You've heard me tell quite a bit of the story this afternoon. In terms of completeness, I think my framework is complete. I can find no anomalies, and you can classify anything relative to that framework.

And I agree with Iver that there are n different tools you might want to use. You don't have to know everything about every framework. Whatever the tool is that you need to deal with, out of the context of the periodic table metaphor -- the ontological construct of The Zachman Framework -- you can accommodate whatever artifacts the tool creates.

You don't have to analyze every tool. Whatever tool is necessary, if you want to do business architecture, you can create whatever the business architecture manifestation is. If you want to know what DoDAF is, you can create the DoDAF artifacts. You can create any composite, just as you can create any compound from the periodic table. It's the same idea.

I wouldn't spend my life trying to understand all these frameworks. You have to operate the enterprise and you have to manage the enterprise, and whatever tool is required to do what you need to do, there is something good about everything, and nothing necessarily does everything.

So use the tool that's appropriate, and then you can create whatever composite constructs that tool requires out of the primitive components of the framework. I wouldn't try to understand all the frameworks.

What's missing

Forde: On a daily basis, there is a line of people at these conferences coming to tell me what's missing from TOGAF. Recently we conducted a survey through the Association of Enterprise Architects about what people needed to see. Basically, what came back was pretty much: please give us more guidance that's specific to my situation, a recipe for how to solve world hunger, or something like that. We are not in the role of providing that level of prescriptive detail.

So some people will find value in the content metamodel in the TOGAF Framework and the other components of it, but if you are not happy with that, if it doesn't meet your need, flip over to The Zachman Framework or vice versa.

John made it very clear earlier that the value in the framework that he has built throughout his career, and that has been used repeatedly around the world, is its rigor and its comprehensiveness, but he also made very clear that it's not a method. There is nothing in there to tell you how to go do it.

The value side of the equation is the flexibility of the framework to a certain degree to allow many different industries and many different practitioners drive value for their business out of using that particular tool.

So you could criticize The Zachman Framework for a lack of method or you could spend your time talking about the value of it as a very useful tool to get X, Y, and Z done.

From a practitioner's standpoint, what one practitioner does is interesting and has value, but if you have a practice of between 200 and 400 architects, you don't want everybody running around like a loose cannon doing it their own way, in my opinion. As a practice manager or leader, you need something that makes those resources very, very effective. And when you are in a practice of that size, you probably have a handful of people trying to figure out how the frameworks come together, but most of the practitioners are tasked with taking what the organization says is its best practice and executing on it.

We are looking at improving the level of guidance provided by the TOGAF material, both the standard itself and guidance on how to handle specific scenarios.

For example, how to jumpstart an architecture practice, how to build a secure architecture, how to do business architecture well. Those are the kinds of things we have had feedback on, and we are working on them around that particular specification.

Brown: So if you are an enterprise architect employed by the US Department of Defense, you would be required to use DoDAF because of the views it provides. But people like Terri Blevins, who worked in the DoD for many years, used TOGAF to populate DoDAF. TOGAF is a method, and the method is its great strength.

If you want to have more information on that, there are a number of white papers on our website about using TOGAF with DoDAF, TOGAF with COBIT, TOGAF with Zachman, TOGAF with everything else.

Forde: TOGAF with frameworks, TOGAF with buy-in: the thing to look at is that the ecosystem of information around these frameworks is where the value proposition really is. If you are trying to bootstrap your architecture practice inside an organization, the framework itself is of interest, but applied use, driving to the value proposition for your business function, is the critical area to focus on.

Tags:  BriefingsDirect  Dana Gardner  enterprise architect  enterprise architecture  Interarbor Solutions  The Open Group  The Open Group Conference  TOGAF  Zachman Framework 

 

Mobility moves from 'nice to have' to 'must have' for large US healthcare insurer

Posted By Dana L Gardner, Friday, February 27, 2015

The next BriefingsDirect enterprise mobile strategy discussion comes to you directly from the Kony World 2015 Conference on Feb. 4 in Orlando.

This five-part series of penetrating discussions on the latest in enterprise mobility explores advancements in applications design and deployment technologies across the full spectrum of edge devices and operating environments.

Our next innovation interview focuses on how a large US insurance carrier, based in the Midwest, has improved its applications’ lifecycle to make enterprise mobility a must-have business strength.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

To learn how, we welcome our guest mobility leader, Scott Jessee, Vice President of IT for this Illinois health insurance provider. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Where is your organization in regard to mobility? Where do you stand?

Jessee: It's important to think about where we came from. When we started off in mobile, it was not an imperative. It was not something where we had to get in. It was a nice-to-have that was in the forefront, but there wasn't enough return on investment (ROI) in people's minds. That shifted quickly in our business model when -- from a sales perspective -- it became an absolute requirement that we have mobile in order to complete our sales.

Jessee

So fast-forward to where we are now. We've been running with this for a little bit, and we're primarily focusing on the consumer market, which is also exciting for us. We're in healthcare, and with the Affordable Care Act and a lot of shifts in distribution channels, there has been a stronger need for us to focus on the consumer.

From a mobile perspective, most of our feedback and requests are driven in that fashion. We had to ask, "What could we do to engage through mobility? What could we do to give them more value as a product through mobility? And how could we give it to them in a fast manner?" All those dimensions are hitting us pretty heavily right now in terms of what we are trying to think ahead for.

Gardner: Are you delivering mobility on an application-by-application basis? Or can you create a significant platform or reuse benefit in how you produce mobile applications?

Focus on multichannel

Jessee: For us, one of the biggest things from that point of view is getting a mobile application to the consumer level, where we focus on multichannel. That's huge for us, because the market demands that you support multiple devices, and there are more and more devices each and every year.

You have Samsung on the Android side, you have iOS on the Apple side, and then you have the tablets and smartphones and soon-to-be wearable devices. There is a plethora of different devices on which people want to consume the information and transactions you give them, in order to have the experience that they want.

From our perspective, we try to be as savvy as we can with that, and we leverage the Kony Platform to help us achieve it. We wouldn't be able to run with as lean a staff as we do and still support and drive that forward otherwise. It's primarily because we can do some level of "develop once" and then deliver out to these different devices over time. So that's been a big gain for us.

Gardner: What are the business benefits of going with mobile apps?

Jessee: We get feedback from our business folks that it's different from the web. We're able to deliver faster than we can in the web apps space. They're more satisfied with delivery time and cycle time. When they come to us with new pieces that they want, we can typically do it in six to eight weeks, compared to a three- or four-month cycle on the website. That's just the nature of what we are doing. So they smile at us a little more in the mobile space, which is good.

You hit on metrics. We have good analytics we can provide in terms of the page views that they see. If they're trying to deliver new content or something to that effect, we can show them that this worked and this didn't work. We also have individual plans and states that own their marketing efforts. Based on their individual campaigns, we are able to provide them metrics showing that a particular state is seeing an uptick in downloads or usage of the mobile app.

So those are the two key things I think they like about what we are able to deliver for them as it relates to those two concepts.

Gardner: We're here at Kony World 2015, and one of the things we're hearing about is the importance of the user experience. Now that you're dealing with the Affordable Care Act, you're in more of a marketplace. The way your application comes across to a prospective insurance client is compared with the other insurance organizations that they might be perusing. So how does the user experience factor into your development and deployment strategy, and how is Kony helping you with that?

Jessee: The big thing this week here at Kony World that was exciting to us is seeing the further enhancements of their Visualizer 2.0 product. Visualizer 2.0 allows marketing and communication leaders to sit up front and design the look/feel of a mobile application using an Adobe Photoshop-type experience. Our marketing communication teams demand this, because user experience is king.

The new imperative is consumer experience. You need to have something that people can use easily and efficiently, that meets the demands they're looking for as it relates to the functions they need to accomplish, and then, beyond that, whatever your other value opportunities are.

Kony does a great job of setting us up for success in that regard. In addition to the productivity gains we get out of this, they have good tools that help us provide this customer experience in ways that we can show to marketing communications and ask, "What do you think? Let's tweak it. Let's alter it."

We can leverage agency input in a more efficient, streamlined manner for the user interfaces that we create. So all those things are really going to springboard us forward, so we are not spending as much time doing it. From the visual perspective, it should be a better experience. That's what we are hoping to get out of it, along with showcasing the future opportunities there, too.

Gardner: The thing that’s been intriguing for me here at Kony World is I see their application marketplace and the new application, Kony Sales. This might not be an exact fit for you and your vertical industry, but it seems to me that they're taking a step toward having a packaged application targeted at a specific industry that takes you maybe 80 percent of the way you need to be with a lot of the back-end integration in place, with a lot of the ability to customize, but still governed by the IT department.

So, as the IT person, you're going to get a control over who can do what, but you're also going to have your end users, your line-of-business customers, getting a say as to what their app can do and can't do. It strikes me that another important part of user experience is having more say in an app and being part of the development process.

A step forward

Jessee: That's a big jump. I talked to [Kony CEO] Tom Hogan yesterday, and he explained it really well, and he relates well to business users, too. Think of what's going on with Salesforce, and how the constituencies and stakeholders who leverage it are used to configuring an application base or micro-applications. This really takes Kony a step forward in meeting that marketplace, and even extends beyond it with the release of both the marketplace and the two ready-to-go applications.

That's the opportunity at hand for potential business folks, as they've already been doing some of this today in some of the other venues, and now they have an opportunity to do this with Kony.

As an IT person, where we could really take advantage of it is in reducing our workload with some of the configuration components, so it's a little off our hands. We could focus more on the marketplace, which would allow us to create these micro apps -- these core functional areas -- that we could then showcase, drop in, share, and so on. That really puts us in a good position in terms of facilitating innovation, which obviously is hot in healthcare and all industries, and helps us further move that ball forward.

Gardner: What about the issue of security? Because you're in healthcare and regulatory compliance is so important, how do you see the security with the mobile application developing, and how again does that integrated platform -- write once, run everywhere -- benefit you?

Jessee: That's a really hard part, especially in mobile. If you think back 10 years ago to the web space, security was probably where mobile is now. In 2013, there were no publicized or known mobile breaches, but in 2014 I think there were 9 or 10. So that was a big jump, from zero to 9 or 10, involving big-name companies.

What’s ahead in 2015 is even scarier, but that relates to what Kony offers. Tom Hogan showed today what they're trying to drive toward, and he used the acronym S-A-U-C-E to describe the value they are driving with their solutions: Security, Agility, Usability, Certainty, and Efficiency. The first one being security in priority order, which puts me at ease.

One of the things that has helped is some of the security components in Kony. They've been pretty up to date with the trends that we pull from our third-party auditors who look at our mobile applications. They showcase things like SSL pinning, including that in your code, and help you facilitate the transactions the right way. So that's a good thing for us.

I think an opportunity for Kony is to continue to showcase those specifics to not just the customer base but the non-customer base. Mobile is going to continue to get exponentially more challenging when it comes to security, because the threats out there are just starting to hit it and they are just getting fresh into it.

Internet of Things

Gardner: Looking forward now to what’s going to come down the highway. We hear about the Internet of Things. We're seeing more and more, in healthcare, data being derived from sensors and devices, and we are seeing closer partnership between payers and providers when it comes to data sharing in the healthcare sector. So where does healthcare and mobility go for you over the next three to five years?

There was another interesting tidbit here at the show, where they said IDC is projecting that by 2017, 25 percent of IT budgets will be devoted in some way to mobility. Does that strike you as a lowball estimate, and how important is mobility going to be to your IT budget?

Jessee: From a budgetary perspective that’s probably a fair guess, because mobility is also being redefined over time. A few years ago, it was just a smartphone, but now it’s people moving around, doing activities, transacting against a multitude of different devices, and I think wearable is a great example of that.

What do wearables mean to us? It’s an unknown for us, and it’s on our radar that we need to identify some potential use cases, but we haven’t seen enough of it yet. We've got the Fitbits that are out there that are pretty hot, but now you have got the watches that are coming out. Samsung had theirs last year; Apple is doing theirs this year. What is that going to look like? We are not a 100 percent sure yet.

From our perspective, it’s making sure we have a flag against it for us to see what we could potentially do. It’s a little abstract for us to actually activate against, but we're not leaving it to rest either.

Gardner: And the issue about the sensors and the Internet of Things. Do you consider that mobility or is that a separate area, big data perhaps? How do you see the mobile drive for user experience and life cycle benefits now, and how does that compare to that Internet of Things and sensors and data in healthcare?

Jessee: It's both. When you say mobility and big data, it goes two ways. One, it's the consumption of data from these different sensors across mobile devices and the mobile transactions that take place.

The other thing, on the big data front, is that it's an opportunity to collect data and understand your consumer base, to understand your providers, to make better decisions, and to help add value along the chain. But it's two-way information that you have to collect in order to really activate both sides of the house, and they play together. They both have to.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Kony, Inc.

Tags:  BriefingsDirect  Dana Gardner  Interarbor Solutions  Kony  Kony Marketplace  Kony Sales App  Kony World  Mobility  Scott Jessee 

 

RealTime Medicare Data delivers caregiver trends insights by taming its healthcare data

Posted By Dana L Gardner, Thursday, February 26, 2015

The next edition of the HP Discover Podcast Series highlights how RealTime Medicare Data analyzes huge volumes of Medicare data and provides analysis to their many customers on the caregiver side of the healthcare sector.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Here to explain how they manage such large data requirements for quality, speed, and volume, we're joined by Scott Hannon, CIO of RealTime Medicare Data, based in Birmingham, Alabama. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your organization and some of the major requirements you have from an IT perspective.

Hannon: RealTime Medicare Data has full census Medicare, which includes Part A and Part B, and we do analysis on this data. We provide reports that are in a web-based tool to our customers who are typically acute care organizations, such as hospitals. We also do have a product that provides analysis specific to physicians and their billing practices.

Gardner:  And, of course, Medicare is a very large US government program to provide health insurance to the elderly and other qualifying individuals.

Hannon: Yes, that’s true.

Gardner: So what sorts of data requirements have you had? Is this a volume, a velocity, a variety type of the problem, all the above?

Volume problem

Hannon: It’s been mostly a volume problem, because we're actually a very small company. There are only three of us in the IT department, but it was just me as the IT department, back when I started in 2007.

Hannon

At that time, we had one state, Alabama, and then we began to grow. We grew to seven states, which was the South region: Florida, Georgia, Tennessee, Alabama, Louisiana, Arkansas, and Mississippi. We found that Microsoft SQL Server was not really going to handle the type of queries that we did with that volume of data.

Currently we have 18 states. We're loading about a terabyte of data per year, which is about 630 million claims, and our database currently houses about 3.7 billion claims.

Gardner: That is some serious volume of data. From the analytics side, what sort of reporting do you do on that data, who gets it, and what are some of their requirements in terms of how they like to get strategic benefit from this analysis?

Hannon: Currently, most of our customers are general acute-care hospitals. We have a web-based tool that has reports in it. We provide reports that start at the physician level. We have reports that start at the provider level. We have reports that you can look at by state.

The other great thing about our product is that typically providers have data on themselves, but they can't really compare themselves to the providers in their market or state or region. So this allows them to look not only at themselves, but to compare themselves to other places, like their market, the region, and the state.

Gardner: I should think that’s hugely important, given that Medicare is a very large portion of funding for many of these organizations in terms of their revenue. Knowing what the market does and how they compare to it is essential.

Hannon: Typically, for a hospital, about 40 to 45 percent of their revenue depends on Medicare. The other thing that we've found is that most physicians don't change how they practice medicine based on whether it’s a Medicare patient, a Blue Cross patient, or whoever their private insurance is.

So the insights that they gain by looking at our reports are pretty much 90 to 95 percent of how their business is going to be running.

Gardner: It's definitely mission-critical data then. So you started with a relational database, using standard off-the-shelf products. You grew rapidly, and your volume issues grew. Tell us what the problems were and what requirements you had that led you to seek an alternative.

Exponential increase

Hannon: There were a couple of problems. One, obviously, was the volume. We found that we had to increase the indexes exponentially, because we're talking about 95 percent reads here on the database. As I said, the Microsoft SQL Server really was not able to handle that volume as we expanded.

The first thing we tried was to move to an analysis services back end. For that project, we got an outside party to help us because we would need to redesign our front end completely to be able to query analysis services.

It just so happened that that project was taking way too long to implement. I started looking at other alternatives and, just by pure research, I happened to find Vertica. I was reading about it and thought "I'm not sure how this is even possible." It didn’t even seem possible to be able to do this with this amount of data.

So we got a trial of it. I started using it and was impressed that it actually could do what it said it could do.

Gardner: As I understand it, Vertica has a column store architecture. Was that something you understood going in? What is it about the Vertica approach to data that caught your attention at first, and how has that worked out for you?

Hannon: To me the biggest advantages were the fact that it uses the standard SQL query language, so I wouldn't have to learn MDX, which is required with Analysis Services. I don't understand the complete technical details of column storage, but I understand that it's much faster because it doesn't have to read every column of every single row. It can build the actual data set much faster, which gives you much better performance on the front end.
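To make that concrete, here is a minimal sketch of the kind of analytic query involved. The table and column names (medicare_claims, state_code, drg_code, paid_amount, service_year) are hypothetical, not RTMD's actual schema; the point is that a column store reads only the handful of columns the query references, so an aggregate like this can scan billions of claims without touching the many other columns on each row:

    -- Hypothetical claims table; a columnar engine reads only the referenced columns.
    SELECT state_code,
           drg_code,
           COUNT(*)         AS claim_count,
           SUM(paid_amount) AS total_paid
    FROM   medicare_claims
    WHERE  service_year = 2014
    GROUP  BY state_code, drg_code
    ORDER  BY total_paid DESC;

Because this is standard SQL rather than MDX, the same query shape carries over from a row-store database with little rework.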

Gardner: And what sort of performance have you had?

Hannon: Typically we have seen about a tenfold decrease in actual query performance time. Before, when we would run reports, it would take about 20 minutes. Now, they take roughly two minutes. We're very happy about that.

Gardner: How long has it been since you implemented HP Vertica and what are some of supporting infrastructures that you've relied on?

Hannon: We implemented Vertica back in 2010. We ended up still utilizing Microsoft SQL Server as a querying agent, because it was much easier to continue to interface with SQL Server Reporting Services, which is what our web-based product uses, and to keep the stored procedure functionality that was in there, along with the OPENQUERY feature.

So we just pull the data directly from Vertica and then send it through Microsoft SQL Server to the Reporting Services engine.
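As a rough illustration of that pattern -- the linked-server name, procedure name, and query here are hypothetical, not RTMD's code -- a SQL Server stored procedure can pass the heavy query through to Vertica with OPENQUERY and hand the result set to Reporting Services like any other local query:

    -- VERTICA_DW is an assumed linked server configured to point at the Vertica cluster.
    -- OPENQUERY ships the inner statement to Vertica and returns its rows to SQL Server.
    CREATE PROCEDURE dbo.usp_ClaimVolumeByState
    AS
    BEGIN
        SELECT *
        FROM OPENQUERY(VERTICA_DW,
            'SELECT state_code, COUNT(*) AS claim_count
             FROM medicare_claims
             GROUP BY state_code');
    END;

Reporting Services then consumes the procedure's result set exactly as it would a native SQL Server query, which is why the front end did not have to change.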

New tools

Gardner: I've heard from many organizations that not only has this been a speed and volume issue, but there's been an ability to bring new tools to the process. Have you changed any of the tooling that you've used for analysis? How have you gone about creating your custom reports?

Hannon: We really haven't changed the reports themselves. It's just that I know when I design a query to pull a specific set of data that I don’t have to worry that it's going to take me 20 minutes to get some data back. I'm not saying that in Vertica every query is 30 seconds, but the majority of the queries that I do use don’t take that long to bring the data back. It’s much improved over the previous solution that we were using.

Gardner: Are there any other quality issues, other than just raw speeds and feeds issues, that you've encountered? What are some of the paybacks you've gotten as a result of this architecture?

Hannon: First of all, I want to say that I didn’t have a lot of experience with Unix or Linux on the back end and I was a little bit rusty on what experience I did have. But I will tell people to not be afraid of Linux, because Vertica runs on Linux and it’s easy. Most of the time, I don’t even have to mess with it.

So now that that's out of the way, one of the biggest advantages of Vertica is the fact that you can expand to multiple nodes to handle the load if you've got a larger client base. It's very simple. You basically just install commodity hardware with whatever flavor of Unix or Linux you prefer, as long as it's compatible, and the installation does all the rest for you, as long as you tell it you're doing multiple nodes.

The other thing is the fact that you have multiple nodes that allow for fault tolerance. That was something that we really didn't have with our previous solution. Now we have fault tolerance and load balancing.
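As a small sketch of what that looks like from the SQL side (assuming access to Vertica's v_catalog system tables), you can check that every node in the cluster is up and participating with a query along these lines:

    -- Lists each node in the Vertica cluster and its current state (for example, UP or DOWN).
    SELECT node_name, node_state
    FROM   v_catalog.nodes
    ORDER  BY node_name;

A node showing anything other than UP is the cue that fault tolerance has kicked in and the cluster is running degraded.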

Gardner: Any lessons learned, as you made this transition from a SQL database to a Vertica columnar store database? You even moved the platform from Windows to Linux. What might you tell others who are pursuing a shift in their data strategy because they're heading somewhere else?

Jump right in

Hannon: As I said before, don’t be afraid of Linux. If you're a Microsoft or a Mac shop, just don’t be afraid to jump in. Go get the free community edition or talk to a salesperson and try it out. You won't be disappointed. Since the time we started using it, they have made multiple improvements to the product.

The other thing I learned was that with OPENQUERY, there are specific ways you have to write the stored procedures. I like to call it "single-quote hell," because when you write OPENQUERY and you have to quote something, there are a lot of additional single quotes that you have to put in there. I learned that there was a second way of doing it that lessened that impact.
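To show what that "single-quote hell" looks like in practice (the object names are again hypothetical), every quote inside the pass-through text has to be doubled, because the entire Vertica query travels as a single T-SQL string literal:

    -- The literal 'AL' inside the pass-through query is written as ''AL''
    -- because it sits inside the outer single-quoted OPENQUERY string.
    SELECT *
    FROM OPENQUERY(VERTICA_DW,
        'SELECT provider_id, SUM(paid_amount) AS total_paid
         FROM medicare_claims
         WHERE state_code = ''AL''
         GROUP BY provider_id');

One common way to ease that nesting, for example, is to send the statement with EXECUTE ('...') AT the linked server, which removes a layer of quoting, though that is only one of several possible approaches.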

Gardner: Okay, good. And we're here at HP Discover. What's interesting for you to learn here at the show and how does that align with what your next steps are in your evolution?

Hannon:  I'm definitely interested in seeing all the other capabilities that Vertica has and seeing how other people are using it in their industry and for their customers.

Gardner: In terms of your deployment, are you strictly on-premises for the foreseeable future? Do you have any interest in pursuing a hybrid or cloud-based deployments for any of your data services?

Hannon: We actually use a private cloud, which is hosted at TekLinks in Birmingham. We've been that way ever since we started, and that seems to work well for us, because we basically just rent rack space and provide our own equipment. They have the battery backup, power backup generators, and cooling.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Tags:  BriefingsDirect  Dana Gardner  HP  HP Vertica  HPDiscover  Interarbor Solutions  RealTime Medicare  Scott Hannon  SQL 

 