Continuous Operations vs DR


Some customers view DR as an insurance policy against downtime or data loss, both of which cost a business money.
A better way to view DR technology investments is as an efficiency that enables continuous operations. Improving application uptime directly affects a business's ability to generate revenue from IT.


What's the difference between DR and Continuous Operations?


DR ensures that key systems, and the data they depend on, are available at a remote site or data center under all conditions.


The complexity of hosts, LAN and WAN networking, name resolution updates and data replication leads many enterprises to view DR as necessary insurance to maintain business operations. DR processes generally involve a multi-step, mostly manual failover procedure: the data must fail over first, and many further steps are needed to present the replicated data to secondary hosts at the DR location.


The complexity of developing and executing a DR run book drives many enterprises into active-passive data center designs, which means 50% of the available hosts and storage sit idle and are not used for production workload.


What enterprises want is active-active hosts and storage to maximize efficiency and enable continuous operations.


What is continuous operations?


It's the ability to upgrade and maintain the infrastructure without ever taking application downtime.


What are the requirements to enable continuous operations?


Robust failover and failback technology that simplifies the steps with automation, orchestration and health checks of the key infrastructure to protect against human error (the leading cause of downtime).
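
To make that concrete, here is a minimal sketch (in Python, purely illustrative; none of this is Eyeglass code) of what an automated run book with health checks might look like: verify the secondary site first, then execute the failover steps in order, stopping on the first failure instead of letting a human push through a broken state.

```python
# Minimal failover-orchestration sketch (illustrative only; all names are hypothetical).
# Run health checks on the target site first, then execute ordered, automated failover
# steps so no human has to work through a manual run book.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    name: str
    action: Callable[[], bool]   # returns True on success


def run_failover(health_checks: List[Step], failover_steps: List[Step]) -> bool:
    # 1. Verify the secondary site is actually able to take the workload.
    for check in health_checks:
        if not check.action():
            print(f"Health check failed: {check.name} -- aborting failover")
            return False

    # 2. Execute the run book steps in order; stop on the first failure.
    for step in failover_steps:
        print(f"Executing: {step.name}")
        if not step.action():
            print(f"Step failed: {step.name} -- manual intervention required")
            return False

    print("Failover complete; applications can be restarted on secondary hosts")
    return True


# Example wiring with placeholder actions (real checks would call the storage,
# DNS and virtualization layers).
if __name__ == "__main__":
    checks = [Step("secondary storage reachable", lambda: True),
              Step("replication up to date", lambda: True)]
    steps = [Step("fail over replicated data", lambda: True),
             Step("present data to secondary hosts", lambda: True),
             Step("update name resolution (DNS)", lambda: True),
             Step("restart applications", lambda: True)]
    run_failover(checks, steps)
```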


A key requirement is time: both the time to maintain copies of data at two locations, and the time it takes to fail over the data, restore access to it, update name resolution and restart applications on secondary hosts.


The good news is that the solution exists for EMC Isilon customers with Superna Eyeglass.


Thanks for Reading

Andrew


Converged Infrastructure is just technology

March 1, 2015

By Andrew MacKay

As the title suggests, I would disagree; it's likely only 50% right.  I'll explain.

It's mostly technology, but you can't get "it" without knowing the core values behind it.

  1. Focus on deploying your business applications
  2. Focus on running your business applications
  3. Focus on innovation to move your business forward

That's it? So where are all the typical technology hurdles, problem statements and technologies that solve them?

Easy, they're gone when your Converged Infrastructure provider of choice has a Vision.

Our team at Superna has been collaborating with VCE for several years as a solutions provider on a wide range of services all focused on supporting the growth of Converged infrastructure and the Vblock platform.

This includes the Vblock Ready certification program that we operate for VCE.  The goal of this program is to test VCE partner applications to ensure they integrate with the Vblock platform and don't have any operational impacts that ultimately come at your expense.    This is a key VCE differentiator that we help execute for VCE customers to remove risk from running your business and operational applications on Vblocks.

For customers, it means selecting solutions from the Vblock Ready list as an easy way to reduce risk and simplify. This takes customers one step closer to the "core" values of Converged Infrastructure.
For VCE partners, Vblock Ready means they can integrate into customers' Vblock environments knowing they have reduced their risk with a pre-tested solution, but more importantly they can demonstrate their role in helping customers achieve the "core" values.

What comes after Vblocks and Vblock Ready Solutions?  

The short answer... a lot more.
The recently introduced converged API, Vision IO software (standard on all Vblocks), and the new certification of Converged Ready applications are the next wave of cost reduction, operational efficiency and, most of all, simplification and reduction of IT risk.

What can Vision enabled applications do for me?  

It enables the applications you use today to work better and smarter by managing your Vblocks as an IT system that runs your business, not as a collection of unrelated devices, which is how most customers manage them today with current vendor tools.
It's this new capability and ecosystem of Vision IO enabled applications that will allow customers to take the final steps toward the core values that only converged infrastructure can deliver. This is why customers need to demand more from the vendors they depend on to reach their operational savings from Vblocks.

Sounds good but how do I get Vision enabled applications? 
It starts by asking the vendors you depend on most: "When will you have Vision IO enabled on the roadmap?" That's where Superna comes into the picture. We will be validating that your vendors' Vision IO enabled applications meet the requirements we know will simplify your IT operations and lower your operational costs.

What if my vendor says they're not sure where to start with Vision IO?


Easy.... send them here, and we'll help get it done and qualified.  
What does a Vision IO enabled application look like?
See the applications we developed to show the value of a Converged API's impact on simplifying your IT operations: the Eyeglass virtual appliance for Vblock, monitoring with CA Nimsoft and Eyeglass Connect for UIM, and managing change with Eyeglass Connect for BMC Atrium.

Oh, and to answer why Converged Infrastructure is only 50% technology?
Because the other 50% comes from execution on a Vision that delivers on the core values.

Thanks for reading

Andrew MacKay


Your Data Requires Backup and Disaster Recovery

November 16, 2014
By Michael Arno

Your clients say so. The law says so. And you see how distributing your data centres is beneficial to your business. So why not store your data in the safest, fastest, and most comprehensive data network available today? It's possible now with systems such as EMC VPLEX, VMware vMotion and optical networking from leading suppliers of metro DWDM products, and it's the best data replication technology offered to date.

For 20 years the model for data backup has been simple: store the data in one location, back it up or replicate it to another. This is SWAN, or Storage Wide Area Networking, and it's widely deployed today. The traditional Disaster Recovery model requires two data centres connected to a fiber optic network and two SANs: one for computing and one for storage replication. If an emergency takes place at one facility, the second facility is available to take over. However, traditional storage replication networking has a few disadvantages:

  • Data centres are often located within a relatively small geographic area; a natural disaster could result in the loss of all data.
  • One data center is used for computing and production, the other (remote) is used for storage replication and backup, which means that, even though both data centers are being used, only one is being put to work.
  • The data in storage at the remote data center is not readily available for mining or analysis.
  • In today's environment, the importance of data and its protection are paramount; sometimes a single copy of data is not enough and a 3rd copy is required to meet business needs

Superna has developed a comprehensive approach to testing this configuration, which is a widely deployed and proven model for protecting critical data.

Key Items to Consider in Validating DR Networks

  • Undertake FC testing by replicating the real-world environment, reducing the time needed to validate the FC network solution and resolve post-deployment issues
  • Secure EMC eLab and Brocade Data Center Ready qualification following their standard test processes for their solutions
  • Review compliance with ANSI standards
  • Set service targets for network equipment vendors or the internal IT organization that require EMC eLab Entrance Criteria testing and process/project management
  • Manage the qualification process with EMC and Brocade
  • Develop your SAN network and storage replication use cases
  • Proof-of-concept testing
  • Assess integration with management orchestration systems and on-going DR Orchestration tools

Future posts will explore the emergence of more powerful DR Orchestration technologies and use cases.




Why Converged APIs matter

May 30, 2014
By Andrew MacKay

I thought I would share my view on converged APIs and why Converged Infrastructure matters more than I realized. We've been busy at Superna focusing on new product developments (monitoring and CMDB) that will show how APIs change the way software is written and how they accelerate development. If you're an ISV that wants to build applications faster with less effort and reduced complexity, keep on reading.

This occurred to me while we were integrating the data center API for Converged Infrastructure using the VCE Vision™ Software API. The problem with data center software today is that no object model and no API exist that connect data center objects across domains. APIs should make developers' lives easier, letting them focus on adding value without re-inventing the software modules that collect information every time a new application is written.

For example, if I want to build a VM provisioning application, or even a fault monitoring application, to simplify resource management and alarm correlation up, down and across domains (storage, compute, network), what does my software app need to know? (A rough sketch of this object model follows the list below.)

  1. The VMware host I want the VM to run on
  2. The datastore it will be stored on
  3. The disk array the datastore resides on
  4. The network links that connect my VM to the network
  5. The network links that connect my ESX host to the storage, so policies, QoS and security settings can be applied
  6. The logical or physical connections between all the devices
  7. The interfaces to the devices (SNMP, CLI, passwords, password rules, dealing with password changes, etc.)... the list goes on and on
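
As promised above, here is a rough sketch of that object model and the kind of single call a converged API could replace all the glue code with. The class and function names (including get_converged_topology) are made up for illustration; they are not the VCE Vision API.

```python
# Illustrative sketch only: a toy object model for the items listed above, and a
# single hypothetical call (get_converged_topology) standing in for a converged API.
# None of these names come from VCE Vision; they are placeholders for the idea.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Link:
    from_device: str
    to_device: str
    kind: str            # e.g. "ethernet", "fibre-channel"


@dataclass
class Datastore:
    name: str
    array: str           # disk array backing this datastore


@dataclass
class Host:
    name: str
    datastores: List[Datastore] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)


@dataclass
class Topology:
    hosts: List[Host] = field(default_factory=list)
    links: List[Link] = field(default_factory=list)


def get_converged_topology() -> Topology:
    """Placeholder for one converged-API call that would return the whole model,
    instead of per-device SNMP discovery, MIB parsing and credential handling."""
    array = "array-01"
    ds = Datastore(name="ds-prod-01", array=array)
    host = Host(name="esx-01",
                datastores=[ds],
                links=[Link("esx-01", "fc-switch-01", "fibre-channel")])
    return Topology(hosts=[host],
                    links=[Link("fc-switch-01", array, "fibre-channel")])


if __name__ == "__main__":
    topo = get_converged_topology()
    # A provisioning app can now answer "which array backs this datastore?"
    # without writing its own discovery and topology glue code.
    print(topo.hosts[0].datastores[0].array)
```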

To get this information, the app needs a couple of software functions, device discovery/topology (typically dependent on SNMP and several different MIBs) and alarm processing, that must run and build a model for my provisioning application. These functions return device-by-device information and, hopefully, enough data to determine a physical, or at minimum logical, topology; the app then overlays alarm data and device metadata onto that topological model and tries to correlate the alarms so it can make decisions and automate some IT function (i.e., the reason for building the app in the first place).

What happens in today's multi-vendor data centers is that insufficient data is returned to reliably build a topology accurate enough to handle moves, adds and changes to the infrastructure, and to let my provisioning/monitoring application do its job without breaking when links change, MIBs change, command syntax changes with OS or firmware upgrades, device passwords change, or software gets upgraded on various devices.

For developers, these glue modules are a necessary evil in building provisioning and monitoring applications: garbage in, garbage out, and the application is only as good as the discovery and topology data it collects.

This is where Converged APIs come in, allowing developers to use a simple API to gather this information reliably, so they can focus on building high-value applications and spend less time worrying about device discovery. Since converged infrastructure is connected in a predictable way, the network topology and relationships are much easier to model and make decisions on.

We used the Vision API to build two applications in less than two months! This wouldn't have been possible if we had to build discovery and topology functions first. There's more to Vision than just topology, but it's a huge requirement for starting to build converged infrastructure applications. I'll post a future blog on how we used alarms, compliance baselines, root cause and lots of device metadata to build new applications.

Thanks for reading



Virtual machines, PaaS, APIs and the future of Simplified Application Scaling

May 25, 2014
By Andrew MacKay

I thought I would take a different viewpoint as a follow up to my previous blog on Converged API’s in the data center.  After working on several projects over the past few months, I got the chance to compare today, tomorrow and next day in terms of data center application scaling evolution.
 
 
Scaling Today
 
We do a lot of projects with VMware and virtual machines, with traditional software installed into the OS of a virtual machine. We assign a number of vCPUs, RAM, NICs and virtual disks to a VM to give it the appropriate scaling parameters to execute with adequate performance.
 
The number one question asked by software vendors is: how does my application perform in a virtual environment? Given the number of permutations that require specific knowledge about the OS, the application, and the physical use of resources like CPU, memory and storage, this question is difficult to answer and often requires testing to quantify application performance.
 
 
The Current Scaling Solution
 
If the performance of the hardware serving the VM and app suffers, we can vMotion or Storage vMotion the VM to new hardware and add more virtual capacity (CPU, memory, storage and network) to scale the application. This is not ideal, as it's hard to automate and many OSes require a restart to make use of the new resources, not to mention that an administrator needs to identify the issue and then take the steps to address it.
 
Once a single VM is no longer sufficient, application developers look to N-tier or clustering solutions to scale the application's execution, which adds significant complexity as the application requires more and more VMs to communicate and process data. In summary, today's applications don't scale well without a lot of manual tuning and testing.
 
 
The Scaling Solution Tomorrow

The days of compiling software and then "installing" it into an OS can't last much longer. A new solution is here called PaaS (Platform as a Service): very simply, push your code into the PaaS and it will install, execute, scale and secure it, abstracting the OS and hardware with no help from the developer.

PaaS is an emerging model for running applications, both commercial and internally developed, and it's designed for virtualization. It uses API virtualization at the development layer to isolate the application developer from the underlying OS specifics, allowing apps to simply "use" services abstracted from the developer (CPU, memory, network, storage and transactions).
 
This OS and API abstraction is what allows PaaS to handle the scaling of applications without the application developer having to handle it within the application layer. An example is the Cloud Foundry by Pivotal solution, which runs on VMware as virtual machines but also partitions virtual machines into shareable execution environments, so application code executes in silos without one application being aware of another.
 
Developers push application code directly to Cloud Foundry and specify the instance count and other execution parameters, and the PaaS environment executes the application as required, abstracting compute, storage and networking from the application. It can also abstract the shared resources between instances and even transaction services, providing database services to instances of the application without the developer needing any knowledge of the database, thereby allowing application scaling with no OS- or application-specific code.
 
Developers can stop and restart the application with new execution parameters and the PaaS will scale the application's execution. It's a simple matter of increasing the instance count or execution parameters and letting the PaaS scale up. In the previous scenario, compiled application code had to be written to take advantage of additional CPUs, for example with a multi-threading design.
 
This all sounds great but… in practice the problem of scaling across all available hardware requires knowledge of the hardware, state of the hardware (failed, degraded, healthy), utilization of the hardware, topology of the hardware components and especially the relationships between compute and storage.
 
The Scaling Solution of the future
 
Back to Converged APIs: now that we have a developer-friendly way to scale applications, the PaaS environment needs to navigate the hardware platform it runs on to offer simplified scaling services to the applications it hosts. As demand grows, more compute and storage will be required for the PaaS environment, and unless this step is automated the promise of simplified scaling breaks down.
 
Now we have a new use case for Converged APIs: they could greatly simplify the PaaS automation layer that translates application workload requests into instances of the application, which map to virtual machines within the PaaS environment.
 
This mapping function is easily solved with a Converged API and the vCenter API: an API such as VCE Vision IO abstracts the PaaS runtime environment, allowing the PaaS to make vertical or horizontal scaling decisions without direct knowledge of the CPU, memory, topology and VM-to-compute/storage relationships, physical or logical, in the environment the PaaS uses to deliver services.
(Figure: Horizontal Application Scaling)
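
To illustrate the idea, here is a toy autoscaler loop in Python. Everything in it is a hypothetical stand-in (fetch_host_status, the thresholds and host names are invented, not VCE Vision or Cloud Foundry calls); the point is simply that when a converged API hands you health and utilization per host, the PaaS layer only has to decide an instance count.

```python
# Illustrative autoscaler sketch (all APIs and names here are hypothetical stand-ins).
# A converged API reports host health and utilization, and the PaaS layer only has
# to decide how many application instances to run.

from dataclasses import dataclass
from typing import List


@dataclass
class HostStatus:
    name: str
    healthy: bool
    cpu_utilization: float    # 0.0 - 1.0


def fetch_host_status() -> List[HostStatus]:
    """Placeholder for a converged-API query returning health/utilization per host."""
    return [HostStatus("blade-01", True, 0.82),
            HostStatus("blade-02", True, 0.45),
            HostStatus("blade-03", False, 0.0)]


def decide_instance_count(current: int, hosts: List[HostStatus],
                          high_water: float = 0.75) -> int:
    """Scale horizontally when the healthy hosts are, on average, running hot."""
    healthy = [h for h in hosts if h.healthy]
    if not healthy:
        return current                       # nothing sane to do; keep current count
    avg = sum(h.cpu_utilization for h in healthy) / len(healthy)
    if avg > high_water:
        return current + 1                   # add an instance
    if avg < high_water / 2 and current > 1:
        return current - 1                   # shrink when comfortably idle
    return current


if __name__ == "__main__":
    instances = 3
    new_count = decide_instance_count(instances, fetch_host_status())
    print(f"scale app to {new_count} instances")   # the PaaS would apply this change
```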

Other use cases enabled with Converged APIs:

  1. Disaster Recovery on Demand – Move running applications to a remote PaaS on demand, or upon a triggered event, without application awareness by the developer
  2. "AppMotion" – Moving applications from a private cloud to a public cloud PaaS to gain time-of-day or compute cost advantages
  3. Big Data with HADOOP – Move analytical PaaS applications to execute co-resident with the data they depend on, which requires integration between the infrastructure storage layer and the application within the PaaS
  4. Location Aware Applications – Many applications today, especially mobile ones, have latitude and longitude awareness and would benefit from the services they depend on being executed nearest the application instance for faster, more real-time response. This could be enabled with a geospatially aware PaaS that can coordinate instances based on geographic inputs.
 
This migration to PaaS unlocks a lot of potential use cases that are not enabled today without complicated application development that must have complete knowledge of the environment.
 
The solution to unlock these use cases is the next layer of abstraction: Converged APIs.
 
Thanks for reading


Choosing a Data Center or Co-Location Facility.

Feb 5, 2013
By Michael Arno

Where are you storing your data, hosting your servers or applications?

For many companies, this has always been a simple answer: in our private data centers, server rooms and offices. We have seen a shift in the market, where businesses of all sizes are turning to third-party data center, cloud or co-location facilities to host applications and store critical business data. The advantage is avoiding the high cost of building, operating and maintaining conditioned IT space. Data center space is very expensive: depending on the availability or uptime needed, it can easily cost $400-1,000 per square foot to build out new, and this is a capital cost.

For startups, this type of facility has never really been considered as part of the business model. That has changed over the last few years. We now have multiple options available to businesses to build secure IT capacity without a large capital investment. Let's review the options that are available and look at how we have used these resources to help grow our own business.

The Premium Data Center

Data centers are typically viewed as facilities to host critical corporate data that can offer minimal downtime (from minutes to an hour per year). Major users include financial institutions and other Fortune 500 companies for their transactional systems. In many cases, these companies will also mirror data to a second disaster recovery location so that recovery from an outage takes only seconds.

More typical commercial facilities will offer very limited downtime (e.g., less than 3 hours per year). A general rule is that the higher the availability or redundancy, the more it will cost. The general industry metric is the amount of downtime per year in hours. Given a total of 8,760 hours per year, 1 hour of downtime represents roughly 99.99% availability. The Uptime Institute provides guidelines and a lot of useful information on this and related items at http://uptimeinstitute.com/.
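
For reference, here is the simple arithmetic behind these downtime-to-availability conversions (a throwaway Python snippet, nothing more):

```python
# availability = (hours in a year - downtime hours) / hours in a year

HOURS_PER_YEAR = 8760

def availability(downtime_hours: float) -> float:
    return 100.0 * (HOURS_PER_YEAR - downtime_hours) / HOURS_PER_YEAR

for downtime in (0.1, 1, 3, 10):
    print(f"{downtime:>5} h/year downtime -> {availability(downtime):.3f}% availability")

# Approximate output:
#   0.1 h/year downtime -> 99.999% availability
#     1 h/year downtime -> 99.989% availability
#     3 h/year downtime -> 99.966% availability
#    10 h/year downtime -> 99.886% availability
```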

The Emergence of a New Operating Model for Data Centers

For many companies, it is more efficient to deal with data center capacity as an operating cost, like renting office space. The pricing models for this type of space are usually structured by server, rack or part of a rack on a per-month basis. For example, the cost of 1/8 of a rack is about $250 per month, which includes connectivity at a certain rate (e.g., 2 Mbps, with extra charges if you consume more bandwidth). There may also be bandwidth caps and other common charges. The real issue here is the cost of power and cooling, which is usually transferred to the tenant and can easily add 50% on top of the rack cost.

Most metropolitan areas have several data center operators, sometimes including the local telco service providers that have diversified into data center and hosting businesses over the last 10 years.

Choosing what type of data center facility to host your servers can prove a challenge, as there is no one-size-fits-all option. There is also a lack of understanding of the market options and how to map them to business requirements. This can lead to poor decisions, unnecessary overspend and contract arrangements that are difficult to sustain.

Requirement | Data Center Type | Availability Metric | Uptime Classification
Transactional data | Mirrored data centers with full redundancy and failover | Minutes or 99.999% | Tier III, IV
Web site hosting, development and test sites, media servers, no transactional data | Cloud or virtual server hosting | 1-3 hours or 99.995% | Tier III, IV
Off-site racks for servers | Colocation | 3-10 hours or 99.95% | Tier II*, III
Local storage or on my private network | Server room | > 10 hours or 99.0% | Tier I (provided there are backup power options)

* Superna's facility is now Tier II ready

At Superna, we needed secure, climate controlled conditioned lab space, similar to a telecom central office, without the stringent security and access restrictions that are typical in Tier III or IV facilities and without the uptime costs.  However, we also use some virtual data center options that add flexibility for our software development and testing activities.  For several years, we rented conditioned lab space for our servers and storage equipment and have now transitioned to our own facility.

 The Virtual Data Center

Over the last five years, we have seen a wider range of data center options, led by companies like Amazon EC2 (http://aws.amazon.com/ec2/) and Rackspace (http://rackspace.com).  These players have created a disruption in the market as they drove virtual server and public cloud computing adoption.  They are focusing on businesses that have uptime requirements and other transactional activities but need dynamic compute capability and a fully operational model.

Different Cost Models

The cost models between the cloud providers and server hosting solutions are very similar.  The choice for a business is strictly related to their need, available capital and comfort level.  We have used all the options outlined above.

At Superna, cloud solutions are ideal when we undertake a rapid development project with a relatively low number of virtual servers. The environment can be set up and become operational quickly (in minutes), and then shut down when no longer needed. The cost structure is pay-as-you-go, without any long-term contract commitments. Hosting and co-location are typically designed for longer-term fixed storage requirements and can have advantages in lowering overall operating costs.

For us, co-location provides the flexibility we need, and we retain the ability to administer and manage our own systems. For other businesses, the real impact of hosting solutions is minimizing overhead and administration costs, since these are part of the cost in a shared operating model. A good example is Storm Internet (http://www.storm.ca/).

Having used co-location for several years, we found it provided the perfect growth path. Our need for conditioned data center space has grown organically over a long period. Facilities like Storm provided us with the ability to expand or contract as we needed. In our case, we have now reached the crossing point where we need our own facility.

Service | Approximate Cost | Comments
Amazon EC2 | $0.07-0.12 per hour | Small server
Rackspace | $0.06-0.08 per hour | Small server
Virtual server/hosting (telcos) | $0.14 per hour | About $100 per month for a 1 GB / 40 GB storage configuration
Colocation space | $250/month for 1/8 of a rack | Connectivity (2 Mbps) provided, with added charges if over the limit and extra for electrical

The economics of using cloud solutions will depend on the number of servers, assessed against the cost of purchasing servers and dealing with all operating costs.  Leasing servers and using your own facilities versus co-location is a key decision and requires careful evaluation.   Future posts will explore this topic further and expand on the comparative economic model.
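
As a rough illustration using the approximate figures from the table above, here is the break-even arithmetic between an always-on small cloud server and a 1/8-rack colocation slot (assuming roughly 730 hours per month and ignoring power, cooling and bandwidth extras):

```python
# Rough break-even arithmetic using the approximate figures from the table above:
# a small cloud server billed hourly versus 1/8 of a colocation rack at a flat rate.

HOURS_PER_MONTH = 730          # ~24 * 365 / 12
COLO_MONTHLY = 250.0           # 1/8 rack, per the table (power/cooling extra)

for rate in (0.07, 0.12):      # small-server hourly rates quoted above
    always_on = rate * HOURS_PER_MONTH
    print(f"${rate:.2f}/h always-on cloud server ~= ${always_on:.0f}/month "
          f"vs ${COLO_MONTHLY:.0f}/month colo (one slot)")

# A single small, always-on instance is cheaper in the cloud; the colo slot starts
# to win once several always-on servers are packed into the rack space.
```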

 Conclusions

Every business must evaluate its own requirements, against the options presented by the data center providers.   This is not a simple path.  Superna has moved from one solution to another and has progressively built our capacity as our business grew.  There is also not one type of solution.  We use all available options in the market and will continue to do so to accommodate the need of the business.

Superna is in the process of establishing a technology innovation center and network in Ottawa. We plan to leverage our own startup experience and business.  A key part of this facility will be our data center and the access to it.

What are your thoughts on choosing data center space?



An Extended Business Model for Startup Incubators

Jan 10, 2013
By Michael Arno

This is a follow-up to the first post on the emergence of a new type of incubator for startups. This time I would like to discuss the different types of incubators.  There are multiple choices and different options for the startup.  This is not a situation where one option is necessarily better, as each startup may have requirements that can be met by the different incubator models.  The diagram below shows the three typical incubator types with Option 1 being the base real estate with the addition of Option 2 or 3 for business and IT services.

We have faced many challenges as our business grew over the last 5 years.  A key time was the move from basement to structured office space.  Most startups will face the same challenge and the decision on office space is one of those hurdles that can be a significant cost and is typically the first time the business needs to make a long-term financial commitment.  But this move can also represent that point where the business takes on a legitimate shop front.  Incubators offer a great stepping-stone for startups.  While many facilities can offer affordable secure office space with flexible lease terms, this is where many incubators will typically stop.

The real challenge for many startups comes in the next phase where transformational capacity and a wider range of services are needed to realize the business concept.  When I look at our business, and other technology startups, this was the critical step where access to resources was essential to the establishment and growth of the business.  Without this, it may not matter how strong the originating idea.  This is also the time where the startup needs to mature their idea before getting in front of potential investors.

The incubator can provide a path to the many intangibles that businesses need when starting up – access to capital, marketing advice, mentorship, legal and accounting. New businesses are often ill-prepared for the full scale of what's required, and these can become key factors in the failure of a business, not because of the viability of the idea, but because of the lack of effective business support.

What are some of the types of additional services that are needed?  Based on our experience, here are some of the items that are needed in an incubation facility:

  • Access to business support including legal, accounting, business planning, marketing and financing;
  • Access to extended facilities such as conditioned data center or lab space with available servers, storage equipment, software licenses, high speed internet, telephone; and
  • Access to IT skills and knowledge, test and certification knowledge.

If we look at the three types of incubators available in the market, we see some typical requirements and what an incubator can offer.

 

Option | Startup Need | Incubator Offer
1 | Don't have a basement or space at home, and need an office space | Office space only, leased by square foot, per desk or by staff; will also include parking and shared facilities
2 | Need office infrastructure and extra support | Office space plus legal, accounting and other business support
3 | Technology startup that needs space, office support, and IT infrastructure | Office space with business support, back office infrastructure and IT services

In our extended business model for technology startups we propose to provide all of the above in one location. We believe that all three dimensions above are required to fully realize the value proposition. Basic real estate is the first level, with a collaborative working space. IT and business services can be provided in a common shared framework to extend the capacity of the startup, creating an environment that is typically only present in larger companies and building a foundation for growth. We also believe that the next stage is the development of virtual incubators that can extend the physical facility to a wider user group. Future posts will explore the virtual incubator.

Superna is in the process of establishing a technology innovation facility and network in Ottawa that will fulfill the space for this complex and sophisticated type of business. We plan to leverage our own startup experience and business.  Future posts will follow the progress in this new challenge as we work to make this a reality over the coming months.

What are your thoughts on this concept?  If you have had experience with this type of business model, your input is welcome.



Superna and the Emergence of a New Type of Incubator for Technology Start-ups

Jan 7, 2013
By Michael Arno

Introduction

In the Spring of 2013, Superna will be launching the Superna Innovation Center in Ottawa, Canada. Superna has travelled that typical path from start-up in the basement to a mature and diversified technology business. Based on our experience, we believe that startups need more than basic real estate. Success of the startup can also depend on access to many interrelated systems and support services such as finance, office systems, customer management and advanced information technology.

This is the first in a series of posts that will follow the development of our incubation center for technology start-ups as we go through the actual planning-design-build process to the marketing and operationalization of the new facility.

Incubators are certainly not new. There has been an active movement over the last few years in developing accelerators or incubators for technology start-ups that can provide a more stable foundation for the success of the early stage business. These have been driven by business advocacy organizations and by academic institutions that typically have the infrastructure to support incubators and can realize additional revenue flow to the institution.

We recently visited an incubation facility in London, England, the London Knowledge Innovation Center (LKIC) that is associated with the London South Bank University (http://www.lkic.com/). LKIC has been in operation for several years. The Center is completely occupied with a diverse base of companies. A key success factor for startups has been access to flexible lease terms. The synergy between members has also helped companies deal with business fluctuations through active collaboration. LKIC provides the space and basic services, with access to university facilities, venture funding, and R&D expertise.

The Ottawa Experience

Today, business advocacy and public institutions see incubator facilities as a perfect vehicle to generate new economic and regional development activity. They likely work better where there has been an active startup and investment community and culture. In Ottawa, we have had this since the 1970s with a very active technology sector. Incubation used to happen within large corporations such as Northern Telecom, SHL Systemhouse, Newbridge and more recently RIM. These companies created hundreds of startups through various processes (e.g., investment and spin-off, sale, restructuring or down-sizing). Superna is an example of a company that was created through a Nortel restructuring period in 2002 and once again in 2008. The demise of several of these companies also created a real vacuum in the Ottawa market for startups.

In response to these changes, we have seen the emergence of structured business incubators in Ottawa in recent years. An excellent article published last year by the Ottawa Business Journal (http://www.obj.ca/Technology/2012-02-14/article-2894434/Where-startups-get-started/1) provides a very good synopsis of the types of facilities for startups that have emerged in recent years and the support from advocacy organizations.

What’s the Difference – The New Startup Incubator Model?

A typical new facility business model is a flexible shared working space with extended support for the startup that can include mentorship, financing, and marketing. The business model can also vary. In some cases, the incubator is the investor, providing space in exchange for a long-term stake in the business with minimal initial expenditure required by the startup. At the other end of the spectrum, the facility provides more flexible real estate terms, such as monthly rent versus multi-year leases.

There are many excellent articles that have been recently published on the new type of incubator that is emerging. Forbes' Eight Reasons Startup Incubators Are Better Than Business School can be found at http://www.forbes.com/sites/jjcolao/2012/01/12/eight-reasons-startup-incubators-are-better-than-business-school/. Another very good set of articles can be found on Dave Cummings' blog; he has written extensively on startups and provides some excellent insights into incubation at http://davidcummings.org/2012/11/03/next-generation-flex-office-space-for-startups/.

What we see emerging are both physical and virtual facilities that can provide collaborative working environments combined with a variety of complementary value-added activities. We believe that the value-added component is critical and that there is an opportunity to create a new type of resource center that can also extend the facility with emerging cloud computing resources.

As stated earlier, Superna is in the process of establishing an Innovation Center and network starting in Ottawa that will leverage our own startup experience, resources and unique facility capacity. Future posts will follow the progress in this new endeavor. Input and suggestions are welcome.



Virtual Networks in the Cloud but why?

June 17, 2010
By Andrew MacKay

With virtualization showing up everywhere and Cloud this and Cloud that being referenced on a daily basis, I thought I would explore a concept we have been thinking about at Superna: "virtual networks". I have an interesting job, as I get to tinker with technology and see if it has any commercial value before bringing new products and solutions to market. I decided to experiment with the feasibility of building a virtual network, with virtual nodes (routers, switches, optical devices) that operate exactly like the real devices, but I wanted to leverage Cloud computing. The key question to ask is "Who needs this solution?" I see many reasons this solution has value; here are a few examples, and I'm sure there are more:

  1. Customers could mock up a network and test what-if scenarios before making a buying decision on new equipment, and see if network convergence (OSPF, BGP, optical) or latency meets their business requirements.
  2. Network management vendors often struggle with the cost of buying real devices to integrate into their management platforms. A much cheaper alternative is simulated nodes that have the same SNMP, TL1 or CLI interface as the real device; this dramatically reduces cost and allows large, complex networks to stress-test management applications without the expense of hardware.
  3. Learning how to provision and operate network equipment requires an expensive lab environment, but with virtual networks and management software, the lab itself can be virtualized in the Cloud.

The test I set up validated a virtual optical network with simulators running in Amazon's Elastic Compute Cloud (EC2). I wanted to see if our management software could interact with virtual nodes that were hundreds of milliseconds away over the Internet, hosted in a Cloud offering. The test setup used 30 virtual optical simulators to represent a 30-node network.

The final results show it's feasible and everything worked as expected, with the Amazon-hosted simulators sending 25 alarms a second to our management software over a 400-millisecond connection. Entire networks with hundreds of nodes can be turned on with the click of a button and turned off when not needed, saving power and space versus traditional hardware-based testing models.
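
For readers curious what such a test harness boils down to, here is a stripped-down sketch (not our actual simulator code) of a simulator pushing alarms at a fixed rate to a listener that counts them; in the real test the simulator side ran in EC2, hundreds of milliseconds away.

```python
# Minimal sketch of an alarm-rate harness: a "simulator" sends newline-delimited
# alarm messages at a target rate; the "manager" counts what it receives.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9500   # in the real test the simulators ran in EC2
RATE = 25                        # alarms per second, as in the test described above
DURATION = 5                     # seconds to run this toy example


def manager() -> None:
    srv = socket.create_server((HOST, PORT))
    conn, _ = srv.accept()
    conn.settimeout(2.0)
    count, start, buf = 0, time.time(), b""
    while time.time() - start < DURATION + 1:
        try:
            data = conn.recv(4096)
        except socket.timeout:
            break
        if not data:
            break
        buf += data
        while b"\n" in buf:
            _, buf = buf.split(b"\n", 1)
            count += 1
    print(f"manager received {count} alarms (~{count / DURATION:.1f}/s over {DURATION}s)")
    conn.close()
    srv.close()


def simulator() -> None:
    time.sleep(0.2)                       # let the manager start listening
    sock = socket.create_connection((HOST, PORT))
    interval = 1.0 / RATE
    end, seq = time.time() + DURATION, 0
    while time.time() < end:
        sock.sendall(f"ALARM seq={seq} severity=minor node=sim-01\n".encode())
        seq += 1
        time.sleep(interval)
    sock.close()


if __name__ == "__main__":
    t = threading.Thread(target=manager)
    t.start()
    simulator()
    t.join()
```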

Stay tuned for updates on this project.



Management Software in the Cloud?

June 8, 2010
By Andrew MacKay

With Cloud computing topping all CIOs' lists of priorities, I start to wonder what applications will actually get deployed in the Cloud. If you look at the applications that "run the business", I suspect these will be the last applications to find their way into the Cloud, because security, performance and support are so critical. I don't often see management software as a target to move outside of the data center. There are managed services available, but outsourcing to a service provider is very expensive. I see the potential for hosted management software that executes remotely but monitors and manages a network as a lower-cost approach that lets a customer quickly enable management of a network and distribute a view to anyone who needs access. Typically, managing, patching or adding new device support to this type of software is time consuming, and expanding the features requires a lot of planning. Virtual management software in the Cloud may be the answer.




Empower Your Field Force With Mobile Network Data

June 3, 2010
By Michael Arno

Continuing the discussion of spatially enabled network data, there are some great reasons for taking the additional step of distributing this data to the telecom mobile field force. Send it right to their laptops and cell phones while they are resolving an outage or engaged in routine maintenance. There are at least two great reasons for doing this:

1. Efficiency. All field tasks will be done faster and more accurately, once workers can see the precise location of network equipment, anticipate job requirements, search for the nearest spares, and contact co-workers with detailed information. It really cuts down on the need to first inspect the site, then drive back to the shop for tools and equipment.
2. Buy-In. As the field force starts actively using spatial data and recognizing its value, they naturally become motivated to maintain that data with corrections from the field. Today’s applications have good tools for ‘redlining’ or updating data from a mobile device. Once the field force buys in to the process of updating network data as part of their work, the corporate data becomes more accurate, reflecting the ‘as built’ network.

Benefits return to the NOC manager and VP of network operations in the form of more timely and accurate reports on the network, to better support management decisions.

Network and spatial data have been converging for some time, but with increasing scope. As Sherlock Holmes would say, "the game is afoot" now with deals such as the recent partnership of Nokia and Yahoo (http://news.yahoo.com/s/ap/20100524/ap_on_hi_te/us_tec_yahoo_nokia).

At Superna, we are developing plug-ins that spatially reference EMC Ionix network data. Here is a one-minute video that demonstrates how our Network Discovery Engine can spatially enable the alarm notification system of EMC Ionix and report results to a mobile device.




Virtualization highlights the value of location, location, location

May 18, 2010
By Andrew MacKay

Server virtualization has changed the way IT operates, and is now well established. The promise that virtualization can enable low-cost CPUs and storage in the Cloud, provisioned in seconds for any IT department on demand, seems too good to be true. I just returned from EMC World 2010 and got to witness the "everything in the Cloud" pitch based on virtualization technologies.

Aside from the aging Counting Crows concert put on by EMC, the most interesting announcement from the show was the VPLEX product line and "Access Data Anywhere" technology. This promises to allow application data typically residing in the SAN (storage area network), data which for years has been orphaned inside a fibre channel island, to be accessible by any application, regardless of physical location.

It’s about time fibre channel data was easier to network but a real challenge emerges: terabytes of information that never left the network are now flowing over network links without any predictability as to when or where it’s going. Today’s management applications are poorly designed to handle a network and application workload that is free to execute or run anywhere in the network. I see the old adage about “location, location, location” coming into play with such a dynamic IT environment. Management software will need to know where data is flowing, where it’s coming from, where it’s going, and over which network links. This will need to happen in real time, as the network has become too complex for humans to make sense of the data without some level of automation. Management software needs to make decisions and raise alarms and show trending based not only on devices but also location or locations.

Let's look at the example of "follow the moon" computing, where workloads in North America migrate to Europe at the end of the work day to take advantage of lower power costs, available CPU capacity and lower latency to end users, before moving on to Asia. Terabytes of information start moving and applications never stop running, with some executing in North America and some in flight to Europe or Asia. If something fails, where is the latest copy of the data? Where was the application actually running last? Where were the application and data going? Here again, location will be very important for answering these questions and leveraging Cloud computing's promise of lower TCO. The future of geospatially enabled management systems seems like a promising technology breakthrough.



Disaster Recovery Needs a Lifeline

May 18, 2010
By Michael Arno

Disaster recovery is more complex today than ever before. The introduction of virtualization into every layer of the OSI stack means the number of troubleshooting steps and devices in the data path has expanded tenfold. Clearly, efficient network management demands a new approach.

This point was made recently in the NetQoS blog, Network Performance Daily:

The main concern that the lack of visibility presents to enterprise IT shops is the idea that mission critical applications that performed fine before virtualization may perform poorly when virtualized, and the IT shop will have no way of being proactive in finding performance problems, nor will they have the tools they need to quickly find the root cause of the problem.

Mission-critical applications are generally replicated between data centers over a protocol like EMC's SRDF. The WAN used is typically WDM or SONET/SDH, with a more recent shift towards IP networks. I have seen customer networks where business-impacting issues take months to find and resolve to root cause. One constant I have witnessed is that large, complex applications and networks need accurate inventory and topology to help narrow down the search for a root-cause issue. The EMC Ionix product targets cross-domain fault correlation and topology, but it has gaps in device support. Targeted solutions for end-to-end path management still don't exist.

The industry has to move beyond the protection of devices on the network. Monitoring and securing the entire data path is necessary to achieve the next level of business continuity protection.



Map It! Why You Need Spatial Data for Better Network Management

May 10, 2010
By Michael Arno

How can you manage a network without access to spatial data? Badly!

Electrical networks have known this for a long time, but too many telecom network managers rely on logical diagrams and online spreadsheets. As a result, they are unable to visualize the ramifications of physical threats to the network, such as floods, fires, or traffic accidents. Nor can they fully determine the root cause of network outages that could be geographically correlated. It takes much longer to determine the affected equipment at a location and identify the nearest source of spares. When your outside plant infrastructure is visible on a street map or aerial photo, such relationships are easy to spot.

Adding the spatial dimension brings many benefits beyond root-cause analysis. In particular, it helps managers to work proactively, preventing problems and saving costs. For example, if you know the river is rising, your 3D spatial map identifies the affected area for a range of flood scenarios, and shows which equipment to protect or shut down before the waters are lapping at the door. As a result, you can dispatch crews to work more safely in dry conditions. If several alarms pop up in your management system, immediate access to a map of their location might show you the cause and identify other equipment at risk. You can also use network trace algorithms to identify upstream and downstream effects of local issues.
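
As a small illustration of that kind of check, here is a hypothetical snippet using the shapely geometry library: given geocoded equipment locations and a flood-scenario polygon, list the devices inside the affected area. The coordinates and device names are invented for the example.

```python
# Hypothetical sketch of the spatial check described above: given equipment
# coordinates and a flood-zone polygon, list which devices are at risk.

from shapely.geometry import Point, Polygon

# A made-up flood-scenario polygon (longitude, latitude pairs).
flood_zone = Polygon([(-75.70, 45.40), (-75.65, 45.40),
                      (-75.65, 45.43), (-75.70, 45.43)])

# Equipment locations, e.g. pulled from the NMS inventory and geocoded.
equipment = {
    "mux-ottawa-01": Point(-75.68, 45.41),
    "router-kanata-07": Point(-75.90, 45.31),
    "splice-case-223": Point(-75.66, 45.42),
}

at_risk = [name for name, loc in equipment.items() if flood_zone.contains(loc)]
print("Equipment inside the flood scenario area:", at_risk)
# A dispatcher could use this list to prioritize which sites to protect or power down.
```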

This figure shows an example of how a network event could be displayed with spatial context and additional planning data.

Fortunately, with sources such as Google Earth and the Shuttle Radar Topography Mission (SRTM), network managers now have easy access to spatial data for everywhere on Earth. They also have Web-based applications such as MapGuide (mapguide.osgeo.org) that are easy to use and extend for a wide range of map-handling purposes.

Network planners have been using spatial technologies for wireless applications and we are seeing the expansion to other network systems. Returning to network management, with today’s technology spatial data can be pushed from head office to mobile devices in the hands of maintenance staff on location. They can use the data to work more efficiently, and with a few simple editing tools they can add notes, photos, and corrections to the maps.

EMC Ionix has demonstrated a great capacity to integrate NMS data from heterogeneous systems. But the power of the suite could be vastly enhanced if it were spatially enabled, adding both geospatial data and algorithms for proximity analysis, network traces, and so on. Fortunately, such developments are underway. More about that later!



Network Security Gets Fuzzy in the Cloud

May 7, 2010
By Andrew MacKay

Network security is moving into a more advanced state, and many organizations are exploring new strategies to satisfy emerging requirements, such as application end-to-end security. We also see traditional storage applications like EMC’s SRDF or specific line-of-business applications that require different levels of security on a shared network.

A key weakness in the adoption of more widespread Cloud infrastructure is the extent to which the Cloud can be secured. The characteristics of this security are high speed (wire rate) and conformance to industry standards.



The responsiveness of the Cloud and the emergence of new elements such as the hypervisor are welcome, but they do not address the basic problem of transmitting data securely on an undefined, unsecured network. This is especially worrisome for enterprises that contract network services from a commercial service provider. In addition to sharing the network with other customers of the SP, they also share physical locations of virtual servers on the storage media maintained by the SP. The possibilities for threats and data contamination are multiplied and ever-changing in the Cloud.

One strategy that has worked well for us in today’s networks is a new model of encryption service. The SP maintains an encryption framework, while the enterprise customer retains control over the generation and distribution of security keys used in their data transmissions. This service model seems to meet the business interests of both parties.

The SP is most concerned with providing a reliable service to multiple customers, but does not want to get bogged down in the minutiae of key distribution. In fact, their network is probably more secure if the SP does not know the keys.

Meanwhile, the enterprise customer is free to determine the appropriate level of security for different data types, and knows what is happening with their keys at all times.
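
To make the key-ownership split concrete, here is an illustrative Python sketch (not any particular SP's framework) using the cryptography package's Fernet recipe: the customer generates and keeps the key, and the provider's network only ever carries ciphertext.

```python
# Illustrative sketch of the key-ownership split described above: the customer
# generates and holds the key, encrypts data before it enters the provider's
# network, and the provider only ever carries ciphertext.

from cryptography.fernet import Fernet

# Customer side: generate and keep the key; it is never shared with the provider.
customer_key = Fernet.generate_key()
cipher = Fernet(customer_key)

payload = b"replication block 42: confidential business data"
ciphertext = cipher.encrypt(payload)

# What the service provider's network transports and stores is only ciphertext.
print("provider sees:", ciphertext[:40], b"...")

# Customer side again (e.g. at the remote data center): decrypt with the same key.
assert cipher.decrypt(ciphertext) == payload
print("decrypted OK at the customer-controlled endpoint")
```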