Boundary punts freebie app monitoring from the clouds

Pockets $15m in VC cash, hooks into Engine Yard


Application monitoring tool provider Boundary, which only peddles its wares as a service running on a cloud, is offering potential customers a freebie version of the service to scare up some business.

The company is also rolling out a 2.0 release of the eponymous app monitoring service while at the same time partnering with platform cloud provider Engine Yard to integrate Boundary into its own cloud.

Gary Read, who came on board at Boundary earlier this year as CEO, is no stranger to cloudy tools. Read was founder and CEO of Nimsoft, which created a set of cloud-based tools aimed at helping service providers manage their systems and networks.

CA Technologies, formerly known as Computer Associates, acquired Nimsoft in March 2011 for a hefty sum of $350m, and the same sort of thing could happen a few years hence if Boundary takes off.

Boundary thinks that application and system monitoring is too complex for the kinds of static tools that were used in the old dot-com days, when IT environments themselves were static, Read tells El Reg.

"Cloud means the server configuration can change at any time, software-defined networking means the network configuration can change at any time, and agile development means that the code in an application can change at any time," explains Read. "Application monitoring is therefore an analytical problem, and you need to collect data continuously and run analytics continuously."

The Boundary service can take in all kinds of operational data from systems through agents, as well as taking in input from third-party monitoring tools, provisioning tools such as Chef or Puppet, and the APIs of cloud providers such as Amazon EC2, Rackspace Cloud, SoftLayer, and Google Compute Engine.

All of this data is pulled out of the network stack from all of these devices, beamed up to the Boundary service, and chewed on in real time to create an operational dashboard for system admins so they can see trouble before it gets out of hand.

You can put the Boundary agent on physical servers and babysit them, too, so long as they run Windows or Linux. The company has not seen demand for any Unix variants yet and so has not created an agent for Solaris, AIX, or HP-UX.

The Boundary service, which launched in early April, now has around 50 paying customers – a good start, but the company wants to grow the base. The code behind the Boundary service is not open source, and is not going to be, but the company can give away access to the service for tire kickers and modest users, and that is precisely what it has done with the Boundary 2.0 release announced this week.

Screen shot of Boundary monitoring apps

With the freebie Boundary service, all of the bells and whistles work, but the company puts some limitations on it so you don't get carried away and drive up Boundary's own cloud costs. The service is hosted at an Equinix data center in Ashburn, Virginia, like Amazon's EC2 region on the East Coast of the US.

Customers using the freebie version can accumulate 2GB of operational data per day, and they can store up to one month of data. Depending on the nature of your application servers, Read says that this is enough capacity to monitor maybe 15 to 20 application servers running in your data center or in a cloud (or a mix of the two).

If you want the paid version of the service, with full tech support and the ability to store that 2GB of daily operational data for as long as a year, you pay $199 per month; if you want to store 5GB of data per day, it is $395 per month.

Engine Yard, the platform cloud provider, is partnering with Boundary to offer the service to monitor applications running on its cloud as part of the 2.0 release. There are also predefined integrations for Amazon EC2 and Rackspace Cloud as well as the ability to suck in RSS-based status alerts from devops tools from New Relic, Splunk, and Papertrail.
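Ingesting RSS-based status alerts of the kind Boundary pulls from tools like Papertrail is straightforward plumbing. Here is a minimal sketch in Python using only the standard library; the feed contents and field names are illustrative assumptions, not Boundary's actual integration code.

```python
# Parse status alerts out of an RSS 2.0 feed. The sample feed below is
# made up for illustration; a real integration would fetch the XML over
# HTTP from the monitoring tool's status-feed URL.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Status alerts</title>
    <item>
      <title>High error rate on web-01</title>
      <pubDate>Mon, 02 Jul 2012 10:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Disk nearly full on db-03</title>
      <pubDate>Mon, 02 Jul 2012 10:05:00 GMT</pubDate>
    </item>
  </channel>
</rss>"""

def parse_alerts(feed_xml):
    """Return a list of (title, timestamp) tuples, one per feed item."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("pubDate"))
            for item in root.iter("item")]

alerts = parse_alerts(SAMPLE_FEED)
```

A monitoring service would poll such a feed on a schedule, de-duplicate items it has already seen, and fold the resulting events into its dashboard alongside data from its own agents.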

Boundary raised $4m in its Series A round back in January 2011, and in late July raised another $15m to help it build up the business. ®
