Posts

Showing posts from September, 2017

What edge computing means for datacenters

There are 8.4 billion connected ‘things’ in use in 2017, according to tech researchers Gartner – up 31% on 2016. In response, over 80% of IT teams want their datacenters to be more reliable and available to handle the pace of growth in IoT. By moving processing to the edge of the network, organizations can combat latency by reducing traffic on the primary network, enabling faster, more efficient decision-making. So, what is edge computing, and how can CIOs ensure their datacenters are ready?

Edge computing vs the Cloud

Edge computing is where data processing takes place at the edge of a network instead of in the Cloud or a centralized datacenter. Edge devices capture streaming data that can be used to prevent failures, optimize performance, and deal with hardware defects without delay. For instance, the device could be a smartphone collecting data from other devices before sending it on to the Cloud. In reality, edge computing and the Cloud regularly work in tandem…
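As an illustrative sketch (not taken from the post itself), the hypothetical Python snippet below shows the pattern described above: an edge device aggregates raw sensor readings locally and forwards only a compact summary plus any anomalies to the Cloud, so most of the traffic never touches the primary network. The names and threshold are assumptions for illustration only.

```python
# Illustrative sketch of edge-side processing: aggregate locally,
# send only a compact payload (summary + anomalies) to the Cloud.
from statistics import mean

TEMP_LIMIT_C = 75.0  # hypothetical anomaly threshold


def process_at_edge(readings):
    """Reduce raw readings to the small payload worth sending upstream."""
    anomalies = [r for r in readings if r > TEMP_LIMIT_C]
    summary = {
        "count": len(readings),
        "mean_temp_c": round(mean(readings), 2),
        "max_temp_c": max(readings),
    }
    return {"summary": summary, "anomalies": anomalies}


if __name__ == "__main__":
    raw = [68.2, 69.1, 70.4, 76.3, 69.8]  # e.g. readings gathered from nearby devices
    print(process_at_edge(raw))  # only this small payload would cross the network
```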

For the win: Why gamification and game-based learning are the future

A quick Google search for the terms ‘gamification’ and ‘game-based learning’ turns up a surprising lack of clarity about what these terms mean – and, crucially, whether they work. Both are real buzzwords of the moment, and they share the common ground of applying the mechanics of ‘gaming’ to enhance learning or to encourage changes in behavior. Used in everything from marketing strategy to HR policies, ‘gamified systems’ have been shown by various studies to deliver benefits. IT staff are perhaps closer than most to the technology that’s shaping how game design and game interaction are revolutionizing professional education. So, what is it, and can these concepts help to cut through some of the current problems datacenter managers and CIOs face when learning new systems and processes?

What’s the difference between gamification and game-based learning?

‘Gamification is using game-based mechanics, aesthetics, and game-thinking to engage people, motivate action, promote learning, and solve problems’…

Preparing for the worst: Simulating incidents and outages

Thanks to our increasing dependence on technology and the growth of Big Data and the Internet of Things, the reliability of hardware and software is more crucial than ever. Together, they form the foundation for an organization’s Line of Business (LOB) applications and provide the critical stability needed to support ‘always on’ physical, cloud and Edge environments. So, what do the current risks look like, and how can we learn to mitigate them?

Hardware is getting more reliable

As reliance increases, server hardware and operating systems are getting more robust, as a recent survey by consultants ITIC testifies. They found that market-leading IBM z Systems Enterprise servers had just eight seconds of ‘blink and you’ll miss it’ downtime a month on average. Hardware may be getting more resilient, but that doesn’t mean that datacenter outages are falling – and, with our ever-increasing reliance on connectivity, their effects are getting more catastrophic. Denial of Service (DoS)…
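To put that downtime figure in perspective, here is a quick back-of-the-envelope calculation (ours, not the survey’s): eight seconds of downtime in a 30-day month corresponds to roughly 99.9997% availability.

```python
# Back-of-the-envelope availability from the reported monthly downtime.
# Assumes a 30-day month; the exact figure shifts slightly with month length.
downtime_seconds = 8                    # reported average downtime per month
month_seconds = 30 * 24 * 60 * 60       # 2,592,000 seconds in a 30-day month

availability = 1 - downtime_seconds / month_seconds
print(f"Availability: {availability:.5%}")  # -> Availability: 99.99969%
```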