Technology and business goals collide at the intersection of page load time and conversion rate.

Marketing wants a fully featured page, with lots of images and services to track user behavior.

Page load time has a huge effect on conversion rate and customer happiness:

  • Half of all customers expect a webpage to load in under 2 seconds. If it doesn’t, they lose trust in and patience with the site, and click the back button to navigate to the next search result on Google.
  • User expectations have risen sharply. SOASTA ran an experiment between 2014 and 2015, looking at conversion rates based on page load time. In 2015 they saw a peak conversion rate for sites loading in 2.4 seconds, 30% faster than the peak conversion load time in 2014.
  • Cedexis found that decreasing load time by just 1 second improves conversion rate by an average of 27.3%, where a conversion is defined as a purchase, download or sign-up.

The necessity of keeping page load time low for a good customer experience means that tech teams need to exercise every option available to them for performance. Effective caching techniques can bring improvements to even the leanest of websites.

Why optimize caching?

Caching is the process of storing copies of resources so that clients can retrieve them from memory without putting strain on the main server. Utilizing caches has three main benefits.

First of all, caching can make web pages load faster, especially if the user has visited before. If you utilize caching to distribute content globally, visitors will see a reduction in latency, which is the time it takes for a request to physically travel from their browser through the network to the server and back again. If your page is cached locally on the user’s browser, they don’t need to download every resource from your server, every time.

Secondly, caching reduces the amount of bandwidth needed. Instead of the server being responsible for delivering resources for every request, it only needs to deliver new content. Everything else can be returned from a cache along the network.

Finally, caching increases how robust your site is. A client can retrieve resources from a cache, even if your server is down or experiencing high traffic. A plan for preparing for volume spikes should include a caching strategy.

Levels of caching

Caching can happen at lots of different checkpoints along the network, right from the browser to the server itself. Every checkpoint has different benefits and challenges associated with it.

Let’s start with the caching options closest to the end user, then move up the chain to the server where the resource being retrieved originates from.

Browser caching – imagine scrolling through the search results on an online shop. You click on an image link to load the product page, decide it’s not quite right, and hit the back button. If your browser had to request the entire search page again, you’d have to wait for all the images to be downloaded to your browser for a second time. Fortunately, browsers use memory to store a version of sites they’ve already visited. Instead of going all the way to the server and back again, your browser just pulls up the version it’s already stored for you. It will also do this for constant pieces of your site, like your logo, for example.

Proxy cache (Web Server Accelerator) – caches can also be shared between many users. ISPs use caches to reduce bandwidth requirements by sharing resources. That way, if one user has already requested a static resource (like an image or file), the ISP doesn’t need to request it again from the server – it can serve it instantly.

Content Delivery Network (CDN) – remember how distance between user and server affects load time? CDNs are caches designed to reduce latency by distributing copies of cached files to local servers all over the world. When a user requests a resource, they are connected to their local CDN. Companies with international users should consider using a CDN to reduce latency.

Server-side caching / reverse proxy – if most of your content is static, you can cache it yourself, so customers won’t need to hit your server to load static content. There are several tools that do this for you – Redis, Varnish, and PHP-FPM are all popular options.

Database caching – database servers are often separated from the rest of the server infrastructure. This means that when your server receives a request from a user, it often needs to request something extra from the database. If a frequent request always returns the same result, you can cache it in a database cache. This prevents the database from crunching the same request over and over again, resulting in better performance, even during busy periods. Search servers for ecommerce sites also return cacheable queries.
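To make the idea concrete, here is a minimal sketch of a query cache using the redis-py client. The `db.query_top_products()` call and the key name are hypothetical stand-ins for whatever expensive query your application repeats; any cache with a get/set-with-expiry interface works the same way.

```python
import json
import redis  # redis-py client; assumes a Redis instance on localhost

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL = 300  # seconds before the cached result is considered stale

def get_top_products(db):
    """Return the top-products list, serving it from Redis when possible."""
    cached = cache.get("top_products")   # cache hit: skip the database entirely
    if cached is not None:
        return json.loads(cached)

    result = db.query_top_products()     # hypothetical expensive query
    cache.setex("top_products", CACHE_TTL, json.dumps(result))
    return result
```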

When should you optimize caching?

“I’m not lazy, I’m just efficient” – ever heard that before? Well, think of your servers as the absolute laziest pieces of hardware you own. Never ask them to do something time-consuming twice if there’s a way for them to hold onto the results in a cache down the line.

For example, you sell jewelry online and one of your top link destinations is a list featuring the 20 most popular items. If you didn’t utilize caching, every time a visitor clicked on that link, they’d need to send a new request through their ISP to your server, which would ask the database to calculate the top 20 items and then send back each of the corresponding images and prices. But realistically, you don’t need to compute this full page every time it’s requested. The top 20 items don’t change often enough to require real-time results. Instead, cache the page in a reverse proxy – located in the same country as the customer – and deliver it much faster.

When you start optimizing your caching strategy, a good place to begin is by identifying the most frequently requested and heaviest resources first. You’ll get the biggest benefit from focusing on caching improvements for pages that are resource heavy and requested often. Looking at the waterfall diagram on the Network tab of your browser can help identify resource-intensive modules on the page.

Time To First Byte (TTFB) is a good way to measure the responsiveness of your web server. Improving your caching strategy through reverse proxies, CDNs and compression will help customers experience shorter TTFB, and help your website feel snappier.
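If you want a rough feel for your own TTFB before and after caching changes, one quick way is to time how long the first byte of the response body takes to arrive. The sketch below uses the third-party requests package and a placeholder URL; browser dev tools and dedicated monitoring will give you more precise numbers.

```python
import time
import requests  # assumes the requests package is installed

def time_to_first_byte(url):
    """Rough TTFB estimate: time from sending the request to the first body byte."""
    start = time.perf_counter()
    response = requests.get(url, stream=True)   # stream so the body isn't pre-read
    next(response.iter_content(chunk_size=1))   # block until the first byte arrives
    return time.perf_counter() - start

print(f"TTFB: {time_to_first_byte('https://example.com/'):.3f}s")
```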

However, don’t forget that most customers will have a poorer experience than that seen in testing. They might, for example, be located on the opposite side of the world using a mobile device or an older computer. By utilizing caching best practices, you’ll ensure customers have a great experience, no matter where they are.

When you need to refresh your data

Because we work in a world where everything is frequently updated, it’s important to understand the different methods we have of forcing a cache reset. There are a few ways we can force browsers and other caches to retrieve a fresh copy of data straight from the server.

  1. Set an expiration date – when the site doesn’t need to stay perfectly up to date in real time, but does need to stay reasonably fresh. If you set an expiration date in the response headers (via Expires or Cache-Control: max-age), the browser will dump the cache after that time. If the resource is requested again, a fresh copy will be retrieved.
  2. Use If-Modified-Since – the client sends an If-Modified-Since header, and the server returns the full resource only if it has changed since that date. Otherwise, instead of sending everything again, the server can send back a short 304 response without a body, saving bandwidth and time (see the sketch after this list).
  3. Clear specific module – you don’t need to refresh your entire blog cache just to display a new comment. Segmenting a page into different modules can help with cache refreshes.
  4. Fingerprinting – caches work by storing and retrieving a specific file when requested. If you change the name of the file, the cache won’t be able to find the file, and the new copy will be downloaded. This is how fingerprinting works to keep assets up to date. By attaching a unique series of characters to the filename, each asset is considered a new file and requested from the server. Because the content is updated every time, you can set an expiration date years in the future and never worry about a stale cache. Many compilers will automatically fingerprint assets for you, so you can keep the same base filename.
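The first two mechanisms map directly onto standard HTTP headers. Below is a minimal, framework-free sketch of how a server might set Cache-Control/Expires and answer an If-Modified-Since request with a 304; the handler shape and the hard-coded last-modified date are illustrative only.

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime, parsedate_to_datetime

RESOURCE_LAST_MODIFIED = datetime(2023, 1, 1, tzinfo=timezone.utc)  # example value

def build_response(request_headers):
    """Return (status, headers, body) honoring Expires and If-Modified-Since."""
    headers = {
        "Cache-Control": "max-age=3600",  # let caches reuse this for an hour
        "Expires": format_datetime(datetime.now(timezone.utc) + timedelta(hours=1), usegmt=True),
        "Last-Modified": format_datetime(RESOURCE_LAST_MODIFIED, usegmt=True),
    }

    ims = request_headers.get("If-Modified-Since")
    if ims and parsedate_to_datetime(ims) >= RESOURCE_LAST_MODIFIED:
        return 304, headers, b""          # nothing changed: empty 304 saves bandwidth

    return 200, headers, b"<html>...full page...</html>"
```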

Don’t forget that a cache is not long-term storage! If you decide to cache something for later, you might find that it’s been invalidated and you need to retrieve the resource again.

Making caching work for you

Determining the perfect solution for your site can be difficult. Rely too much on caching and you might find users seeing outdated content or running into memory trouble in their browsers. Ignore caching entirely and you’ll see page load times increase and user experience suffer.

By understanding your users’ needs, you can create a great experience from the beginning. If caching is important, it’s worth using a framework that provides out-of-the-box caching optimization. If caching matters less because accuracy is more important than speed, you can plan accordingly.

Caching strategy is a problem to be solved uniquely for each app. Determining where you can utilize caching to save bandwidth is an ongoing learning experience. Keep making incremental improvements and keep it light for your customers.

The future of the world—and the world of work—was on display at TEDxLugano in Switzerland earlier in September. With a theme of “Professions of the Future,” the topics of the independently organized TED event ranged from the future of robots in harsh environments to the practical realities of an AI-first world.

Beyond Artificial Intelligence

The event was a first-hand look at some of the future implications of artificial intelligence and robotics from some of the leading thinkers in the field. Each presentation added context and nuance to a topic area that often has people jumping to worst-case scenarios reminiscent of a high-budget science fiction movie.

Indeed, when it comes to AI, our culture seems to be in need of a reality check. AI isn’t necessarily the technological advancement that’s going to save the world from all its ills, but neither is it certain to be the “end of the human race,” as Stephen Hawking has famously said. Examining some of the main topics that came out of this TEDx event can help give a more granular and realistic vision of the stunning developments in this field.

Robot Intuition May Save Lives

Anna Valente, a professor in automation, robotics and machines, gave a stunning talk on intuitive robotics, which has the potential to aid human endeavors in myriad ways. Often, when we think about the growing sophistication of the robotics field, people get panicky and fear the worst: that robots will become smarter than humans and surpass our capacities. However, there is a massive middle ground between where we are now and robots actually taking over the world. Take, for example, the idea that Valente presented: using intuitive robots in harsh environments where human life may be at risk. Creating robots that could intervene in natural disasters or conflict zones without putting human life at risk would be a helpful deployment of this field. As Valente put it, intuitive robotics can help humans “transcend the danger and amplify our ultimate senses” and unlock profound possibilities that both enhance human capabilities and save human lives.

Life After Death

Attendees were excited to hear from Henrique Jorge, a software developer and entrepreneur who is best-known for founding the social network ETER9, which uses AI as part of its core. Currently in beta, ETER9 learns the habits of its users through observing their social media posting patterns. Then, when users aren’t online, AI will continue to post in the user’s absence, amounting to a so-called Counterpart, which is a “virtual self that will stay in the system and interact with the world just like you would if you were present.” The more a user interacts on the network by posting and commenting, the more this Counterpart learns. While this might sound disconcerting to some, the goal is not supplanting human communication for AI. Rather, in his talk Jorge proposed an exciting future where AI could be used to keep the presence of individuals alive long after they’ve passed away. This concept of “digital immortality” through the merger of machines and human consciousness is just one of the exciting and nuanced ways that AI is being used.

Added Value

Another topic was presented by robotics engineer Wyatt Newman. Newman explained how “robotics, automation and Artificial Intelligence are continuously transforming the working world,” which means that the skills and jobs we will need people to be adept at in the future could look very different from what we need today. While much has been written about this hypothesis in the media, most of it fuels fear of the future rather than fostering an attitude of possibility. As Newman said in his talk: “The robot revolution is here, but is not to be feared. It is our hope for the future.” Through tactics like smart automation, finding new and more nuanced roles for human workers and using robotics for tasks that humans can’t fulfill, we can find a space for robots in the future world of work that does not eclipse our human world.

Imagine More…

A highlight speaker of the event was computer scientist and inventor Jamil El-Imad, whose interests span Virtual Reality (VR), brain signal analysis, Big Data and Brain Computer Interfaces (BCI). At the event, he debuted his “dream machine,” which helps users harness mindfulness to reach peak human experience. As El-Imad intoned in his talk: “Imagine you are in Zurich, your friends are in different countries and you all wish to go to a motor racing or football event together …. technology is changing our live experience” in ways we couldn’t have imagined just years ago.

The work of the TEDxLugano speakers showcases the ways that the fields of AI and robots are not things to be feared, but rather celebrated and explored, as the human race progresses and evolves into the future.


Nearly every industry today is abuzz with the promises of artificial intelligence. AI’s applications span multiple types of business, touting the benefits of automation and machine learning and – more broadly speaking – intrinsically transforming the way the world of enterprise works.

However, there’s a little niggle in the nomenclature that plagues AI when we apply it to marketing: artificial. If AI is the marketing revolution we’ve all been waiting for, how can we transform it from something artificial into real, actionable ways of improving the way our company’s marketing works?

Fear not, there’s hope. Amidst the hype of artificial intelligence in marketing is a solid foundation of solutions that not only preach the benefits of AI, but actually deliver. Let’s line up the most promising of these solutions, and determine which of them could be a real-life game changer for the business world.

 

Artificial Intelligence and Written Content

This is an area of AI that will immediately raise red flags for some. How can a computer create a thoughtful, informed, innovative piece of written content that rivals that of a human being?

Simply put, it can’t. AI is years, probably centuries, away from being able to develop engaging, thought-leadership content for a human audience.

Yet all is not lost: if it’s fact-based content you’re after, this is where AI has been earning its stripes lately and in a big way. Companies like Narrative Science (makers of Quill) and Automated Insights have developed technologies that take a set of data surrounding a particular subject and, using a proprietary algorithm built with a specific set of vocabulary, produce natural-sounding written content. Both Narrative Science and Automated Insights have been around for a few years, but they’ve spent their time refining their systems to their current states. And we have to say, it’s pretty darn good.

For example, Automated Insights’ AI technology was used to produce this story about a Major League Baseball game. Here’s a snippet:

Cristian Alvarado tossed a one-hit shutout and Yermin Mercedes homered and had two hits, driving in two, as the Delmarva Shorebirds topped the Greensboro Grasshoppers 6-0 in the second game of a doubleheader on Wednesday.

In a fraction of the time it would have taken a human to write this content, Automated Insights’ artificial intelligence wrote an article of the same quality as, if not better than, one written by a human sports reporter.

Marketing AI like that from Automated Insights and Narrative Science offers huge potential benefits for companies. That’s not to say that machines are necessarily better at producing written content, but rather that human workers’ time should be spent in areas where technology is currently – and perhaps will forever be – lacking: the creation of ideas, opinions, innovation and new ways of thinking, the Achilles’ heel of computers. Humans are also more expensive and slower than machines, and, the lessons of George Orwell’s 1984 notwithstanding, there are parts of content creation that can be left to machines. When written content is merely a regurgitation of a set of facts, marketing AI can save both time and money for companies.

Another way AI in content offers huge potential benefits is in SEO. A pillar of SEO strategy is to post relevant, fresh, keyword-rich content on a regular basis. AI marketing technology uses data from recent news stories and weaves it into an article while employing a weighted set of keywords focused on SEO strategy, which can then be posted directly to a website. AI therefore gives businesses a constant stream of website content that the bots at Google are sure to love.

Machine Learning and the Sales Funnel

Discussing the ins, outs, ups, downs, and everything in between of the sales funnel is the bane of many marketing and sales teams’ existence. But what if marketing AI could help these teams navigate their sales funnel – say, for example, by predicting the likelihood of a given lead converting to a sale and estimating how much the sale would be worth, based solely on the lead’s behavior? Or if you could predict that one of your current customers was about to start spending more or less before even they know it? For the right type of business, machine learning with predictive analytics could be the most significant game changer to date.

Big data has been available for quite some time now, but we’ve been waiting for the software tools that will allow us to utilize this data. And that’s where predictive analytics comes in. The data set provided for machine learning is the key determinant of how well it can work: machine learning is used to create propensity models that assess a given lead’s behavior and, depending on the specificity of the data set on which it’s built, determine how likely it is for that lead to become a sale. From there, human resources from the sales team can either be allocated to the lead in order to nurture it through to a sale, or the lead can be abandoned so that the team doesn’t have to spend time chasing a prospect who is unlikely to convert.
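As a toy illustration of the propensity-model idea, the sketch below fits a logistic regression on a handful of made-up lead features and scores a new lead. The feature names and numbers are invented; a production model would be trained on far more data and richer behavioral signals.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical leads: [pages_viewed, emails_opened, days_since_contact]
X_train = np.array([[12, 5, 1], [3, 0, 30], [8, 2, 7], [1, 0, 60], [15, 6, 2]])
y_train = np.array([1, 0, 1, 0, 1])  # 1 = converted to a sale, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

new_lead = np.array([[10, 4, 3]])
propensity = model.predict_proba(new_lead)[0, 1]  # probability of conversion
print(f"Conversion propensity: {propensity:.2f}")
```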

One big player in predictive analytics is IBM, which has two main tools – SPSS Statistics and SPSS Modeler – designed to have a large breadth of applications in multiple parts of a large enterprise. SPSS Statistics is optimized for managing large data sets and producing advanced analytics models for a company’s broader marketing plan, whereas SPSS Modeler is action-focused, building models that help businesses make important decisions or realign their primary foci. Another company that offers predictive marketing analytics is Optimove, whose approach is somewhat more customer-friendly and easier to understand, and better suited to SMEs.

Dynamic Pricing

This is another area where predictive analytics can pay dividends for a marketing team. Propensity models are created to track a given customer’s behavior in relation to their likelihood to convert, as we discussed above, but dynamic pricing introduces an additional opportunity to convert. Dynamic pricing offers a product at a discounted rate when the propensity model predicts a discount is necessary to get the sale. By doing this, only some customers are offered a product at a lower cost, increasing the overall profit for the company by not offering the discount to everyone, while maintaining a high rate of conversion among customers who would otherwise have moved on to a competitor. It’s a bit like those website pop-ups that are triggered when a user begins to move the mouse toward the back or close button, presenting a desired CTA like a newsletter sign-up or discount. Dynamic pricing goes a step further and bases the incentive on a more specific set of behaviors, increasing the overall probability of a sale.
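Continuing the toy example above, dynamic pricing can be reduced to a rule that consults the propensity score before quoting a price. The threshold and discount below are arbitrary placeholders; real systems tune them against margin and conversion data.

```python
def quoted_price(list_price: float, propensity: float,
                 threshold: float = 0.35, discount: float = 0.10) -> float:
    """Offer a discount only when the model predicts the lead is unlikely to buy."""
    if propensity < threshold:            # at-risk lead: sweeten the deal
        return round(list_price * (1 - discount), 2)
    return list_price                     # likely buyer: keep full margin

print(quoted_price(100.0, propensity=0.22))  # -> 90.0
print(quoted_price(100.0, propensity=0.80))  # -> 100.0
```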

IBM is also a big player in this space, though again it is focused on enterprise-level businesses. For companies with both brick-and-mortar stores and an ecommerce setup, dynamic pricing can coordinate prices across all touchpoints, influencing both online prices and physical stores. For companies that are publicly listed, it can also take market fluctuations into account when adjusting pricing. Omnia is a company that connects pricing across a network of sales locations, both online and offline, an approach it calls “omni-channel profit”.

Chatbots

Chatbots are another product of marketing and machine learning designed to improve the efficiency with which your customer service team operates. However, they’re not as expensive or difficult to create as one might assume, and as such are available for many businesses to take advantage of, even without a massive budget.

Facebook has developed an easy-to-use development feature to help businesses create their own chatbots on the Facebook Messenger platform. The chatbots created can send interactive and engaging CTAs to customers along with text and images, and stay on-brand with a custom welcome screen and an invitation to start a conversation.

Even if your business just uses a chatbot as a gatekeeper to channel customer inquiries to the appropriate team member, you’re offering your customers a faster, more efficient quality of service while freeing valuable human resources for the places where they will provide the most value for your business.

Semantic SEO

This is one area of AI that you’ve probably already noticed, although its application to digital marketing is somewhat out of the box. Possibly one of the most impressive abilities of search engines like Google and Bing is their ability to use AI to determine what you are attempting to search for, even when your search term isn’t an exact match for your intended results. When it comes to marketing, this is a game changer for SEO: soon it will not be enough to optimize pages with keywords alone, but the focus will have to shift to writing about topics more broadly, including both keywords in multiple related formats (“AI in marketing”, “market with AI”, “how to use AI in your marketing”) as well as related phrases and ideas (“machine learning and marketing”, “AI and the sales funnel”, “how to automate sales conversions”). Weaving these tactics into your content will give you a better shot at making page one as artificial intelligence solidifies its place in search engines.

* * *

At the core of all these types of artificial intelligence is an easier, more intelligent way to do work. Marketing will never be an entirely computer-driven task: humans will always be required to read the subtle nuances of the market and adapt accordingly. However, many of the processes that are part of a marketing department’s daily routine can be automated using fewer human resources than before, freeing up those resources to perform human-only tasks. We’re starting to discover that AI is not as artificial as we previously thought, and that real results can be delivered today.


Does your technology stack help your business thrive? Can a better server infrastructure enable improved business decisions? As AI and big data continue to find their way into our businesses, the technology driving our strategies needs to keep up. Companies that embrace AI capabilities will have a huge edge over firms unable to take advantage of it.

In this post we take a look at Facebook’s latest upgrade, Big Basin, to understand how some of the biggest tech giants are preparing for the onslaught of AI and big data. By preparing our server infrastructure to handle the need for more processing power and better storage, we can make sure our organizations stay in the lead.

Facebook’s New Server Upgrade

Earlier this year Facebook introduced its latest server hardware upgrade, Big Basin. This GPU-powered hardware system replaces Big Sur, Facebook’s first system dedicated to machine learning and AI, introduced in 2015. Big Basin is designed to train neural models that are 30% larger, so Facebook can experiment faster and more efficiently. This is achieved through greater arithmetic throughput and a memory increase from 12GB to 16GB.

One major feature of Big Basin is the modularity of each component. This allows new technologies to be added without a complete redesign. Each component can be scaled independently depending on the needs of the business. This modularity also makes servicing and repairs more efficient, requiring less downtime overall.

Why does Facebook continue to invest in fast multi-GPU servers? Because it understands that the business depends on it. Without top-of-the-line hardware, Facebook can’t continue to lead the market in AI and Big Data. Let’s dive into each of these areas separately to see how they apply to your business.

 

Artificial Intelligence

Facebook’s Big Basin server was designed with AI in mind. It makes complete sense when you look at the company’s AI-first business strategy. Translations, image searching and recommendation engines all rely on AI technology to enhance the user experience. But you don’t have to be Facebook to see the benefit of using AI for business.

Companies are turning to AI to assist data scientists in identifying trends and recommending strategies for the company to focus on. Technology like Idiomatic can crunch through a huge number of unsorted customer conversations to pull out useful quantitative data. Unlocking the knowledge that lives in unstructured conversations with customers can empower the Voice of the Customer team to make strong product decisions. PwC uses AI to model complex financial situations and identify future opportunities for each customer. They can look at current customer behavior and determine how each segment feels about using insurance and investment products, and how that changes over time. Amazon Web Services uses machine learning to predict future capacity needs. A 2015 study suggested that 25% of companies were already using AI, or would be within the next year, to enable better business decision making.

But all of this relies on the technological ability to enable AI in your organization. What does that mean in practice? Essentially, top-of-the-line GPUs. For simulations that require the same data or algorithm run over and over again, GPUs far exceed the capabilities of CPU computing. While CPUs handle the majority of the code, sending any code that requires parallel computation to GPUs massively improves speed. AI requires computers to run simulations many, many times over, similar to password-cracking algorithms. Because the simulations are very similar, you can tweak each variable slightly and take advantage of the GPU’s shared memory to run many more simulations much faster. This is why Big Basin is a GPU-based hardware system – it’s designed to crunch enormous amounts of data to power Facebook’s AI systems.
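The advantage comes from data parallelism: applying the same arithmetic to a huge array in one sweep rather than element by element. The CPU-only sketch below illustrates that principle with NumPy vectorization; on GPU hardware, libraries such as CuPy or frameworks like PyTorch apply the same array-at-a-time model across far more parallel units.

```python
import time
import numpy as np

data = np.random.rand(2_000_000)

# Element-by-element loop: the kind of serial work a single CPU core does step by step.
start = time.perf_counter()
loop_result = sum(x * 2.5 + 1.0 for x in data)
loop_time = time.perf_counter() - start

# Vectorized form: one data-parallel operation over the whole array.
start = time.perf_counter()
vector_result = float(np.sum(data * 2.5 + 1.0))
vector_time = time.perf_counter() - start

print(f"loop: {loop_time:.2f}s  vectorized: {vector_time:.3f}s")
```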

Processing speed is especially important for deep learning and AI because of the need for iteration. As engineers see the results of experiments, they make adjustments and learn from mistakes. If the processing is too slow, a deep-learning approach can become disheartening. Improvement is slow, a return on investment seems far away and engineers don’t gain practical experience as quickly, all of which can drastically impact business strategy. Say you have a few hypotheses that you want to test when building your neural network. If you aren’t using top-quality GPUs, you’ll have to wait a long time between testing each hypothesis, which can draw out development for weeks or months. It’s worth the investment in fast GPUs.

Big Data

Data can come from anywhere. Your Internet-of-Things toaster, social media feeds, purchasing trends or attention-tracking advertisements are all generating data at a far higher rate than we’ve ever seen before. One recent estimate is that digital data created worldwide will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. A zettabyte of data is equal to about 250 billion DVDs, and this growth is coming from everywhere. For example, a Ford GT generates about 100GB of data per hour.

The ability to make this influx of data work for you depends on your server infrastructure. Even if you’re collecting massive amounts of data, it’s not worth anything if you can’t analyze it, and quickly. This is where big data relies on technology. Facebook uses big data to drive its leading ad-tech platform, making advertisements hyper targeted.

As our data storage needs expand to handle Big Data, we need to keep two things in mind: accessibility and compatibility. Without a strong strategy, data can become fragmented across multiple servers, regions and formats. This makes it incredibly difficult to form any conclusive analysis.

Just as AI relies on high GPU computing power to run neural network processing, Big Data relies on quick storage and transport systems to retrieve and analyze data. Modular systems tend to scale well and also allow devops teams to work on each component separately, leading to more flexibility. Because so much data has to be shuttled back and forth, investing in secure 10 gigabit connections will make sure your operation has the power and security to last. These requirements echo the classic three Vs of big data: volume (storage capacity), velocity (rapid retrieval and transfer), and variety (the many sources and formats data arrives in).

Big data and AI work together to superpower your strategy teams. But to function well, your data needs to be accessible and your servers need to be flexible enough to handle AI improvements as fast as they come. Which, it turns out, is pretty quickly.

What This Means For Your Business

Poor server infrastructure should never be the reason your team doesn’t jump on opportunities that come their way. If Facebook’s AI team wasn’t able to “move fast and break things” because their tools couldn’t keep up with neural network processing demands, they wouldn’t be where they are today.

As AI and Big Data continue to dominate the business landscape, server infrastructure needs to stay flexible and scalable. We have to adopt new technology quickly, and we need to be able to scale existing components to keep up with ever increasing data collection requirements. Clayton Christensen recently tweeted, “Any strategy is (at best) only temporarily correct.” When strategy changes on a dime, your technology stack had better keep up.

Facebook open-sources all of its hardware design specifications, so head on over and check them out if you’re looking for ways to stay flexible and ready for the next big business advantage.


Server rooms have been an integral part of IT departments for decades. These restricted-access rooms are usually hidden away in the bowels of a building, pulsing to the rhythm of spinning hard drives and air conditioning systems.

It’s a measure of the internet’s impact on computer networks and website hosting that cloud servers are becoming the norm rather than the exception. Databases and directories are hosted by a third party organization in a dedicated data center – effectively a giant offsite server room. Rather than each company requiring its own cluster of RAID disks and security/fire protection infrastructure, multiple clients can be serviced from one location to achieve huge economies of scale.

Even though 100TB is renowned for the quality of our cloud server hosting services, we recognize that this option isn’t for everyone. In this article, we look at the pros and cons of cloud servers, offering you a guide to determine whether it represents the optimal choice for your business. After all, those server rooms haven’t been rendered completely obsolete yet…

What is cloud server hosting?

Before we explore the advantages and disadvantages of this model, let’s take a moment to consider how it actually works. As an example, the servers powering 100TB’s infrastructure are based in 26 data centers around the world. Having a local center minimizes the time information takes to travel between a server and a user in that country or region, since every node and relay fractionally adds to the transfer time. Delays of 50 milliseconds might not be significant for a bulletin board, but they could be critical for a new streaming service. Irrespective of data request volumes, web pages and other hosted content should be instantly – and constantly – accessible.
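A simple way to see this effect for yourself is to time how long a TCP connection takes to each candidate region. The sketch below uses placeholder hostnames; swap in the endpoints you are actually evaluating, and treat connect time only as a rough proxy for round-trip latency.

```python
import socket
import time

# Placeholder endpoints: substitute the data-center hostnames you are comparing.
ENDPOINTS = ["us-east.example.com", "eu-west.example.com", "ap-south.example.com"]

def connect_time(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Rough latency proxy: time to open a TCP connection to the host."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start

for host in ENDPOINTS:
    try:
        print(f"{host}: {connect_time(host) * 1000:.0f} ms")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```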

There are two types of cloud hosting, whose merits and drawbacks are considered below:

  1. Managed cloud. As the name suggests, managed hosting includes maintenance and technical support. Servers can be shared between several clients with modest technical requirements to reduce costs, with tech support always on hand.
  2. Unmanaged cloud. A third party provides hardware infrastructure like disks and bandwidth, while the client supervises software updates and security issues. It’s basically the online equivalent of having a new server room filled with bare hardware.

The advantages of cloud server hosting

The first advantage of using the cloud, and perhaps the most significant, is being able to delegate technical responsibility to a qualified third party. Even by the standards of the IT sector, networks are laced with technical terminology and require regular maintenance to protect them against evolving security flaws. Outsourcing web hosting and database management liberates you from jargon-busting, allowing you to concentrate on core competencies such as developing new products and services. You effectively acquire a freelance IT department, operating discreetly behind the scenes.

Cloud computing is ideal for website hosting, where traffic may originate on any continent with audiences expecting near-instant response times. The majority of consumers will abandon a web page if it takes more than three seconds to load, so having high-speed servers with impressive connectivity around the world will ensure end user connection speeds are the only real barrier to rapid display times. Also, don’t forget that page loading speeds have become a key metric in search engine ranking results.

Price and performance

Cost is another benefit, as the requisite scalable resources ensure that clients only pay for the services they need. If you prefer to manage your own LAMP stacks and install your own security patches, unmanaged hosting is surprisingly affordable. A single-website small business will typically require a modest amount of bandwidth, with resources hosted on a shared server for cost-effectiveness. Yet any spikes in traffic can be instantly met, without requiring permanent allocation of additional hardware. And more resources can be made available as the company grows – including a dedicated server.

As anyone familiar with peer-to-peer file sharing will appreciate, transferring data from one platform to another can be frustratingly slow. Cloud computing often deploys multiple servers to minimize transfer times, with additional devices sharing the bandwidth and taking up any slack. This is particularly important for clients whose data is being accessed internationally.

Earlier on, we outlined the differences between managed and unmanaged hosting. Their merits also vary:

  1. Unmanaged hosting is similar to having your own server, since patches and installs are your own responsibility. For companies with qualified IT staff already on hand, that might seem more appealing than outsourcing it altogether. With full administrative access via cPanel and the freedom to choose your own OS and software stacks, an unmanaged account is ideal for those who want complete control over their network and software. This is also the cheaper option.
  2. By contrast, managed cloud hosting places you in the hands of experienced IT professionals. This is great if you don’t know your HTTP from your HTML. Technical support is on-hand at any time of day or night, though there probably won’t be many issues to concern you. Data centers are staffed and managed by networking experts who preemptively identify security threats, while ensuring every server and bandwidth connection is performing optimally.

Whether you prefer the control of an unmanaged package or the support provided by managed solutions, cloud servers represent fully isolated and incredibly secure environments. Our own data centers feature physical and biometric security alongside CCTV monitoring. Fully redundant networks ensure constant connectivity, while enterprise-grade hardware firewalls are designed to repel malware and DDoS attacks. We’ll even provide unlimited SSL certificates for ecommerce websites or confidential services.

The drawbacks of cloud server hosting

Although we’re big fans of cloud hosting, we do recognize it’s not suitable for every company. These are some of the drawbacks to hosting your networks and servers in the cloud:

Firstly, some IT managers like the reassurance of physically owning and supervising their servers, in the same way traditionalists still favor installing software from a CD over cloud-hosted alternatives. Many computing professionals are comfortably familiar with the intricacies of bare metal servers, and prefer to have everything under one roof. If you already own a well-stocked server room, cloud hosting may not be cost effective or even necessary.

Entrusting key service delivery to a third party means your reputation is only as good as their performance. Some cloud hosting companies limit monthly bandwidth, applying substantial excess-use charges. Others struggle with downtime – those service outages and reboots that take your websites or files offline, sometimes without warning. Even blue chip cloud services like Dropbox and iCloud have historically suffered lengthy outages. Clients won’t be impressed if you’re forced to blame unavailable services on a partner organization, as their contract is ultimately with you.

Less scrupulous hosting partners might stealthily increase account costs every year, hoping their time-poor clients won’t want the upheaval and uncertainty of migrating systems to a competitor. Migrating to a better cloud hosting company can become logistically complex, though 100TB will do everything in our power to smooth out any transitional bumps. By contrast, a well-installed and modern RAID system should provide many years of dependable service without making a significant appearance on the end-of-year balance sheet.

Clouds on the horizon

Handing responsibility for your web pages and databases to an external company requires a leap of faith. You’re surrendering control over server upgrades and software patches, allowing a team of strangers to decide what hardware is best placed to service your business. Web hosting companies have large workforces, where speaking to a particular person can be far more challenging than calling Bob in your own IT division via the switchboard. Decisions about where your content is hosted will be made by people you’ve never met, and you’ll be informed (but not necessarily consulted) about hardware upgrades and policy changes.

Finally, cloud systems are only as dependable as the internet connection powering them. If you’re using cloud servers to host corporate documents, but your broadband provider is unreliable, it won’t be long before productivity and profitability begin to suffer. Conversely, a network server hosted downstairs can operate across a LAN, even if you’re unable to send and receive email or access the internet.

To cloud host or not?

In fairness, connection outages are likely to become increasingly anachronistic as broadband speeds increase and development of future technologies like Li-Fi continues. We are moving towards an increasingly cloud-based society, from Internet of Things-enabled smart devices to streaming media and social networks. A growing percentage of this content is entirely hosted online, and it’ll become unacceptable for ISPs to provide anything less than high-speed always-on broadband.

 

Trusting the experts

If you believe cloud hosting might represent a viable option for your business, don’t jump in with both feet. Speak to 100TB for honest and unbiased advice about whether the cloud offers a better alternative than a bare metal server or a self-installed RAID setup. Our friendly experts will also reassure you about the dependability of our premium networks, which come with a 99.999 per cent service level agreement. We even offer up to 1,024 terabytes of bandwidth, as part of our enormous global network capacity.


You’ve probably heard a great deal about the Internet of Things in recent years. Commonly abbreviated to IoT, this panoply of connected devices has been described as a revolution in the making. Some people predict it will be as transformative as the internet itself, liberating us from mundane tasks through automation and machine-to-machine communication.

Yet despite our appreciation of desktop and website security, IoT security issues have remained a perplexingly peripheral topic of discussion. Fortunately, that’s about to change. The rapid rollout of web-enabled devices throughout our homes and workplaces means IoT security solutions are becoming big business. From public key infrastructure to semiconductor technology, an entire industry is developing around counteracting security risks or threats.

This article considers why IoT security issues are becoming such a headache. We look at the latest solutions and at how these are likely to evolve in future. Finally, we offer practical advice on how to ensure today’s services are ready for tomorrow’s challenges, with a series of steps any IT manager can easily implement.

The IoT Security Problem

An estimated five million web-enabled devices are introduced to the Internet of Things every day, and this already startling number is predicted to increase fivefold by 2020. The majority of devices are aimed at consumers rather than corporate audiences, and every single one is responsible for uploading information about us – from smart TVs to security systems. Much of this data is potentially harmful in the wrong hands; GPS usage data can pin us to specific locations in potentially unwelcome ways, while personal information might be misused by black hat marketing firms in their pursuit of new ways to target specific demographics.

As our awareness of internet threats expands, consumers are increasingly conducting online communications through encrypted peer-to-peer communication platforms like WhatsApp rather than publicly visible forums like Facebook. Yet IoT data is often transmitted insecurely across open Wi-Fi networks. A stranger sitting in a van outside your home or place of work could easily intercept data during transmission, potentially accessing information they have no right to view. Sensitive data about health or personal activities could then be used for identity theft, blackmail or countless other nefarious uses.

Such activities would be easy to prevent if all IoT-enabled devices had a global security standard, but they don’t. Every manufacturer attributes different values to data protection, with proprietary software and varied connection methods. A modern smart office contains dozens of incompatible trust standards and device visibility levels, with reams of largely unrelated data being uploaded and processed in real time. Unsurprisingly, this has attracted the attention of criminals: Gartner recently predicted more than a quarter of enterprise attacks by 2020 will involve the IoT.

There hasn’t been any industry-wide attempt to impose security standards or global protocols across the Internet of Things, in stark contrast to the collaborative and co-operative approach to developing HTML5 security. Since IoT devices are usually fairly simple and intended to require minimal resources, Original Equipment Manufacturers (OEMs) are reluctant to include advanced features that could complicate setup or usage. Expensive protection is frequently unjustifiable on products or services with low price points, in industries where every cent counts. Bolstering security also has the potential to reduce battery life on non-mains-powered devices, adversely affecting reliability and usability.

Some manufacturers have claimed their IoT devices don’t need robust data protection. It’s been suggested that when smart bathroom scales report to My Fitness Pal, nobody will be interested apart from the owner and their doctor. However, it’s easy to see how a teenager might be embarrassed or even bullied if their weekly weight data was hacked by a classmate and written up on the chalkboard in class. And that scenario pales into insignificance compared to someone’s weight being sent to potential employers during recruitment and selection, or exfiltrated by advertisers to target overweight individuals with junk food ads.

The IoT Security Solution

Individual IoT devices are often modest, carrying limited volumes of data. It’s often when they’re added into a smart office or connected home that the volume of potentially compromising information being transmitted becomes an issue. And while developers have historically been reluctant to incorporate adequate security measures, the tide is turning.

From securing existing networks to embedding security into IoT-enabled devices, below are some of the ways IoT security solutions are being developed…

  1.     Security credentials. This phrase has been turned into action by Verizon Enterprise Solutions, who have developed a way of overlaying existing security with additional protection. Credentials may involve digital certificates or 2FA tokens, producing an over-the-top layer of protection that can be applied to devices irrespective of their existing features. Since a great deal of IoT communication is between machines with no human input, traditional authentication methods like biometrics are invalid. Instead, devices are secured by repelling network threats detected via vulnerability assessments and URL blacklists. This enables connected devices to transmit information without impediment.
  2.     Embedded systems. Rather than retrospectively adding a security layer over IoT devices, it’s obviously preferable and advisable to have security integrated during the manufacturing process. While that increases costs, it ensures everything from ICS to POS devices transmit data securely. At the same time, in-built analytics can detect threats from malware or hackers. Semiconductor technologies are being used to spearhead the authentication of user credentials, guarding against malevolent activities.
  3.    Protected networks. Before data is distributed across the internet, it can be agglomerated in a local network. Bitdefender has pioneered a security solution that effectively provides a firewall against network flaws such as weak passwords or unsecured communications. Outbound connections are checked for unsafe or unsecure sites, while granular control of individual devices can remotely install OS updates or resolve system issues.
  4.    PKI. Public key infrastructure eliminates the need for 2FA tokens or password policies, with SSL encryption ensuring that data is secure during transfer between a device and the cloud. It’s easy to confirm software and settings haven’t been tampered with, while message signatures ensure data can’t be manipulated or copied in transit. Digital certificates can be used on cloud-hosted and on-premise devices alike, though simpler devices might lack the system resources to implement PKI (a minimal signing sketch follows this list).
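To illustrate the signing idea behind PKI (minus certificates and the chain of trust), here is a minimal sketch using the Python cryptography package’s Ed25519 keys. The payload and device names are hypothetical.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The device holds the private key; the cloud service knows only the public key.
device_key = Ed25519PrivateKey.generate()
public_key = device_key.public_key()

reading = b'{"sensor": "thermostat-42", "temp_c": 21.5}'  # hypothetical payload
signature = device_key.sign(reading)

try:
    public_key.verify(signature, reading)   # raises if payload or signature changed
    print("Reading accepted: signature is valid.")
except InvalidSignature:
    print("Reading rejected: it was tampered with in transit.")
```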

Increasingly, IoT protection involves a larger focus than merely protecting individual devices against hacking or spying.

These are among the industry-wide approaches being undertaken or invested in, to bolster safety among connected devices:

  1.     Machine learning. Today’s critical mass of IoT devices is driving the development of an entirely new security analytics sub-sector, with companies aggregating and normalizing data to identify unusual activities. While big data solutions to IoT issues remain in the developmental stage, firms from Cisco to Kaspersky Lab are developing AI and machine learning models to identify IoT-specific attacks, such as botnets. These may not be identified by traditional network protection tools, which are aimed chiefly at browser-based attacks (a minimal anomaly-detection sketch follows this list).
  2.     Pre-emptive troubleshooting. Firms like Trustwave enable IoT developers and providers to assess vulnerabilities in an existing IoT ecosystem, from devices and applications to connections. Through penetration testing and threat analysis, OEMs and software developers can resolve weaknesses in apps, APIs, products and protocols. A more dependable service for consumers ensues.
  3.     Security toolkits. Alternatively, why not get one company to handle every aspect of IoT security issues, from initial design to final beta testing? The open source libsecurity platform is IBM’s one-stop shop for application developers, covering everything from APIs and libraries to encryption and secure storage via password/account management. These IoT security solutions are designed for the restricted runtime environments of today’s applications, removing the burden of coding from developers.
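As a small illustration of the anomaly-detection idea behind these products, the sketch below trains scikit-learn’s IsolationForest on made-up “normal” device traffic and flags a sudden flood. The features and numbers are invented for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-device traffic samples: [requests_per_min, bytes_out_per_min]
normal_traffic = np.random.normal(loc=[20, 4000], scale=[5, 800], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_samples = np.array([
    [22, 4100],     # looks like ordinary telemetry
    [900, 250000],  # sudden flood: could be a device conscripted into a botnet
])
print(detector.predict(new_samples))  # 1 = normal, -1 = flagged as anomalous
```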

Data hosts also have a role to play in improving this industry’s historically poor security record, by ensuring that the volumes of aggregated data being delivered to their servers can’t be hacked or stolen. This can be achieved by finding and appointing a trusted hosting partner like 100TB. Our data centers are specifically designed to repel DDoS attacks and malware, with the option of a managed firewall. Offline security is taken care of with digital video surveillance and biometric access allied to proximity keycard control, plus round-the-clock security details and restricted access to server cabinets. From San Jose to Singapore, your data will be safe in our centers.

The Future of IoT Security

With an estimated 20 billion IoT-enabled devices expected by 2020, what does the future of IoT security look like? Many believe it will involve significantly more reporting and two-way communications. At present, devices passively upload data into the cloud. In future, analysts expect a degree of machine learning from either the devices or their host servers, identifying unusual data patterns and proactively responding to perceived threats. This will take place behind the scenes, since many IoT devices are designed to operate autonomously without any human input during their operational lifetime. Trusted Platform Modules are among the technologies being tipped to authenticate hardware and software without draining battery life, which remains a valid concern at present.

Another difference will involve standardization. The plethora of processors and operating systems currently being marketed will dwindle to a smaller number of industry-leading protocols, helping to simplify the process of identifying and resolving weaknesses. Regulatory standards for data protection will be agreed upon, possibly at governmental level, with Certificate Authorities ensuring standards are being met. Consumers will also become better educated about the practicalities of IoT security solutions, though it might take a WikiLeaks or Ashley Madison-style data breach to focus the public’s attention on database vulnerabilities.

Finally, developer and manufacturer arguments about cost cutting or simplicity will be rendered moot as economies of scale dovetail with greater industry regulation. Securing the Internet of Things won’t be devolved to aftermarket routers any more – it’ll become a central part of the design, manufacture and installation process. An industry standard for tackling common IoT security issues is almost inevitable, allowing devices to be sold with a seal or notice confirming their adherence to regulatory protocols. In short, IoT encryption will become as ubiquitous as HTTPS, and possibly even more valuable in our daily lives.

What Are the Next Steps?

If you want to ensure your connected home or office isn’t vulnerable to attack, these are some of the key steps to take:

  1.     Secure your router. Routers are the primary gateway for all local IoT content before it reaches cyberspace, yet many people persist in using unsecured connections or default passwords. Ramping up router protection should be your number one priority.
  2.    Keep devices local if possible. Devices often default to an internet connection, but it may be sufficient to keep them within a LAN. Hiding them behind a secure router reduces public exposure, so investigate whether you can prevent port forwarding.
  3.     Ensure devices that authenticate against other systems do so securely, with unique identification details or SSH encryption keys. This might not apply to simpler IoT devices, but it should cover CCTV systems and any satellite-based services.
  4.     Manually check for updates. Because there are no industry standards to adhere to, manufacturers and software developers don’t always promote updates. It’s presently incumbent on end users to check for software updates, security patches and so forth.
  5.     Employ TLS where possible. On-chip memories can be used to encrypt information, preventing so-called ‘man in the middle’ attacks on data in transit. TLS is a logical extension of the end-to-end encryption already used by platforms like WhatsApp (see the sketch after this list).
  6.     Scan for vulnerabilities. Imperva Incapsula’s Mirai scanner investigates every device sharing a TCP/IP address, probing their resistance to the Mirai DDoS botnet. A quick Google search will reveal similar free or open source scanning tools.
  7.      Change default passwords. This is perhaps the simplest and most obvious recommendation of all, yet it’s commonly ignored. Breaching one IoT device may open up your entire network, so why leave passwords set as ‘1234’ or ‘password’?
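For step 5, Python’s standard library is enough to sketch the idea: wrap a device’s outbound socket in TLS so data is encrypted in transit. The hostname and port below are placeholders, and the commented-out line shows where a client certificate would go for mutual TLS.

```python
import socket
import ssl

HOST, PORT = "telemetry.example.com", 8883   # placeholder ingestion endpoint

context = ssl.create_default_context()       # verifies the server certificate chain
# context.load_cert_chain("device.crt", "device.key")  # uncomment for mutual TLS

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated", tls_sock.version())          # e.g. 'TLSv1.3'
        tls_sock.sendall(b'{"sensor": "door-1", "open": false}')
```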
