
Technology and business goals collide at the intersection of page load time and conversion rate.

Marketing wants a fully featured page, with lots of images and services to track user behavior.

Page load time has a huge effect on conversion rate and customer happiness:

  • Half of all customers expect a webpage to load in under 2 seconds. If it doesn’t, they lose trust in and patience with the site, and click the back button to navigate to the next search result on Google.
  • User expectations keep rising. SOASTA ran an experiment across 2014 and 2015, looking at conversion rates based on page load time. In 2015, conversions peaked for pages loading in 2.4 seconds, a load time 30% faster than the one that produced peak conversions in 2014.
  • Cedexis found that decreasing load time by just 1 second improves conversion rate by an average of 27.3%, where a conversion is defined as a purchase, download or sign-up.

The necessity of keeping page load time low for a good customer experience means that tech teams need to exercise every option available to them for performance. Effective caching techniques can bring improvements to even the leanest of websites.

Why optimize caching?

Caching is the process of storing copies of resources so that clients can retrieve them from memory, or from a server closer to them, without putting strain on the origin server. Utilizing caches has three main benefits.

First of all, caching can make web pages load faster, especially if the user has visited before. If you utilize caching to distribute content globally, visitors will see a reduction in latency, which is the time it takes for a request to physically travel from their browser through the network to the server and back again. If your page is cached locally in the user’s browser, they don’t need to download every resource from your server every time.

Secondly, caching reduces the amount of bandwidth needed. Instead of the server being responsible for delivering resources for every request, it only needs to deliver new content. Everything else can be returned from a cache along the network.

Finally, caching makes your site more robust. A client can retrieve resources from a cache even if your server is down or experiencing high traffic. Any plan for handling volume spikes should include a caching strategy.

Levels of caching

Caching can happen at lots of different checkpoints along the network, right from the browser to the server itself. Each checkpoint has its own benefits and challenges.

Let’s start with the caching options closest to the end user, then move up the chain to the server where the resource being retrieved originates from.

Browser caching – imagine scrolling through the search results on an online shop. You click on an image link to load the product page, decide it’s not quite right, and hit the back button. If your browser had to request the entire search page again, you’d have to wait for all the images to be downloaded to your browser for a second time. Fortunately, browsers use memory to store a version of sites they’ve already visited. Instead of going all the way to the server and back again, your browser just pulls up the version it’s already stored for you. It will also do this for unchanging pieces of your site, such as your logo.
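
To make this concrete, here is a minimal sketch (not production configuration) of a server telling browsers they may cache its responses, using nothing but Python’s standard library; the one-hour lifetime and the localhost address are arbitrary choices for the example:

    # caching_server.py - minimal static file server that opts responses into
    # browser caching (a sketch; tune max-age per asset type in real deployments)
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    class CachingHandler(SimpleHTTPRequestHandler):
        def end_headers(self):
            # Tell the browser it may reuse this response for up to one hour
            # without contacting the server again.
            self.send_header("Cache-Control", "public, max-age=3600")
            super().end_headers()

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CachingHandler).serve_forever()

With a header like that in place, hitting the back button reuses the stored copy instead of downloading everything again.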

Proxy cache (Web Server Accelerator) – caches can also be shared between many users. ISPs use caches to reduce bandwidth requirements by sharing resources. That way, if one user has already requested a static resource (like an image or file), the ISP doesn’t need to request it again from the origin server – it can serve the cached copy instantly.

Content Delivery Network (CDN) – remember how distance between user and server affects load time? CDNs are caches designed to reduce latency by distributing copies of cached files to local servers all over the world. When a user requests a resource, they are connected to their local CDN. Companies with international users should consider using a CDN to reduce latency.

Server-side caching / reverse proxy – if most of your content is static, you can cache it yourself, so customers won’t need to hit your application server to load static content. There are several tools that do this for you – Redis, Varnish, and PHP-FPM are all popular options.

Database caching – database servers are often separated from the rest of the server infrastructure. This means that when your server receives a request from a user, it often needs to request something extra from the database. If a frequent request always returns the same result, you can store that result in a database cache. This prevents the database from crunching the same request over and over again, resulting in better performance even during busy periods. Search servers for ecommerce sites also return cacheable queries.
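
As a rough sketch of this pattern (assuming a local Redis instance, the redis-py package, and a placeholder query function, none of which come from the original article), caching a frequent query result might look like this:

    # query_cache.py - sketch of a database/query cache with a 5-minute TTL
    # Assumes a local Redis server and the redis-py package (pip install redis).
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379)

    def fetch_top_products():
        # Placeholder for the expensive database or search query.
        return [{"sku": "ring-001", "price": 129.0}]

    def top_products():
        cached = r.get("top-products")
        if cached is not None:
            return json.loads(cached)                      # served from the cache
        result = fetch_top_products()                       # hit the database once
        r.setex("top-products", 300, json.dumps(result))    # cache for 300 seconds
        return result

Only the first request in each five-minute window touches the database; everything else is answered from the cache.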

When should you optimize caching?

“I’m not lazy, I’m just efficient” – ever heard that before? Well, think of your servers as the absolute laziest pieces of hardware you own. Never ask them to do something time-consuming twice if there’s a way for them to hold onto the results in a cache down the line.

For example, you sell jewelry online and one of your top link destinations is a list featuring the 20 most popular items. If you didn’t utilize caching, every time a visitor clicked on that link, they’d need to send a new request through their ISP to your server, which would ask the database to calculate the top 20 items and then send back each of the corresponding images and prices. But realistically, you don’t need to compute this full page every time it’s requested. The top 20 items don’t change often enough to require real-time results. Instead, cache the page in a reverse proxy – located in the same country as the customer – and deliver it much faster.

When you start optimizing your caching strategy, a good place to begin is by identifying the most popular and heaviest resources first. You’ll get the biggest benefit from focusing on caching improvements for pages that are resource heavy and requested often. Looking at the waterfall diagram on the Network tab of your browser’s developer tools can help identify resource-intensive modules on the page.

Time To First Byte (TTFB) is a good way to measure the responsiveness of your web server. Improving your caching strategy through reverse proxies, CDNs and compression will help customers experience shorter TTFB, and help your website feel snappier.
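
If you want a quick feel for TTFB, the standard-library sketch below times how long an HTTPS request takes to return its status line and headers; the hostname is a placeholder, and the figure includes DNS and TLS setup, so treat it as an approximation rather than a lab-grade measurement:

    # ttfb_check.py - approximate Time To First Byte for a URL (standard library only)
    import http.client
    import time

    host, path = "www.example.com", "/"   # placeholders: point this at your own site

    start = time.perf_counter()
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("GET", path)
    response = conn.getresponse()          # returns once the status line and headers arrive
    ttfb = time.perf_counter() - start
    print(f"Status {response.status}, approximate TTFB: {ttfb * 1000:.0f} ms")
    conn.close()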

However, don’t forget that most customers will have a poorer experience than that seen in testing. They might, for example, be located on the opposite side of the world using a mobile device or an older computer. By utilizing caching best practices, you’ll ensure customers have a great experience, no matter where they are.

When you need to refresh your data

Because we work in a world where everything is frequently updated, it’s important to understand the different methods we have of forcing a cache reset. There are a few ways we can force browsers and other caches to retrieve a fresh copy of data straight from the server.

  1. Set an expiration date – use this when the site doesn’t need to stay perfectly up to date in real time, but does need to stay reasonably fresh. If you set an expiration date in your response headers, the browser will discard the cached copy after that time, and a fresh copy will be retrieved the next time the resource is requested.
  2. Use Last-Modified / If-Modified-Since – the client downloads the updated resource only if the server confirms it has changed since the date in the If-Modified-Since request header. If nothing has changed, the server can send back a short 304 response without a body, saving bandwidth and time.
  3. Clear a specific module – you don’t need to refresh your entire blog cache just to display a new comment. Segmenting a page into different modules lets you refresh only the parts that have changed.
  4. Fingerprinting – caches work by storing and retrieving a specific file when requested. If you change the name of the file, the cache won’t be able to find it, and the new copy will be downloaded. This is how fingerprinting keeps assets up to date. By attaching a unique series of characters to the filename, each new version is treated as a new file and requested from the server. Because a changed file always gets a new name, you can set an expiration date years in the future and never worry about a stale cache. Many build tools will automatically fingerprint assets for you, so you can keep the same base filename (see the sketch after this list).
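
As promised above, here is a hedged sketch of fingerprinting an asset by hand; the MD5 hash, the eight-character prefix and the file layout are arbitrary choices for illustration, and in practice a build tool would do this for you:

    # fingerprint.py - attach a content hash to an asset's filename so caches
    # treat every new version as a brand-new file
    import hashlib
    import shutil
    from pathlib import Path

    def fingerprint(asset_path: str) -> str:
        path = Path(asset_path)
        digest = hashlib.md5(path.read_bytes()).hexdigest()[:8]
        fingerprinted = path.with_name(f"{path.stem}.{digest}{path.suffix}")
        shutil.copyfile(path, fingerprinted)
        return fingerprinted.name

    # fingerprint("styles.css")  ->  e.g. "styles.3f8a91c2.css"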

Don’t forget that a cache is not long-term storage! If you decide to cache something for later, you might find that it’s been invalidated and you need to retrieve the resource again.

Making caching work for you

Determining the perfect solution for your site can be difficult. Rely too much on caching and you might find users seeing outdated content or running into memory trouble in their browsers. Ignore caching entirely and you’ll see page load times increase and user experience suffer.

By understanding your users’ needs, you can create a great experience from the beginning. If caching is important, it’s worth using a framework that provides out-of-the-box caching optimization. If caching matters less because accuracy is more important than speed, you can relax your strategy accordingly.

Caching strategy is a problem to be solved uniquely for each app. Determining where you can utilize caching to save bandwidth is an ongoing learning experience. Keep making incremental improvements and keep it light for your customers.

Does your technology stack help your business thrive? Can a better server infrastructure enable improved business decisions? As AI and big data continue to find their way into our businesses, the technology driving our strategies needs to keep up. Companies that embrace AI capabilities will have a huge advantage over firms unable to take advantage of them.

In this post we take a look at Facebook’s latest upgrade, Big Basin, to understand how some of the biggest tech giants are preparing for the onslaught of AI and big data. By preparing our server infrastructure to handle the need for more processing power and better storage, we can make sure our organizations stay in the lead.

Facebook’s New Server Upgrade

Earlier this year Facebook introduced its latest server hardware upgrade, Big Basin. This GPU-powered system replaces Big Sur, Facebook’s first system dedicated to machine learning and AI, introduced in 2015. Big Basin is designed to train neural network models that are 30% larger, so Facebook can experiment faster and more efficiently. This is achieved through greater arithmetic throughput and an increase in memory from 12GB to 16GB per GPU.

One major feature of Big Basin is the modularity of each component. This allows new technologies to be added without a complete redesign. Each component can be scaled independently depending on the needs of the business. This modularity also makes servicing and repairs more efficient, requiring less downtime overall.

Why does Facebook continue to invest in fast multi-GPU servers? Because it understands that the business depends on it. Without top-of-the-line hardware, Facebook can’t continue to lead the market in AI and Big Data. Let’s dive into each of these areas separately to see how they apply to your business.

 

Artificial Intelligence

Facebook’s Big Basin server was designed with AI in mind. That makes complete sense when you look at its AI-first business strategy. Translations, image searching and recommendation engines all rely on AI technology to enhance the user experience. But you don’t have to be Facebook to see the benefit of using AI for business.

Companies are turning to AI to assist data scientists in identifying trends and recommending strategies for the company to focus on. Technology like Idiomatic can crunch through a huge number of unsorted customer conversations to pull out useful quantitative data. Unlocking the knowledge that lives in unstructured conversations with customers can empower the Voice of the Customer team to make strong product decisions. PwC uses AI to model complex financial situations and identify future opportunities for each customer. It can look at current customer behavior and determine how each segment feels about using insurance and investment products, and how that changes over time. Amazon Web Services uses machine learning to predict future capacity needs. A 2015 study suggested that 25% of companies were already using AI, or would be within the next year, to enable better business decision making.

But all of this relies on the technological ability to enable AI in your organization. What does that mean in practice? Essentially, top-of-the-line GPUs. For simulations that require the same data or algorithm run over and over again, GPUs far exceed the capabilities of CPU computing. While CPUs handle the majority of the code, sending any code that requires parallel computation to GPUs massively improves speed. AI requires computers to run simulations many, many times over, similar to password-cracking algorithms. Because the simulations are very similar, you can tweak each variable slightly and take advantage of the GPU’s shared memory to run many more simulations much faster. This is why Big Basin is a GPU-based hardware system – it’s designed to crunch enormous amounts of data to power Facebook’s AI systems.
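
To see that gap for yourself, one rough experiment (assuming PyTorch is installed and a CUDA-capable GPU is available, neither of which the original article specifies) is to time the same matrix multiplication on CPU and GPU:

    # gpu_vs_cpu.py - time an identical matrix multiplication on CPU and GPU
    # Assumes PyTorch (pip install torch) and a CUDA-capable GPU.
    import time
    import torch

    def time_matmul(device: str, n: int = 4096) -> float:
        a = torch.randn(n, n, device=device)
        b = torch.randn(n, n, device=device)
        if device == "cuda":
            torch.cuda.synchronize()          # make sure setup has finished
        start = time.perf_counter()
        _ = a @ b
        if device == "cuda":
            torch.cuda.synchronize()          # wait for the GPU kernel to complete
        return time.perf_counter() - start

    print(f"CPU: {time_matmul('cpu'):.3f} s")
    if torch.cuda.is_available():
        print(f"GPU: {time_matmul('cuda'):.3f} s")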

Processing speed is especially important for deep learning and AI because of the need for iteration. As engineers see the results of experiments, they make adjustments and learn from mistakes. If the processing is too slow, a deep-learning approach can become disheartening: improvement is slow, a return on investment seems far away, and engineers don’t gain practical experience as quickly, all of which can drastically impact business strategy. Say you have a few hypotheses that you want to test when building your neural network. If you aren’t using top quality GPUs, you’ll have to wait a long time between testing each hypothesis, which can draw out development for weeks or months. It’s worth the investment in fast GPUs.

Big Data

Data can come from anywhere. Your Internet-of-Things toaster, social media feeds, purchasing trends and attention-tracking advertisements are all generating data at a far higher rate than we’ve ever seen before. The latest estimate is that digital data created worldwide will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. A zettabyte of data is equal to about 250 billion DVDs, and this growth is coming from everywhere. For example, a Ford GT generates about 100GB of data per hour.

The ability to make this influx of data work for you depends on your server infrastructure. Even if you’re collecting massive amounts of data, it’s not worth anything if you can’t analyze it, and quickly. This is where big data relies on technology. Facebook uses big data to drive its leading ad-tech platform, making advertisements hyper targeted.

As our data storage needs expand to handle Big Data, we need to keep two things in mind: accessibility and compatibility. Without a strong strategy, data can become fragmented across multiple servers, regions and formats. This makes it incredibly difficult to form any conclusive analysis.

Just as AI relies on high GPU computing power to run neural network processing, Big Data relies on quick storage and transport systems to retrieve and analyze data. Modular systems tend to scale well and also allow devops teams to work on each component separately, leading to more flexibility. Because so much data has to be shuttled back and forth, investing in secure 10 gigabit connections will make sure your operation has the power and security to last. These requirements are often summarized as the three Vs of big data: volume (storage capacity), velocity (rapid retrieval and processing), and variety (the many formats data arrives in).

Big data and AI work together to superpower your strategy teams. But to function well, your data needs to be accessible and your servers need to be flexible enough to handle AI improvements as fast as they come. Which, it turns out, is pretty quickly.

What This Means For Your Business

Poor server infrastructure should never be the reason your team doesn’t jump on opportunities that come their way. If Facebook’s AI team wasn’t able to “move fast and break things” because their tools couldn’t keep up with neural network processing demands, they wouldn’t be where they are today.

As AI and Big Data continue to dominate the business landscape, server infrastructure needs to stay flexible and scalable. We have to adopt new technology quickly, and we need to be able to scale existing components to keep up with ever-increasing data collection requirements. Clayton Christensen recently tweeted, “Any strategy is (at best) only temporarily correct.” When strategy changes on a dime, your technology stack had better keep up.

Facebook open sources all of its hardware design specifications, so head on over and check it out if you’re looking for ways to stay flexible and ready for the next big business advantage.


Server rooms have been an integral part of IT departments for decades. These restricted-access rooms are usually hidden away in the bowels of a building, pulsing to the rhythm of spinning hard drives and air conditioning systems.

It’s a measure of the internet’s impact on computer networks and website hosting that cloud servers are becoming the norm rather than the exception. Databases and directories are hosted by a third party organization in a dedicated data center – effectively a giant offsite server room. Rather than each company requiring its own cluster of RAID disks and security/fire protection infrastructure, multiple clients can be serviced from one location to achieve huge economies of scale.

Even though 100TB is renowned for the quality of our cloud server hosting services, we recognize that this option isn’t for everyone. In this article, we look at the pros and cons of cloud servers, offering you a guide to determine whether it represents the optimal choice for your business. After all, those server rooms haven’t been rendered completely obsolete yet…

What is cloud server hosting?

Before we explore the advantages and disadvantages of this model, let’s take a moment to consider how it actually works. As an example, the servers powering 100TB’s infrastructure are based in 26 data centers around the world. Having a local center minimizes the time information takes to travel between a server and a user in that country or region, since every node and relay fractionally adds to the transfer time. Delays of 50 milliseconds might not be significant for a bulletin board, but they could be critical for a new streaming service. Irrespective of data request volumes, web pages and other hosted content should be instantly – and constantly – accessible.
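
As a rough, hedged illustration of that distance penalty, you can time a bare TCP connection to a host from your own machine; the hostnames below are placeholders, and a real page load adds DNS, TLS and many more round trips on top of this:

    # latency_probe.py - get a feel for round-trip time by timing a TCP connection
    import socket
    import time

    hosts = ["www.example.com", "www.example.org"]   # placeholders: try your own endpoints

    for host in hosts:
        start = time.perf_counter()
        with socket.create_connection((host, 443), timeout=5):
            elapsed = (time.perf_counter() - start) * 1000
        print(f"{host}: TCP connect in {elapsed:.0f} ms")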

There are two types of cloud hosting, whose merits and drawbacks are considered below:

  1. Managed cloud. As the name suggests, managed hosting includes maintenance and technical support. Servers can be shared between several clients with modest technical requirements to reduce costs, with tech support always on hand.
  2. Unmanaged cloud. A third party provides hardware infrastructure like disks and bandwidth, while the client supervises software updates and security issues. It’s basically the online equivalent of having a new server room, filled with empty hardware.

The advantages of cloud server hosting

The first advantage of using the cloud, and perhaps the most significant, is being able to delegate technical responsibility to a qualified third party. Even by the standards of the IT sector, networks are laced with technical terminology and require regular maintenance to protect them against evolving security flaws. Outsourcing web hosting and database management liberates you from jargon-busting, allowing you to concentrate on core competencies such as developing new products and services. You effectively acquire a freelance IT department, operating discreetly behind the scenes.

Cloud computing is ideal for website hosting, where traffic may originate on any continent with audiences expecting near-instant response times. The majority of consumers will abandon a web page if it takes more than three seconds to load, so having high-speed servers with impressive connectivity around the world will ensure end user connection speeds are the only real barrier to rapid display times. Also, don’t forget that page loading speeds have become a key metric in search engine ranking results.

Price and performance

Cost is another benefit, as the requisite scalable resources ensure that clients only pay for the services they need. If you prefer to manage your own LAMP stacks and install your own security patches, unmanaged hosting is surprisingly affordable. A single-website small business will typically require a modest amount of bandwidth, with resources hosted on a shared server for cost-effectiveness. Yet any spikes in traffic can be instantly met, without requiring permanent allocation of additional hardware. And more resources can be made available as the company grows – including a dedicated server.

As anyone familiar with peer-to-peer file sharing will appreciate, transferring data from one platform to another can be frustratingly slow. Cloud computing often deploys multiple servers to minimize transfer times, with additional devices sharing the bandwidth and taking up any slack. This is particularly important for clients whose data is being accessed internationally.

Earlier on, we outlined the differences between managed and unmanaged hosting. Their merits also vary:

  1. Unmanaged hosting is similar to having your own server, since patches and installs are your own responsibility. For companies with qualified IT staff already on hand, that might seem more appealing than outsourcing it altogether. With full administrative access via cPanel and the freedom to choose your own OS and software stacks, an unmanaged account is ideal for those who want complete control over their network and software. This is also the cheaper option.
  2. By contrast, managed cloud hosting places you in the hands of experienced IT professionals. This is great if you don’t know your HTTP from your HTML. Technical support is on-hand at any time of day or night, though there probably won’t be many issues to concern you. Data centers are staffed and managed by networking experts who preemptively identify security threats, while ensuring every server and bandwidth connection is performing optimally.

Whether you prefer the control of an unmanaged package or the support provided by managed solutions, cloud servers represent fully isolated and incredibly secure environments. Our own data centers feature physical and biometric security alongside CCTV monitoring. Fully redundant networks ensure constant connectivity, while enterprise-grade hardware firewalls are designed to repel malware and DDoS attacks. We’ll even provide unlimited SSL certificates for ecommerce websites or confidential services.

The drawbacks of cloud server hosting

Although we’re big fans of cloud hosting, we do recognize it’s not suitable for every company. These are some of the drawbacks to hosting your networks and servers in the cloud:

Firstly, some IT managers like the reassurance of physically owning and supervising their servers, in the same way traditionalists still favor installing software from a CD over cloud-hosted alternatives. Many computing professionals are comfortably familiar with the intricacies of bare metal servers, and prefer to have everything under one roof. If you already own a well-stocked server room, cloud hosting may not be cost effective or even necessary.

Entrusting key service delivery to a third party means your reputation is only as good as their performance. Some cloud hosting companies limit monthly bandwidth, applying substantial excess-use charges. Others struggle with downtime – those service outages and reboots that take your websites or files offline, sometimes without warning. Even blue chip cloud services like Dropbox and iCloud have historically suffered lengthy outages. Clients won’t be impressed if you’re forced to blame unavailable services on a partner organization, as their contract is ultimately with you.

Less scrupulous hosting partners might stealthily increase account costs every year, hoping their time-poor clients won’t want the upheaval and uncertainty of migrating systems to a competitor. Migrating to a better cloud hosting company can become logistically complex, though 100TB will do everything in our power to smooth out any transitional bumps. By contrast, a well-installed and modern RAID system should provide many years of dependable service without making a significant appearance on the end-of-year balance sheet.

Clouds on the horizon

Handing responsibility for your web pages and databases to an external company requires a leap of faith. You’re surrendering control over server upgrades and software patches, allowing a team of strangers to decide what hardware is best placed to service your business. Web hosting companies have large workforces, where speaking to a particular person can be far more challenging than calling Bob in your own IT division via the switchboard. Decisions about where your content is hosted will be made by people you’ve never met, and you’ll be informed (but not necessarily consulted) about hardware upgrades and policy changes.

Finally, cloud systems are only as dependable as the internet connection powering them. If you’re using cloud servers to host corporate documents, but your broadband provider is unreliable, it won’t be long before productivity and profitability begin to suffer. Conversely, a network server hosted downstairs can operate across a LAN, even if you’re unable to send and receive email or access the internet.

To cloud host or not?

In fairness, connection outages are likely to become increasingly anachronistic as broadband speeds increase and development of future technologies like Li-Fi continues. We are moving towards an increasingly cloud-based society, from Internet of Things-enabled smart devices to streaming media and social networks. A growing percentage of this content is entirely hosted online, and it’ll become unacceptable for ISPs to provide anything less than high-speed always-on broadband.

 

Trusting the experts

If you believe cloud hosting might represent a viable option for your business, don’t jump in with both feet. Speak to 100TB for honest and unbiased advice about whether the cloud offers a better alternative than a bare metal server or a self-installed RAID setup. Our friendly experts will also reassure you about the dependability of our premium networks, which come with a 99.999 per cent service level agreement. We even offer up to 1,024 terabytes of bandwidth, as part of our enormous global network capacity.

