Does your technology stack help your business thrive? Can a better server infrastructure enable improved business decisions? As AI and big data continue to find their way into our businesses, the technology driving our strategies needs to keep up. Companies that embrace AI capabilities will hold a huge advantage over firms that can't exploit them.

In this post we take a look at Facebook’s latest upgrade, Big Basin, to understand how some of the biggest tech giants are preparing for the onslaught of AI and big data. By preparing our server infrastructure to handle the need for more processing power and better storage, we can make sure our organizations stay in the lead.

Facebook’s New Server Upgrade

Earlier this year Facebook introduced its latest server hardware upgrade, Big Basin. This GPU-powered system replaces Big Sur, Facebook's first system dedicated to machine learning and AI, which launched in 2015. Big Basin is designed to train neural models that are 30% larger, so Facebook can experiment faster and more efficiently. This is achieved through greater arithmetic throughput and a memory increase from 12GB to 16GB per GPU.

One major feature of Big Basin is the modularity of each component. This allows new technologies to be added without a complete redesign. Each component can be scaled independently depending on the needs of the business. This modularity also makes servicing and repairs more efficient, requiring less downtime overall.

Why does Facebook continue to invest in fast multi-GPU servers? Because it understands that the business depends on it. Without top-of-the-line hardware, Facebook can’t continue to lead the market in AI and Big Data. Let’s dive into each of these areas separately to see how they apply to your business.

 

Artificial Intelligence

Facebook's Big Basin server was designed with AI in mind. That makes complete sense when you look at its AI-first business strategy. Translations, image search and recommendation engines all rely on AI technology to enhance the user experience. But you don't have to be Facebook to see the benefit of using AI for business.

Companies are turning to AI to assist data scientists in identifying trends and recommending strategies for the company to focus on. Technology like Idiomatic can crunch through a huge number of unsorted customer conversations to pull out useful quantitative data. Unlocking the knowledge that lives in unstructured conversations with customers can empower the Voice of the Customer team to make strong product decisions. PwC uses AI to model complex financial situations and identify future opportunities for each customer: it can look at current customer behavior, determine how each segment feels about using insurance and investment products, and track how that changes over time. Amazon Web Services uses machine learning to predict future capacity needs. A 2015 study suggested that 25% of companies were already using AI, or would be within the next year, to enable better business decision making.

But all of this relies on the technological ability to enable AI in your organization. What does that mean in practice? Essentially, top-of-the-line GPUs. For workloads that run the same data or algorithm over and over again, GPUs far exceed the capabilities of CPU computing. While CPUs handle the majority of the code, sending any code that requires parallel computation to the GPU massively improves speed. AI requires computers to run very similar computations many, many times over, much like password-cracking algorithms; because each run differs only slightly, you can take advantage of the GPU's thousands of parallel cores and shared memory to push through far more of them, far faster. This is why Big Basin is a GPU-based hardware system: it's designed to crunch enormous amounts of data to power Facebook's AI systems. To get an idea of the difference in practice, consider the rough comparison below.
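
Here is a minimal sketch of that gap, assuming a CUDA-capable GPU plus the NumPy and CuPy libraries. It times the same dense matrix multiplication (the core operation in neural network training) on the CPU and then on the GPU; the matrix size and the libraries are illustrative choices, not anything Facebook has published.

```python
# Illustrative only: rough CPU-vs-GPU timing for the kind of dense matrix
# math that dominates neural network training. Assumes NumPy and CuPy are
# installed and a CUDA-capable GPU is available.
import time

import numpy as np
import cupy as cp

N = 4096  # size of the square matrices; large enough to keep the GPU busy

# CPU: NumPy runs on the host processor
a_cpu = np.random.rand(N, N).astype(np.float32)
b_cpu = np.random.rand(N, N).astype(np.float32)
start = time.perf_counter()
np.matmul(a_cpu, b_cpu)
print(f"CPU matmul: {time.perf_counter() - start:.3f}s")

# GPU: the same operation, dispatched across thousands of CUDA cores
a_gpu = cp.asarray(a_cpu)
b_gpu = cp.asarray(b_cpu)
cp.cuda.Device(0).synchronize()          # make sure the host-to-GPU copy is done
start = time.perf_counter()
cp.matmul(a_gpu, b_gpu)
cp.cuda.Device(0).synchronize()          # wait for the GPU kernel to finish
print(f"GPU matmul: {time.perf_counter() - start:.3f}s")
```

On most hardware the GPU version finishes the multiplication many times faster, and that is exactly the gap that compounds when a training run repeats operations like this billions of times.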

Processing speed is especially important for deep learning and AI because of the need for iteration. As engineers see the results of experiments, they make adjustments and learn from mistakes. If the processing is too slow, a deep-learning approach can become disheartening: improvement is slow, a return on investment seems far away and engineers don't gain practical experience as quickly, all of which can drastically impact business strategy. Say you have a few hypotheses you want to test when building your neural network. If you aren't using top-quality GPUs, you'll have to wait a long time between testing each hypothesis, which can draw development out for weeks or months. It's worth the investment in fast GPUs.

Big Data

Data can come from anywhere. Your Internet-of-Things toaster, social media feeds, purchasing trends and attention-tracking advertisements are all generating data at a far higher rate than we've ever seen before. One widely cited estimate is that the digital data created worldwide will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. A zettabyte of data is equal to roughly 250 billion DVDs, and this growth is coming from everywhere: a Ford GT, for example, generates about 100GB of data per hour.

The ability to make this influx of data work for you depends on your server infrastructure. Even if you’re collecting massive amounts of data, it’s not worth anything if you can’t analyze it, and quickly. This is where big data relies on technology. Facebook uses big data to drive its leading ad-tech platform, making advertisements hyper targeted.

As our data storage needs expand to handle Big Data, we need to keep two things in mind: accessibility and compatibility. Without a strong strategy, data can become fragmented across multiple servers, regions and formats. This makes it incredibly difficult to form any conclusive analysis.

Just as AI relies on high GPU computing power to run neural network processing, Big Data relies on quick storage and transport systems to retrieve and analyze data. Modular systems tend to scale well and also allow devops teams to work on each component separately, leading to more flexibility. And because so much data has to be shuttled back and forth, investing in secure 10 gigabit connections will make sure your operation has the throughput and security to last. These requirements map onto the classic "three Vs" of big data: storage capacity (volume), rapid retrieval and transfer (velocity), and the range of data types you need to analyze (variety).

Big data and AI work together to superpower your strategy teams. But to function well, your data needs to be accessible and your servers need to be flexible enough to handle AI improvements as fast as they come. Which, it turns out, is pretty quickly.

What This Means For Your Business

Poor server infrastructure should never be the reason your team doesn’t jump on opportunities that come their way. If Facebook’s AI team wasn’t able to “move fast and break things” because their tools couldn’t keep up with neural network processing demands, they wouldn’t be where they are today.

As AI and Big Data continue to dominate the business landscape, server infrastructure needs to stay flexible and scalable. We have to adopt new technology quickly, and we need to be able to scale existing components to keep up with ever-increasing data collection requirements. Clayton Christensen recently tweeted, "Any strategy is (at best) only temporarily correct." When strategy changes on a dime, your technology stack had better keep up.

Facebook open-sources its hardware design specifications through the Open Compute Project, so head on over and check them out if you're looking for ways to stay flexible and ready for the next big business advantage.


Server rooms have been an integral part of IT departments for decades. These restricted-access rooms are usually hidden away in the bowels of a building, pulsing to the rhythm of spinning hard drives and air conditioning systems.

It’s a measure of the internet’s impact on computer networks and website hosting that cloud servers are becoming the norm rather than the exception. Databases and directories are hosted by a third party organization in a dedicated data center – effectively a giant offsite server room. Rather than each company requiring its own cluster of RAID disks and security/fire protection infrastructure, multiple clients can be serviced from one location to achieve huge economies of scale.

Even though we at 100TB are renowned for the quality of our cloud server hosting services, we recognize that this option isn't for everyone. In this article we look at the pros and cons of cloud servers, offering a guide to determine whether they represent the optimal choice for your business. After all, those server rooms haven't been rendered completely obsolete yet…

What is cloud server hosting?

Before we explore the advantages and disadvantages of this model, let’s take a moment to consider how it actually works. As an example, the servers powering 100TB’s infrastructure are based in 26 data centers around the world. Having a local center minimizes the time information takes to travel between a server and a user in that country or region, since every node and relay fractionally adds to the transfer time. Delays of 50 milliseconds might not be significant for a bulletin board, but they could be critical for a new streaming service. Irrespective of data request volumes, web pages and other hosted content should be instantly – and constantly – accessible.
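
If you want a feel for this on your own connection, a rough sketch like the one below (standard-library Python only) times the TCP handshake to a couple of endpoints. The hostnames are placeholders, and a real measurement would average many samples rather than timing a single connection.

```python
# Rough latency probe: time the TCP handshake to a few hosts on port 443.
# Hostnames are placeholders; a single sample is only a ballpark figure.
import socket
import time

ENDPOINTS = ["example.com", "example.org"]  # swap in your own targets

for host in ENDPOINTS:
    start = time.perf_counter()
    with socket.create_connection((host, 443), timeout=5):
        elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{host}: TCP handshake took {elapsed_ms:.1f} ms")
```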

There are two types of cloud hosting, whose merits and drawbacks are considered below:

  1. Managed cloud. As the name suggests, managed hosting includes maintenance and technical support. Servers can be shared between several clients with modest technical requirements to reduce costs, with tech support always on hand.
  2. Unmanaged cloud. A third party provides hardware infrastructure like disks and bandwidth, while the client supervises software updates and security issues. It's basically the online equivalent of a brand-new server room filled with bare hardware, waiting for you to configure it.

The advantages of cloud server hosting

The first advantage of using the cloud, and perhaps the most significant, is being able to delegate technical responsibility to a qualified third party. Even by the standards of the IT sector, networks are laced with technical terminology and require regular maintenance to protect them against evolving security flaws. Outsourcing web hosting and database management liberates you from jargon-busting, allowing you to concentrate on core competencies such as developing new products and services. You effectively acquire a freelance IT department, operating discreetly behind the scenes.

Cloud computing is ideal for website hosting, where traffic may originate on any continent with audiences expecting near-instant response times. The majority of consumers will abandon a web page if it takes more than three seconds to load, so having high-speed servers with impressive connectivity around the world will ensure end user connection speeds are the only real barrier to rapid display times. Also, don’t forget that page loading speeds have become a key metric in search engine ranking results.

Price and performance

Cost is another benefit: scalable resources ensure that clients only pay for the services they need. If you prefer to manage your own LAMP stacks and install your own security patches, unmanaged hosting is surprisingly affordable. A single-website small business will typically require a modest amount of bandwidth, with resources hosted on a shared server for cost-effectiveness. Yet any spikes in traffic can be met instantly, without requiring the permanent allocation of additional hardware. And more resources can be made available as the company grows – including a dedicated server.

As anyone familiar with peer-to-peer file sharing will appreciate, transferring data from one platform to another can be frustratingly slow. Cloud computing often deploys multiple servers to minimize transfer times, with additional devices sharing the bandwidth and taking up any slack. This is particularly important for clients whose data is being accessed internationally.

Earlier on, we outlined the differences between managed and unmanaged hosting. Their merits also vary:

  1. Unmanaged hosting is similar to having your own server, since patches and installs are your own responsibility. For companies with qualified IT staff already on hand, that might seem more appealing than outsourcing it altogether. With full administrative access via cPanel and the freedom to choose your own OS and software stacks, an unmanaged account is ideal for those who want complete control over their network and software. This is also the cheaper option.
  2. By contrast, managed cloud hosting places you in the hands of experienced IT professionals. This is great if you don’t know your HTTP from your HTML. Technical support is on-hand at any time of day or night, though there probably won’t be many issues to concern you. Data centers are staffed and managed by networking experts who preemptively identify security threats, while ensuring every server and bandwidth connection is performing optimally.

Whether you prefer the control of an unmanaged package or the support provided by managed solutions, cloud servers represent fully isolated and incredibly secure environments. Our own data centers feature physical and biometric security alongside CCTV monitoring. Fully redundant networks ensure constant connectivity, while enterprise-grade hardware firewalls are designed to repel malware and DDoS attacks. We’ll even provide unlimited SSL certificates for ecommerce websites or confidential services.

The drawbacks of cloud server hosting

Although we're big fans of cloud hosting, we recognize it's not suitable for every company. These are some of the drawbacks to hosting your networks and servers in the cloud:

Firstly, some IT managers like the reassurance of physically owning and supervising their servers, in the same way traditionalists still favor installing software from a CD over cloud-hosted alternatives. Many computing professionals are comfortably familiar with the intricacies of bare metal servers, and prefer to have everything under one roof. If you already own a well-stocked server room, cloud hosting may not be cost effective or even necessary.

Entrusting key service delivery to a third party means your reputation is only as good as their performance. Some cloud hosting companies limit monthly bandwidth, applying substantial excess-use charges. Others struggle with downtime: service outages and reboots that take your websites or files offline, sometimes without warning. Even blue-chip cloud services like Dropbox and iCloud have historically suffered lengthy outages. Clients won't be impressed if you're forced to blame unavailable services on a partner organization, as their contract is ultimately with you.

Less scrupulous hosting partners might stealthily increase account costs every year, hoping their time-poor clients won’t want the upheaval and uncertainty of migrating systems to a competitor. Migrating to a better cloud hosting company can become logistically complex, though 100TB will do everything in our power to smooth out any transitional bumps. By contrast, a well-installed and modern RAID system should provide many years of dependable service without making a significant appearance on the end-of-year balance sheet.

Clouds on the horizon

Handing responsibility for your web pages and databases to an external company requires a leap of faith. You’re surrendering control over server upgrades and software patches, allowing a team of strangers to decide what hardware is best placed to service your business. Web hosting companies have large workforces, where speaking to a particular person can be far more challenging than calling Bob in your own IT division via the switchboard. Decisions about where your content is hosted will be made by people you’ve never met, and you’ll be informed (but not necessarily consulted) about hardware upgrades and policy changes.

Finally, cloud systems are only as dependable as the internet connection powering them. If you’re using cloud servers to host corporate documents, but your broadband provider is unreliable, it won’t be long before productivity and profitability begin to suffer. Conversely, a network server hosted downstairs can operate across a LAN, even if you’re unable to send and receive email or access the internet.

To cloud host or not?

In fairness, connection outages are likely to become increasingly anachronistic as broadband speeds increase and development of future technologies like Li-Fi continues. We are moving towards an increasingly cloud-based society, from Internet of Things-enabled smart devices to streaming media and social networks. A growing percentage of this content is entirely hosted online, and it’ll become unacceptable for ISPs to provide anything less than high-speed always-on broadband.

 

Trusting the experts

If you believe cloud hosting might represent a viable option for your business, don’t jump in with both feet. Speak to 100TB for honest and unbiased advice about whether the cloud offers a better alternative than a bare metal server or a self-installed RAID setup. Our friendly experts will also reassure you about the dependability of our premium networks, which come with a 99.999 per cent service level agreement. We even offer up to 1,024 terabytes of bandwidth, as part of our enormous global network capacity.


You’ve probably heard a great deal about the Internet of Things in recent years. Commonly abbreviated to IoT, this panoply of connected devices has been described as a revolution in the making. Some people predict it will be as transformative as the internet itself, liberating us from mundane tasks through automation and machine-to-machine communication.

Yet despite our appreciation of desktop and website security, IoT security issues have remained a perplexingly peripheral topic of discussion. Fortunately, that's about to change. The rapid rollout of web-enabled devices throughout our homes and workplaces means IoT security solutions are becoming big business. From public key infrastructure to semiconductor technology, an entire industry is developing around counteracting security risks and threats.

This article considers why IoT security issues are becoming such a headache. We examine the latest solutions and how they are likely to evolve in the future. Finally, we offer practical advice on how to ensure today's services are ready for tomorrow's challenges, with a series of steps any IT manager can easily implement.

The IoT Security Problem

An estimated five million web-enabled devices are introduced to the Internet of Things every day, and this already startling number is predicted to increase fivefold by 2020. The majority of devices are aimed at consumers rather than corporate audiences, and every single one is responsible for uploading information about us – from smart TVs to security systems. Much of this data is potentially harmful in the wrong hands; GPS usage data can pin us to specific locations in potentially unwelcome ways, while personal information might be misused by black hat marketing firms in their pursuit of new ways to target specific demographics.

As our awareness of internet threats expands, consumers are increasingly conducting online communications through encrypted peer-to-peer communication platforms like WhatsApp rather than publicly visible forums like Facebook. Yet IoT data is often transmitted insecurely across open Wi-Fi networks. A stranger sitting in a van outside your home or place of work could easily intercept data during transmission, potentially accessing information they have no right to view. Sensitive data about health or personal activities could then be used for identity theft, blackmail or countless other nefarious uses.

Such activities would be easy to prevent if all IoT-enabled devices had a global security standard, but they don't. Every manufacturer attributes different values to data protection, with proprietary software and varied connection methods. A modern smart office contains dozens of incompatible trust standards and device visibility levels, with reams of largely unrelated data being uploaded and processed in real time. Unsurprisingly, this has attracted the attention of criminals: Gartner recently predicted that by 2020, more than a quarter of identified enterprise attacks will involve the IoT.

There hasn't been any industry-wide attempt to impose security standards or global protocols across the Internet of Things, in stark contrast to the collaborative and co-operative approach to developing HTML5 security. Since IoT devices are usually fairly simple and intended to require minimal resources, Original Equipment Manufacturers (OEMs) are reluctant to include advanced features that could complicate setup or usage. Expensive protection is frequently unjustifiable on products or services with low price points, in industries where every cent counts. Bolstering security can also drain battery life on devices that aren't mains-powered, harming reliability and usability.

Some manufacturers have claimed their IoT devices don’t need robust data protection. It’s been suggested that when smart bathroom scales report to My Fitness Pal, nobody will be interested apart from the owner and their doctor. However, it’s easy to see how a teenager might be embarrassed or even bullied if their weekly weight data was hacked by a classmate and written up on the chalkboard in class. And that scenario pales into insignificance compared to someone’s weight being sent to potential employers during recruitment and selection, or exfiltrated by advertisers to target overweight individuals with junk food ads.

The IoT Security Solution

Individual IoT devices are often modest, carrying limited volumes of data. It’s often when they’re added into a smart office or connected home that the volume of potentially compromising information being transmitted becomes an issue. And while developers have historically been reluctant to incorporate adequate security measures, the tide is turning.

From securing existing networks to embedding security into IoT-enabled devices, below are some of the ways IoT security solutions are being developed…

  1.     Security credentials. This phrase has been turned into action by Verizon Enterprise Solutions, who have developed a way of overlaying existing security with additional protection. Credentials may involve digital certificates or 2FA tokens, producing an over-the-top layer of protection that can be applied to devices irrespective of their existing features. Since a great deal of IoT communication is between machines with no human input, traditional authentication methods like biometrics are impractical. Instead, devices are secured by repelling network threats detected via vulnerability assessments and URL blacklists. This enables connected devices to transmit information without impediment.
  2.     Embedded systems. Rather than retrospectively adding a security layer over IoT devices, it's clearly preferable to have security integrated during the manufacturing process. While that increases costs, it ensures everything from ICS to POS devices transmits data securely. At the same time, in-built analytics can detect threats from malware or hackers. Semiconductor technologies are being used to spearhead the authentication of user credentials, guarding against malevolent activities.
  3.    Protected networks. Before data is distributed across the internet, it can be agglomerated in a local network. Bitdefender has pioneered a security solution that effectively provides a firewall against network flaws such as weak passwords or unsecured communications. Outbound connections are checked for unsafe or unsecure sites, while granular control of individual devices can remotely install OS updates or resolve system issues.
  4.    PKI. Public key infrastructure eliminates the need for 2FA tokens or password policies, with SSL/TLS encryption ensuring that data is secure during transfer between a device and the cloud. It's easy to confirm software and settings haven't been tampered with, while message signatures ensure data can't be manipulated or copied in transit. Digital certificates can be used on cloud-hosted and on-premise devices alike, though simpler devices might lack the system resources to implement PKI. A short signing example follows this list.
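
As a rough illustration of that idea, the sketch below signs a device reading with a private key and verifies it with the matching public key, so any tampering in transit is detectable. It uses the third-party Python cryptography package; the device name and payload are invented, and a real deployment would distribute the public key inside a certificate issued by a trusted authority.

```python
# Minimal PKI-style signing sketch using the "cryptography" package.
# Key handling is deliberately simplified and not production-ready.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())     # stays on the device
public_key = private_key.public_key()                      # shared with the cloud service

reading = b'{"device": "thermostat-42", "temp_c": 21.5}'   # hypothetical payload
signature = private_key.sign(reading, ec.ECDSA(hashes.SHA256()))

# The receiving service verifies the signature; any change to the payload or
# a forged signature raises cryptography.exceptions.InvalidSignature.
public_key.verify(signature, reading, ec.ECDSA(hashes.SHA256()))
print("signature verified: payload arrived untampered")
```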

Increasingly, IoT protection involves a larger focus than merely protecting individual devices against hacking or spying.

These are among the industry-wide approaches being undertaken or invested in, to bolster safety among connected devices:

  1.     Machine learning. Today's critical mass of IoT devices is driving the development of an entirely new security analytics sub-sector, with companies aggregating and normalizing data to identify unusual activities. While big data solutions to IoT issues remain in the developmental stage, firms from Cisco to Kaspersky Lab are developing AI and machine learning models to identify IoT-specific attacks, such as botnets, which traditional network protection tools (aimed chiefly at browser-based attacks) may miss. A simplified sketch of this approach follows the list.
  2.     Pre-emptive troubleshooting. Firms like Trustwave enable IoT developers and providers to assess vulnerabilities in an existing IoT ecosystem, from devices and applications to connections. Through penetration testing and threat analysis, OEMs and software developers can resolve weaknesses in apps, APIs, products and protocols. A more dependable service for consumers ensues.
  3.     Security toolkits. Alternatively, why not get one company to handle every aspect of IoT security, from initial design to final beta testing? The open source libsecurity platform is IBM's one-stop shop for application developers, covering everything from APIs and libraries to encryption and secure storage via password/account management. These IoT security solutions are designed for the restricted runtime environments of today's applications, removing the burden of coding from developers.
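
As a toy illustration of the machine-learning approach mentioned above, the sketch below trains an Isolation Forest on simulated 'normal' per-device traffic and then flags a Mirai-style burst as anomalous. It assumes NumPy and scikit-learn are installed; the feature choices and numbers are invented purely for illustration.

```python
# Toy anomaly detection for IoT traffic: flag devices whose behaviour
# deviates sharply from the learned baseline. Numbers are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated per-device features: [requests per minute, average payload bytes]
normal_traffic = rng.normal(loc=[30, 512], scale=[5, 64], size=(500, 2))
botnet_burst = rng.normal(loc=[900, 64], scale=[50, 8], size=(5, 2))  # Mirai-like flood

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# predict() returns -1 for outliers and 1 for samples that match the baseline
print(model.predict(botnet_burst))         # expected: mostly -1 (anomalous)
print(model.predict(normal_traffic[:5]))   # expected: mostly 1 (normal)
```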

Data hosts also have a role to play in improving this industry’s historically poor security record, by ensuring that the volumes of aggregated data being delivered to their servers can’t be hacked or stolen. This can be achieved by finding and appointing a trusted hosting partner like 100TB. Our data centers are specifically designed to repel DDoS attacks and malware, with the option of a managed firewall. Offline security is taken care of with digital video surveillance and biometric access allied to proximity keycard control, plus round-the-clock security details and restricted access to server cabinets. From San Jose to Singapore, your data will be safe in our centers.

The Future of IoT Security

With an estimated 20 billion IoT-enabled devices expected by 2020, what does the future of IoT security look like? Many believe it will involve significantly more reporting and two-way communications. At present, devices passively upload data into the cloud. In future, analysts expect a degree of machine learning from either the devices or their host servers, identifying unusual data patterns and proactively responding to perceived threats. This will take place behind the scenes, since many IoT devices are designed to operate autonomously without any human input during their operational lifetime. Trusted Platform Modules are among the technologies being tipped to authenticate hardware and software without draining battery life, which remains a valid concern at present.

Another difference will involve standardization. The plethora of processors and operating systems currently being marketed will dwindle to a smaller number of industry-leading protocols, helping to simplify the process of identifying and resolving weaknesses. Regulatory standards for data protection will be agreed upon, possibly at governmental level, with Certificate Authorities ensuring standards are being met. Consumers will also become better educated about the practicalities of IoT security solutions, though it might take a WikiLeaks- or Ashley Madison-style data breach to focus the public's attention on database vulnerabilities.

Finally, developer and manufacturer arguments about cost cutting or simplicity will be rendered moot as economies of scale dovetail with greater industry regulation. Securing the Internet of Things won't be devolved to aftermarket routers any more – it'll become a central part of the design, manufacture and installation process. An industry standard for tackling common IoT security issues is almost inevitable, allowing devices to be sold with a seal or notice confirming their adherence to regulatory protocols. In short, IoT encryption will become as ubiquitous as HTTPS, and possibly even more valuable in our daily lives.

What Are the Next Steps?

If you want to ensure your connected home or office isn’t vulnerable to attack, these are some of the key steps to take:

  1.     Secure your router. Routers are the primary gateway for all local IoT content before it reaches cyberspace, yet many people persist in using unsecured connections or default passwords. Ramping up router protection should be your number one priority.
  2.    Keep devices local if possible. Devices often default to an internet connection, but it may be sufficient to keep them within a LAN. Hiding them behind a secure router reduces public exposure, so investigate whether you can prevent port forwarding.
  3.     Ensure devices that authenticate against other systems do so securely, with unique identification details or SSH encryption keys. This might not apply to simpler IoT devices, but it should cover CCTV systems and any satellite-based services.
  4.     Manually check for updates. Because there are no industry standards to adhere to, manufacturers and software developers don’t always promote updates. It’s presently incumbent on end users to check for software updates, security patches and so forth.
  5.     Employ TLS where possible. On-chip memories can be used to encrypt information, preventing so-called 'man in the middle' attacks on data in transit. TLS is a logical extension of the end-to-end encryption already used by platforms like WhatsApp; a minimal client-side example follows this list.
  6.     Scan for vulnerabilities. Imperva Incapsula's Mirai scanner checks the devices behind your network's IP address, probing their resistance to the Mirai DDoS botnet. A quick Google search will reveal similar free or open source scanning tools.
  7.      Change default passwords. This is perhaps the simplest and most obvious recommendation of all, yet it’s commonly ignored. Breaching one IoT device may open up your entire network, so why leave passwords set as ‘1234’ or ‘password’?
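
For the TLS step above, here is a minimal client-side sketch using only Python's standard library: it wraps an outbound socket in TLS and verifies the server's certificate against the system's trusted CAs. The hostname is a placeholder for wherever your device actually reports to.

```python
# Minimal TLS client: encrypts traffic in transit and verifies the server
# certificate, defeating simple man-in-the-middle interception.
import socket
import ssl

HOST = "example.com"   # placeholder telemetry endpoint
PORT = 443

context = ssl.create_default_context()   # verifies certificates against system CAs

with socket.create_connection((HOST, PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("negotiated", tls_sock.version())   # e.g. 'TLSv1.3'
        tls_sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(tls_sock.recv(200))
```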


Big data sounds like a big deal. But is it?

In the beginning, there was data. And it was good. So good, in fact, companies demanded more of it. Data could be used to identify past trends, shape current policies and predict future events. Eventually, the volumes of information being generated became so large and complex that a grander term was needed. It had to reflect the need for powerful mass-scale number-crunching algorithms, instead of traditional data processing software.

There are millions of apps available for the three leading mobile platforms of Android, iOS and Windows. But how do you go about creating an app for your own company?

Is the world big enough for both tech giants?

We've had tech long enough to know that as it improves, two things happen: speed goes up and size goes down. We may be a little way off William Gibson's cyberpunk dystopia yet, but until we get there it's a safe bet our most powerful computers will be in our pockets.

The big question then is what are we going to do with all this portable processing? The race to miniaturize our lives has begun and it looks like there are only two players.

Bitcoin is the world’s first internet-based currency. It’s worth knowing about, but is it worth accepting as a method of payment?