New Year’s Eve is notorious for being amateur celebrity night. Millions of people around the world descend upon restaurants, clubs, and bars, willing to pay highly inflated prices for one of the most popular nights of decadence and celebration all year. Money is bandied about left, right, and center, with premiums placed on nearly everything: New Year’s Eve menus, New Year’s Eve entry charges, New Year’s Eve cocktails, New Year’s Eve parties, even New Year’s Eve transportation.

It’s that last one – transportation – that some credit with the inception of what we now call the on-demand economy. After he and his friends parted with $800 for a ride on New Year’s Eve, the wheels started turning for one particular entrepreneur in San Francisco. Rather than pay exorbitant prices for black car services, Garrett Camp started to brainstorm ways in which riders could pay a lower price by sharing the ride with multiple people.

And thus, Uber was born.

Since Uber was founded in 2009, the number of on-demand companies has exploded. This new type of economy goes by many names – gig economy, sharing economy, on-demand economy, peer economy, platform economy – but the idea is the same: offer products and services delivered incredibly fast and at a low price, all with just a few taps on a mobile screen. Some of the more notable on-demand companies include TaskRabbit, “an online and mobile marketplace that matches freelance labor with local demand, allowing consumers to find immediate help with everyday tasks”; Wag, an on-demand dog walking and dog sitting marketplace; Airbnb, “an online marketplace and hospitality service, enabling people to lease or rent short-term lodging including vacation rentals, apartment rentals, homestays, hostel beds, or hotel rooms”; Instacart, an online grocery delivery service; and Postmates for on-demand local delivery.

It’s apparent that the on-demand economy can be hugely profitable for businesses, but what are the common elements that lead to success? And can the gig economy be applied to all businesses, or are there some products and/or services that could actually benefit from delayed gratification?

What does work mean to you?

The on-demand economy has already changed the face of the U.S. workforce. There were approximately 55 million freelancers in the U.S. in 2016, making up over a third of the workforce. In a country that has in recent history heralded the 9-to-5 schedule as the gold standard in workdays, the rise in freelancers signals a shift not only in workforce statistics but also in the collective mindset. Sara Horowitz writes for the Monthly Labor Review, U.S. Bureau of Labor Statistics, “Online work platforms, such as Uber, Airbnb, Etsy, and Elance, that connect workers directly to consumers and clients are completely reimagining the work relationship.”

If we were to use the chicken-and-egg argument, it could be hard to pinpoint which came first, on-demand companies or the rise in freelancers. Either way, the concurrent increase of each shows that not only are there more opportunities for freelancers in the modern economy, but there is also a surplus of workers who prefer this type of employment.

Global investment experiences recent boost

The number of investors allocating capital to on-demand companies took a dive in the first half of 2017, but with 87 deals in Q2 alone, the on-demand marketplace is experiencing a resurgence, according to data from CB Insights. This most recent boom was led by Chinese ride-hailing startup Didi Chuxing, which received $5.5 billion in investment. Other companies that led the funding rise include GO-JEK, Ele.me, and U.S. ride-hailing company and Uber rival Lyft.

Investments in new companies, however, have started to decline. CB Insights reports, “Looking at on-demand global deal share by quarter, there is a clear decline in seed and angel deals to the space, falling from 45% of all deals in 2016 to only 39% in H1’17. This is indicative of a maturing industry, in which later-stage companies are increasingly receiving more investor attention and dollars.” Startups entering the on-demand space will thus have a harder time securing investment, and will face significant competition from companies that have been around for some time. Newcomers may find it more difficult to gain traction in a market that is starting to become saturated, especially in certain industries.

Leading the charge

Certain types of on-demand businesses are more popular than others, forcing recent startups to become more innovative with their concepts. The Harvard Business Review notes that this popularity is reflected in consumer spending, the majority of which goes to online marketplaces, averaging $35.5 billion per year in the U.S. This is followed by transportation at $5.6 billion, food and grocery delivery at $4.6 billion, and the remainder of the on-demand economy bringing in $8.1 billion.

These numbers should be a clear signal to entrepreneurs who have the twinkle of an on-demand business in their eyes. Unless backed by a large amount of funding or a corporate partner, it will be very difficult to break into the parts of the on-demand economy where consumer spending is already concentrated. Instead, it is better to focus on new ideas that are still in their infancy, as these stand a better chance of becoming profitable. James Paine writes for Inc. that the B2B space has been an interesting place to watch, with a few startups coming to market to fill a variety of business needs, including Spiffy, an on-demand company that will wash your car while you’re at work, and ezCater, a catering marketplace with a network of 55,000 restaurants that can handle anything from two people to thousands. Whatever the value proposition, a unique idea is paramount to starting a business in the on-demand world.

How to build an effective on-demand business

Looking at the bumpy roads of some on-demand startups, a few things rise to the surface as necessary ingredients for a winning on-demand business. First, your business must offer as many options for the product or service as possible. The reason is that, as mentioned earlier, the on-demand market has started to become saturated, and if consumers cannot fulfill all their needs with your company, they will quickly search for another one where they can. Establish brand loyalty in the early stages of the customer journey by giving customers everything they require and then some – give them what they want before they know they want it, and leave no stone unturned.

Second, when building your software, prepare for future scalability by layering your back end. Separating concerns into independent layers allows quick enhancements and build-outs if your company starts to expand rapidly, as in the sketch below. Whenever opportunity comes knocking, have your digital infrastructure ready to answer.
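
As a minimal illustration of what “layering” can mean in practice, here is a sketch in Python with a presentation layer, a business-logic layer, and a data-access layer. All names and data are hypothetical:

```python
class ProductRepository:
    """Data-access layer: the only code that knows where products live."""
    def __init__(self):
        self._db = {1: {"name": "Dog walk, 30 min", "price_cents": 1500}}

    def get(self, product_id: int) -> dict:
        return self._db[product_id]


class ProductService:
    """Business-logic layer: pricing rules, validation, and so on."""
    def __init__(self, repo: ProductRepository):
        self._repo = repo

    def quote(self, product_id: int) -> dict:
        product = self._repo.get(product_id)
        return {"name": product["name"], "total_cents": product["price_cents"]}


class ApiHandler:
    """Presentation layer: translates requests into service calls."""
    def __init__(self, service: ProductService):
        self._service = service

    def handle_quote_request(self, product_id: int) -> dict:
        return {"status": 200, "body": self._service.quote(product_id)}


handler = ApiHandler(ProductService(ProductRepository()))
print(handler.handle_quote_request(1))
```

Because each layer only talks to the one directly below it, you can swap the in-memory store for a real database, or scale the API layer horizontally, without rewriting the rest.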

Next, research your competition and make sure you’re doing everything better. The on-demand world has had a few years to find its feet and is moving at pace now, so there is a good chance another company is offering something very similar to what you are. Find them, get a clear picture of what they’re doing, and then do all of it better.

When considering the ways in which customers will interact with your business, it is paramount to be precise and consistent with every step, and to make everything happen as fast as humanly possible. Again, in a saturated market, if customers don’t receive your product or service as quickly as they expect, they will go to a competitor. Don’t give them time to consider another company – give them exactly what they want, every single time, and do it faster than they thought possible. On-demand means now.

Taking this need for speed a step further, make your payment system fast and painless. There are lots of companies out there that offer excellent payment systems, from pocket-sized credit card readers to online wallets. Find the type of system that works best for you, then integrate it into your business so that this step happens in the blink of an eye – for instance, through a hosted payments API like the sketch below.
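
As one hedged sketch of what “fast and painless” can look like server-side (not an endorsement of any particular provider), here is a minimal integration with a hosted payments API using the stripe Python library; the API key and amount are placeholders:

```python
import stripe

stripe.api_key = "sk_test_your_key_here"  # placeholder test key

def create_payment(amount_cents: int, currency: str = "usd"):
    # Creates a PaymentIntent that the client app confirms in one tap.
    return stripe.PaymentIntent.create(
        amount=amount_cents,
        currency=currency,
        automatic_payment_methods={"enabled": True},
    )

intent = create_payment(2000)  # e.g., a $20.00 charge
```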

As you continue to grow your customer base, keep a constant eye on your analytics trends and implement changes as necessary. It will take a while to aggregate enough data to discern patterns, but you should do this as soon as you can in order to start refining the way your business works. Data is the name of the game in digitally focused businesses, so put it to use to continually improve your business model.

On-demand does not work for everything

One thing upon which online marketplace Etsy has capitalized is the recent demand for handmade goods. Etsy found that despite the fact that you can get almost anything in the blink of an eye, there are certain things that consumers are willing to wait for. The allure of having something made by a person rather than a machine is a recent trend, and although these goods are offered via an online marketplace and thus technically part of the on-demand world, many of them also require time to produce before shipment.

For instance, say you wanted to give someone a handmade quilt. Part of the value of this product is that it is custom made, so no one else will have the same quilt. The other part is that it is made by a person, not a machine, and that the craftsperson invested one of today’s most valuable assets into making this gift: time. Not all products and services will benefit from an on-demand model, in some cases simply because it is not feasible to produce them quickly enough. Some types of businesses clearly do not belong in this sphere; marketing them with a focus on the time and personal approach they involve can offset the inability to turn them around in a short timeframe.

The on-demand economy has taken the business world by storm, and can prove very lucrative when done right. However, there are right and wrong ways to go about it, and it’s certainly not for everyone. Consider what your business has to offer that differentiates it from other on-demand businesses and, if the time is right, make the on-demand economy work for you.

Technology and business goals collide at the intersection of page load time and conversion rate.

Marketing wants a fully featured page, with lots of images and third-party services to track user behavior; engineering wants a lean page that loads fast.

Page load time has a huge effect on conversion rate and customer happiness:

  • Half of all customers expect a webpage to load in under 2 seconds. If it doesn’t, they lose trust in and patience with the site, and click the back button to navigate to the next search result on Google.
  • User expectations have risen sharply. SOASTA compared conversion rates against page load time in 2014 and 2015. In 2015 they saw a peak conversion rate for sites loading in 2.4 seconds – 30% faster than the peak conversion load time in 2014.
  • Cedexis found that decreasing load time by just 1 second improves conversion rate by an average of 27.3%, where a conversion is defined as a purchase, download, or sign-up.

The necessity of keeping page load time low for a good customer experience means that tech teams need to exercise every option available to them for performance. Effective caching techniques can bring improvements to even the leanest of websites.

Why optimize caching?

Caching is the process of storing copies of resources so that clients can retrieve them from memory without putting strain on the origin server. Utilizing caches has three main benefits.

First of all, caching can make web pages load faster, especially if the user has visited before. If you utilize caching to distribute content globally, visitors will see a reduction in latency, which is the time it takes for a request to physically travel from their browser through the network to the server and back again. If your page is cached locally on the user’s browser, they don’t need to download every resource from your server, every time.

Secondly, caching reduces the amount of bandwidth needed. Instead of the server being responsible for delivering resources for every request, it only needs to deliver new content. Everything else can be returned from a cache along the network.

Finally, caching increases how robust your site is. A client can retrieve resources from a cache, even if your server is down or experiencing high traffic. A plan for preparing for volume spikes should include a caching strategy.

Levels of caching

Caching can happen at lots of different checkpoints along the network, right from the browser to the server itself. Every checkpoint has different benefits and challenges associated with it.

Let’s start with the caching options closest to the end user, then move up the chain to the server where the requested resource originates.

Browser caching – imagine scrolling through the search results on an online shop. You click on an image link to load the product page, decide it’s not quite right, and hit the back button. If your browser had to request the entire search page again, you’d have to wait for all the images to be downloaded to your browser for a second time. Fortunately, browsers use memory to store a version of sites they’ve already visited. Instead of going all the way to the server and back again, your browser just pulls up the version it’s already stored for you. It will also do this for constant pieces of your site, like your logo, for example.
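
Browsers decide how long to keep those stored copies based on HTTP caching headers. A minimal sketch, assuming a Flask app (any web framework exposes the same headers):

```python
from flask import Flask, Response

app = Flask(__name__)
LOGO_SVG = "<svg xmlns='http://www.w3.org/2000/svg'></svg>"  # placeholder asset

@app.route("/logo.svg")
def logo():
    resp = Response(LOGO_SVG, mimetype="image/svg+xml")
    # Allow browsers and shared caches to reuse this copy for up to a year.
    resp.headers["Cache-Control"] = "public, max-age=31536000"
    return resp

if __name__ == "__main__":
    app.run()
```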

Proxy cache (Web Server Accelerator) – caches can also be shared between many users. ISPs use caches to reduce bandwidth requirements by sharing resources. That way, if one user has already requested a static resource (like an image or file) the ISP doesn’t need to request it again from the server – it can provide it instantly.

Content Delivery Network (CDN) – remember how distance between user and server affects load time? CDNs are caches designed to reduce latency by distributing copies of cached files to local servers all over the world. When a user requests a resource, they are connected to their local CDN. Companies with international users should consider using a CDN to reduce latency.

Server-side caching / reverse proxy – if most of your content is static, you can cache it yourself, so customers won’t need to hit your application server to load static content. There are several tools that do this for you – Redis, Varnish, and Nginx are all popular options.
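
Tools like Varnish cache at the HTTP layer, but the same idea can live in application code. A minimal sketch of an in-memory cache with a time-to-live, in plain Python:

```python
import functools
import time

def ttl_cache(seconds: int):
    """Cache a function's results in memory for `seconds`."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # fresh copy: skip the expensive work
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def render_homepage() -> str:
    # Stand-in for an expensive template render or upstream fetch.
    return "<html>...</html>"
```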

Database caching – database servers are often separated from the rest of the server, which means that when your server receives a request from a user, it needs to request something extra from the database. If a frequent request always returns the same result, you can store it in a database cache. This prevents the database from crunching the same request over and over again, resulting in better performance even during busy periods. Search servers for ecommerce sites also return cacheable queries.
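
A minimal sketch of this pattern using the redis-py client; `fetch_top_products` is a hypothetical stand-in for an expensive database query:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)

def fetch_top_products() -> list:
    # Placeholder for an expensive SQL aggregation.
    return [{"id": 1, "name": "Silver ring"}, {"id": 2, "name": "Gold chain"}]

def top_products() -> list:
    cached = r.get("top20:products")
    if cached is not None:
        return json.loads(cached)  # cache hit: no database work at all
    result = fetch_top_products()
    r.setex("top20:products", 300, json.dumps(result))  # expire after 5 minutes
    return result
```

Any request arriving within the five-minute window is served straight from Redis without touching the database.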

When should you optimize caching?

“I’m not lazy, I’m just efficient” – ever heard that before? Well, think of your servers as the absolute laziest pieces of hardware you own. Never ask them to do something time-consuming twice if there’s a way for them to hold onto the results in a cache down the line.

For example, say you sell jewelry online and one of your top link destinations is a list featuring the 20 most popular items. If you didn’t utilize caching, every time a visitor clicked on that link, they’d need to send a new request through their ISP to your server, which would ask the database to calculate the top 20 items and then send back each of the corresponding images and prices. But realistically, you don’t need to compute this full page every time it’s requested. The top 20 items don’t change often enough to require real-time results. Instead, cache the page in a reverse proxy – located in the same country as the customer – and deliver it much faster.

When you start optimizing your caching strategy, a good place to begin is by identifying the most popular and largest resources first. You’ll get the biggest benefit from focusing on caching improvements for pages that are resource heavy and requested often. Looking at the waterfall diagrams on the Network tab of your browser’s developer tools can help identify resource-intensive modules on the page.

Time To First Byte (TTFB) is a good way to measure the responsiveness of your web server. Improving your caching strategy through reverse proxies, CDNs and compression will help customers experience shorter TTFB, and help your website feel snappier.
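
A rough way to spot-check TTFB from Python, using the requests library (`response.elapsed` measures the time from sending the request to receiving the response headers, a reasonable proxy for the first byte):

```python
import requests

def ttfb_seconds(url: str) -> float:
    # stream=True avoids downloading the body before we read the timing.
    resp = requests.get(url, stream=True, timeout=10)
    resp.close()
    return resp.elapsed.total_seconds()

print(f"TTFB: {ttfb_seconds('https://example.com'):.3f}s")
```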

However, don’t forget that most customers will have a poorer experience than that seen in testing. They might, for example, be located on the opposite side of the world using a mobile device or an older computer. By utilizing caching best practices, you’ll ensure customers have a great experience, no matter where they are.

When you need to refresh your data

Because we work in a world where everything is frequently updated, it’s important to understand the different methods we have of forcing a cache reset. There are a few ways we can force browsers and other caches to retrieve a fresh copy of data straight from the server.

  1. Set an expiration date – for when the site doesn’t need to stay perfectly up to date in real time, but does need to stay reasonably fresh. If you set an expiration date in your headers, the browser will dump the cache after that time. If the resource is requested again, a fresh copy will be retrieved.
  2. Use If-Modified-Since – the client will download the updated resource only if the server confirms it has been updated after the given date. If nothing has changed, the server can send back a short 304 response without a body, saving bandwidth and time (see the sketch after this list).
  3. Clear a specific module – you don’t need to refresh your entire blog cache just to display a new comment. Segmenting a page into different modules can help with cache refreshes.
  4. Fingerprinting – caches work by storing and retrieving a specific file when requested. If you change the name of the file, the cache won’t be able to find it, and the new copy will be downloaded. This is how fingerprinting keeps assets up to date: by attaching a unique series of characters to the filename, each changed asset is treated as a new file and requested from the server. Because the name changes whenever the content does, you can set an expiration date years in the future and never worry about a stale cache. Many build tools will automatically fingerprint assets for you, so you can keep the same base filename.
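
Here is the sketch promised above: a minimal Flask handler combining items 1 and 2 (an expiration date plus If-Modified-Since validation), along with a small fingerprinting helper for item 4. The route, timestamp, and filenames are illustrative:

```python
import hashlib
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

from flask import Flask, Response, request

app = Flask(__name__)
LAST_MODIFIED = datetime(2017, 9, 1, tzinfo=timezone.utc)  # illustrative timestamp

@app.route("/catalog")
def catalog():
    since = request.headers.get("If-Modified-Since")
    if since and parsedate_to_datetime(since) >= LAST_MODIFIED:
        return Response(status=304)  # item 2: nothing changed, send no body
    resp = Response("<html>catalog...</html>", mimetype="text/html")
    resp.headers["Last-Modified"] = format_datetime(LAST_MODIFIED, usegmt=True)
    resp.headers["Cache-Control"] = "max-age=600"  # item 1: dump after 10 minutes
    return resp

def fingerprint(filename: str, content: bytes) -> str:
    """Item 4: hash the content into the filename so a changed file
    becomes a brand-new cache entry (e.g. app.css -> app.5eb63bbb.css)."""
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"
```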

Don’t forget that a cache is not long term storage! If you decide to cache something for later, you might find that it’s been invalidated and you need to retrieve the resource again.

Making caching work for you

Determining the perfect solution for your site can be difficult. Rely too much on caching and you might find users seeing outdated content or running into memory trouble in their browsers. Ignore caching entirely and you’ll see page load times increase and user experience suffer.

By understanding your users’ needs, you can create a great experience from the beginning. If caching is important, it’s worth using a framework that provides out-of-the-box caching optimization. If accuracy matters more than speed, you can relax your caching strategy accordingly.

Caching strategy is a problem to be solved uniquely for each app. Determining where you can utilize caching to save bandwidth is an ongoing learning experience. Keep making incremental improvements and keep it light for your customers.

Does your technology stack help your business thrive? Can a better server infrastructure enable improved business decisions? As AI and big data continue to find their way into our businesses, the technology driving our strategies needs to keep up. Companies that embrace AI capabilities will have a huge advantage over firms unable to take advantage of them.

In this post we take a look at Facebook’s latest upgrade, Big Basin, to understand how some of the biggest tech giants are preparing for the onslaught of AI and big data. By preparing our server infrastructure to handle the need for more processing power and better storage, we can make sure our organizations stay in the lead.

Facebook’s New Server Upgrade

Earlier this year Facebook introduced its latest server hardware upgrade, Big Basin. This GPU-powered system replaces Big Sur, Facebook’s first system dedicated to machine learning and AI, introduced in 2015. Big Basin is designed to train neural models that are 30% larger, so Facebook can experiment faster and more efficiently. This is achieved through greater arithmetic throughput and a memory increase from 12GB to 16GB.

One major feature of Big Basin is the modularity of each component. This allows new technologies to be added without a complete redesign. Each component can be scaled independently depending on the needs of the business. This modularity also makes servicing and repairs more efficient, requiring less downtime overall.

Why does Facebook continue to invest in fast multi-GPU servers? Because it understands that the business depends on it. Without top-of-the-line hardware, Facebook can’t continue to lead the market in AI and Big Data. Let’s dive into each of these areas separately to see how they apply to your business.

Artificial Intelligence

Facebook’s Big Basin server was designed with AI in mind, which makes complete sense given the company’s AI-first business strategy. Translations, image searching, and recommendation engines all rely on AI technology to enhance the user experience. But you don’t have to be Facebook to see the benefit of using AI for business.

Companies are turning to AI to assist data scientists in identifying trends and recommending strategies for the company to focus on. Tools like Idiomatic can crunch through a huge number of unsorted customer conversations to pull out useful quantitative data; unlocking the knowledge that lives in unstructured conversations with customers can empower the Voice of the Customer team to make strong product decisions. PwC uses AI to model complex financial situations and identify future opportunities for each customer, looking at current customer behavior to determine how each segment feels about using insurance and investment products, and how that changes over time. Amazon Web Services uses machine learning to predict future capacity needs. In 2015, a study suggested that 25% of companies were already using AI, or would within the next year, to enable better business decision making.

But all of this relies on the technological ability to enable AI in your organization. What does that mean in practice? Essentially, top-of-the-line GPUs. For simulations that require the same data or algorithm to be run over and over again, GPUs far exceed the capabilities of CPU computing. While CPUs handle the majority of the code, sending any code that requires parallel computation to GPUs massively improves speed. AI requires computers to run simulations many, many times over, similar to password-cracking algorithms. Because the simulations are very similar, you can tweak each variable slightly and take advantage of the GPU’s shared memory to run many more simulations much faster. This is why Big Basin is a GPU-based hardware system – it’s designed to crunch enormous amounts of data to power Facebook’s AI systems. To get an idea of the difference data parallelism makes, consider the sketch below.
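
This is not Facebook’s code, just a CPU-side analogy: the same million-element parameter sweep run one item at a time versus as a single vectorized (data-parallel) NumPy operation. A GPU applies this same pattern across thousands of cores:

```python
import time

import numpy as np

params = np.linspace(0.0, 1.0, 1_000_000)  # a million slightly tweaked variables

def simulate(p):
    return p * p - 0.5 * p  # stand-in for one simulation step

t0 = time.perf_counter()
loop_results = [simulate(p) for p in params]  # one simulation at a time
t1 = time.perf_counter()
vec_results = params * params - 0.5 * params  # all simulations at once
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.3f}s, vectorized: {t2 - t1:.3f}s")
```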

Processing speed is especially important for deep learning and AI because of the need for iteration. As engineers see the results of experiments, they make adjustments and learn from mistakes. If the processing is too slow, a deep-learning approach can become disheartening: improvement is slow, a return on investment seems far away, and engineers don’t gain practical experience as quickly, all of which can drastically impact business strategy. Say you have a few hypotheses to test when building your neural network. If you aren’t using top-quality GPUs, you’ll have to wait a long time between testing each hypothesis, which can draw out development for weeks or months. It’s worth the investment in fast GPUs.

Big Data

Data can come from anywhere. Your Internet-of-Things toaster, social media feeds, purchasing trends, and attention-tracking advertisements are all generating data at a far higher rate than we’ve ever seen before. One widely cited estimate is that digital data created worldwide will grow from 4.4 zettabytes in 2013 to 44 zettabytes by 2020. A zettabyte of data is equal to about 250 billion DVDs, and this growth is coming from everywhere – a Ford GT, for example, generates about 100GB of data per hour.

The ability to make this influx of data work for you depends on your server infrastructure. Even if you’re collecting massive amounts of data, it’s not worth anything if you can’t analyze it, and quickly. This is where big data relies on technology. Facebook uses big data to drive its leading ad-tech platform, making advertisements hyper targeted.

As our data storage needs expand to handle Big Data, we need to keep two things in mind: accessibility and compatibility. Without a strong strategy, data can become fragmented across multiple servers, regions and formats. This makes it incredibly difficult to form any conclusive analysis.

Just as AI relies on high GPU computing power to run neural network processing, Big Data relies on quick storage and transport systems to retrieve and analyze data. Modular systems tend to scale well and allow devops teams to work on each component separately, leading to more flexibility. Because so much data has to be shuttled back and forth, investing in secure 10-gigabit connections will make sure your operation has the bandwidth and security to last. These needs map onto the classic “3 Vs” of big data: storage capacity (volume), speed of retrieval and processing (velocity), and the range of data formats involved (variety).

Big data and AI work together to superpower your strategy teams. But to function well, your data needs to be accessible and your servers need to be flexible enough to handle AI improvements as fast as they come. Which, it turns out, is pretty quickly.

What This Means For Your Business

Poor server infrastructure should never be the reason your team doesn’t jump on opportunities that come their way. If Facebook’s AI team wasn’t able to “move fast and break things” because their tools couldn’t keep up with neural network processing demands, they wouldn’t be where they are today.

As AI and Big Data continue to dominate the business landscape, server infrastructure needs to stay flexible and scalable. We have to adopt new technology quickly and be able to scale existing components to keep up with ever-increasing data collection requirements. Clayton Christensen recently tweeted, “Any strategy is (at best) only temporarily correct.” When strategy changes on a dime, your technology stack had better keep up.

Facebook open-sources all of its hardware design specifications, so head on over and check them out if you’re looking for ways to stay flexible and ready for the next big business advantage.
