Facilis EBook: All Network Storage is Not the Same

The Choice Between Ubiquity and Purpose
by Jim McKenna

An Introduction to Facilis

Founded in 2003, Massachusetts-based Facilis Technology, Inc. designs and builds affordable, high-capacity, turnkey shared storage and data management solutions for collaborative media production networks in the film, television, education and content creation markets. Production-proven Facilis solutions are fast and intuitive, making it easier for creative professionals to collaborate and work efficiently. Flexible, scalable and compatible with industry-standard creative workflows, Facilis’ shared storage products blend seamlessly into any boutique, mid-size or large facility and have been installed in more than 3,000 locations worldwide.


The computer workstation for non-linear editing of video was a revolutionary technological development that had a seismic impact on the wider broadcast industry, and most notably for this discussion, on the post-production craft. Video editors were given career-changing production tools, and nothing would ever be the same. Only a few years after the wide adoption of computer-based video editorial, these facilities started looking for a way to centralize the major consumable resource in the workflow: hard drive storage.

This was the dawn of facility networking, but these networks were too slow to handle large video data transfers, so each workstation came with its own dedicated disk-based storage system. After working like this for some time, it became apparent that each workstation was essentially an island. Connecting these islands was becoming necessary for larger projects that were duplicating gigabytes of data across multiple workstations.

Shared storage was born. In its infancy, these systems used heavy, cumbersome LVD SCSI, with thick spools of cable and failure-prone multi-pin connectors. Along came Fibre Channel, which attached more elegantly through thin fiber-optic cable with a longer reach. Each solution connected to a central hub and fanned out to multiple workstations. Software on the workstations determined who could read from and write to a given partition, and all the data was now on every desktop. Stubbornly, collaborative editorial had arrived.

The evolution continued through various speeds of Fibre Channel, and then to Ethernet as facility networks got faster. Facilities now had a choice of technologies, but it was hard for a facility owner to find an unbiased opinion on the best direction for their infrastructure. A schism developed between traditional Fibre Channel-based systems and Ethernet-based NAS (Network Attached Storage) systems. There were plenty of opinions, and discussions would sometimes degenerate into arguments. The simple fact is that when NAS got faster and cheaper, Fibre Channel started to wane in popularity.

The decline of Fibre Channel wasn’t due to a fundamental flaw or a suitability problem; it was IT convergence. Enterprise networks, including the systems that drive them and the people who assemble and administer them, are big business. The content creation market is the primary user of Fibre Channel for client connectivity, and it is by comparison very small. By the laws of supply and demand, big-market technology progresses more quickly, and its costs fall faster, than small-market technology. The last decade has seen considerable progress in NAS systems, and these systems are now available from many vendors targeting the content creation space.

However, all that glitters is not gold – many NAS systems are true to their original design as common connectivity for business machines, and carry critical flaws and shortcomings into the content creation space. These may not become apparent until after a significant investment has been made. This eBook looks at the issues surrounding network attached storage systems in an approachable, easy-to-understand way. It debunks the technobabble and identifies the key questions that broadcasters and facilities need to ask before committing to their next-generation storage architecture.

Introduction – all we want is an easy life

You want a good storage system that can be seen and used by all the workstations in your facility. The reason is obvious: all your incoming file-based content lives in one place. Attach your laptop or workstation to the plug in the wall, and there it is.

It is not, nor has it ever been, that easy. Before exploring the “gotchas” involved in deploying such a system, let’s first break down what exactly it is that you want.

Good storage system – Protected, reliable, high performance

Can be seen and used – Multi-platform compatible, application interoperable

By all the workstations – Scalable, accessible


What makes a good storage system?

Any RAID system should offer levels of protection against drive failure and maintain your data properly long-term. This is one of the reasons to get off the desktop hard drives you’re currently using. Reliability must be judged across every aspect of the system. A server with a few hours of downtime a couple of times a year is better than a system with routine failures to access data or play a video stream due to basic incompatibilities or complex management requirements. Chasing daily issues that keep people from being productive is hard on the facility.
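The drive-failure protection that RAID provides comes down to simple math: parity data lets the array rebuild a failed drive from the survivors. As a minimal illustration of the idea (not any particular vendor’s implementation), here is RAID-5-style XOR parity sketched in Python:

```python
from functools import reduce

# Three "drives", each holding one stripe of data
drives = [b"\x01\x02", b"\x10\x20", b"\xaa\x55"]

# RAID-5-style parity: byte-wise XOR across all data drives
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

# Simulate losing drive 1, then rebuild it from the survivors plus parity
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(drives[0], drives[2], parity))
assert rebuilt == drives[1]  # the lost stripe is recovered exactly
```

Because any single stripe is the XOR of all the others plus parity, the array survives one drive failure; the cost is one drive’s worth of capacity given over to parity.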

High-performance storage is easy when it’s attached to a single workstation. When it’s attached to 10 or 20, that performance doesn’t hold up. With network storage (NAS), administrators trying to increase performance often end up just moving the logjam downstream. For example, start by adding more drives. That will distribute the load better, and data will be read and written with lower latency. At this point you may find that your server or “NAS head” is not up to the task. Let’s assume it’s been configured well and is chugging right along. Now the bottleneck has moved to the backbone.

Building a bigger “backbone” from the server to your network switch may mean going from a 1Gb to a 10Gb or even 40Gb uplink. Assuming the NAS head is compatible and data is flowing properly through the 40Gb connection, you hit the next bottleneck: the switch. Or maybe it’s the protocol, the buffers, the frame size, the TCP/IP version, the network file system? When you start considering the process of getting data to the desktop and how you could make it faster, the tweak points are endless. Many a NAS system has been “repurposed” to backup duty soon after being built because the latencies and inefficiencies just couldn’t be tweaked away.
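A quick back-of-the-envelope calculation shows why the bottleneck migrates to the uplink so fast. Assuming a ProRes 422 HQ 1080p stream runs at roughly 27 MB/sec (a typical published figure; your codecs will vary), even a small room of editors outgrows a 1Gb backbone:

```python
def required_uplink_gbps(clients, stream_mb_per_s):
    # total MB/s -> bits/s -> gigabits/s (decimal units, as link speeds are quoted)
    return clients * stream_mb_per_s * 8 * 1e6 / 1e9

# Ten editors each pulling one ~27 MB/s ProRes HQ stream
demand = required_uplink_gbps(10, 27)
print(f"{demand:.2f} Gb/s")  # 2.16 Gb/s – past a 1Gb uplink before any scrubbing or copies
```

And that is steady-state playback only; scrubbing, renders and background copies multiply the demand.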

This is your first chance to reconsider a generic NAS and find a turnkey network that eliminates all the tweaking and gives you back the hours you’d otherwise spend. Performance is systemic; the right answer is a turnkey, fully qualified environment. But since you’ve gotten this far, let’s continue.


Serving the clients

Let’s again assume that you have overcome the first few bottlenecks, have the system performing well, and are producing great benchmark results on a few client workstations. Now, let’s get into the workflow. Rarely are all systems in a facility the same, or even similar in construction. Various workstation types and operating system (OS) flavors and revisions will be necessary to pull off the jobs that come through the door. Your graphic artists like their OS X 10.10 Mac Pros, but the new color grading system is an HP Z840 on Windows 10. The archive server is CentOS Linux, and the new iMacs are all running 10.12. Don’t forget about the Avid department on Windows 7.

That’s not cross-platform, that’s multi-platform, and multi-revision. Herein lies the next problem: client-side support of your network file system. Your NAS head, what is it? Probably not a Mac; maybe Windows, or likely Linux. Don’t know Linux? You had better learn it, because it’s the best way to build an effective NAS based on standard protocols. Let’s talk about these protocols. NFS (Network File System) is common in enterprise Windows and Linux organizations. Apple has some support, but many integrators prefer a separate application to access NFS on OS X; the forums are full of reports of failures to mount or browse, and of long hours spent making things work properly. Windows has support for NFS, but ACLs and POSIX permissions must be configured properly for the user account to get anywhere.

AFP (Apple Filing Protocol) is slowly being phased out, and SMB (Server Message Block) is gaining more support from Apple in the latest versions. This is good, because SMB had been downright broken a few times in prior versions. Pay attention to that for the artists who like their 10.10 Macs; you may be forced into an awkward conversation about upgrading (or downgrading). Linux has support through Samba, but once again be aware of the specific dependencies on kernel version and distribution.

If you decide to build the network that will be everything to everyone, and share multiple protocols to multiple workstations, good luck with that. Authentication, ownership, and permissions range from “a little different” to “wildly variable” across these protocols. This may eventually put a hard stop on your workflow and leave you scratching your head instead of creating content. Of course, a career systems administrator (sysadmin) will read this and have solutions for everything, or dismiss these concerns summarily based on their experience assembling heterogeneous networks. Hire that person, and he or she will be responsible for your company’s ability to generate revenue. They’ll want to be paid as such, and once embedded they will have substantial power to make that demand, when you consider the alternative.

We haven’t even started talking about application compatibility, which is a piece of magic that some of the most well-trained sysadmins battle with daily. Application compatibility cannot be measured by benchmarks. It is not a matter of upload/download speeds. The content creation application was likely designed generations ago to work best on a local hard drive, and the artists you hire to work in your facility will want the application to work as designed. This means the target, or the source of the files the application uses must behave more like a local hard drive than a network share. Herein lies the magic.

What the heck is TCP and why should I care?

In the file system protocols listed previously, TCP is the method by which data is transmitted across an IP (Ethernet) link. TCP was developed in the 1970s to provide a reliable data stream. The design focused on reliability, because network wiring can be very unreliable and data integrity was paramount. So TCP provides a way to send data packets that can be retried repeatedly in the case of a network failure, continuing the stream without loss once the network connection returns. Networking speed 30 years ago was abysmal; you’d be lucky to get a 10Mb/sec corporate link, about 1/100th the speed of the standard 1Gb/sec link we all enjoy on our laptops today. So performance was not a priority.
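TCP’s core promise, a reliable in-order byte stream, is easy to see with Python’s standard socket library. In this minimal localhost sketch (an illustration of the protocol’s behavior, not a media workflow), whatever bytes the client writes arrive at the far end intact and in order, with any lost packets retransmitted behind the scenes:

```python
import socket
import threading

def echo_once(server):
    # Accept one connection and echo its data back; TCP delivers the
    # bytes reliably and in order, retransmitting as needed underneath.
    conn, _ = server.accept()
    with conn:
        data = conn.recv(4096)
        conn.sendall(data)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall(b"frame-0001")
reply = client.recv(4096)
client.close()
print(reply)  # b'frame-0001'
```

That guarantee is exactly what made TCP the universal choice, and exactly what costs time when an application wants hard-drive-like responsiveness.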

Presently, we enjoy relatively screaming-fast network link speeds, yet TCP is still the dominant method for transmitting data to your desktop. This is because backward compatibility is critically important: the world’s network is already built, and can’t be overhauled even if doing so would mean a quantum leap forward. So any advancement in the TCP suite of rules for IP networks will be slight and incremental, and a TCP network drive will never behave like a local hard drive. The technology is available, but you must go outside TCP to get it. This isn’t to say that every application will refuse to work on a network drive; some have been adapted well, and others simply do the best they can. But you’ll rarely get optimal application functionality from a network drive compared to a local hard drive or highly optimized non-TCP network storage. Optimized, non-TCP network storage can act like a local hard drive, both in performance and in appearance to the client operating system.
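The gap between a network drive and a local drive is mostly latency, not bandwidth, and a little arithmetic makes the point. Under the simplifying assumption that an application issues synchronous 64KB reads and waits for each reply before issuing the next, even a fast link delivers a fraction of its rated speed:

```python
def effective_mb_per_s(link_mb_per_s, rtt_ms, request_kb):
    # Each synchronous read pays one network round trip plus time on the wire
    request_mb = request_kb / 1024
    wire_s = request_mb / link_mb_per_s
    total_s = rtt_ms / 1000 + wire_s
    return request_mb / total_s

# ~10Gb link (call it 1000 MB/s usable), 0.5 ms round trip, 64 KB reads
print(round(effective_mb_per_s(1000, 0.5, 64)))  # 111 MB/s – nowhere near 1000
```

This is the latency tax that benchmarks with deep queues hide, and that an editing application issuing small, serial reads pays on every request.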

A word on application interoperability – the ability to use every feature the application offers. In some cases, even if a TCP network drive is compatible, it won’t be interoperable. Apple’s Final Cut Pro X (FCPX) uses libraries to hold project metadata and the files that have been imported into the project. FCPX can’t save libraries to drives using the most common network file systems. So you can work in FCPX on a local hard drive and keep some of your source files on a network drive, but that’s neither efficient nor interoperable. Avid Media Composer requires that storage appear as a local drive or as Avid storage; generic network storage is not allowed in collaborative (shared) environments. Optimized, non-TCP network storage can hold FCPX libraries and emulate Avid storage for full interoperability.


Does size really matter?

You don’t know how big your company will get, and you don’t know what project may walk through the door next. When would you like to prepare to take on the bigger jobs and scale up the workgroup – when the job is about to start and the client count is about to double? You could just throw money at the problem and over-engineer from the start, but that’s a slippery slope, especially if the big job takes a while to come around.

Start with a logical investment, but be sure that whatever system you choose can handle your best-case growth pattern. Successful scaling of capacity and client count relies upon the architecture.

Your encoding and backup automation is set to use a certain path. Ensure that when you add capacity, that capacity can be used immediately on current jobs without adding a second path. Adding capacity should always increase bandwidth, and you should be able to use old and new drives in the same volume to increase speed.

Avoid systems that limit your client count, or that have you buying additional seat licenses or server hardware to manage more user connections. After the big job is over, extra licenses and hardware will only serve as a constant reminder of your higher lease payment.

Look for systems that offer multiple methods of connectivity. Simple 1Gb Ethernet may suffice for the offline work you do today, but that big job looms somewhere in the future. Some systems offer client connection speeds up to 32Gb/sec, good for over 3000MB/sec of data transfer to the desktop. Choose a flexible storage network that can accommodate more speed than you think you’ll ever need.
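Link speeds quoted in Gb/sec are easier to weigh in MB/sec, the unit most transfer tools report. A rough rule of thumb (assuming roughly 10% protocol and encoding overhead, which in practice varies by connection technology) is:

```python
def usable_mb_per_s(link_gb_per_s, overhead=0.10):
    # line rate in Gb/s -> MB/s, minus an assumed ~10% protocol overhead
    return link_gb_per_s * 1000 / 8 * (1 - overhead)

for link in (1, 10, 32, 40):
    print(f"{link:>2} Gb/s ~ {usable_mb_per_s(link):.0f} MB/s usable")
```

By this estimate a 1Gb link tops out around 112 MB/sec, barely one uncompressed HD stream, while a 32Gb connection sits comfortably above 3000 MB/sec.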


It's all about accessibility

Accessibility, as shown before, can be a challenge depending on the workstation OS and revision you choose. Many systems force everyone onto the same OS revision for compatibility. This usually means they’re using features of the OS to provide the connection, so they’re not completely agnostic. If the shared storage system relies upon the OS, it will be the OS that determines whether the system works reliably. Find a system that is compatible with multiple OS revisions; this ensures that a change in the OS is unlikely to cause a problem, and you won’t have fire-drill OS upgrades across the facility simply to update your server software.

Access to the system can also be unreliable if additional components are required to manage user accounts and permissions. Many NAS systems require directory services (Active Directory, LDAP) which must be configured, often on a separate server, for the NAS to provide access. This is another level of management that you may choose to deploy for your own purposes, but should not be forced to deploy just to provide access. The resiliency of the network connection is also important when considering reliable access to the storage. If there is only one possible path to the storage, or if the system requires multiple active paths to function, it only takes a wiggle of a wire to disconnect your clients. Look for solutions that have connectivity failover, to provide another path to the storage in case of an issue with the primary, and systems that only require a single connection to the storage to lower complexity.

In conclusion… the really important stuff

When you decide on a turnkey shared storage network, demand that the architecture be supportable for at least 5 years, and get evidence of that with systems in the field today (if the vendor has no systems that are over 5 years old, that’s a problem). Planned obsolescence and quickly-moving product architecture will leave you stranded, frozen in time.

Don’t let the hardware fool you, all network storage is not the same. The important aspects of your requirement have been broken down and analyzed, and I hope we agree on the result.

      • Performance is systemic, and bottlenecks will migrate downstream
        • Talk to someone who qualifies and guarantees the performance of the entire system
      • Building a functional network on dated protocols is challenging
        • Find a system that uses an optimized, non-TCP sharing method
      • Managing a network is tough, and if you farm out the work, be prepared to pay
        • Buying a turnkey system provides a direct line to the experts on your system
      • Over-engineering is costly, but the ability to grow and adapt is critical
        • Look for a product that satisfies your growth pattern, and buy the size you need now

Facilis’ multi-platform shared file system and complementary workflow solutions power thousands of post-production and content creation workflows in facilities worldwide – allowing professionals to store, share, edit and archive a diverse array of media at an accelerated pace.

The Facilis Shared File System is included with every TerraBlock system; it easily accommodates the most complex multi-platform network environments and enables collaboration across diverse image formats and applications, including Avid Media Composer, Adobe Premiere Pro/Creative Cloud, FCP/FCPX, DaVinci Resolve and others. From 4K film color grading to HD craft editorial, TerraBlock network shared storage provides the performance and collaboration tools to get the job done.

Supported by a strong internal team comprising industry veterans with extensive backgrounds in product development, broadcast facility engineering and creative video editing, Facilis continually innovates its solutions to enable superior integration with the latest creative applications and technology, as well as support for cutting-edge connectivity methods. The company’s world-class technical support and meticulous in-house integration of each turnkey solution have earned Facilis a loyal customer base and esteemed reputation in the industry. Its consultative selling process ensures that all Facilis solutions will fit seamlessly into facility environments of all shapes and sizes and provide unmatched stability for years of uninterrupted production.

Facilis Technology (USA)
108 Forest Avenue, Hudson, MA 01749
T.: +1.978.562.7022
F.: +1.978.562.9022
E-Mail: info@facilis.com

Facilis Technology is represented by several distinguished resellers across the US, EMEA and APAC who are available to assist you with on-site setup, installation and ongoing support.