Business In Archviz
Business in Arch Viz. Vol. 8 - IT Infrastructure & Networking (Part 1 of 2)
Welcome to the eighth installment of our new RebusFarm Business in Arch Viz series. Over the next year we will be featuring two articles every month. Each new article will discuss the business side of working in and running businesses in the visualization industry. We will feature articles from some of the top studios in the world and have in-depth answers to questions that every studio and artist in the industry should know.
The goal of this series is to provide a long-term resource for not only new artists and business owners entering the industry, but also long-time industry veterans. The topics will range from contracts and IT infrastructure to hiring and business strategy.
Studios participating in this series include: 2G Studio, ArX Solutions, Beauty and the Bit, Cityscape, DBOX, Designstor, Digit Group, Inc., Factory Fifteen, Kilograph, Luxigon, MIR, Neoscape, Public Square, Steelblue, The Neighbourhood, Transparent House, Urbansimulations and many more. Collectively these companies generate hundreds of millions of dollars a year in revenue, and have decades of experience running some of the most successful businesses in the industry.
We hope you enjoy the series!
We would also like to sincerely thank RebusFarm for supporting this series. Through their support they are helping to better our industry and contributing significantly to future generations of visualization businesses in our field. If you are looking for one of the best render farm companies in the world, we highly recommend checking them out here.
Image Courtesy Beauty & the Bit
How different is it to manage a network for visualization professionals and visualization departments vs other service based companies (architects, designers, photographers etc.)?
ArX Solutions: We handle a really large number of files and a lot of data. Regular networks can collapse pretty easily if they are not planned properly. Not all IT professionals understand the amount of information we can generate in a year.
Beauty and the Bit: In some strange way it is much more specialized. You will probably need much more horsepower for some tasks, and the half-life of the equipment can be shorter than in other service-based companies like the ones you mention.
Factory Fifteen: It’s more akin to a small VFX house if you do a lot of animation, so we would be considered super users. Our needs far surpass those of any other architectural service company in terms of equipment, server speed, data, storage and computing.
Kilograph: The biggest difference between managing a network for visualization professionals vs other service-based companies is keeping the data pipeline and I/O from the server as consistent as possible, because one slight hiccup, from dropped packets to a small interruption in the network, can waste whole animations.
Neoscape: All service-based companies face technology challenges, although I would say that visualization companies have some particularly difficult ones. The most prominent element of our data room is the render farm: with between 80 and 100 1U rack-mounted blade workstations, we face power, heat and noise loads larger than most.
There needs to be special consideration made for all three. The power units (UPSs) maintain redundancy of power (we have a 15-20 minute “rundown” window in case of failure), and the AHU (air handling unit) is an 8-ton unit which we are currently evaluating for an upgrade. When re-building our office recently (we moved from the 5th floor of our building to the 7th) we had to carefully design the wall construction so there is minimal sound pollution into the main office space. Our largest investment is not our farm but our storage pool: we have clustered storage from Isilon (EMC). This storage pool is “scale up / scale out”: as we add nodes to make the pool bigger, we get more dynamic connections to the cluster. We currently have about 600 TB of storage, dynamically served over 24 10Gb fiber connections to the company. We also have a 10Gb point-to-point connection between our offices in NYC and Boston. This allows the NYC artists to access the storage pool and render farm at near-local connection speeds. For our more remote users we establish hardware VPN tunnels for secure traffic. No matter how fast the connections, render power, or machine speed, we always seem to saturate it, so we constantly strive to refine pipelines so that artists can work as fast as possible.
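For a rough sense of how a rundown window like the one Neoscape mentions can be sized, here is a minimal back-of-the-envelope sketch in Python; the battery capacity, inverter efficiency and load figures are illustrative assumptions, not Neoscape's actual numbers.

    # Back-of-the-envelope UPS rundown estimate: usable stored energy divided by load.
    # All figures below are hypothetical examples, not any studio's real numbers.

    def ups_runtime_minutes(battery_watt_hours: float,
                            inverter_efficiency: float,
                            load_watts: float) -> float:
        """Estimate how long a UPS can carry a given load, in minutes."""
        usable_wh = battery_watt_hours * inverter_efficiency
        return usable_wh / load_watts * 60

    if __name__ == "__main__":
        # e.g. ~2.5 kWh of battery at ~90% inverter efficiency carrying an ~8 kW load
        print(f"{ups_runtime_minutes(2500, 0.9, 8000):.1f} minutes of rundown time")

With those example numbers the estimate lands around 17 minutes, which is the order of magnitude a 15-20 minute rundown window implies.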
The Digit Group: Visualization needs high-performance machines in order to produce high-quality deliverables, and that hardware will be a major driver of how profitable you are. Other network infrastructures do not use the same amount of power. I am keeping my eye on cloud-based solutions, but have not been overly impressed with the offerings at the moment.
PixelFlakes: The big difference in our industry is the storage requirements and all the challenges that come with working with large files. These files need to be served quickly to all our individual PCs, they need to be backed up locally and offsite without slowing down our network, and they also need to be stored locally. We often transfer around 150GB per day over the network, so ensuring there are no bottlenecks across the system is crucial for efficiency. The same goes for the architectural industry: large plans and files need to be distributed and printed quickly and efficiently.
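To put a figure like 150GB per day in perspective, here is a minimal sketch of the wire time involved at different link speeds; the 70% utilisation factor is an assumption for illustration, and real throughput also depends on protocol overhead, disk speed and concurrent traffic.

    # Rough wire time for moving a day's worth of files over gigabit vs 10-gigabit links.
    # The 70% utilisation factor is an assumption; real throughput is below line rate.

    def transfer_hours(gigabytes: float, link_gbps: float, utilisation: float = 0.7) -> float:
        """Hours needed to push the given volume across a link of the given speed."""
        bits = gigabytes * 8 * 1e9              # decimal gigabytes -> bits
        seconds = bits / (link_gbps * 1e9 * utilisation)
        return seconds / 3600

    for link in (1, 10):
        print(f"150 GB over {link} GbE: ~{transfer_hours(150, link):.2f} hours of wire time")

Under these assumptions the same daily volume takes roughly half an hour of cumulative wire time on gigabit and only a few minutes on 10 gigabit, which is why the studios below keep coming back to bandwidth and bottlenecks.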
Public Square: Not much different other than we use space much faster.
Pure: More machines, faster machines, a faster network for heavy data transfer, and a redundant backup system.
2G Studio: I think it's not about how differently we manage a network for archviz vs other services; it's about the size of the company itself. For example, when we were still 4-5 artists, we only used a low-end storage system and low-end switches. Since everything was low end, the bandwidth was not big enough, but we were still able to manage it. When we became bigger, around 10 artists, we needed more bandwidth, because more bandwidth meant more speed accessing the server, faster rendering times, etc. The most important thing is how you can limit your artists from accessing a specific folder, and how you can make sure they cannot take any files from the server. We can use Windows servers, but that can cost a lot of money, or you can use Linux-based storage, which is free.
Ricardo Rocha: The large file sizes are a very big differentiator between our networks and normal office or home networks. The larger packages also need to be available to all users and servers from the same location on the network, so a NAS is a must in these configurations.
Steelblue: Depending on the size of the company the primary difference is the data load and throughput requirements.
Transparent House: At TH I would say it is very close to what it is in other companies: we have the same routers and the same cables. What we have done a little differently is design everything with double the capacity in mind. All cards are gigabit speed, and the drives in the NAS are all at least 7200 rpm. Because the size of our projects is almost doubling in gigabytes every year, we have to stay ahead of collapse. This summer we are going to double our file server from 50TB to 100TB, and with new technologies coming, projects are only going to get larger and larger.
Urban Simulations: Quite different. Most archviz companies work with still renderings, but we are working with a huge in-house render farm dealing with tons of TIFs and rendering elements throughout the network, feeding real-time post-production of sequences that requires an ultra-fast broadband network. Our goal is to have different networks and servers to split 3D data from frames and elements, avoiding traffic that collapses our network.
Image Courtesy Factory Fifteen
How hard was it to learn all of the nuances of a network built around rendering?
ArX Solutions: We have been in this industry for so long that this grew organically. We currently have specific professionals at ArX dedicated to this matter who know exactly how to handle our IT needs.
Beauty and the Bit: It is a never-ending process. Each year there are new releases, novelties and upgrades. I guess the best way to act is not to be overwhelmed by that. Sometimes analysis is paralysis, so you have to use what works for you.
Designstor: It has taken us years to learn all the nuances of a visualization network. Experimenting and gathering data takes a lot of time, and it has to happen alongside production needs.
Factory Fifteen: Very hard. You can do it alone to a certain extent, but there comes a time when you need support. The trick is that support is rarely full time, so you can’t justify the cost of a dedicated I.T. person. We are lucky: we have a 3D artist with networking experience who freelances remotely for us. He does everything from Manchester.
Neoscape: Well, it’s been 23 years, and I still don’t know all the nuances. I would say that using an established 3rd-party render management system takes much of the guesswork and many of the problems out of the system. There were times when, if a job was submitted to the farm, it could crash the network as all the render slaves competed for the same bandwidth to open the file and all the links it contained. For the most part those challenges have gone away, but there will always be challenges in managing how machines are traded between distributed rendering and frame-based rendering, or getting all users to participate in rendering when they leave for the evening. These are among the nuances, of which there are too many to mention.
Kilograph: The nuances of building and maintaining a network for rendering were extremely hard to learn, due to the niche knowledge needed to implement specific programs that require specific protocols, license servers needing the correct MAC and IP addresses from multiple machines, and trying to make sure all the ports were open, secure and not tied to any other programs or protocols.
MIR: We have never had any issues with the network. Our systems are very basic, just a couple computers and a network.
The Digit Group: I am not sure if it was hard so much as frustrating at times. It felt like one step forward, then two steps back at times. We have good solutions now, but will always look to improve, as cycle time directly affects our profits.
PixelFlakes: Marvin (Founding Partner) has a strong set of IT skills (not bad at table tennis either), therefore this came quite naturally and wasn’t that hard to do. We enlisted the help of another IT friend where necessary and quickly learned the ins and outs. We also received advice from the companies we purchased our hardware from, like Synology and Dell. These guys have dedicated teams to help you find the best solution for your business.
Public Square: You are basically just adding some extra computers to the network which are accessible by everyone at the office. In my opinion it’s pretty basic.
Pure: Not too hard. We always tried to build something stable and for the future.
2G Studio: For someone who does not understand networks, this is a very complicated problem. I was lucky that I learned about networking in the past. Although I am not an expert, it helped me to know how to set up a network in a basic way.
Ricardo Rocha: When you are inclined and like to learn the specifics, not much, but I can imagine it can be overwhelming.
Steelblue: Coming up as a renderer myself, I would say that I learned as I went. By the time I was required to manage a network for our company, I had the past experience of seeing how operations at my previous employment could be applied to our company. This removed a lot of the potential errors that could have occurred.
Transparent House: Not really hard; the hardest part was just the investment in it. We found that network speed is the most important thing for us, and then quickly found the solution.
Urban Simulations: It was a matter of trying to identify where and when the network was collapsing, and from there trying to organize several networks and servers to store and deliver different sorts of data and traffic in the right way. That was a trial-and-error process over the years.
Image Courtesy Factory Fifteen
How do you go about deciding which workstations to purchase? Is it the biggest machine within budget or do you have specific requirements that you meet?
ArX Solutions: This has been changing in our company. Based on our experience, we prefer to have multiple computers rather than a few very expensive ones. Giving our 3D artists supercomputers always killed common sense and smart optimization, because it was always easier to use brute force. But in the end, a few months later, they were once again complaining that the computers were not fast enough. When we started several years ago, computer power was a problem, and counting polygons and keeping that count low was a priority. Nowadays I feel that artists are not paying enough attention to optimization.
Beauty and the Bit: Generally it is not a big deal, just machines that make your technical everyday life easier. Obviously horsepower is important, but using your brain is much more important. We have a geeky component, but we are far from being obsessed with the latest available processor or workstation.
Designstor: It’s important for us that all machines are of relatively equal capability. Therefore, we determine a budget and work towards prioritized specifications, maximizing what we can get within that budget.
Factory Fifteen: We have a standard checklist and update it every year to the latest CPU and GPU. We hardly think about it now. We ring up the PC specialist and order two more per year on average.
Kilograph: I feel that as time has gone on, and as we have learned the power necessary for the artists to perform at their highest capabilities, a balance had to be achieved between building the most efficient PC and the budget. Since we get our machines custom built, there is flexibility that allows us to achieve that balance between efficiency and budget rather than buying pre-built computers.
MIR: We buy the most powerful workstations we can find (within reason) in order to render as much as possible locally and not weigh down the network unnecessarily. We always stack our workstations with as much RAM as possible.
Neoscape: We usually go with “good” but not “great”: the final 5% bump in speed can often cost 50% more, so we don’t get the absolute top of the line; we go a notch or two down. We do tend to get as much memory as we can. These days it makes sense to get a well-reviewed graphics card, although we may be beyond the days of needing “workstation” class cards. The top-of-the-line gaming cards have gotten so good that we tend to use those. We try to keep generations of machines the same so we can have generic disk images that can roll out to the entire organization.
The Digit Group: We have recently moved from buying the biggest machine within a budget restriction to specifying requirements that best meet the needs of our clients. It took many years to mature to this model.
PixelFlakes: We haven’t yet made the step to explore GPU rendering, so save for a few computers that our media guys use, we like to bulk up the horsepower of a PC as much as we can. Most of our PCs have either strong i7 CPUs or dual Xeons. We even have two PCs with very high-end dual Xeons which we use for any 3D-intensive projects.
Public Square: We typically get a workstation with a fast graphics card that can handle heavy models, and keep the processors fast but without loads of cores. The render farm does the rest. Plus, soon everything will be GPU based anyway.
Pure: We used to always buy the second best, but it turned out that buying the best (even if it's just 10% more speed) makes sense. The biggest cost is artists who wait, so it makes sense to pay for machines instead of for waiting artists.
2G Studio: Lately I always choose Xeon; the i7 is just too slow. When I choose a Xeon processor, I usually count the price per core. The latest build is usually the most expensive. I pick several series and count the price per core; once I see a series that is too expensive and not worth the speed (usually the latest build), I will not choose that series. Budget... this is quite tricky, and honestly the most irrelevant thing when we are discussing investment. I just don’t understand why people keep saying budget, budget, budget. If the workstation is too expensive for you right now, then save some money and wait a couple of months until you can get it. Yes, you need to sacrifice your personal needs for a couple of months, but investment is investment. Investment is always for the long term, never the short term. A lot of people don't want to sacrifice their comfort zone until they find out what happens when they stay in their comfort zone too long.
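The price-per-core comparison 2G Studio describes can be expressed as a tiny script; this is only a sketch of the arithmetic, and the CPU models and prices below are placeholders, not real market data.

    # Price-per-core comparison; models and prices are placeholders, not market data.

    cpus = [
        {"model": "CPU A (previous generation)", "cores": 16, "price": 1200},
        {"model": "CPU B (mid-range)",           "cores": 24, "price": 2100},
        {"model": "CPU C (latest release)",      "cores": 32, "price": 4800},
    ]

    # Sort from cheapest to most expensive per core; the newest part often loses here.
    for cpu in sorted(cpus, key=lambda c: c["price"] / c["cores"]):
        print(f'{cpu["model"]}: {cpu["price"] / cpu["cores"]:.0f} per core')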
Ricardo Rocha: First we decide the budget we can work with; then, considering the direction of the company, we decide on the hardware, such as whether we’ll invest in CPU only, combine CPU and GPU, or go full GPU, etc.
Steelblue: Our minimum requirements are pretty high and we max out our budget just hitting those internal needs.
Transparent House: This is a very tricky situation. Usually a machine lasts about 4-5 years, and trying to predict what you will need in that time frame is very hard, but by analyzing the speed of innovation in the PC industry we can be sure that small upgrades will be needed within that window. There is no way you spend 20k on a PC and have it last 10 years; you will be out of upgrade options in 4-5 years, because different connectors, chips and sockets will not be compatible with newer processors, and old processors will not be compatible with the newest GPUs. It is better to pay 10k twice than 20k once.
Urban Simulations: Our specs are really defined by the graphics card, network connection, quick hard disk and CPU, like everybody else. But we do prefer to build our own workstations with a local dealer to achieve the right specs.
How involved are the visualization artists in your management of the network and the machines?
ArX Solutions: Not much. We have in-house employees who are in charge of this specific area.
Beauty and the Bit: Artists in our company are artists, more worried about the images than about render cores.
Designstor: Little to no involvement. We have a dedicated IT specialist and a Technical Director that manage our pipeline and network.
Factory Fifteen: Very little aside from asset management and resourcing.
Kilograph: At Kilograph, it is a team of senior visualization artists, the IT director, and management that coordinates the effort to manage our network, where communication and execution are paramount to minimize downtime and maximize efficiency. Management of a sophisticated network environment cannot be done by only one person, and as we grow as a company, we learn how to manage our network as a team.
MIR: A group of our artists run the network and machines. We use an outside partner for setting up the hardware.
Neoscape: As little as possible. We have a 3 or 4 month cycle of disk image roll-outs: we build a “test” machine that has all the software, plugins, upgrades and versions, and then invite a select group to test everything. We make sure that we have licensing for all the software and that everything will play nicely on the network and render farm. These “builds” are made for Macs and PCs. Once the builds have been tested on both platforms by the select stakeholders (motion graphics, film, 3D, graphic design, project management), we “image” all the machines. Everyone is given warning to make sure any personal data is backed up or copied off to a safe place on the network, and over the span of a few days all machines (including render units) are overwritten with a full clean install, from the OS to all the software. Then the users can log into their machines, copy any data back and keep working. We typically keep a few generations of software so that projects don’t have to switch mid-stream.
The Digit Group: Our visualization artists are part of the business process team as well as the technology team when it concerns capital expenses for IT.
PixelFlakes: Not at all. We want our artists to be focused on the work and not have to worry about infrastructure. If there are suggestions for improvements from technical artists, we of course welcome the feedback. For now, we maintain the approach that all network and technical issues be handled by our talented I.T. guys, who basically turn it off and on again.
Public Square: Depends on the artist really. If they are comfortable with IT, they can problem solve some things on their own which is great.
Pure: Not really. They concentrate on their work. But of course we listen to their preferences in terms of monitors etc.
2G Studio: It is the same as for any render engine developer: they need their customers to give them feedback. The more they can get, the more they know what the problems with their render engine are, and what kind of features their customers want. So I can say my team has the biggest influence when I make a decision. I want to hear what kind of problems they are facing on the current network. What is their wish list? How can we make it work?
Ricardo Rocha: Very. Although server-class hardware is stable by design, we need the capacity to deploy changes, fixes or upgrades ourselves rather than waiting for an IT company to understand our requirements and do the work, saving time and money.
Steelblue: Being a small studio, our artists are heavily involved. When updates are required we send links with patches to be run to all artists. This saves the time of an admin running all updates, or the need for more expensive software management tools.
Transparent House: Not much, but most of them definitely understand all aspects and can find out what causes any issues and explain it to IT personnel.
Urban Simulations: Keeping files small and saving space is a must in our industry to get renders out quickly, so every artist in the office knows the importance of keeping the network clean of useless traffic.
How do you plan the network to accommodate a visualization pipeline? What sort of hardware is required and how do you specify the individual components? (i.e. bandwidth, network topology, storage etc.)
ArX Solutions: We use Windows Storage Servers, which are perfectly suited to handling large amounts of files and data. Our network is a double gigabit network. Storage is a constant problem; we double our storage capacity every two years.
Beauty and the Bit: For us a good internet connection is obviously vital, and also a nice rack of NAS units, since they run out of capacity in a short time due to our production. We are really prolific, so we need a good storage system. A backup system is also important.
Designstor: This is a question worthy of several pages of answers. The short answer is, we plan for maximum efficiency rather than brute force power. Our strategy is to have fewer powerful pieces of hardware with an intelligent management system (rather than many less powerful pieces). Storage is something that can become a runaway train, so we try to analyze storage needs and place priorities in terms of speed, volume and redundancy. Backup is probably the greatest challenge and it’s very important to build a system that can handle not just storage volume but backup volume as well.
Factory Fifteen: 3D work and rendering are massively demanding of IT resources; it is a matter of buying the best you can afford and balancing the kit to avoid bottlenecks. The price/performance ratio is far higher for ‘gaming’ rigs than for a specialised 3D rendering system from the likes of HP or Dell. In the long run the hardware in Xeon-based workstations will be more reliable, but even so it is hard to justify the additional expense unless you have a large budget and/or no onsite IT support.
Workstation spec is fairly straightforward: buy a custom high-end gaming/3D rig, specify as much RAM as you can afford (64GB ideally) and the CPU with the highest GHz per £ (normally a hex-core i7).
Investing in reliable server hardware (Xeon and RAID 10) offers disk redundancy and performance benefits. We use bonded gigabit connections to a pair of Cisco switches to ensure there are no network bottlenecks on the server.
FF invested in laying CAT6 cable in the new studio running to a dedicated patch panel; this increases reliability and reduces confusion.
We load balance 2 fibre internet connections on a dedicated hardware firewall.
And don’t forget good chairs! A vital piece of hardware.
Kilograph: Planning a network to accommodate a visualization pipeline is hard to gauge; it is a moving target. Growth and future projections play a huge role in the planning process. We do not want to overbuy, as that greatly affects the firm financially, and we do not want to underbuy, as that greatly affects the firm’s efficiency. The moving target that is the visualization pipeline can only be addressed by future-proofing the network as much as possible through smart utilization of every component, from server network cards, RAID cards and switches (1G or 10G) to establishing a network hierarchy that is least detrimental to the visualization artists’ workflow.
MIR: I don't think anyone in our office (including the people in charge) can answer this question. We are not that tech-oriented. We just buy what our service provider suggests and then it works.
Neoscape: Our network grew like most, I think: organically. We didn’t have a 5 or 10 year plan; we would add some hardware and then support it as best we could. Our switching infrastructure is based on two Cisco 4500 series switch chassis, with 10Gb line cards, 48-port 1Gb cards and a handful of other specialized hardware. This switching gear provides more than enough backplane speed. This, along with our EMC Isilon storage cluster, allows us to serve up maximum speed to the workstations on the floor in both Boston and NYC (through our 10Gb point-to-point connection). We also utilize VLANs to organize the network into logical sections, i.e. the render farm is on its own subnet, and the Boston PCs, the Macs, the NYC office, the servers, monitoring equipment, guest wifi and company wifi are all on separate subnets, both to separate traffic and to keep everything straight.
PixelFlakes: A pipeline is a good way of thinking about it. A bottleneck anywhere along the pipeline can bring everything to a screeching halt, so we must think of performance from start to finish. We’ve found the two most important aspects are network bandwidth and storage I/O performance. In terms of network performance this means elements as simple as well-installed Cat 6 cabling (a lot of poor network performance comes from bad cables) to complex elements such as the correct RAID configurations and SSD caching. Our servers also plug into the network via 10Gb ethernet connections. Storage is also a big challenge, as we always need more and the amount we need is ever increasing. We used to run all our storage in a RAID 10 configuration for speed, but over time this became prohibitively expensive. As technology has improved we have moved to a hybrid RAID system where our workstations talk to a bank of SSDs that sits in front of a RAID 6 partition. This gives huge cost savings at similar performance.
Public Square: Loads of storage.
2G Studio: Hm... I guess I always start with the storage size, then the storage speed, then the switch, the bandwidth, then the network topology.
Ricardo Rocha: It’s a very straightforward plan in a small-user-count office: get the fastest, most reliable connection you can afford, manage user access and permissions, and share on network drives. This way we centralize all information and can manage backups and security in one place.
Steelblue: Our network grew organically; as the company grew, so did the network components. We started with a couple of workstations networked to a local drive share. That grew into a small business server with direct-attached storage, followed by multiple virtual servers networked with a 10Gb backplane and a Dell Compellent SSD hybrid data storage system.
Transparent House: Simply put, every project has to be done on the server, so the farm and every artist and PM can have access to it right away. The one exception is a local NAS for the very first initial cut of filmed material, because transferring 3-6TB to the server quickly is just impossible. But in most cases, as mentioned earlier, every machine has a 1Gbit card to have comfortable speed to the server.
Urban Simulations: Splitting data and traffic is a key to success: two servers (one for 3D data, one for frames and elements) and two networks to avoid collapsing.
Do you design all of the IT and system infrastructure yourself or bring in outside consultants?
ArX Solutions: We use consultants, but in the end our experience is really what matters.
Beauty and the Bit: Outside consultants. We prefer to leave that to professionals and reserve ourselves to the creative part of the process.
Designstor: Mostly in-house.
Factory Fifteen: We had a base and then built everything from scratch with outside consultants when we could afford it. We had so many issues when we were doing it ourselves. It really pays off in the long run if you invest in proper systems.
Kilograph: The IT and system infrastructure is a joint effort between me, an outside consultant, and a computer builder with years of experience in the industry.
Neoscape: We only use outside consultants when we are implementing a new technology or component, and we include it with the purchase of the equipment. For the most part, the core components and support systems of our business continuity are designed in house. Outside consultants tend to underestimate our needs, and it takes so long to bring them up to speed on our requirements that we just do most everything ourselves. We also occasionally bring in consultants to pitch new technologies, workflows, or systems. We have adopted more web-based workflow technologies: sales tracking, timesheets, project tracking, mail, conferencing. There are many reasonable SaaS systems that have been a big help in running our business. These tend to be asset light, as we don’t want too many assets living outside of our control; they are support systems that don’t take up much bandwidth.
The Digit Group: Outside consultants along with an internal process team.
PixelFlakes: We do it all ourselves.
Public Square: A mix of both. We do most of the renderfarm and workstations ourselves, and also work with an outside IT company to build some of the storage systems and such.
Pure: Outside. We used to do it in-house, but we are too small to have a permanent IT person and too big to do it ourselves.
2G Studio: First I design on my own. Most of the time I try to find the information first so at least I know the basic logic, then I bring in outside consultants. I am lucky that my IT consultant is also the one who supplies all my computer needs, and he is very passionate about IT. So now he is in charge of all my IT and system infrastructure. I can say I am in the right hands.
Ricardo Rocha: For simplicity’s sake we do it all ourselves.
Steelblue: Up until last year, internally. Last year, we brought on a consultant.
Transparent House: I would say we design the main idea and then discuss it with an IT company.
Urban Simulations: As with lawyers, it's quite difficult to bring in somebody not used to working with archviz and get them to the point of understanding the deep and different behaviour of network performance in our industry. We prefer to ask our external IT guys about specific features of the hardware and have them advise us on the right choice, rather than on a whole view of the infrastructure.
When building out a render farm, what are the specific challenges you run into and how do you address them?
ArX Solutions: There's always the choice between a big computer with many cores (like Boxx) or the option to have more computers with less processing power. Lately we have been testing second generation servers with exceptional results.
Beauty and the Bit: We haven't got a huge render farm for the moment compared to other companies, but the most challenging part was the noise level. In the end the best solution was to soundproof a whole room of the office.
Designstor: The single greatest challenge is harnessing all the available power in an efficient way. We use Deadline and have customized it completely to make our farm as efficient as possible. Other challenges include physical environment (cooling, power, security), component issues and software versioning.
Factory Fifteen: The main issue is bandwidth, so you can actually save your render files from 30 machines at the same time at 4K with 20 passes in each render. You simply need super-fast cables and a dedicated server in full RAID to manage that. The second biggest issue is software, updates and maintenance, solved by taking the time to create software rollouts for the basics and smart shortcuts to specific spaces on the server accessible from all machines quickly, and also by using some form of remote desktop so you can access each machine from a central computer or even from another city.
Kilograph: The challenges that I run into every time I spec out a render farm are threefold: life span, efficiency, and future integration. Life span of a render farm is important because making one wrong move could cripple a firm financially for months while not delivering the efficiency we thought we paid for. Efficiency is dictated by the types of components being spec’d out, because as we all know time is money, and with more time spent perfecting that render or animation and less time twiddling your thumbs waiting for a render to finish, it’s a win-win for everyone. Lastly, integration for the future is the most important, because these things need to last for years, and with new technology seemingly coming out daily, forecasting new tech and specifying components to integrate correctly later on will add years to the life of a render farm.
MIR: We have never built a render farm; we just use all the downgraded workstations for this purpose. The main challenge is that the newer machines have more RAM, hence the render farm cannot deal with newer and heavier scenes. Since we have a very basic setup, it can be a pain in the ass to update things like Windows or other software, since it is not automated.
Neoscape: Power, heat and noise. A render farm is nothing more than a bunch of dumb workstations in a convenient form factor so that they may be put in a closet or room to do their work and (hopefully) be left alone. The problem arises when calculating the amount of power needed for the farm under full load, the amount of heat it will create when working, and the noise it will create with many machines close to each other. Rendering is a specialized process that is highly tuned to use 100% of the CPU 100% of the time. This is great for getting lots of rendering done, but terrible for heat management, the power it consumes and the wear and tear on the computer as it works under this great load. These computers tend to have components fail, and more heat will shorten their lives even further.
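As a rough illustration of the power and heat budgeting Neoscape describes, here is a minimal sketch; the node count and per-node draw are assumed values chosen only to show the arithmetic (watts to BTU/h to tons of cooling), not any studio's measured figures.

    # Rough power and heat budget for a render farm under full load.
    # Node count and per-node draw are assumptions, not Neoscape's figures.

    WATTS_TO_BTU_PER_HOUR = 3.412   # 1 watt of IT load ends up as ~3.412 BTU/h of heat

    def farm_load(nodes: int, watts_per_node: float):
        total_watts = nodes * watts_per_node
        btu_per_hour = total_watts * WATTS_TO_BTU_PER_HOUR
        tons_of_cooling = btu_per_hour / 12000  # 1 ton of cooling = 12,000 BTU/h
        return total_watts, btu_per_hour, tons_of_cooling

    watts, btu, tons = farm_load(nodes=90, watts_per_node=300)
    print(f"{watts / 1000:.1f} kW draw = {btu:,.0f} BTU/h = {tons:.1f} tons of cooling")

With these example numbers, 90 nodes at 300 W each works out to roughly 27 kW of draw and close to 8 tons of cooling, the same order of magnitude as the AHU mentioned earlier in the article.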
PixelFlakes: The main challenges we found were storage space and efficient cooling. We have 12 render nodes spread out over 8U of rack space. Each of these nodes has dual Xeons and onboard graphics. We knew that for the foreseeable future we would need a CPU-loaded render farm, so we invested our budget in business-grade CPUs as opposed to GPUs. These nodes sound like a small plane taking off when turned on, so we needed to ensure they were sound-insulated and cooled, two things which don’t really go hand in hand. Moving into our new office this summer will allow us to have an independent server room, which should help us solve both problems. In the interim we use APC acoustic server racks, which are great. We also have a portable aircon unit pointed straight at them, which we turn on in the summer!
Public Square: The next challenge will probably be deciding when we should start investing in a GPU-based farm.
2G Studio: The challenge is always picking the right processor and RAM, and then the bandwidth to transfer the file to each render node and workstation. As I mentioned previously, I always count the price per core. As for the RAM, we need to understand RAM management. On a workstation we usually install lots of things, with lots of startup programs, and we also need to know that opening an empty 3ds Max file alone can consume 3 GB of RAM. Opening a big scene, especially one with tons of foliage and lots of detailed objects, can consume more than 32 GB, not to mention the rendering. I love to do research, and I did lots of tests back then. Another example: in V-Ray, you can choose to see only a small preview of your image while rendering, which also saves a lot of RAM if you have very limited RAM. So on most of our workstations we use 128 GB of RAM, and for our render farm we use 64 GB of RAM, because a render node doesn't need to open any 3ds Max files.
Ricardo Rocha: Upgradeability, power management and noise. We solved this by not using a local render farm.
Steelblue: The first issue is power load. Our office had enough open breakers on the electrical panel, so this was overcome by bringing in an electrical consultant to run more dedicated circuits in the office. However, we are now utilizing as much juice as possible in this particular office without more significant investment.
Urban Simulations: Low energy consumption, low noise and easy cooling. We moved three years ago from a well-known international company to a local one to address these issues and get them solved: we cut the energy by 75%, with no noise and easy cooling with the new CPUs, and even more so without any cases, with the boards just attached to glass plates.
When building a storage array, what are the specific challenges you run into and how do you address them?
ArX Solutions: Redundancy is the key. You can't think about having a serious company without redundancy and hot spare drives.
Beauty and the Bit: As I said, we feel really prolific nowadays… we are like a geeky Beatles in their Rubber Soul era ☺ So the main concern is to have a good storage system which does not run out quickly. Each year we upgrade our system we think it will last a long time, but happily it doesn't (which means the business is growing).
Designstor: The greatest challenge when building storage arrays is the required performance. Most arrays are designed for database-related performance (many small bits of data) but our needs are for speed in dealing with very large sized data. Maximizing performance is expensive, and implementing often exposes bottlenecks in the rest of a system (cabling, switches, etc.).
Factory Fifteen: Keep everything simple. Microsoft works well with Microsoft, so build an MS server and administer all network shares and security updates through a Windows domain environment. It will be more reliable and secure than running a NAS, although admittedly more challenging to set up.
Ensure you have speed, reliability, and enough network bandwidth.
Kilograph: Storage arrays are tricky because right off the bat the question runs through my head: what RAID do I need to use for this specific array? RAID 6 and you get more storage but less I/O; RAID 10 and you get less storage but more I/O. To address this, I coordinate with the senior artists and management to determine the use of the array, and through that coordination determine the necessary RAID level. The hard drive choice and the SAN/NAS housing those drives are the most challenging, because the hard drives dictate the I/O as much as the components the SAN/NAS comes with, and just as with building a render farm, life span, efficiency and integration are paramount: the files being stored will need to stay there, safe and sound, for years. Lastly, integrating and creating protocols for artists is the final challenge, because as we all know, when artists see more storage, they jump for joy. File management and protocols are key to keeping a clean and efficient array, and the only way that can be achieved is through in-depth and thorough coordination and communication with the artists.
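To make the RAID 6 vs RAID 10 trade-off Kilograph describes concrete, here is a minimal capacity sketch; the drive count and size are example values, and it ignores hot spares and filesystem overhead.

    # Usable capacity of RAID 6 vs RAID 10 for the same drives, ignoring hot spares
    # and filesystem overhead. Drive count and size are example values.

    def raid6_usable_tb(drives: int, size_tb: float) -> float:
        return (drives - 2) * size_tb   # two drives' worth of capacity goes to parity

    def raid10_usable_tb(drives: int, size_tb: float) -> float:
        return drives / 2 * size_tb     # every drive is mirrored

    drives, size_tb = 12, 8
    print(f"RAID 6 : {raid6_usable_tb(drives, size_tb):.0f} TB usable, better capacity")
    print(f"RAID 10: {raid10_usable_tb(drives, size_tb):.0f} TB usable, better write I/O")

With twelve 8TB drives the split is roughly 80 TB usable under RAID 6 versus 48 TB under RAID 10, which is the storage-versus-I/O choice described above and the reason PixelFlakes, below, front a RAID 6 pool with SSDs.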
Neoscape: Our Isilon is an expensive solution, but it is proof that you get what you pay for. The speed and scalability, along with redundancy at an OS level, make for a very powerful system.
PixelFlakes: We must achieve a balance between performance and redundancy. This isn’t unique, except that our performance requirements are much greater than a typical office network’s. In our early days, it made sense from both perspectives to build an array configured in RAID 10 with high-specification enterprise drives. This results in fantastic performance; however, as you grow, over time it becomes prohibitively expensive. The problem is that we have become used to (and require for a smooth workflow) this high performance. Eventually we decided to trial new NAS boxes with most storage running in RAID 6, fronted by a small bank of high-performance SSDs running in RAID 10. This sort of SSD caching has really made a big difference. We also only use high-end consumer NAS drives. They are reliable and perform exceptionally well at a much lower cost than traditional enterprise drives.
Public Square: How much storage can we get before it gets too expensive, and let's make sure there is enough redundancy. We have had a server fail in the past - not fun.
2G Studio: The challenge is deciding how big your storage needs to be. That is going to dictate your type of storage hardware (Synology or QNAP). The hardware itself is very expensive, so we decided to use two storage systems: one using high-end storage hardware for the main files, and a second using lower-end storage hardware for secondary files such as video tutorials, entertainment (music and videos), reference images and videos, modeling projects, etc.
Ricardo Rocha: Transfer speed, redundancy, capacity and backup, user security, and user management and configuration. We use enterprise-grade hardware and software like RAID storage, internet redundancy, a firewall, and gateway services. We also rely on a domain network setup for user management and configuration. Another thing is virtualization to test and deploy.
Steelblue: The biggest issue is maintaining enough space. Dumping to tape and a secondary array has allowed us to keep files accessible while maintaining enough active project space: 40+ TB for active projects.
Transparent House: Storage is the most important piece. If you run out of space, your studio slows down every day; without enough space, projects get more expensive while you spend time figuring out how to solve it. Network speed slows down because the whole system depends on the server: you can outsource rendering, you can outsource modeling, but there is no way to outsource storage. The internet is not yet at the point where you can handle this remotely.
Urban Simulations: It was difficult to decide between quick response and large storage for the same budget. In the end we decided to weight them: 75% toward speed and 25% toward a larger storage system.
About this article
We talk to top studios about their IT infrastructure and networking.