Many articles have been posted about the advantages of PLM in the Cloud. Topics of continuity, synchronization and availability from any platform instantly come to mind.
Not intending to burst your bubble, but guess what: "No Virginia, there is no cloud." The term 'computing in the cloud' is merely a metaphor for someone else's hardware, employing economy of scale, executing queries and stored procedures against your metadata. In short, you push data from your web client to a secure site running servers. Apple, PC, Linux, doesn't matter. It's just data. Data that you live and breathe by.
That said (sorry if you're feeling let down somehow), we tend to take this 'cloud' for granted, but it is NOT necessarily an all-powerful, all-seeing, all-knowing mega-processor in the sky. It's normal server architecture that someone provides access to for a fee.
Since we know now that this cloud may have some limitations, let's see how that might apply to Autodesk's Fusion Lifecycle.
Anyone who's read any of my blog postings, or other social media blips, might know that I'm a data junkie. Data is king. Data runs the world. For every simple task, from starting my car to driving to the corner store, there is data in the mix. My car knows the proper fuel/air ratio because it's been programmed to sample and adjust automatically using data. The cash register at the market knows how much to charge me for the gallon of milk by scanning the barcode on the label. More data...
While metadata is, for the most part, simple bits and bytes, we humans need some way to interface with that data. So we create tables to organize it, use images to represent it, and run code to automatically calculate it into meaningful output.
All these 'extras' eat bandwidth. Pages have to load, images get downloaded and cached, numbers get crunched. It all takes processor cycles. Three main components come into play: network speed (on both ends), transfer of data both up and down, and actual processing of the data.
As a Fusion Lifecycle integration specialist I'm always looking for ways to speed up the flow of data and optimize the user experience.
It's important to consider things such as how many images are utilized within Fusion Lifecycle (FLC) workspaces. Do you really need your company logo plastered everywhere? Chances are the majority of your users are from within your organization, so they don't need a reminder of whose site they're on.
Page loading time can sometimes be reduced by setting workspace sections to be collapsed by default.
Finally, the number one burden when we're talking performance is scripts. There are Condition scripts, which verify whether an action can take place or whether a user is a member of a particular group. Next are Validation scripts, which do things like ensuring prerequisite data is complete or that a required file has been attached.
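To make the distinction concrete, here's a minimal sketch of the kind of logic each of those script types runs. It's plain JavaScript against a stand-in item object; the field names (APPROVAL_GROUP, REQUIRED_DATE, ATTACHMENTS) and the way the result is reported are assumptions for illustration only, not the FLC scripting API.

```javascript
// Stand-in item object for illustration; a real FLC script works against
// the item supplied by the scripting environment.
const item = {
  APPROVAL_GROUP: 'Engineering',
  REQUIRED_DATE: null,
  ATTACHMENTS: ['spec.pdf']
};

// Condition-style check: a single true/false answer,
// e.g. "does this item belong to the Engineering group?"
function conditionCheck(item) {
  return item.APPROVAL_GROUP === 'Engineering';
}

// Validation-style check: collect human-readable problems so the user
// can fix them before the action is allowed to proceed.
function validationCheck(item) {
  const errors = [];
  if (!item.REQUIRED_DATE) {
    errors.push('Required Date must be filled in.');
  }
  if (!item.ATTACHMENTS || item.ATTACHMENTS.length === 0) {
    errors.push('At least one file must be attached.');
  }
  return errors;
}

console.log(conditionCheck(item));   // true
console.log(validationCheck(item));  // [ 'Required Date must be filled in.' ]
```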
Perhaps the most utilized type of FLC script is the Action script. Action scripts can do truly amazing things, from automatically creating records based on workflow actions to generating emails containing advanced print views. When tasked with creating a complex action script it's important to know that FLC imposes a time-out on the script execution, typically 9 seconds. The reason for this timeout is primarily two-fold: expected interface response and server load balancing. Nobody wants to stare at the screen for 45 seconds waiting for the records to update. With today's bandwidth we've come to expect near real-time response. Also, remember that I said this 'cloud' is merely a server cluster running somewhere else in a secure data bunker? Well, those server admins need to ensure that everyone gets equal access, and you really don't want to wait for your screen to refresh while 20 users from some other company just fired off a script that's taking 45 seconds to run...
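One practical habit when an action script has a lot of ground to cover is to watch its own elapsed time and wind down gracefully before the platform cuts it off. The sketch below is a plain JavaScript illustration of that idea: the 9-second figure is the budget discussed above, and the records array and per-record work are hypothetical stand-ins, not FLC calls.

```javascript
// Sketch: keep a long-running job inside the execution budget.
const BUDGET_MS = 9000;               // the ~9-second timeout discussed above
const SAFETY_MARGIN_MS = 1500;        // headroom for cleanup and logging

// Hypothetical stand-ins for whatever the action actually touches.
const records = Array.from({ length: 500 }, (_, i) => ({ id: i }));
const processRecord = (record) => { /* hypothetical per-record work */ };

const start = Date.now();
const leftovers = [];

for (const record of records) {
  if (Date.now() - start > BUDGET_MS - SAFETY_MARGIN_MS) {
    leftovers.push(record);           // out of time: defer instead of being cut off mid-loop
    continue;
  }
  processRecord(record);
}

if (leftovers.length > 0) {
  // Flag the remainder so a follow-up run (or an On Demand script) can finish the job.
  console.log(`Deferred ${leftovers.length} records for a later pass.`);
}
```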
So, being a self-proclaimed 'clever' implementer, I said, "I'll just string together a bunch of scripts and export everything I can to library functions." In theory, that's a sound practice, right? Wrong! While theoretically each script gets ~9 seconds of CPU time, each script burns part of that budget loading resources such as those other scripts and library functions, so you're effectively shooting yourself in the foot. Clever... right.
A better approach is to split the scripts up by operation, triggering each script only from the event that needs it (see the sketch after the list below):
- Using an On_Create script when an item is created
- Firing an action script when item details are modified
- Triggering an action script when a workflow is transitioned
- Manually running an On Demand Script
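To show what "split by operation" looks like in practice, here's a sketch of a lean, creation-only script that does just the work that belongs at creation time and nothing else. As before, it's plain JavaScript with a stand-in item and hypothetical field names; the real script would run against the item provided by the FLC scripting environment.

```javascript
// Sketch of a lean on-create-style script: set defaults and stop.
// The item object and field names are stand-ins for illustration.
const item = { TITLE: 'New Part', STATUS: null, CREATED_REGION: null };

function onCreate(item) {
  // Only the work that belongs at creation time: sensible defaults.
  if (!item.STATUS) {
    item.STATUS = 'Draft';
  }
  if (!item.CREATED_REGION) {
    item.CREATED_REGION = 'NA';
  }
  // No emails, no related-record creation, no library loading:
  // that work is deferred to the action or On Demand scripts that need it.
}

onCreate(item);
console.log(item);   // { TITLE: 'New Part', STATUS: 'Draft', CREATED_REGION: 'NA' }
```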
Manually running an On Demand script is the ace up your sleeve. True, it does require user interaction, but it also provides the freedom of NOT being tied to any other ongoing process. It's important to realize that every FLC implementation/integration is unique due to client requirements. In short, there is no tried-and-true process for ensuring things will work as desired right out of the box. With a highly configurable product like Fusion Lifecycle you can count on a certain amount of head-scratching to get it to jump through those specialized hoops.
I hope that you found this posting informative.
Happy Coding,