Workspot is the first multi-tenant, cloud-native solution to deploy both VDI and Windows applications across multiple sites, including on-premises and public clouds. Our micro-services architecture and agile development model have resulted in rapid innovation. We question everything and look ahead to anticipate what customers need - that's how innovation happens. We are excited to share some key new features with you.
Connector-less Azure deployments
In our latest release we are extending our cloud control plane - Workspot Control - to communicate with cloud providers directly instead of relying on connectors at every tenant location. Our primary goal is to reduce the number of components required for every new cloud deployment, making those deployments even faster. Improving time-to-value is always about questioning what can be simplified next.
John Maeda calls this philosophy "Thoughtful Reduction", and the Workspot engineering team lives by it.
We need a connector for on-prem deployments because there is no direct inbound communication path from the cloud to the customer datacenter; the connector provides outbound communication to the cloud control plane. However, most customers are now pursuing cloud-first, multi-site, hybrid deployments. Companies want to leverage multiple Microsoft Azure regions globally to put desktops closer to end users. Is the connector still necessary for cloud deployments? Here, "Thoughtful Reduction" means thinking ahead 24 months to what a common deployment will look like and envisioning the problems that could arise from all the moving pieces in the stack.
To illustrate, let's consider a customer deploying VDI and application publishing with VMs in three Azure regions worldwide: the customer wants to run desktops in US West, UK South and Japan. What would it take to simplify such a deployment? What's the point of six connector instances - two per region for redundancy - in such a deployment?
We solved the problem by completely removing connector instances for cloud resource locations. We achieved this in multiple phases over the last year:
- All the agents (VDAs) in the Workspot architecture talk to the cloud directly. Standard agent operations - registration, cloning, health checks, and performance and load reporting - are sent using an optimized bi-directional protocol over WSS/TLS. The protocol is designed to avoid compatibility issues between the agent and the cloud.
- For cloud tenants, Workspot Control communicates directly with Azure (or other clouds) and does not need a connector in the middle. All operations related to provisioning, storage, networking, security groups, etc. are managed by scalable services in Workspot Control.
- The connector is now required only for Active Directory management. So in the deployment described above, the customer can have three managed regions without any connectors; a connector is needed on-premises only if the customer wants to use a domain controller for user discovery.
- The customer can also use Azure AD/Azure AD Domain Services for simplifying Identity Management in the cloud. In such a case, the deployment does not require any additional management components in the Azure tenant or on-premises.
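The compatibility guarantee between agents and the cloud control plane mentioned above can be achieved with a versioned message envelope. Here is a minimal sketch of that idea in Python; the field names, message types, and version scheme are illustrative assumptions, not Workspot's actual protocol:

```python
import json

PROTOCOL_VERSION = "1.0"  # hypothetical version tag

def make_agent_message(msg_type, payload):
    """Wrap an agent event (registration, health check, load report)
    in a versioned envelope so the control plane can interpret it."""
    return json.dumps({
        "version": PROTOCOL_VERSION,
        "type": msg_type,
        "payload": payload,
    })

def handle_agent_message(raw):
    """Control-plane dispatch that tolerates unknown message types
    instead of failing, so older and newer agents can coexist."""
    msg = json.loads(raw)
    if msg["version"].split(".")[0] != PROTOCOL_VERSION.split(".")[0]:
        return {"status": "rejected", "reason": "incompatible major version"}
    if msg["type"] not in {"register", "health", "load"}:
        # Unknown types are acknowledged and ignored, not treated as errors.
        return {"status": "ignored"}
    return {"status": "ok", "type": msg["type"]}
```

Tolerating unknown message types on both sides is what lets agents and the control plane upgrade independently without a forced lockstep rollout.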
All the virtual machines are provisioned directly from Workspot Control using APIs provided by the cloud vendors. Workspot cloud services can scale easily to handle all customers and provide a 100x better availability model for all the tenants on our platform.
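To make "provisioned directly using cloud vendor APIs" concrete, here is a sketch of the kind of Azure Resource Manager REST request a control plane can issue with no connector in the path. The subscription, resource group, VM name, and size below are illustrative placeholders:

```python
def vm_put_request(subscription_id, resource_group, vm_name, region):
    """Build the URL and body for an ARM 'create or update VM' call.
    A real control plane would PUT this with an OAuth bearer token."""
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Compute/virtualMachines/"
        f"{vm_name}?api-version=2023-03-01"
    )
    body = {
        "location": region,
        "properties": {
            # Illustrative size; a real deployment would pick this per pool.
            "hardwareProfile": {"vmSize": "Standard_D2s_v3"},
        },
    }
    return url, body
```

Because the request goes straight to the cloud provider's management endpoint, there is no per-site proxy to install, patch, or keep highly available.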
Even faster DaaS deployments. More reliable DaaS deployments. Fewer moving pieces.
It's not far-fetched to imagine DaaS deployments in less than 60 min with the right architecture!
Legacy VDI vendors continue to use a client-server topology in which the connector acts as a traffic proxy, creating a virtual "long cable" between existing on-prem components and cloud controllers. This nested hub-and-spoke architecture - proxying traffic through a connector at every location - is both an availability risk and a scalability bottleneck.
The real test of a cloud-service is not the frequency of new releases every month. The real test is how the foundational architecture is designed and whether the product and services can easily scale to thousands of customers and millions of users.
In-app help and support
One fundamental problem with legacy VDI solutions is the engagement model between the vendor and the end users. The VDI vendor is primarily engaged with IT, and the feedback loop is based on a traditional annual product-management sync-up model.
Cloud products need a faster and more streamlined approach to make the end users happy. The end user is king. If the end user is not happy, word spreads fast.
We are excited to add an in-app FAQ and direct support interface right in our Workspot Client, leveraging the industry's fastest-growing in-app support product. The Workspot support team fields every question that comes in from end users: questions relevant to our product are handled directly, and the rest are forwarded to the right owner at the customer for follow-up.
The key advantage is that this model really keeps our entire organization tuned in to what end users are experiencing, and if there is a problem we know about it right away. This helps Workspot continually improve our customer success competency. SaaS vendors are successful long-term when customer engagement is the #1 priority across the organization - not just within the support department.
Direct feedback loop with the end users, leading to continual product optimization.
We care deeply about addressing the most important pain points customers experience with legacy VDI solutions. This is yet another way we stay close to our customers. The result is significant innovation in VDI that makes our customers more successful!
Uptime and reliability
We've made numerous enhancements to our blue-green deployment model for pushing new releases. We have automated almost every aspect of our software stack for dealing with cloud infrastructure issues. The Workspot architecture is now cloud-agnostic.
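The essence of a blue-green release model is keeping two identical environments and switching traffic only after the idle one passes health checks. A minimal sketch of that switchover logic (the health check here is a placeholder, and this is a simplified illustration, not Workspot's deployment code):

```python
class BlueGreenRouter:
    def __init__(self):
        self.live = "blue"    # environment currently serving traffic
        self.idle = "green"   # environment that receives the new release
        self.healthy = {"blue": True, "green": False}

    def run_health_checks(self, env):
        # Placeholder: a real check would probe service endpoints.
        return True

    def deploy(self):
        # The new release goes to the idle environment;
        # live traffic is never touched during the rollout.
        self.healthy[self.idle] = self.run_health_checks(self.idle)

    def cut_over(self):
        # Switch traffic only if the idle environment is healthy;
        # the old environment stays warm for instant rollback.
        if not self.healthy[self.idle]:
            raise RuntimeError("idle environment failed health checks")
        self.live, self.idle = self.idle, self.live
```

Because the previous environment remains running after a cut-over, rolling back a bad release is just another cut-over, which is what makes frequent releases safe.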
All the components used in our cloud micro-services (databases, NoSQL stores, caching stores, Linux containers, load balancers, etc.) can run on any cloud infrastructure provider with fully automated failover support. We run our service across multiple availability zones for redundancy. The result is continuous improvement in our service reliability over the last two years: we delivered 99.95% uptime to our customers in 2016.
We are excited about hitting 99.98% uptime in H1 2017!
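To put those percentages in perspective, an uptime figure translates directly into an annual downtime budget:

```python
HOURS_PER_YEAR = 24 * 365.25  # average year, including leap years

def max_downtime_hours(uptime_pct):
    """Annual downtime budget implied by an uptime percentage."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

# 99.95% uptime allows roughly 4.4 hours of downtime per year;
# 99.98% tightens that budget to roughly 1.8 hours per year.
```

Moving from 99.95% to 99.98% cuts the allowable downtime by more than half, which is why the automated failover work described above matters.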
We'll continue to question everything, listen carefully to customers, innovate for simplicity and ensure customer success.