News & Latest Works

Hyperconnected: Mental Shortcuts or Data Governance Shortfalls

Data Governance Heuristics

Think about your typical morning routine. Maybe it involves coffee, working out, getting ready for the day, checking your phone, etc. Consider the heuristics, or mental shortcuts, your brain uses to expedite judgments and solve problems with minimal mental effort. You are probably taking one right now as you read this on a smart mobile device!

In this hyperconnected world, smart mobile devices are omnipresent in our living and working spaces. These spaces are progressively being adorned with more and more technological gadgets to help us take even more mental shortcuts. Where does one begin to address what is and is not acceptable within their home? How does one determine whether to preserve or delete digital clutter? The engineering and design of intelligent, wearable technology have nestled their way deep into our everyday lives, suggesting that they are a need perhaps more than a want. Do we fully understand the lifelong impacts of generating digital exhaust?

"Circle of Trust"

Emotional intelligence coupled with critical thinking can be a strong foundation to draw upon initially to take inventory of what exists in our lives. Advice is often sourced from those within our "circle of trust." Although that advice is not always correct, we gravitate toward it nevertheless because it comes from people we routinely interact with. This impacts children the most, as many of them have yet to develop emotional intelligence and mature, critical thinking skills. So the onus is on parents, thought leaders, executives, and those with responsibility to serve as role models for making sound decisions regarding data governance.

Data Governance is everyone’s business AND responsibility.

What would a co-worker or friend reply if you were to tap them on the shoulder and simply ask, "How do you KNOW you are securing yourself at home?" What would their reaction be? They might look at you puzzled and begin with some form of "Well, I have a lock and deadbolt on my door, use timed lights at night when away, and have installed a decal on a window alerting would-be intruders to a security alarm." But that alone does not achieve what is needed.

Identity Awareness, Protection, and Management encompasses much more. Insecure and poorly managed smart devices, computers, and social networking accounts layered with publicly available information (PAI) represent rich data sets that can cripple the brand integrity of a business or sideline aspiring students from future employment should they become compromised. Remote working environments are now the norm. Consider the opportunity costs of losing intellectual property or inadvertent disclosure of proprietary data simply because everyone thought someone else was tackling security. What digital footprints do you leave?

How Victory Helps

VictoryCTO can help. We have trained and educated some of the most prestigious academic institutions, medical firms, and Fortune 500 companies, and in some cases everyday citizens, on how to live confidently in a hyperconnected, digital world. The question becomes how hyperconnectivity influences our decision-making, and why it warrants active steps in managing business models and inspiring everyone to contribute to data governance in healthcare, banking, and other verticals amid a global pandemic and the 4th Industrial Revolution (4IR). Never heard of the 4IR? Reach out to VictoryCTO and we can help!

VictoryCTO is a leading Hypernetics solutions company delivering tailored virtual and on-premises consultancy and training. Check us out to learn more about our e-CAP (Executive Cyber Awareness Protection) and turnkey SOC1 and SOC2 offerings, take hold of data governance, and increase resilience across your team. Stand up. Act. Build a digitally secure tomorrow!

Hyperconnected: Electronic Ethics & Data Governance

How do we maintain ethics in a hyperconnected world?

Constant internet connectivity is increasingly becoming a necessity in today's fast-paced, hyperconnected world. This state of hyperconnectivity extends far beyond personal living spaces and seamlessly accompanies us in vehicles, airplanes, hotels, retail shopping centers, and our favorite eateries. For those fortunate enough to have school-age children, the reality of COVID-19 has highlighted the undeniable need for everyone to exercise ethical judgment in their use of electronics and re-envision what a home office or classroom should look like.

Many of the “golden rules” instilled from generations past are foundations for our decision making. However, the emergence of the 4th Industrial Revolution (4IR) and near instantaneous access to data has challenged how information is processed, understood, and acted upon. Data in context yields great insights — what are the ideal ways to approach this on an individual, family, and professional level?

Down the Digital Exhaust Rabbit Hole

The opportunity before all of us is to take an active role in how our total online identity is managed. To understand the "digital exhaust" we produce 24/7/365, we must begin down the pathway of education and purposefully and morally guide our technology decisions. No one company or individual holds all the answers; Lewis Carroll captured it remarkably in Alice in Wonderland:

“If you don’t know where you are going, any road can take you there.”

The convergence of policy and technology is where this lifelong journey begins. In our homes and public spaces, we decide each day how to share or restrict our digital exhaust. This will never be a sprint, but rather a marathon where the winners are determined by how they run the race and by whom they run alongside. It is important to recognize that everyone has a place in this 4th Industrial race. The opportunity cost of sacrificing individual privacy and, in most cases, security for instantaneous access to data and discounted products and services can best be summarized as: "If the product or service is free - YOU are the product."

Without knowing the common ways our data is collected, who is collecting it, and where it can end up, safeguarding our information becomes difficult. Fortunately, there are specific steps everyone can and should undertake to better protect themselves, family members, friends and colleagues, and employers to elevate their ethical use of electronics and actively manage their data governance.

How Victory Helps

As the world’s leading Hypernetic Solutions consultancy, Victory CTO personifies this mantra to deliver business evolution through technology, data-driven process, and cross-disciplinary experience to create valued change and opportunity without sacrificing ethics. We are practitioners first and chart our course ahead from this vantage point.

Victory CTO achieves this by first constructing a team of Doers — they are talented, genuine, accountable, and passionate to enable others to succeed and have a great time along the way doing it! Have you paused to consider how you actively manage data governance in your personal and professional life? Or how your company actively and ethically accomplishes this?

Check us out at VictoryCTO.com

Coronavirus/Covid-19 Crisis: Management and Communication For WFH Teams

Project Management and WFH

Project management is key for success in a distributed work environment. Maintaining clear lines of communication between remote resources and keeping teams upstream well informed of progress, risks, and timelines are among a successful project manager’s main goals and focus. Set your project managers up for success by arming them with tools that facilitate project organization and structured communication.

The Right Project Management Platform

Project management has always been core to Victory’s success. Many companies struggle with the basics. In the past, Victory encountered scenarios where multiple project management tools were used in a single department (not to mention company wide), or not used at all. Synchronizing the status of any given project was a regrettable (and expensive) manual process in both cases, and confusion regularly reigned supreme.

Selecting and properly implementing a project management platform is incredibly important. There are numerous project management tools; which is best for your organization? It's tempting to jump at the first seemingly attractive solution if you are scrambling, but this can be detrimental and dangerous. Victory has seen tremendous success from organizations that optimize their platform selection prior to implementation, avoiding multiple expensive change management cycles in the future. Here are some best practices for platform selection:


  1. Engage stakeholders company-wide to find the functionality that increases productivity and communication for everyone.
  2. Check integration points (APIs or native) that allow the project management platform to automatically send data to other platforms already in use, keeping everyone in sync and improving efficiency. Example integrations include:
  • CRM
  • Payroll
  • Business Intelligence
  • HR & Staffing
  • Resource Scheduling
  • Project Estimation and Timelines
  • and more...
  3. Create a matrix based on criteria collected from stakeholders and rate each platform under consideration; have multiple people rate each platform. This is valuable for discussion, tracking requirements, and ultimately decision making (see the sketch below).
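
To make step 3 concrete, here is a minimal sketch of that scoring matrix in Python. The criteria, weights, platform names, and scores are all illustrative placeholders, not recommendations; the point is simply that weighting stakeholder criteria and averaging across raters makes the trade-offs explicit.

```python
# Minimal decision-matrix sketch. Criteria and weights come from the
# stakeholder interviews in step 1; every rater scores each platform
# 1-5 per criterion. All names and numbers are illustrative placeholders.

WEIGHTS = {"usability": 0.30, "integrations": 0.30, "reporting": 0.20, "cost": 0.20}

# ratings[platform] -> one dict of scores per rater
RATINGS = {
    "Platform A": [
        {"usability": 4, "integrations": 5, "reporting": 3, "cost": 2},
        {"usability": 5, "integrations": 4, "reporting": 3, "cost": 3},
    ],
    "Platform B": [
        {"usability": 3, "integrations": 3, "reporting": 5, "cost": 5},
        {"usability": 3, "integrations": 4, "reporting": 4, "cost": 5},
    ],
}

def weighted_score(rater_scores):
    """Average each criterion across raters, then apply the weights."""
    n = len(rater_scores)
    averages = {c: sum(r[c] for r in rater_scores) / n for c in WEIGHTS}
    return sum(WEIGHTS[c] * averages[c] for c in WEIGHTS)

for platform in sorted(RATINGS, key=lambda p: -weighted_score(RATINGS[p])):
    print(f"{platform}: {weighted_score(RATINGS[platform]):.2f}")
```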

Communication

Internal communication becomes increasingly challenging as an organization grows. Remove the proximity everyone takes for granted at a physical office for quick answers, and your newly remote team is already at a disadvantage. Here are some best practices we've found to ease the transition for team members who are suddenly working from home for the first time.

“What should I work on now? What should I work on next?”

Projects & Tasks As mentioned, having a well defined and organized project management platform facilitates threaded communication for specific tasks and projects. This provides a single source of truth for what needs to happen next and answers the eternal question “What should I be working on now?”

Business Continuity Teams that have traditionally relied on proximity for interaction and direction will suddenly require more coordination. Over-communication at this juncture is not a bad thing, and methods like daily "stand up" meetings work for more than just teams of engineers.

Team Member Sanity Team members who have not worked remotely for extensive periods of time will require additional interaction until they nail down their "remote routine". Extensive anecdotal research indicates that it takes 1-2 weeks for first-time remote workers to mentally get into the groove and be productive.

Video Conferencing Video conferencing is critical for your team; if it is not already in place, this is the time to review the organization's needs and available solutions. In addition to improving overall communication, systems with screen sharing and other features enable efficient communication in meetings by keeping everyone on the same page.

Chat / Instant Messaging There are many options here too. As organizations grow, we find that more and more communication happens in the context of the project management platform. The PM platform keeps questions and conversations contextually relevant, captures information for people not included in the immediate conversation, and does not lose information as part of an overloaded thread.


Victory has over a decade of experience helping distributed teams stay securely connected, operational, and productive. Victory’s Remote Team / Work from Home security solutions help clients navigate and seamlessly deploy flexible and secure remote work environments. Victory’s all domestic team combines operational savvy with strategic and tactical experience to transform businesses globally.

Coronavirus/Covid-19 Crisis: Security For Your Suddenly WFH Team


3/22/2020 UPDATE: The Victory Consortium is working on a series of how-to videos to help businesses set up their own secure remote worker policy and VPN. So stay tuned, but in the meantime the following is a quick rundown of best practices.


Best Practices: WFH Security

For over a decade prior to Coronavirus/Covid-19, Victory has been enabling remote teams for SMB and Enterprise companies. Based on our experience, here are some best practices that have historically lent themselves to a successful evolution to an all-remote workforce, or Work From Home (WFH) team.

It’s not an emergency until it IS…

The United States took COVID-19 seriously on a Friday - and even then most people thought they would go to work as usual on Monday. Many businesses did not plan for their team to be entirely remote but here we are. Later we will showcase a Victory client that made the switch from fully on site to fully remote as the crisis was hitting home.

What’s next?

The office is a walled garden - IT controls what comes in and out at the perimeter and keeps things reasonably safe. Outside of those walls, however, it becomes more challenging. Your team members' homes are not necessarily any more secure than a public hotspot. The following two technologies allow teams to securely connect to business networks from home; the right solution depends on the size and nature of your business.

Virtual Private Networks (VPN)

A Non-Technical Definition of VPN:

VPNs create a secure connection to external networks, like the one at your office, across the internet. Connecting to the internet on public wifi (think coffee shop) without a VPN means that when your computer sends and receives messages they are unencrypted and could be intercepted, which is a security risk.

There are a lot of public VPN services out there which will protect you when on public or untrusted networks; here is a list. Configuration for your specific security needs is critical. As a baseline, make sure that your VPN includes local network isolation (often called local firewall) so that your computer is fully isolated from the local network.

“Everyone should have a VPN on every device - period.“

  • Every IT Professional Ever

But that's not the point of this article - this is about allowing your people to work on corporate systems securely. Many companies rely on whitelisted IP addresses or local networks, especially to secure legacy systems. To allow your people to log in you need a corporate VPN - which can be very expensive and complex. Victory can help you navigate OpenVPN, Algo and other solutions that not only meet your needs but also your budget. Victory can implement solutions in a short time on commodity hardware and allow your employees to log in to the office network and be in the office virtually.
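
Whichever route you take, a quick sanity check after connecting is to confirm that your machine's traffic is actually leaving through the VPN. Here is a minimal Python sketch assuming your corporate VPN has a known, fixed exit IP; that address and the choice of the public ipify echo service are illustrative assumptions, not part of any particular product.

```python
# Sanity-check sketch: compare your public egress IP to the VPN's known
# exit address. KNOWN_VPN_EXIT_IP is a placeholder; api.ipify.org is one
# public IP-echo service among many.
import requests

KNOWN_VPN_EXIT_IP = "203.0.113.10"  # placeholder (documentation range)

def vpn_is_active() -> bool:
    egress_ip = requests.get("https://api.ipify.org", timeout=5).text.strip()
    return egress_ip == KNOWN_VPN_EXIT_IP

if __name__ == "__main__":
    if vpn_is_active():
        print("Traffic is routing through the VPN")
    else:
        print("WARNING: traffic is NOT going through the VPN")
```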

Secure Remote Desktops (SRD)

First, as a light case study: Victory was recently called on to mobilize the 50+ team member Suddenly Remote Workforce mentioned at the beginning of this article. For security's sake, everyone had to date been required to work on premises because the company handles sensitive financial data. However, due to coronavirus, the state mandated they close the office immediately, leaving the company stranded without any course of action to restore productivity.

Victory was engaged to implement a disaster recovery & backup plan while already in the disaster, and as always, time was literally money. Victory accomplished this by creating a Secure Virtual Office using Amazon Web Services (AWS) WorkSpaces. In a controlled rollout on Monday, 10% of the workforce logged into Secure Remote Desktops from home. On Tuesday, the office was closed and the remainder of the employees migrated to SRDs from home. With seamless continuity, the business didn't skip a beat.
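
For readers curious what a controlled rollout like that can look like in code, below is a minimal boto3 sketch that provisions AWS WorkSpaces for a first wave of users. The directory ID, bundle ID, region, and usernames are placeholders, and this is an illustration of the WorkSpaces API, not Victory's actual deployment tooling.

```python
# Illustrative sketch: provision a first wave of AWS WorkSpaces with boto3.
# DirectoryId, BundleId, region, and the user list are all placeholders.
import boto3

workspaces = boto3.client("workspaces", region_name="us-east-1")

DIRECTORY_ID = "d-9067xxxxxx"   # placeholder directory (e.g. AD Connector)
BUNDLE_ID = "wsb-xxxxxxxxx"     # placeholder WorkSpaces bundle
first_wave = ["alice", "bob", "carol"]  # e.g. ~10% of the workforce

response = workspaces.create_workspaces(
    Workspaces=[
        {"DirectoryId": DIRECTORY_ID, "UserName": user, "BundleId": BUNDLE_ID}
        for user in first_wave
    ]
)

for req in response["PendingRequests"]:
    print(f"Provisioning WorkSpace for {req['UserName']}...")
for failed in response["FailedRequests"]:
    print(f"FAILED {failed['WorkspaceRequest']['UserName']}: "
          f"{failed['ErrorMessage']}")
```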

Secure Remote Desktops (SRD) are one of the most secure ways to address security concerns for a remote team.

There are some distinct advantages to SRDs:

  • SRDs can be managed by the same Directory service as the main office - user roles, permissions and even drive contents will transfer seamlessly
  • Connection to the SRD is a secure encrypted tunnel
  • Files on the SRD cannot be downloaded to the user’s device
  • You can connect from a Windows or Mac Laptop, iPad, iPhone, Android Phone or Tablet
  • The desktop itself is connected to one of the fastest and most durable connections possible

Note: SRDs dovetail with Victory's Azure Active Directory offering. If you have Active Directory on premises, we can migrate it to the cloud and add SRDs at the same time; SRDs are available in both the AWS and Azure clouds. With Azure AD, security can be implemented down to the document level.




[Announcement] Data Science Services | VICTORY

Is your data drowning you - or is it being ignored?

Companies are sitting on a tremendous amount of valuable data from many different sources, but are intimidated by the prospect of turning that data into actionable insights or innovative data-driven products.

Victory is announcing a new offering: Data Science as a Service

Our team of senior data scientists will work with you to:

  1. Understand the data you have
  2. Assess current and future business needs
  3. Begin creating a prototype pipeline of internal and external analytics products and data models that address those needs

The possibilities are endless: From marketing and sales data, to internal or external application user data, to social data, or any combination of those.

How It Works

Understanding the data you have requires only a few samples of all the different sources of data you currently store. Our team will examine each of the fields, detailing how we might merge or enhance each of those with additional data.
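
As a rough illustration of what that first pass over a sample can look like, here is a short pandas sketch; the filename is a placeholder for any one of your source extracts, and the join-key heuristic at the end is deliberately naive.

```python
# First-pass profile of one data sample. "sample.csv" is a placeholder.
import pandas as pd

df = pd.read_csv("sample.csv")

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "non_null": df.notna().sum(),
    "null_pct": (df.isna().mean() * 100).round(1),
    "unique": df.nunique(),
})
print(profile)

# Naive heuristic: columns that are nearly unique per row (ids, emails)
# are candidates for merging this source with the others.
keys = profile[profile["unique"] > 0.9 * len(df)].index.tolist()
print("possible join/enrichment keys:", keys)
```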

The team will work with you to assess your current business needs, and discover where new data models and products could help fill gaps in efficiency or even to create completely new revenue streams.

The Outcome

Our data scientists will use the latest scalable data science approaches to rapidly prototype analytics deliverables based on your data, which can take the form of reports and visualizations for customers or internal web applications that can empower your internal teams to work faster and create value for you and your customers.

The end result is clear and actionable insights that can drive value for your business starting immediately.

Learn more about our Data Science Services or contact us at data.science@victorycto.com.

Getting Started with Azure Cost Management

This information is provided by our friends at Agile IT – specialists in Cloud Migration in Azure.

Cloud services bring many advantages, but they present the challenge of tracking and managing usage and cost. Over time, a business finds itself using many services and applications, with multiple cost centers. There are always ways to use the cloud more efficiently, but finding the best ways to optimize is complicated.

Azure Cost Management gives businesses the tools to track and optimize their cloud spending. It shows cost and usage patterns for Azure services and third-party Marketplace applications, and it suggests ways to optimize spending. Over 70% of Azure enterprise customers use Azure Cost Management, and it can even be used with non-Microsoft cloud services.

Azure Cost Management

The product is a suite of cloud tools for centralized management of the costs of Azure applications and services. It's included automatically with the Microsoft Enterprise Agreement and pay-as-you-go plans. In fact, there's no extra charge for using it within Azure.

Key terms in Azure Cost Management are visibility and accountability. It makes it easier to determine how much is being spent in both the short and the long run, and it shows where within the company the costs are coming from. The tools use the information they gather to generate recommendations for configuring services more economically.

The suite can be used with AWS and Google Cloud Platform in addition to Azure. This feature is currently available in preview at no cost; later there will be a charge tied to the use of the cloud platform.

Azure Cost Management is similar to an earlier offering, called Cloudyn. The latter was originally called Azure Cost Management by Cloudyn, which can be confusing. Cloudyn is still offered, and it covers some cases which Azure Cost Management doesn't as yet. The long-term plan is to replace Cloudyn with the newer product.

Cost Management Tools

All the tools are available from the Azure portal. They let the operator get an overall view or focus on specific aspects of the company's cloud deployment. The information can be in the form of graphic analyses, numbers, recommendations, or alerts. Intelligent use of the tools can determine where costs should be allocated and where savings are possible.

Cost analysis

The Cost Analysis tool can show current and cumulative costs as well as make forecasts. Four built-in views are provided, based respectively on accumulated cost, daily cost, service, and resource. Customized views can use specified date ranges and group data by common properties. Big one-time costs can be amortized.

Many grouping and filtering options are available. Grouping determines how the data is broken down; filtering selects which costs the analysis includes. Use of these options lets management see which departments are spending the most and what types of services account for the greatest costs. Cost analysis views can be shared for later use.
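
The same views are available programmatically. As a hedged sketch, the snippet below calls the Cost Management query REST endpoint to get month-to-date cost grouped by service name; the subscription ID is a placeholder, and credential setup is assumed to be handled by DefaultAzureCredential.

```python
# Sketch: month-to-date actual cost grouped by service, via the
# Cost Management query endpoint. SUBSCRIPTION_ID is a placeholder and
# DefaultAzureCredential assumes credentials are already configured.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{SUBSCRIPTION_ID}"
url = (f"https://management.azure.com{scope}"
       f"/providers/Microsoft.CostManagement/query?api-version=2019-11-01")

token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default").token

query = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "ServiceName"}],
    },
}

result = requests.post(url, json=query,
                       headers={"Authorization": f"Bearer {token}"}).json()
for row in result["properties"]["rows"]:
    print(row)  # [cost, service name, currency]
```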

Recommendations

Many Azure account types support recommendations in cost management. This is a feature under Cost Analysis. Recommendations identify inefficiencies or recommend purchases to save money. For example, if so many VMs are allocated that most of them almost always sit idle, a recommendation will propose shutting down or deallocating some of them. An alternative is to downgrade them to a less expensive class. Conversely, a recommendation may suggest buying reserved machine instances to reduce pay-as-you-go costs.

Following recommendations is always a judgment call, of course. If usage levels are subject to major swings, paying for VMs that are usually idle may be worth the cost.

A recommendation is based on 14 days of analysis. It will show the potential yearly savings of taking the suggested actions.

Exporting and Downloading


Cost information often needs to go to accountants, be copied into databases, or be processed by other software. Azure Cost Management can export data in CSV or Excel format. It can create one-time reports or generate them on a regular schedule. Each run of a scheduled export creates a new file, leaving old exports untouched.

Exports can cover the past seven days' data or the month to date. They can align with invoicing periods, even if they aren't the same as calendar months.

Exported data can be brought automatically into other financial systems or made available for viewing as a spreadsheet.
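
Because each scheduled run lands as a new CSV, downstream processing can be as simple as the pandas sketch below. The folder path and column names are assumptions about a typical export layout; actual schemas vary by account type, so check your own export's headers first.

```python
# Sketch: pick up the newest scheduled export and total cost by category.
# The "exports" folder and the column names ("Date", "MeterCategory",
# "CostInBillingCurrency") are assumptions; export schemas vary.
from pathlib import Path
import pandas as pd

newest = max(Path("exports").glob("*.csv"), key=lambda p: p.stat().st_mtime)
df = pd.read_csv(newest, parse_dates=["Date"])

by_category = (df.groupby([df["Date"].dt.to_period("M"), "MeterCategory"])
                 ["CostInBillingCurrency"].sum().round(2))
print(by_category)
```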

Budgets

As the name implies, budgets in Azure Cost Management let managers compare expected costs with actual ones. The feature issues alerts or takes automated actions when a cost threshold is exceeded. Budget thresholds never stop services from running or throttle them; they just call attention to overruns. Not all Azure account types support budgets.

Filters can delimit the categories of data which a budget includes. The same filter types are available as with cost analysis. Further, the reset period, which determines the time window the budget analyzes, can be monthly, quarterly, or annual.

Cost thresholds are specified as a percentage of the budget. For example, if a 90% threshold is designated, alerts are issued when spending reaches 90% of the budget. A budget can have as many as five thresholds. Threshold notifications are sent to the email addresses which the budget specifies. The budget can also designate action groups, triggering automated actions when a threshold is reached.
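
The threshold arithmetic itself is simple. Here is a toy sketch of which alerts fire for a given month-to-date spend; the budget, thresholds, and spend are invented numbers.

```python
# Toy sketch of budget-threshold alerting. All numbers are invented.
budget = 10_000.00                   # monthly budget amount
thresholds = [50, 75, 90, 100, 110]  # percent of budget; five is the max
actual_spend = 9_350.00              # month-to-date cost

crossed = [t for t in thresholds if actual_spend >= budget * t / 100]
print(f"spend is {actual_spend / budget:.0%} of budget; "
      f"thresholds crossed: {crossed}")
# -> spend is 94% of budget; thresholds crossed: [50, 75, 90]
```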

Use With AWS

While Azure Cost Management can't be as tightly integrated with AWS as it is with Azure services, it can still provide valuable information. It can link AWS consolidated accounts.

Setting it up requires actions on both the AWS and Azure accounts. It involves setting up a cost and usage report (CUR) integration in Azure and creating a CUR in AWS. AWS delivers reports into an S3 bucket, where Azure Cost Management picks them up.
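
A quick way to confirm the AWS side is working is to check that CUR files are actually landing in the bucket. A minimal boto3 sketch, with the bucket name and report prefix as placeholders for whatever you configured:

```python
# Sketch: verify AWS is delivering CUR files into the S3 bucket that
# Azure Cost Management reads. Bucket and prefix are placeholders.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-cur-bucket"    # placeholder
PREFIX = "cur-reports/"     # placeholder report path prefix

response = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
for obj in response.get("Contents", []):
    size_mib = obj["Size"] / 1_048_576
    print(f"{obj['Key']}  {obj['LastModified']}  {size_mib:.1f} MiB")
```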

Creating a management group for all cross-cloud providers allows an overall view of all Azure and AWS costs. It's also possible to set up separate management groups for each provider.

This feature is free during the preview period. Afterward, Azure will bill 1% of the AWS monthly costs.

Learn More About Azure Cost Management

Microsoft provides rich resources for learning how to use Azure Cost Management. The best place to start is with the Microsoft documentation. It includes quickstarts, tutorials, how-to guides, resources, and reference materials. There's also a downloadable PDF which contains most of the essential information in one document.

The technical overview on YouTube is worth the half-hour it takes to watch it. There's also a playlist of informational videos, most of them under five minutes.

A Quick Study: Instinct vs. Data-Driven Marketing

While marketers pay lip service to the science of marketing, many still treat it like an art.

For marketers, data-driven marketing and analytics is the biggest trend of the last few years. With so much of marketing moving to digital channels, it's easier than ever to track, test, and measure at scale. You can measure how many people saw a digital billboard… but not how many saw one on the side of the interstate.

But what I still see is many companies and marketers going with their gut, and either refusing to test and measure, or outright ignoring the data that contradicts their efforts.

After almost two decades of testing and measuring marketing campaigns and funnels, I still don’t go with my gut or with best practices - because these don’t lead to the objective truth.

Marketing needs a mindset change

Your gut isn’t always right. Even with decades of marketing experience, with platforms, channels, and audiences constantly changing, your gut can inform - but shouldn't guide - marketing activities.

Jeroen Kuppens, writing for MarTech Advisor, says:

“The reality is that businesses will not be able to survive digitization with the same old approach of pushing content, or serving an ad, or posting on social media, and hoping it sticks. Using data to inform new approaches will be vital. Marketing needs a mindset change, and it’s going to get one whether marketers drag their feet to get there or not. With 80% of marketers still valuing their own opinion over what their data tells them their customers want, this will prove to be no small feat.”

The upside, of course, is that this data is readily available, and there is every tool imaginable to help you make sense of it. Data isn't useful if it's not actionable.

Your gut is blind: there are other factors influencing the success of your marketing.

One of the companies I founded was an education technology company whose customers were university students. This was before social media had really taken off, so email was our best digital marketing channel opportunity.

The universities had incredibly strong spam filters - we didn't have much wiggle room to get it right. If the emails we sent weren't opened, the spam filter would sweep us into spam Siberia.

Instead of sending batch emails, we decided to test in very, very small groups to ensure we got it right, before we went to scale.

Even though we knew our market well, and knew how to speak their language, the subject lines we thought would win were the worst performing.

After retooling, testing, measuring, and repeating that process tens of times, we got the right ones.
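
The mechanics of that loop are simple enough to sketch. In the Python below, the send function and open tracking are hypothetical stand-ins for whatever email platform you use, and the subject lines are invented; the point is testing every variant on a tiny batch before committing the full list.

```python
# Sketch of small-batch subject-line testing. send_batch() simulates a
# hypothetical email platform; subjects and open rates are invented.
import random

SUBJECTS = [
    "Don't miss your campus deadline",
    "3 tools every student needs",
    "Your textbooks, cheaper",
]
BATCH = 50  # tiny test groups, so one dud can't land the whole list in spam

# Each subject has an unknown "true" open rate; a real platform would
# report opens for you instead of this simulation.
true_rates = {s: random.uniform(0.1, 0.6) for s in SUBJECTS}

def send_batch(subject: str, n: int) -> int:
    return sum(random.random() < true_rates[subject] for _ in range(n))

opens = {s: send_batch(s, BATCH) for s in SUBJECTS}
winner = max(opens, key=opens.get)
print({s: f"{o / BATCH:.0%}" for s, o in opens.items()})
print("send at scale with:", winner)
```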

The result: an 80% open rate, a 75% click-through rate, and most importantly, our company's engaged customer base grew dramatically.

Are you testing and measuring the right thing?

A company approached us with the simple ask: “Can you help with SEO and email marketing?”

Even though the answer was yes, this wasn’t the right question for them to be asking.

They are in the business of buying businesses, and trying to find these through email and SEO / SEM is a long, expensive, and fruitless effort. Their market simply doesn’t use those channels in a buying capacity.

We took an entirely different approach: we set up a separate campaign with its own domain name and landing page, and used a few channels, including postcard marketing.

That’s right. Good old-fashioned paper postcards.

Reaching a business owner is already difficult, and to reach the owner of a non-digital business like a convenience store or bar is nearly impossible. They simply won’t see your digital noise.

This campaign was much cheaper than the digital tactics they were trying before, and the results were clear: their qualified inbound traffic spiked dramatically.

At one point, we had to pause our operations because they couldn’t handle the lead volume - a problem every business owner wants to have!

Even if you optimize to the greatest degree, if it’s the wrong channel, it’s like rotating your tires when your car just needs gas.

No, it doesn’t have to be pretty (or, why best practices might be completely wrong)

People become good marketers through experience, so they learn to rely on that past experience to inform what they do going forward. They go by ‘gut’ feel, or by best practices.

Best practices are simply shorthand for what's worked for the most people the most times… but that doesn't mean they will work for your specific business in this specific instance. It's not a guarantee.

And this is blasphemy to a lot of marketers, but it’s true: your materials and assets don’t have to be pretty to work. If you’re not making any money, it doesn’t matter.

For Revival Cycles, a specialty motorcycle and accessory company, ecommerce was performing far beyond expectations. They wanted to maximize conversions and, while we were at it, try to "make the site prettier". We found, among other things (like "creative" reporting from popular ecommerce platforms), that making the site "prettier" had a slightly adverse effect on conversions.

Turns out, their customers didn’t trust something that looked and felt too polished.

If we’d gone with best practices, without testing and measuring, their conversion rate would have steadily degraded over time. There’s no one answer for every target audience.

The benefits of a data-driven marketing approach

Overall, when it comes to marketing, use your instincts and experience to guide, but not prescribe, your activities. Testing and measuring is easier than ever, and these metrics and results will lead you down the path of most return.

The good news: when a team member, client, or superior questions why a certain approach was taken, you have more than just your experience to back up your argument - you’ll have clear, measured data. Another benefit of this approach - you learn more, and about different techniques, than if you’d followed the typical modus operandi. It expands your skillset as a marketer.

And most importantly, your efforts become the most efficient and effective they can possibly be. Improving ROI is a core tenet of good marketing.

Start small… but start!

Even if your marketing is doing well, there are still hidden opportunities for improvement. Try a few different tests on what you have out there - both optimizing slight changes, and a wildcard that goes against your normal way of doing things. I promise you will be (pleasantly) surprised by the results.

If you need a few ideas on what to test and measure, call us and we can come up with 3 actionable tests to try in your marketing.

Podcast with John Cunningham, CTO of Victory: Dave Albert's CTO & Co-Founder Talk

From clients being taken advantage of, to mass team walkouts, to frantic phone calls in the middle of the night, to AWS accounts being held hostage - hear some of the war stories Victory CTO and Co-Founder John Cunningham has experienced in his 25+ year career.

He also discusses the true role of a CTO, what makes the job great, and the next big thing that will drastically change the tech landscape going forward.

Listen here.

Interview with Angela Arnold, CMO of Victory: Going Ahead with Gage

Marketing blog Going.Ahead. with Gage interviews Victory CMO Angela Arnold about her approach to leadership, creative thinking, and her favorite marketers.

Here's an excerpt:

What are some of the biggest challenges you see in Marketing today?

Marketing has become so broad that it’s starting to suffer from its own weight. Most job descriptions for marketing leadership positions ask too much from one person.

The Harvard Business Review article "The Trouble with CMOs" explains it much better than I can, but expecting marketing to do an increasingly varied amount of work can easily set a person or department up for failure. We also assume that the newest and latest also means it's the greatest. After all, we're marketers! We love flash! Older marketers have confessed to me that they're worried about their skills being relevant in today's age – this is only true on a tactician level.

A marketer’s job is to serve your market, and at the end of the day, people are people, no matter what channels they’re using. Just because it’s new doesn’t mean it will really resonate with someone. Behind each impression or site session is a real person.

--

Read the full interview here.

Why you need to think about alignment when you set up your business

Often when senior technology executives come to work with Victory, they are burned out in a big way. I’ve heard this feeling referred to as “corporate PTSD,” conveying the seriousness of the impact some top-level executive jobs are having on high-performing individuals and their general wellbeing -- and it’s important to understand why.

Valuable executives are being frustrated and worn down by roles mired in politics and stale models for compensation. It’s so common that we’ve formalized a decompression course for executives we onboard at Victory, the company John Cunningham and I most recently founded in Austin.

The question is: why are smart, high-performing individuals having such a negative experience at the upper echelons of American business? It seems clear to me that traditional business models and traditional organizational structures are simply failing to empower high performers, and in many cases are making their jobs far more frustrating than they have to be.

From the first day, Victory set out to try a new approach to defining true internal organization alignment, and build a more tenable framework for business leadership.

Re-Imagining The Relationship Between Talent and Business

What if we removed obstacles so that top talent could be empowered to do what they do best, and rewarded them well for doing it? Seems pretty straightforward, right? Here's another crazy idea: What if alignment could be ensured by hard numbers, instead of by subjective assessments skewed by corporate politics? These are the questions that pressed me to try something new with my latest business.

Ultimately, I found that re-imagining how teams and leaders engage with the businesses was key to addressing these challenges.

At Victory, we break our leaders into four classes, not unlike a university system: Freshman, Sophomore, Junior, and Senior. As leaders advance toward Senior, their pay rate increases but their billable expectations lessen, so they can focus on business development, closing sales, team management, and mentorship.

Pay is also structured thoughtfully, since it is an important motivating factor for most people. Leaders get a base salary, plus billable hours that are weighted so they are not incented to work only on the most expensive clients, plus a commission. Aligning individual pay to what's actually best for the broader business and its clients ensures that everyone is truly working toward the same goal.

The Victory model isn’t something that can only work for one company or in one area of the business. For example, we first applied the Victory model in the CTO space, but have since expanded our offerings to include multiple practices, including CMOs, CIOs, etc. Each type of leader may have a completely different focus, but each benefits from the Victory model as they are able to tap the specialized skills from other practices to attract larger, more diverse clients.

Compensation reinforces this cross-functional approach: executives get paid no matter what they’re working on, so they aren’t only incentivized to work within their specific practice or silo.

By the team, for the team

In many ways, I view entrepreneurship similarly to nation building. Forming a healthy, productive group at a large scale is no easy task. The organizational structure, the people chosen to lead, and mechanisms for accountability and prosperity are all critical considerations in either case. So if entrepreneurs are essentially nation-builders, why can't we take a cue from America's founding fathers and build something by the people, for the people?

When you do, great things can happen -- I can attest to the benefits of freeing smart people from outdated structures. At Victory, once executives have time to recover from their experiences at traditional enterprises, they thrive in the hypernetics model, where they are no longer encumbered with countless hours of corporate drudgery and can concentrate on their actual work. Really, this shouldn’t be that radical an idea.

It also shouldn't be too radical to allow people to put in the number of hours that they feel comfortable with, and pay them accordingly, instead of forcing people to pretend to show up to meet an arbitrary number of working hours per day.

Perhaps what is radical is the mindset required to walk away from how things have been done in the past, and embrace not just new digital tools, but the processes and models they can unlock. Technology is changing the rules for business by helping us reimagine what’s possible for tomorrow.

We can harness the promise of technology to build a better future of work, both for businesses and the people who work there, building reasonable companies by reasonable people. Let’s leave the inefficiencies of corporate politics and bloated organizational structures in the past.

Should you have a customer support handle on social media?

Where social media fits for businesses

It’s a question many companies grapple with at any stage of growth: “How do we best use and manage social media?”

In the grand scheme of things, social media is still pretty new - just 15 years young for the oldest networks. Trends and “best” practices have cropped up, grown like weeds, pollinated to the point of allergic reaction, blossomed and died in this time, but there is still no standard operating procedure for how companies should use social media.

Nor is there a clear understanding of where it best fits into their business. Structure and fit become substantial questions to answer. Let's say you're Coca-Cola. Do you have social media handles for @CocaCola, @CocaColaIndia, @Sprite, etc.?

When social media was new and exciting, it was an unlimited marketing channel with no algorithms to manipulate feeds and hide content, and users were compelled to engage with someone - anyone! - on this new frontier.

Over time, as social media became more integrated in daily life, it moved from simply being a marketing channel to an ongoing conversation with customers. People started using social media as a customer support channel, moving away from phone calls or emails. Now, 63% of millennials begin their customer service interactions online.

To deal with this new influx of customer issues, many brands created separate support handles (remember you’re still Coca-Cola so @CocaColaSupport) designed to collect all customer service inquiries in one place.

It seems logical enough. Customers know exactly where to go for their issues, and everything unpleasant gets separated from the fun marketing messages of the main brand handle.

A separate @support handle is outdated and reinforces silos

However, having a separate support handle adds complexity and builds silos between customers and the brand, as well as between employees and customers.

How a company handles their customer support on social media is a representation of how they interact with their market on a larger scale.

Moving away from a separate @support handle can reorient a brand toward more nimble, responsive, and ultimately profitable operations and customer relationships.

This new marketing landscape means higher expectations

A major goal of a company should be to reduce friction and frustration in the purchase and post-purchase process. This leads to increased customer lifetime value and a better Net Promoter Score, while saving the company on internal costs associated with customer service.

Companies should think carefully about whether a @support handle continues to serve them and makes sense for their business given the current market landscape.

Luckily, if they make the decision to deprecate it, there are some steps to follow to make this transition relatively painless for them and the customer.

I experienced this very situation when I took over digital marketing for the North American headquarters of an international carsharing company. Just before I was hired, the North American marketing department had made the decision to create a separate @support handle, as a way of addressing customer support issues.

The social media setup for the company was already quite complex:

  • Since it was a location-based business, there were handles for each operating country and city, e.g. @Company, @CompanyCOUNTRY, @CompanyCITY.
  • The @CompanyCOUNTRY and @CompanyCITY handles were monitored by multiple stakeholders, with nobody solely responsible for answering either marketing messages or customer service inquiries. For the customer, this meant inconsistent responses and response times.
  • North American customer support inquiries were supposed to be handled through the North American headquarters… but there was no process for routing issues from local handles to HQ.

The solution: centralize customer support issues to @CompanySUPPORT, which was monitored by the North American headquarters (still the marketing department, but at least in the same building).

My job was to take a look at the above situation and determine whether the @support handle was a valuable investment or a misinformed move.

Spoiler alert: After investigating, evaluating, and debating, we decided to deprecate the dedicated @support handle - a decision that better served the customer and the company.

What’s best for the customer?

Keep it simple

If you’re traveling from New York to Boston, you simply show up to Grand Central Station and get on the train. From there, the conductor switches tracks to head the right direction - you don’t worry about doing it yourself. It’s the same with social media handles. Customers just want to contact the conductor without worrying about how to route their issue appropriately.


When a customer has an issue, they want that issue resolved with as little friction as possible. Brands have plenty of social media handles, phone numbers, email addresses, and contact forms as it is.

If customers are already upset about the product or service, they don’t want to do a lot of searching.

Your company may have a structured path and organization for certain types of issues and contact paths, but customers rarely follow a set path. They will reach out through whatever channel they find first.

We’re all guilty of this - raise your hand if you’ve repeatedly mashed ‘0’ through a phone tree, ignoring the nice and neat pre-recorded phone options someone at that company worked on for weeks.

Likewise, for a company with a dedicated @support handle, marketing inquiries come in to the @support handle and support inquiries come in to the marketing handle, no matter how clearly the company defines the purpose of each. It's just natural!

Look out for Imposters

In addition to simplicity for the customer (and not fighting against nature), there’s another risk in fragmenting a brand’s online identity: Imposters.

It’s been great sport for people to create a fake customer service handle and troll a brand’s page, responding to customers in outrageous ways. Customers aren’t used to vetting an online identity and so they assume it’s the actual brand responding:

[Image: a fake "Target" customer service account trolling a customer]

Of course, one rogue tweet won’t cause that much damage at scale, but to that individual customer, that can be quite the negative experience.

Additionally, it’s more likely for a single main handle to be verified, which leads to less confusion and helps safeguard against impostors.

Less room for user error

Ideally, well-structured marketing and customer care organizations have smooth operations between them, so no matter where an incoming inquiry comes in, it can be handled by the correct department. There are multiple great software platforms that enable back-end systems and programs to speak to each other and route appropriately and easily.

For the carsharing company, adoption of the new @CompanySUPPORT handle was high, but an unexpected consequence happened: adoption was too high.

Customers found the number of handles to be overwhelming, so they took the 'spray-and-pray' approach: They reached out to @Company, @CompanyCITY, and @CompanySUPPORT, all at the same time.

We ended up having to re-route those issues anyway, which added complexity to the process and increased the likelihood of a message going unanswered.

This shone a harsh light on the inefficiencies in our systems and processes. We had to adjust our systems to make sure all incoming messages, regardless of channel, were being tracked to the same customer in the CRM… not an inexpensive effort!

Clear communication is good communication

What about keeping a company feed or wall ‘clean’ of customer complaints? Any marketer reading this gets twitchy thinking about their carefully planned campaigns and obsessively-produced graphics being surrounded by customer complaints.

This fear is mostly unfounded. In reality, it’s very rare that a user will go to a company’s profile directly and read through a feed - Facebook barely shows others’ wall posts anymore and Twitter no longer displays @ mentions on a user’s feed either. The option to directly DM a company on Twitter has been around for five years, negating the need to publicly tweet at them.

One of the fears I’ve had to mitigate with C-level executives is that customers or competitors can see all the times there was an issue or outage - all the dirty laundry is out in the open, as it were.

Refocusing on the customer behavior is important here. When things hit the proverbial fan, most of the time, people go straight to the company profile to get an update.

It’s best if you proactively announce that your product has an outage or is unavailable, otherwise your feeds will be flooded with the same question, or customers will turn to other sources for this information. The other sources then control your narrative, not you.

For example, here’s Cloudflare, which had an outage as I was writing this article:

[Screenshot: Cloudflare's main @Cloudflare handle posting updates during the outage]

Even though they have a dedicated support handle, they still posted updates about the outage on their main handle.

This is just good communication. It's always better to be honest with customers than to try to hide it when things go wrong. Most of the time, people want verification that yes, something is wrong and it's not just them.

For the executives who fear exposing the company's weaknesses, remind them that social media is short-lived. The half-life of social media posts is pretty short: a tweet is ~18 minutes, a Facebook post ~30 minutes, and an Instagram post about 19 hours.

This means that even when a company explicitly states they are having an issue, it disappears from feeds relatively quickly. It’s always better to over-communicate than not communicate at all - that’s the whole point of social media.

If you can get your leadership team over the fear of ‘showing your worst’, they’ll find that there aren’t really any negative consequences to doing so.

Just like beauty, good customer service comes from within

Empowered employees and good process is critical

Social media’s change from a purely marketing channel to a customer service-focused one happened relatively quickly; it really started picking up in the last couple of years.

With customer service inquiries continuously shifting to social media or owned digital channels, it’s imperative that a company’s back-end processes and their employees can be responsive and adaptable to whatever comes in.

Most companies should have a social media tool (or ideally one that can take input from multiple sources) and have rules in place to route appropriately, so having separate handles on the front-end is redundant.
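
As a toy illustration of such routing rules, here is a Python sketch that sends an incoming message to the right queue by keyword. Real social media management tools express this as configurable workflows; the keywords and queue names below are placeholders.

```python
# Toy routing sketch: classify an incoming social message to a queue.
# Keyword lists and queue names are illustrative placeholders; real
# tools do this with configurable workflow rules (or a trained model).
SUPPORT_KEYWORDS = {"refund", "broken", "help", "outage", "error", "charged"}

def route(message: str) -> str:
    words = set(message.lower().replace("!", "").replace("?", "").split())
    if words & SUPPORT_KEYWORDS:
        return "customer_service_queue"  # e.g. open a case in the CRM
    return "marketing_queue"             # praise, questions, engagement

print(route("The app shows an error after the update, help"))  # customer_service_queue
print(route("Love the new feature announcement!"))             # marketing_queue
```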

Streamline and make it consistent

Any reduction in the number of handles a company has to maintain and publish content on will lead to more efficiency and cost savings. And besides, your customers should care about what you have to say, on any handle.

Few people follow a @support handle, so operational announcements (like Cloudflare’s outage referenced above) have to go on ‘brand’ channels anyway.

When I was researching this article, I looked at Twitter itself. Contrary to the arguments made here, Twitter has a @TwitterSupport handle as well as @Twitter.

The @TwitterSupport handle puts out some really useful content, like product updates and asking for feedback on ideas:

"We're raising the bar on how you search! When you search for accounts, we'll show you a little more info, like if they have recent Tweets and how they connect to your broader network. We're rolling this out on iOS, Android, and https://t.co/AzMLIfU3jB over the next few days."
Twitter Support (@TwitterSupport), June 26, 2019

The @Twitter handle… not so much:

"Social experiment: If you come across this Tweet, you're in the real social experiment"
Twitter (@Twitter), June 27, 2019

But here’s the thing: the @Twitter handle also publishes useful information about the product, albeit in the more ‘fun’ voice of @Twitter versus the more factual one of @TwitterSupport:

"Skip the sign in, sign out hassle. Now you can switch back and forth between handles faster on https://t.co/AIUgyCj4rs, Twitter Lite, and Twitter for Windows. Just tap the drop down from your profile photo. Your stan account thanks you."
Twitter (@Twitter), June 3, 2019

This is a lot of effort to duplicate information. Centralize and streamline!

Simplifying where communications come and go benefits a company's employees, too.

“What the heck was marketing thinking?” is a phrase many customer service teams have uttered. Unless there is great communication and close coordination in the campaign planning phase, customer service tends to be the last to know about new initiatives. Customer service is very familiar with the common problems and pitfalls customers face - and they can anticipate how customers will react to change.

Because marketing is often removed from customer service, they don't feel the pain and cannot anticipate the impact their campaigns can have on current customers.

By centralizing communication, marketing will become more familiar with common customer service issues. They can adapt their messaging and/or materials to ease this friction.

A single sentence changed in the sign-up process or an added illustration in the tutorial can reduce inbound customer service inquiries by a handful of percentage points. Each case costs money, and for every customer that writes in, 26 others stay silent and/or simply stop doing business with the company.

Use the right tools

But isn’t having separate handles easier to monitor and route questions appropriately? If someone writes into @support, customer service can be in charge of monitoring that handle. Easy!

As stated earlier, this is a nice, clean process in concept that customers in the real world typically don't follow. Creating an easier internal process shouldn’t come at the expense of increasing customer friction.

Having a separate @support handle trains your internal employees to mentally separate customer service and marketing as unrelated. It subsequently becomes easy to write off everything that comes into one handle as ‘not my job’ or not pay attention to it.

Any real difficulty with monitoring is typically from trying to do it natively on the platforms themselves, which is not recommended except for very small companies or ones where social media isn’t one of their main (or even secondary) channels.

Using one unified tool across marketing and customer service, specifically one that connects to your CRM, saves time, effort, and can ultimately be cheaper.

These tools also take care of metrics tracking and maintaining SLAs. It can be a big investment, but one I’ve always found to be worth it.

With the carsharing company, when it was time to purchase a social media management tool, one of the core requirements was integration and routing. The tool we purchased enabled customer service inquiries to be routed directly into the case management system.

When we made the investment and restructured the handles, we also restructured the team and their responsibilities. Instead of having 14 people in locations ‘kind of’ monitor social media with inconsistent responses, we centralized to a team of 2 in HQ who did it with depth of expertise and who worked closely with customer service.

Orientation Toward a Better Future

The best ideas come from your customers

“Serving millions of customers today doesn’t guarantee you will be serving even a thousand tomorrow, and learning to appreciate the individual customer truth is probably the most fundamental mind shift leaders and marketers alike need to make.”
Leonid Sudakov, Adweek

What happens when a customer doesn’t quite have a complaint, but instead has a suggestion?

If they have an idea for a feature on the app that makes things easier, should that go to @support or to the @main handle? If they have to hesitate in making that decision for even a second, it’s likely they’ll just say “forget it” and move on.

Customer feedback, requests, and ideas are critical to making and improving great products - so reduce the friction to get this input.

Employees + customers = a stronger company

Great customer service (and in fact, great business practice) comes from meeting the customer where they are and explicitly addressing their needs.

The landscape of customer service is changing drastically. Challenger brands’ entire USP is built on great customer service.

For traditional brands, Adweek states that “major brands across 14 product categories lost market share and 90 of the top 100 CPG brands have experienced declines.” Things are changing fast, and having an internal divide between customer service and marketing won’t serve companies anymore.

Customer expectations have changed and it’s no longer enough for customers to get stock responses and generic platitudes from low-paid, poorly-trained employees.


Nobody knows your product better than your employees so by centralizing and removing silos, you’re orienting their thinking to be more holistic, instead of transactional.

A customer’s relationship with a company is bigger than a singular marketing message or customer service issue - it’s the sum of all of the touchpoints they’ve ever had with you.

Centralizing customers and employees around a main handle creates a community. It gives improved visibility, both internally and externally, and enables everyone to take more ownership of the product and this community.

Everyone can see (and work on) the problems together

When complaints (and compliments!) are centralized to one handle, a company’s mistakes or weaknesses become more visible to the community. This is an argument for keeping a separate @support handle, but one that is rooted in a fear or shame mindset - not a collaborative one.

The benefit of having a @main handle, with issues more out in the open, is that customers can see and talk to each other, especially when one of them is having an issue. Other customers might already know the fix and can help each other resolve it. A customer service agent may not even need to be involved at all.

Additionally, having these issues visible to your customers, detractors, and competitors keeps your company accountable.

It also orients your company and your employees toward serving your customer community the best you can throughout their lifecycle. There’s more to business than the bottom line - it’s about serving a need in the market.

What was the outcome for us at the carsharing company? Once we deprecated the @support handle and made the mindset and process shifts, engagement rates for our over 1 million social media followers leapt 15%, social care response times dropped 50% YoY, and turnaround/case resolution rates ran far above industry standards.

As more fans, followers, and customers saw that they could get a fast, personable response on social media, they started using it for customer support inquiries, too.

The overall proportion of customer service issues shifted from phone and email to social. This led to substantial cost savings - it cost us >$6 per phone call, about $3 per email, and less than a dollar per social media interaction. For a company that received thousands of inquiries per day at peak times, this was quite significant.

How to Implement

If you have a customer @support handle, what now?

You as a company need to prepare internally. This means discussing how inquiries will be routed and how marketing and customer service can all look at the same feed.

How long will it take? The timeline surprised us. Even though we planned for an 8-16 week transition, the incoming customer inquiries and behavior patterns changed in less than 1 month.

Below you’ll find a quick start guide to help you navigate the steps. If you’re looking for more hands-on advice for your unique situation, contact us and let’s talk it out.

It’s a big move, but the payoffs will be worth it.

>> Download the checklist here. <<

Embrace your new world

Once we deprecated the @CarsharingSupport handle, we never looked back. Customers were happier, and marketing and customer service were coordinating closer than ever. Even the company’s developers and engineers started paying attention to what customers were saying, and adapting their sprints based on the feedback.

Maintaining a separate @support handle on social media leads to increased customer confusion, internal employee silos, and holds a company back from becoming more service-oriented, nimble, and responsive.

Break down the silos and embrace a higher-level approach to your customers.

Defining Your Brand Before It Defines You: Total Team Clarity in Just 2 Hours

Taking a few hours to get your team on the same page for your brand(s) reduces friction, lends clarity, and gets the whole team behind the same statements.

If your internal team can’t get behind your brand with clarity and belief, how can your customers?

Capital-B Brand is esoteric

When you think of the word “Brand”, how do you define it?

Is it a personification, like Flo from Progressive? A mascot, like Ronald McDonald? A logo, like Apple? A phrase, like Nike? Or an attitude, like Wendy’s Twitter?

Is a brand the differentiator between your two pretty similar products?

If you were to ask a few people in your company right now what your company’s brand is, you’re going to get some different answers. Try it!

Send this message to a few randomly selected people (not just in the Marketing department) and see what you get back: “What would you say [our Company’s] brand is?”

If they give you hesitation or ask for clarification, state that you’re being purposefully vague and have them respond however they think is appropriate.

Take a look at the answers you got - I guarantee they not only used a different definition of brand, but defined your company completely differently. That’s okay! Even the most communicative companies need to reinforce the messaging and positioning of their brand regularly.

Perspectives differ

And it’s not even the definition of the term brand that can differ, it’s how your brand is defined by your own employees (not even mentioning your customers, which I won’t cover here).

“Strategic planning should be more about collective wisdom building than top-down or bottom-up planning.” ― W. Chan Kim

One of our clients is an Entertainment & Music company with multiple brands, some of which had been around since the inception of the company, and some of which had yet to be launched.

I gathered the stakeholders of the company for a brand workshop. Most had been with the company for many years, or were intimately familiar with the plans for these new brands.

At the beginning, half the room was disengaged, the other half only mildly engaged. A workshop like this was new for many of the participants - some responsible for managing physical stores, and some focused only on digital systems.

Once we started digging into questions about their umbrella of brands like what their purpose as a company is, the brand architecture, who the target market is, and how the brands are expressed, they started understanding that brand is more than a loose, hand-wavy concept.

They were shocked to hear each other’s answers to some of these branding questions. The room went from passive and leaned-back with little discussion to lots of cross-talk, leaning forward, and laughter.

Each person individually had a clear-to-them idea as to what the brand was, but they hadn’t articulated it to each other. It was new for them to discuss as a team. “We’ve never said it out loud before!” was a common refrain.

The end result: a clearly defined brand architecture document, which delineated parent and sub-brands. For each of their 6 brands (they initially had 10; we consolidated at the workshop), they had:

  • Brand purpose and definition
  • Target audience demographics and psychographics
  • Brand attributes - fun/funky/progressive, not efficient/traditional/etc.
  • What the brand sounds like (specific copy examples)

Everyone was on the same page. They subsequently distributed this information to their entire team, down to the servers at the restaurants.

The entire team has to be on board - not just leadership, or an individual department like marketing or engineering.

Another case: an industry-leading video gaming company came to us to help them migrate their servers from hardware to the Cloud (a very scary proposition when you have 125,000 active users every minute).

At the same time, they asked us to redo their website. It was outdated, barely functional, and served no real purpose.

In an ideal world, there would be a project team made up of representatives from each department of the company, where each has a voice in how the website functions and fits into the overall vision for the company. The goal: ensure consistency in messaging, from customer service to product, to operations, to marketing.

Unfortunately, that doesn’t always happen. One group is tasked with the project (or they just take it on) and they go full-bore, without bringing in others to slow down the process. This is natural.

The scenario that we’ve seen play out over and over is that if marketing or other stakeholders aren’t involved from the start, they will step in at the last 10% and add scope creep, require re-examination and re-visiting of decisions, and cause general internal misalignment.

Worst case, it brings everything to a grinding halt, causes contention amongst the team, and wastes one of the most valuable resources: time.

Before any work begins, it’s important to define not just how the website will function, but why it exists, who it exists for, and what stakeholders across the company need from it.

It is imperative to make sure everyone is aligned on what it does and why it exists. Even if other departments decide to leave the purpose, narrative, and content up to your department, they should be clear in what they are handing over.

A website is more than a place to store information; it is the public representation of your brand and company, and where all business units are represented. It should be the shared foundation for content strategy, marketing strategy, future growth, product, product marketing and more.

Consistency is key and defining + distributing is important.

Once you have the brand defined and agreed upon by stakeholders, the next milestone is getting everyone onboard with the idea. Every. Single. Employee. Full-stop.

From the product packer in the warehouse to the VP of Engineering, all company representatives need to understand what the brand is about, who it’s for, and why it exists. If everyone gives a different answer to “So what does [your company] do?”, it weakens the consistency and the overall power of the brand.

You know it’s important, but it really is critical - a person has to hear a message an average of seven times before they really ‘hear’ it or understand it.

“Most of us know the marketing concept of good communication. To make a message stick in the head of a future consumer, you need to deliver the message seven times using seven different channels. Why do executives think that to make a strategy stick, a boring speech delivered once will be enough?” - Jeroen De Flander

So, you’ve had this big mysterious brand workshop where all the top-level people and important stakeholders (e.g. company influencers) come out laughing and talking about how clarifying this workshop was. That’s great. But if a tree falls in the forest, and nobody talks about it at the all-hands, did it really happen?

The key to all this effort is to make sure this gets disseminated to the rest of your company! Do a summary or write-up, or talk about it at the next town hall.

Then make a little brand cheat-sheet for front-line employees in social media, front desk or retail counters, and especially for communications and customer service. This tactic will be instrumental in helping focus their communications and keep everything in line with what your brand represents down to how it should sound.

For a car-sharing company with an extensive and dispersed customer service organization (we’re talking internal email and call centers at headquarters in Texas, internal call centers in Vancouver and Montreal, and third-party call centers in Nebraska), brand cheat sheets were distributed and physically stuck on the wall by their desks.

Does that sound extreme? It wasn’t! There was a lot of positive feedback from employees who felt empowered to represent the brand without sticking to a prescribed script.

So, what does a brand workshop look like?

We recommend a working session where we bring stakeholders from marketing/communications, development/technology, and leadership into a room to ensure that the ultimate business purpose of the brand, what it is and how it is defined, is agreed upon by all parties.

Overall though, it varies for each company based on their needs. For the Entertainment & Music client mentioned previously, they also needed to determine their brand architecture. For the car-sharing company, there was only one brand, but global implementation and country-specific adaptations had to be considered as well.

It’s best to do this workshop all in one session or over a two-day period, while people are warmed up and thinking about brand in a new way and open to new ideas.

Generally speaking, a workshop should cover:

  • Business Purpose and Vision
  • Messaging, Positioning, and Target Audience(s)
    • Primary messages and benefits
    • Positioning relative to other offerings (Strengths, Weaknesses, Opportunities, Threats)
    • Target Audience(s) - informed by market research and your experience; defining who they are, how they think, and what their true needs are
  • Brand Attributes and Voice and Tone
    • E.g. “Our brand is Cool, Innovative, and Elite. Our brand is not Traditional, Eclectic, or Broad. We have an inspirational tone, and never use the passive voice.”

Who to invite?

While the go-to is the C-Suite or sometimes just Marketing, there should be a variety of attendees - the C-Suite is important, of course, but so are those in Finance or IT. Also bring a few influencers from different departments, as they are closer to the customer and closer to the employees and will have valuable and unique insights.

Context Matters

Positioning and messaging can’t be done in a vacuum. It’s a collaborative process between everyone on your team to define and realize your company’s true vision.

Whether you lead this yourself, or bring in a team like Victory to run it, the result will be that investing a few hours of your team’s time will lead to much better company and brand cohesion, both internally and externally to your market.

[CASE STUDY] Revival Cycles sees a 300% increase in e-commerce conversion rate

In This Article:

Victory’s CMO Practice, in partnership with Victory Data Science Group (vDSG), produced a +300% increase in e-commerce conversion rate for Revival Cycles while split-testing two e-commerce platforms and a proprietary content management solution.

Client: Revival Cycles

Framing the Problem:

To many consumers, online shopping is the best thing since sliced bread. No longer do we have to make the tiresome trek to a store just to wait in checkout lines - instead we’re able to simply click and be done (usually from the sofa or from the desk, if it’s a particularly slow afternoon in the office).

Consumers can thank companies like Amazon and eBay largely for leading the way with the radical change to e-commerce. However, in the wake of these online behemoths dramatically altering how we buy, all manner of smaller retailers were left with the common question: What do we do now?

How does a company move forward from the “Brick and Mortar” business model that has stood for centuries, with only a handful of e-commerce platforms to look towards for guidance? Most business owners, especially small businesses, lack the technical know-how required to scale to a “virtual storefront” model.

Early platforms like Etsy rose up to meet the needs of some of these small business owners; now platforms like WooCommerce, Shopify, and BigCommerce lead the way in e-commerce integration.

[Pie chart: e-commerce platform usage share. Source: BuiltWith]

A Note on E-Commerce:

E-Commerce, just like in any physical storefront, is all about completing the sale and doing this over and over again. It is pretty straightforward in theory. However, the problem with e-commerce is that it is fighting with every other distraction online to convert “browsers” into “buyers.”

Physical stores have spent millions of dollars and hired research agencies to best optimize for purchasing. Online shopping is still rapidly developing, and doesn’t have decades of research history behind it to find the most optimal layout, signage, and even scent to make the sale.

While e-commerce growth is far outpacing retail, the vast majority of retail purchases still happen in physical stores:

[Graph: e-commerce as a share of total retail sales. Source: DigitalCommerce360]

That is to say, for the relatively small percentage of sales that happen online, it is critical to optimize as much as possible.

E-commerce platforms, at the end of the day, must guard against the abandoned cart problem. It obviously doesn’t do businesses any good to drive traffic to a site where nobody buys anything. Based on this very simple concept, it is reasonable to say that the core function by which to judge any e-commerce platform is its ability to attract visitors and convert them into customers - in other words, to optimize sales conversions.

Case Overview:

Revival Cycles approached Victory with the question of whether their current e-commerce platform, Shopify, was the best fit for the continued growth of their business. They had over $1 million in online sales, and a ton of original organic content, so they were doing several things right already.

However, would they perform better if they made the move to another provider, like BigCommerce? Were they using their media assets to their full potential? And was their e-commerce platform optimizing sales conversions?

It seems counter-intuitive, but there isn’t one e-commerce platform that is simply The Best. The correct platform for a given business or industry is highly dependent on a myriad of factors. The simple fact is there is NO way of instantly "knowing" which is right for any given business.

This is why Victory is different: we do not pretend to “know” the answer to every client question, make decisions based on what sounds right, or blindly follow whatever solution is currently trending. Our years of experience have taught us one thing time and time again: the only way to arrive at the correct solution is to find the right answer. Data is objective and the true equalizer - creating a hypothesis to test and measuring the results is the only way to provide true insight and deliver effective solutions for clients.

The most objective way to compare two platforms is to, well, compare them. And that doesn’t mean just making a list of features that each platform claims to have, it means a real world test with real customers. We set out to perform an A/B split test to determine which platform was best able to optimize and convert.

At first we set out to test the difference between the two most relevant e-commerce platforms, using Shopify as the control and BigCommerce as the test.

It is critical to be able to compare the platforms in an apples-to-apples manner, meaning stats had to be calculated identically for both platforms or the results would not be of any value.

After several rounds of back and forth with customer service on both sides, we discovered that they both calculated the “Conversion Rate” in their dashboards differently (read “creatively”), and neither was particularly worried about matching the other’s criteria.
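To see why the denominator matters, take the conventional definition (our illustration here, not either vendor’s formula):

conversion rate = completed orders / total sessions x 100%

25 orders over 1,000 sessions is a 2.5% conversion rate; the same 25 orders over 700 unique visitors is roughly 3.6%. Two dashboards can report very different numbers for identical traffic, which is exactly why we needed an independent crosscheck.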

As a workaround to the conversion rate calculation problem, and to ensure business continuity during the test, we attached both platforms to a common inventory management system. This prevented one platform from selling a product that had already been sold on the other, and vice versa. It also allowed us to easily crosscheck the conversion rates that our test framework reported for each platform.

Using Content to Drive Traffic

Let’s talk about leveraging content to drive traffic and conversion for a minute. If you have a site that has beautiful content and an optimized shopping experience, but no traffic, it doesn’t do the business any good. There are multiple channels to consider to drive traffic including social, email, paid advertising, and Search Engine Optimization (SEO) for organic search.

SEO is the most critical component to leverage content for traffic on e-commerce sites. And since Revival had a ton of organic content it was critical to squeeze every drop of SEO juice out of that content. While both e-commerce platforms have some version of a Content Management System (CMS) included, neither really measured up to the challenge of both delivering the client’s desired experience and optimizing the SEO value of the content. So we decided to include a third hybrid solution in the experiment: our own.


GoldenEgg is a proprietary content delivery platform developed at Victory Labs in cooperation with the Victory Data Science Group. We chose Contentful as the backend CMS, used primarily as a content and media management platform, integrated with Amazon’s Elastic Beanstalk.

So, as a third variation in our experiment we used GoldenEgg frontend with Contentful backend for initial consumer experience and content consumption with links to products on BigCommerce for conversion.

So, did Shopify or BigCommerce prevail? The answer was neither!

When it was all said and done, GoldenEgg came out on top with a 3x conversion rate when compared to that of Shopify and BigCommerce alone. Why? While there are a large number of potential reasons to consider, based on our test it seems likely that the content experience on the website was the cornerstone of engagement and ultimately created the required stickiness to convert visitors to customers. So, the winning combination was an immersive visual experience that aligned with the brand messaging and appealed to the highly visual user base.

All things considered, is this solution the best one for every business? No! For a landscape as dynamic as digital, there is no prescriptive solution. The key takeaway is the importance of testing and measuring the potential solutions to find the correct answer for a specific business, e-commerce or otherwise.

--

Chris Chilek is the co-Founder of Victory.

The Corporate Overlords' Battle is Your Gain

“Chaos is a ladder.” For those of you who are Game of Thrones fans, you know what Petyr Baelish meant when he said this, and how true it is.

If your “opponents” are pitted against each other, there is a lot that slips away from them. They are all so consumed by their respective attempts to best the other(s) that they fail to observe how their surroundings may be changing. Bystanders can leverage this fact to their benefit.

For App Developers, the race between Google Polymer and Facebook React tech stacks allows an excellent opportunity to improve their own app longevity in a way that is less risky and more effective.

Read more about how this battle is for your gain in this article by Pete Carapetyan, a software engineer in Austin, TX.

Project Estimation in Small Companies

Timing is everything. Time is a finite resource, and an invaluable one, and as such we seek to control it in order to leverage it for our own gain, especially in business.

The most efficient and effective use of time is the holy grail of project management. Everyone strives to be able to perfectly estimate the timing of all the disparate mechanics that must eventually come together to produce a synchronized, productive business outcome. However, there are unseen forces that would meddle with our perfect timelines, pipelines, and our sanity to boot.

In this article, Boyd E. Hemphill, Victory’s CTO, discusses the different measures Victory experts employ to avoid productivity pitfalls so that your teams attain maximum efficiency and never miss a deadline again.

Read the full thought-piece here, and if you’d like two hours of complimentary consultation for your small business and its technical team, please contact Angela Miller.

ANNOUNCEMENT: Tech & Digital Freelancer Meetup Group

We operate a little differently at Victory. While we have a core group of FTE employees, we rely extensively on a network of extremely talented and experienced freelancers to work on projects as they come in. This way we get the best and brightest talent, perfectly aligned to the scope.

There are a few reasons for this. The principal two: we don’t want to bring on team members when landing a client only to have to let them go if that client moves on; and some of the best and brightest talent either has a day job they love or likes being independent (#gigeconomy).

This means we spend a lot of time talking to freelancers and independent contractors - and something we hear a lot is the difficulty in finding other really quality, dependable people to bring in on projects. Establishing trust is important, and there’s no better way to do that than to meet in the real world.

We realized there isn’t a regular tech or digital freelancer networking event in Austin. At all. In fact, there was even a call for events in what’s arguably the biggest freelancer board locally, Austin Freelance Gigs.

So, in true Victory style, we decided to take things into our own hands.

We've launched a monthly meetup where freelancers in the tech and digital spaces can meet and find contacts in other areas besides what they offer.

If you’re a freelancer, join our meetup group.

We’d like to meet you, and you may just find your perfect project partner, too.

JOIN HERE!

Re-Thinking How to Increase Email Opt-Ins

There’s been something going on with brands recently. We have all noticed it, even if we haven’t quite been able to put a finger on it.

It’s the meteoric rise of the D2C brands. Think Casper mattresses, Glossier cosmetics, et al. These agile “micro-brands” exist mostly on their native e-commerce sites and on social media, interjecting themselves seamlessly where their broad consumer base interacts most heavily. These firms stand to turn the tables on larger legacy brands thanks to their digital savvy and their high-quality, high-convenience, low-cost alternatives.

What does this have to do with email? Looking to the new social media precedent set by these same “micro-brands,” Angela Miller, Victory’s CMO, takes a deep dive into the marketing positioning and messaging of popular brands today and concludes that the new email opt-in no longer has anything to do with email. She argues, using behavioral understanding, that brands should contextualize their social media “follow links” in a way that emulates an email opt-in.

Read about it here.

Native Mobile is Dead

Many would argue that, today, phones are merely forums for the myriad applications that now exist in Apple and Android digital marketplaces.

Sure, the basic functions of phones are ever important and useful; but, even now, there are apps that seek to circumvent these same integral elements, e.g. WhatsApp, Skype, etc.

There's an app for that, the modern adage reminds us. However, is this a sustainable market? Has it become too saturated?

Boyd E. Hemphill, Chief Technology Officer at Victory, outlines a case for the departure from native mobile apps in favor of progressive web applications. Citing many brands and Victory clients that have taken up this charge, this article delves into the reasoning behind why, as consumers, we should begin to appreciate and adopt web applications instead of cluttering our screens with a rainbow of icons.

Read the rest here.

Software Archeology - Log Aggregation

In October of 2017, Lee Fox introduced Austin DevOps to the concept of Software Archeology. The entire idea and analogy really struck at the heart of what I do.

Archeologists use specialized tools to unearth the behavior of complex social systems. The use of these tools must be painstaking in detail so as to have a minimal impact on the facts available about a society.

In software, even the simplest of systems can be complex when no context exists. Consider a Django web application that has been in service for 5 years. Nobody remains from the team that set up the system. There was never an operations person.

Translated into the terms of archaeology, we could say that there was no written language about how the system worked. There was nothing written about the culture of testing or security. There are no benchmarks for performance.

For years, my first move has been to look at basic systems monitoring - things like disk usage, load average, and network transfers - to build patterns of behavior. However, in the last year, I have started in a different place: application logs.
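For reference, that traditional first pass boils down to a handful of stock commands run host by host - a minimal sketch, and the exact commands vary by distribution:

#!/bin/bash
# first-look.sh - the traditional first pass at an unknown server
df -h                        # disk usage: a full partition causes mysterious failures
uptime                       # load average over the last 1, 5, and 15 minutes
free -m                      # memory and swap, in megabytes
ss -s                        # socket summary: a rough view of network activity
ps aux --sort=-%cpu | head   # the processes currently eating the CPU

The catch is that each command describes one host at one moment. Logs, aggregated in one place, tell the story over time, which is why they have become my new starting point.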

Log Aggregation Provides Awareness

Even in a system with only three web servers and a database server, tracking down an issue to understand system behavior is a mind-numbing task. Opening multiple terminals, tailing logs, losing your place. In the extreme, I have seen developers with 25 windows open, watching for information to flow as they exercise a feature in hopes of a clue to what is wrong.

Not only is this inefficient, the friction to get this sort of information has two crippling consequences.

  • Troubleshooting is often not done, and strange workarounds in the application code are frequent. (increased complexity)
  • Developers do not think about log exhaust as a first-class output because it is not useful. This means logging by the application is ... well ... useless. (downward spiral)

Digging for History

I am not brought onsite at a customer because they have awesome situational awareness. My customers are the common 90%. They have a small application that they have built over time to run their business. The people who know how it works have left the organization. The customer wants to:

  • add a feature
  • scale the business
  • track down stability issues

Yes, there are other solutions with more power and more features. That is not what I need. I need answers to why the software behaves the way it does for a given stimulus. For that, Papertrail's time-to-value is its key feature.

After several hours of digging using common cultural practices, we found evidence of a Redis server that was not known to the current team.

Other tools, like find, were employed with common naming patterns like the .log extension.

Knowing the application came from the Django tribe of the Python nation, we tracked down the settings.py files in each web root.
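The digging itself was nothing exotic - a couple of find commands of the sort below did most of the work (paths are illustrative):

# hunt for anything that looks like a recently written log file
sudo find / -name '*.log' -mtime -7 2>/dev/null

# Django declares its logging configuration in settings.py under each web root
find /srv/www -name settings.py -exec grep -l LOGGING {} +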

The result, below, is the 17 log files we once had to track individually, now codified in a simple file found at /etc/log_files.yaml.

files:
# cms logging
  - path: /srv/www/example.com/logs/access.log
    tag: cms/nginx/access
  - path: /srv/www/example.com/logs/error.log
    tag: cms/nginx/error
  - path: /var/log/uwsgi/app/example.com.new.log
    tag: cms/uwsgi
# spiffyawards logging
  - path: /srv/www/spiffyawards.example.com/logs/access.log
    tag: spiffyawards/nginx/access
  - path: /srv/www/spiffyawards.example.com/logs/error.log
    tag: spiffyawards/nginx/error
  - path: /var/log/uwsgi/app/spiffyawards.example.com.log
    tag: spiffyawards/uwsgi
# backoffice logging
  - path: /srv/www/backoffice.example.com/logs/access.log
    tag: backoffice/nginx/access
  - path: /srv/www/backoffice.example.com/logs/error.log
    tag: backoffice/nginx/error
  - path: /var/log/uwsgi/app/backoffice.example.com.new.log
    tag: backoffice/uwsgi
# portal logging
  - path: /srv/www/portal.example.com/logs/access.log
    tag: portal/nginx/access
  - path: /srv/www/portal.example.com/logs/error.log
    tag: portal/nginx/error
  - path: /var/log/uwsgi/app/portal.example.com.new.log
    tag: portal/uwsgi
# system wide application logging
  - path: /var/log/nginx/access.log
    tag: systemwide/nginx/access
  - path: /var/log/nginx/error.log
    tag: systemwide/nginx/error
  - path: /var/log/uwsgi/emperor.log
    tag: systemwide/uwsgi/emperor
  - path: /var/log/example-example/django_log.log
    tag: systemwide/django
  - path: /var/log/redis/redis-server.log
    tag: systemwide/redis
destination:
  host: logs2.papertrailapp.com
  port: 12345
  protocol: udp
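The files/destination format above matches Papertrail's remote_syslog2 agent, which reads /etc/log_files.yaml by default (an assumption worth verifying against your agent's documentation). Running it in the foreground first is a quick way to confirm it attaches to each file:

# -D keeps remote_syslog in the foreground so you can watch it pick up each file
sudo remote_syslog -D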

Some things to note about the organization of the file:

Papertrail does not require the tag, but we have found, over time, that a tag strategy of system/component/log is immensely helpful.

For example, if there is an API called kfc, that would be a system. Components of that system might be nginx and uwsgi in the Python world, or Apache and php-fpm in the PHP world. Each will create its own log exhaust that has value to a person attempting to discover what the system is doing under stimulus.

Paydirt

[Screenshot: Papertrail search results across servers]

The screen above is a search across 5 snowflake servers and more than 100 log files. You can see that it is only showing log messages with the word "error" in them. If I need to look for errors only from the subsystem "colonel", then I can simply change the search parameters to "error AND colonel".

Ultimately the above represents a "single pane of glass" for everything the system can tell me about what it is doing.

Learning from the Past

Of course, on this customer site, we learned a great deal about what was going on and used this to help us improve their system stability and reconstruct the knowledge necessary to keep the business profitable.

But you, as a developer, can also learn from the past.

It should be apparent from the above Papertrail screen capture that there are far too many errors happening. This is the culture of "works-on-my-laptop".

Consider that Papertrail will push alerts to chat and alerting services. To reinforce that capability, note the screen shot below - and note that it is bereft of any value.

[Screenshot: Papertrail alerts screen - empty]

Because the signal-to-noise ratio of the logs is so low (i.e. there are so many meaningless errors in a given time period), we cannot yet use the tool to alert us if there is a problem.

This is a typical problem in operations. Developers do not produce effective log messages and product owners do not enforce any sort of standard.

Considering the case above, there are two behaviors that must change to move from the "works on my laptop" culture to the DevOps culture of "makes-us-money".

Investigate Each Error

For every error in the log, a ticket should be opened and the error should be investigated.

Many log messages emitted as errors are, in fact, expected behavior. If an error is expected, then it is not an error in the context of application logging. It should be set to the INFO level. Such messages are still logged, but they are only informational in nature.

In less frequent cases the error is, in fact, an error. In this case it should be addressed.

The goal is an ERROR and CRITICAL free log. With that as the goal, when most of the noise is removed, then alerting can be created to wake people up at night when CRITICAL events occur and to force a morning reaction when ERRORs happen.

In this way, the logs have operational value beyond the forensic. They become a key part of monitoring the health and performance of the system.
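A cheap way to measure progress toward an ERROR-free log is to tally entries by level straight from the files - a sketch that assumes Python-style level names appear in each line:

# count log entries per level across all uwsgi application logs
grep -rhoE 'DEBUG|INFO|WARNING|ERROR|CRITICAL' /var/log/uwsgi/app/ | sort | uniq -c

Watching the ERROR and CRITICAL counts fall week over week is a concrete signal that the triage is working.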

Verbose Errors

The messages in the search box screen capture above are, at best, cryptic. Each takes its own "dig" to extract the meaning from.

When you, as a developer, decide to log any message, it should, at minimum, include the file and line number where the error occurred. Additional context in the form of a message with key parameters will save time.

In a production incident, effective log messages reduce the time to resolution (TTR) and the general stress level of the team.
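What does that look like in practice? A minimal sketch in shell - the function name and fields are hypothetical, but the shape (timestamp, level, file:line, then key parameters) is the point:

#!/bin/bash
# log_error emits a timestamped line with the caller's file, line, and context
log_error() {
    echo "$(date -u +%FT%TZ) ERROR ${BASH_SOURCE[1]}:${BASH_LINENO[0]} $*" >&2
}

order_id=12345
log_error "payment gateway timeout" "order_id=$order_id" "retries=3"
# emits something like: 2018-01-31T23:53:31Z ERROR ./checkout.sh:10 payment gateway timeout order_id=12345 retries=3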

The Logging Tool

On a software dig, the logging tool is the primary window into the behavior of the system. If your system monitoring is going nuts, the logs are the most likely place for you to find out why.

Get log aggregation in place and use it to improve your team's knowledge of your system over time.

If you are in the process of building a system, use log aggregation to drive the creation and enforcement of useful logging standards that assist the first responders when things go wrong.

Archeological dig, brown field or green field ... log aggregation will save you time and improve your production system.

Better Quality Recruiting at Scale: Kasasa Case Study

In this Article:

  • Victory is rolling out its Recruitment Program, termed Karma - a scalable solution that expedites the hiring process for client companies by combining years of technical expertise with A.I. capabilities.
  • The Karma Program was initially developed as an internal tool for Victory, then showed enough promise to test in the market after requests from partners. Kasasa, a fintech company HQ'd in Austin, Texas, engaged Victory in a proof-of-value project to solve their recruiting needs.

Client: Kasasa

Case Overview: What is the main problem with recruiting? Efficiency and scale.

HR managers have struggled to staff their firms fast enough while also connecting candidates with meaningful work. The recruiting process is multi-faceted, built upon communication, and can oftentimes be tedious.

Which is to say that it is a wholly human process and is full of human error. As a result, there are inefficiencies and missed opportunities, like missed emails, crossed wires, or absent offers.

It is also an expensive process in terms of both time and resources. For this reason, many firms have now begun to outsource their HR practices to avoid the headache.

Enter Kasasa.

Kasasa needed to fill 3 open positions within 60 days and had no internal recruiter to oversee the process. They turned to Victory for an expert solution. Victory developed and tailored its existing in-house Karma process to meet Kasasa’s needs and best fit the goal.

The result? They were able to fill not 3, but 25 open positions at the firm, all within that 60-day window.

How?

Karma+AI:

In its internal stages, and in its deployment with Kasasa, the Victory Karma program was a fully manual process. This is important to note because we were able to successfully fill the firm’s ask using only the Karma process (8x the ask, no less!). However, we’ve improved it even more: it now uses Artificial Intelligence (AI) to automate the “recruiter” role even further.

It is more efficient, effective, and scalable than its human counterparts, and is able to work through approximately 250,000 applicant profiles in 60 seconds.

But this quantity does not mean a sacrifice in quality. The platform is able to expertly identify the skills that a firm is seeking in candidates, and then send the best-match profiles to recruiters.

This is a platform that considers both the needs of applicants and the needs of recruiters during the hiring process. The skill identification capability of Karma allows for managers to be more focused on the applicants that are sent through and more generous with their time. This, in turn, creates a significantly more positive experience.

It is a platform that, despite incorporating AI, is surprisingly considerate of the human aspects of the hiring process - without sacrificing efficiency.

And that’s why we named it Karma - because we believe that what goes around, comes around - and everyone should be treated with the respect they deserve.

Vennesa Van Ameyde, COO at Kasasa:

“It was an unexpected breath of fresh air during a usually stressful process. They were able to more succinctly identify the skills we needed and able to connect us with good people.”

Description of Victory's Karma Program: Victory is about finding solutions to complex and unique problems. One of the biggest problems for many companies is the search for technical resources. That’s where the Karma program comes in.

By applying years of technical expertise, scalable technology, and a proprietary process, we make the experience of acquiring and placing technical resources faster and better for both our clients and the people doing the work.

Victory itself offers strategic consulting, and Karma connects the people to get the work done.

Sometimes a client has the strategy and plan, and just needs more hands to do the work. We can help with both.

Contact Lucas Mitchell (lucas.mitchell@victorycto.com) to learn more.

Build an Amazon Web Application Firewall Sandbox

Amazon's Web Application Firewall (WAF) as a service is a key pillar in the protection of your online applications. This tutorial gives you a start with AWS WAF by walking you through the creation of a sandbox.

You will

  • Build a trivially simple application fronted by an Application Load Balancer.
  • Count traffic from one potentially hostile host while passing traffic from another host.
  • Work with the basic building blocks of the WAF.
  • Block traffic from the hostile host.
  • Watch and reason about statistics provided by the WAF on traffic filtering.

At the end of this tutorial you will have a WAF sandbox and a general grasp of the basic concepts of the Amazon WAF service. In that sandbox you can run experiments and learn how to protect your live web applications.

It is assumed you have access to an AWS account and that you can launch infrastructure in EC2 and WAF.

It is assumed you are familiar with:

  • the AWS Console and EC2.
  • basic installation of Ubuntu packages
  • basic shell scripting

All activity will take place in the AWS Region us-east-1 (Virginia).

GOTCHA: Links to the console in this article will put you in the us-east-1 region. That should not be a problem but do be aware.

Overview

To create a safe sandbox to experiment with the Amazon WAF we will:

  1. Launch two t2.micro instances
  2. Launch an ALB and associate one instance, running Apache, to it
  3. Deploy a shell script to the second instance
  4. Attack your "application" with the shell script from your local workstation
  5. Pass valid traffic from your other AWS instance
  6. Observe and tune various behaviors of the WAF
  7. Profit

Building the Target Application

You will do the following

  • Launch a pair of EC2 instances and name them.
  • Prep the backend to receive web traffic.
  • Create a target group
  • Launch an ALB and associate it to the target group
  • Test the setup

Launch EC2 Instances

Pro Tip: All AWS objects will be namespaced with the prefix vcto-waf-tutorial- to assist you in cleaning up when you are done.

We will launch two t2.micro instances running Ubuntu 16.04.

  1. From the console, click the blue "Launch Instance" button in the upper right.
  2. Choose Ubuntu Server 16.04 LTS (HVM), SSD Volume Type. (ami-41e0b93b)
  3. Choose t2.micro for the instance size. Click "Next: Configure Instance Details"
  4. Fill in the following, then click "Next: Add Tags"
    • Type "2" for the Number of Instances.
    • Choose a non-production VPC.
    • Set the Shutdown Behavior to "Terminate"
  5. Click "Next: Configure Security Group"
  6. Select "Create a new security group" and fill in the following:
    • Security group name: vcto-waf-tutorial
    • Description: vcto-waf-tutorial
    • Set rule per the screen shot below:

[Screenshot: security group rule configuration]

  1. Click the blue "Review and Launch" button at the bottom right of the screen.
  2. Review the information and click the blue "Launch" button in the bottom right.
  3. Choose a key pair you have access to, or take a moment to create and install one.
  4. Click on the blue "View Instances" button.
  5. Name your instances
    • vcto-waf-tutorial-attacker (attacker)
    • vcto-waf-tutorial-backend (backend)

[Screenshot: the two named instances]

Prepping the Backend

Shell into (ssh) your backend instance and install Apache, then exit the machine. (Yes, that is all we need)

  • sudo apt-get install --assume-yes apache2

Gotcha: Don't forget that you are expected to be the user ubuntu and not the user on your workstation (e.g. ssh ubuntu@52.207.238.240)

Trust but verify! From your workstation, you can test your Apache setup:

  • curl -I 52.207.238.240
  • Or you can use your browser.

Gotcha: The IP of your backend server will be different than mine.

Setting up the ALB

First you will need to create a target group.

  1. In the EC2 console, click the Target Groups link on the left.
  2. Click the blue button in the upper right "Create target group"
    • Set the Target group name to: vcto-waf-tutorial-target-group
    • Accept the other settings and create the group.
  3. Select the Row of your new target group and click the Targets tab below.

[Screenshot: target group with Targets tab selected]

  1. In the tab, click the blue "Edit" button.
  2. In the list of instances below, select the one named vcto-waf-tutorial-backend (the same one you installed Apache on), and click the blue "Add to registered" button.
  3. Click the Save button to make the instance part of a target group.

Now create the ALB:

  1. Click on the Load Balancers link in the left menu.
  2. Click the blue button labeled "Create Load Balancer" and click the "Create" button in the "Application Load Balancer" tile on the left.
    • Set the Name to: vcto-waf-tutorial-alb
    • Since my backend instance is in us-east-1d I will choose it and us-east-1a as my Availability zones.
  3. Click the grey "Next: Configure Security Settings" button (you will have to do this twice to avoid the https nag message)
  4. Select the vcto-waf-tutorial security group and click the grey "Next: Configure Routing" button.
    • For Target Group, select "Existing target group"
    • For Name, select "vcto-waf-tutorial-target-group"
  5. Click the grey "Next: Register Targets" button.
    • You will only have one, so simply click the "Next: Review" button.
  6. Look over the information and click the blue "Create" button in the lower right.

This is a good time to get coffee | tea | adult beverage, as you will need to wait for the ALB to provision.

Time to trust, but verify the ALB functions as expected.

Click on the ALB, then copy the DNS name from the Description tab below.

[Screenshot: ALB DNS name in the Description tab]

You can check the response from the ALB with a curl command curl -I http://vcto-waf-tutorial-alb-2042528276.us-east-1.elb.amazonaws.com

Or, you can drop the URL in your browser. You will see the Apache/Ubuntu welcome screen.

Optional DNS Entry

We will make a Route53 entry to make the address easier to remember.

Gotcha: While we will be using a subdomain that is almost assuredly not in use, remember you could be dealing with live records. Careful what you delete when cleaning up!

We are going to create a canonical name (CNAME) record, vcto-waf-tutorial.victory.cloud, pointing to the ALB's DNS name.

[Screenshot: Route53 CNAME record creation]

Remember it will take a minute or two for the record to propagate. Route53 has a nice test feature to use.

Once the test passes, confirm with a curl command from your local workstation: curl -I http://vcto-waf-tutorial.victory.cloud

We will use this address going forward, but you will need to translate it to the address you have configured.

Checkpoint - ALB Fronted Web Application

Take a deep breath. We have just provisioned a simple web application that is using an Amazon Application Load Balancer (ALB).

Now it's time to protect that application from bad actors by putting a Web Application Firewall in front of it.

Basic Concepts of the Amazon WAF

It can be very easy to lose your way the first time through implementing Amazon's WAF. To mitigate some of that confusion, keep the following associations in mind.

A "Web ACL" is associated with one or more ALBs or Cloudfront Distributions. This article will focus on ALBs. A "Web ACL" has one or more rules to apply to the traffic coming to your ALB.

A "Web ACL" is made up of one or more "rules." Each rule is a collection of one or more "condtions". For a rule to apply to a request, all conditions of the rule must be met.

Similarly a "condition" is made up of one or more "filters." All of a filter conditions must be met for the filter to apply.

Creating a Web ACL

WAF is not a service that you deploy (e.g. EC2, RDS), rather it is a service you consume. As such, there is no WAF instance, there is only configuration.

Click on WAF & Shield, then click the blue "Go to AWS WAF" button on the left.

To get started, click the blue "Configure web ACL" button at the top of the screen.

You are presented with a Concept overview. Take a few minutes to compare it to the Basic Concepts section above, then click the blue "Next" button at the bottom right.

Get the IP of your workstation. Mine is 99.62.168.201, for reference later.

Create the Web ACL by setting the following and clicking the blue "Next" button.

  • Set the Web ACL name to vcto-waf-tutorial-web-acl
  • For "AWS resource to associate" select vcto-waf-tutorial-alb

What Happened? - An ACL that will hold one or more conditions to evaluate traffic has been staged for creation. (Staged because we are using the wizard)

In the IP match conditions section, click the grey "Create condition" button.

  • Set the Name to vcto-waf-tutorial-block-my-local
  • Set the address to your local IP in CIDR notation. (e.g. 99.62.168.201/32)
  • Click the grey button in the middle of the light box "Add IP address or range" and you will see your local IP fill in as a Filter below.
  • Click the blue "Create" button.

What Happened? - The Condition was created and you have been circled back to the Create Conditions step of the wizard. You should see the name of the condition in the list of IP match conditions.

That is all we want for conditions right now.

Scroll to the bottom of the page and click the blue "Next" button. You will be presented with the "Create rules" screen.

Choose the grey "Create rule" button to the right.

  • Set the name to vcto-waf-tutorial-rule
  • In the Add conditions box,
    • ensure "does" is selected
    • Choose "originate from an IP address in"
    • select vcto-waf-tutorial-block-my-local

Click the blue "Create" button in the bottom right of the lightbox. You will see a green message saying "Rule created successfully"

Below the message you will see the new rule with the action set to "Block". Change this to "Count".

Pro Tip: Implementing a WAF can block desirable traffic if done wrong. When getting started, use "Count" for all of your rules first. Move to "Block" after you are sure the rule has the desired effect.

Choose the Default action "Allow all requests that don't match any rules", then click the blue "Review and create" button. Once you are happy that you have what you want, click the blue "Confirm and create" button.

Congratulations

At this point, you have implemented the AWS WAF in front of your web application. But that is just the start. WAFs require care and feeding.

Generating Traffic

Before we look at the WAF reporting, let's put our application under load.

Remember that we are setting up to block our local machine, but for the meantime we are going to count requests from our local IP as if we might be a bad actor.

A Simple Traffic Script

Consider the following bash script:

#!/bin/bash
# waffing.sh 

URL=http://vcto-waf-tutorial.victory.cloud

for I in {1..100000}
do 
    # Would be nice to randomize some stuff on the end
    ATTACK=$((1 + RANDOM % 2))
    case $ATTACK in
        1)
            URI=''
            TYPE="None - maybe IP blocked"
            ;;
        2)
            URI='/'
            TYPE="None - use slash after URL - maybe IP blocked"
            ;;
    esac
    curl -s $URL$URI > /dev/null
    echo "$I - $TYPE"
    sleep 1
done

If you are unfamiliar with bash: the script makes requests to our application via curl. It is set up to allow many different types of attacks to be coded in, but right now it is just hitting http://vcto-waf-tutorial.victory.cloud and http://vcto-waf-tutorial.victory.cloud/ (note the trailing slash).

The Attacker Instance

Remember that you created the second AWS instance vcto-waf-tutorial-attacker.

Shell into that machine and create a file in the home directory of the ubuntu user (where you landed upon logging in).

touch ~/waffing.sh && chmod +x ~/waffing.sh

Then copy and paste the above code into the file. Be sure to change the URL to either that of your ALB or the DNS entry you created above.

Run the script from the attacker:

~/waffing.sh

The expectation is that we will see this traffic pass with no problem.

The Local Workstation

On your workstation, create the file waffing.sh and give it executable permissions

touch ~/waffing.sh && chmod +x ~/waffing.sh

Then copy and paste the above code into the file. Be sure to change the URL to either that of your ALB or the DNS entry you created above.

Run the script from your workstation

~/waffing.sh

The expectation here is a bit different. In this case, we will see traffic be counted within a given sample.

Traffic to the ALB

I used a green terminal for my attacker instance and a black terminal for my local workstation.

You should see traffic requests ticking by once per second something like this:

[Screenshot: both terminals generating traffic]

It is time to refill the coffee | tea | adult beverage and stretch. WAF sampling takes a few minutes.

Watching WAF Behavior

Don't start this section until you have waited at least 10 minutes for traffic to flow.

There is no other way to put it: at this point, the WAF dashboard is kludgy at best. Watch the gotchas below to save time thinking about what is happening.

In the upper left corner menu, click the "Web ACLs" link. Click on the vcto-waf-tutorial-web-acl link. This is the one you created above.

You will be presented with a graph that looks like the one below.

[Screenshot: Web ACL request graph]

Gotcha: Every time you reload the page, the color key will change.

What you can immediately see is that there are two types of traffic.

  • ALL AllowedRequests - This is every request that makes it through the WAF.
  • ALL CountedRequests - Every request coming from our local workstation.

It is easy to see the counted traffic is about half the total, which makes sense.

If you click on the rule "vctowaftutorialrule" (the green one above) you will see it covers up the purple "ALL CountedRequests" perfectly. This also makes sense because it is the only rule we have doing any counting.

Data Samples

Below the graph, you can request data samples as shown in the screen shot below.

[Screenshot: sample requests for the DefaultAction]

The selection criterion is the DefaultAction - that is, to allow traffic to pass. So in this case we see only the requests that pass. This would be all of them, since we are only counting requests from our local workstation. You can see this explicitly in the Action column with the status "Allow."

In the select box, choose vcto-waf-tutorial-rule and click the grey button "Get new samples."

[Screenshot: sample requests for vcto-waf-tutorial-rule]

Note this time that the Action is "Count."

Blocking Our Local Workstation

Let's actively block our local IP address. In the upper portion of the screen (above the graph), click the "Rules" tab.

Hidden in the grey table header, click the grey button labeled "Edit web ACL". You are now looking at the top level of the WAF implementation.

[Screenshot: Edit web ACL screen]

You can see the one rule we have created, with its action set to "Count". Change this to "Block" and click the blue "Update" button. You are returned to the Rules tab.

Click on the Requests tab.

  • Note that there are no blocked requests.
  • Note the Time (UTC) of the most recent request (at the bottom of the sample) is about 5 to 8 minutes behind the present.

Gotcha: There is a warm-up and cool-down period for the rules. The screen does not auto-refresh. Remember that each time you refresh the screen, the color scheme will change.

Time for another cuppa and a stretch. Come back in about 10 minutes just to be sure.

When you do a full page refresh, you will see there are new graph keys in the legend that have to do with blocked requests. That is a good sign.

You will likely see, in the graph, that both the "ALL AllowedRequests" and "ALL CountedRequests" are dropping at the same rate. This makes sense because only half of the requests (the ones from the attacker) are now getting through to the ALB.

You will likely not notice any blocks on the graph. This is a problem of scale. The graph will not resize if you take out the larger values, nor will it show a line along the x-axis.

Finally, in the Sample requests section below, choose vcto-waf-tutorial-rule and click the "Get new samples" button. Scroll all the way to the bottom of the page and you will see that traffic from your local workstation is being blocked.

If you check the URL (http://vcto-waf-tutorial.victory.cloud in my case) in your browser, it will return a 403.

Curl, from the workstation, does the same thing:

boyd.hemphill:~/Desktop 17:53:27 > curl -I http://vcto-waf-tutorial.victory.cloud
HTTP/1.1 403 Forbidden
Server: awselb/2.0
Date: Wed, 31 Jan 2018 23:53:31 GMT
Content-Type: text/html
Content-Length: 134
Connection: keep-alive

Next Steps

Obviously this is a very simple example intended to get an Amazon WAF set up, grok the basic concepts, and look at some of the tooling.

You now have a sandbox for trying other conditions, rules, and ACLs to understand how all these things fit together.

Try setting up new conditions, then attaching them to rules. Associate the new rules in the Web ACL and play with the script to simulate SQL injection attacks or overly large request bodies. Try some geo matching.
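As one concrete direction, here is a sketch of how waffing.sh could grow. The payload and body size are arbitrary, and cases 3 and 4 only become interesting once you create matching SQL injection and size constraint conditions:

#!/bin/bash
# waffing-extended.sh - a sketch extending waffing.sh with two more attack types
# Substitute the URL with your own ALB address or CNAME.

URL=http://vcto-waf-tutorial.victory.cloud

for I in {1..100000}
do
    ATTACK=$((1 + RANDOM % 4))
    EXTRA=()    # extra curl arguments, empty by default
    case $ATTACK in
        1)
            URI=''
            TYPE="None - maybe IP blocked"
            ;;
        2)
            URI='/'
            TYPE="None - use slash after URL - maybe IP blocked"
            ;;
        3)
            # a naive SQL injection probe in the query string
            URI="/?id=1%27%20OR%20%271%27%3D%271"
            TYPE="SQLi probe - pair with a SQL injection match condition"
            ;;
        4)
            # an oversized POST body, for a size constraint condition
            URI='/'
            EXTRA=(--data "$(head -c 10000 /dev/zero | tr '\0' 'A')")
            TYPE="Large body - pair with a size constraint condition"
            ;;
    esac
    curl -s "${EXTRA[@]}" "$URL$URI" > /dev/null
    echo "$I - $TYPE"
    sleep 1
done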

And when you are done, don't forget to spin down your infrastructure:

  • 2 EC2 instances
  • 1 ALB
  • 1 target group
  • 1 condition
  • 1 rule
  • 1 web ACL

I Want My Engineer's Time Back

At Austin DevOps the other night, Charity Majors, founder of Honeycomb, stated several times, "Your engineering time is the most scarce and the most precious resource of your company." She further stated that solving the same problem more than once is wasteful.

These statements seem obvious on their face, but how often do decision makers consider the value of an engineer's time? How often do we consider the opportunity cost in using engineers' time on problems for which solutions already exist? Do we look for opportunities to remove this waste from the system?

Words Matter

Words matter because time is money. I have kept time in meetings to see how long we spend just defining terms. Here is an example:

Engineer A: "We will need a bigger server for that."
Engineer B: "But we are not CPU bound, we don't need a bigger server."
Engineer A: "Not the web server, the database server."
Engineer B: "Ah, that makes sense. Do you think it's the physical server or the virtual private server?"
Engineer A: "Physical, we will have to rack and cable a newer, bigger one."

Granted, the above is a contrived exchange, but it shows the word server used in four different contexts with two distinct meanings. In that meeting you paid for the definition. Worse, your engineers are doing this all the time!

I have kept time in meetings on these "definition activities." In general we lose anywhere from 10% to 25% of our time to defining terms and retracing our steps due to this confusion. It provides no value. Worse, it will happen in the next meeting around many of the same terms!

Let's take a look at this problem in the real world.

The Language of Logging

The language of logging, like most technology verticals, is rife with overloaded terms. During Charity's thought-provoking presentation, however, I was constantly parsing the various meanings of terms. This is known as cognitive interference. I would much rather have been focused on her main points.

Here are a couple of examples.

Event vs. Entry

Charity tossed out the question to the crowd, "What do I mean by an event?" It was clear from the question and context she had a specific meaning in mind.

From the crowd: "An event is something like an outage that would cause me to go look at the logs." In the context of logging, this is a reasonable meaning for the term, "event."

Charity, an expert in logging, stopped and considered the answer. It was clear the answer was not what she was looking for. Another person chimed in, "It's more general than that. It could be anything really. Like a file upload or a write to the database." An excellent generalization, but it was clear this was not what Charity was looking for either.

To Charity, an event is a single entry in a log file. For most of our careers this was likely a single line in a text file. Some vendors in the space such as Splunk and Sumologic use this terminology.

However, I would like to offer the following context. When William Shatner said, "Captain's log, stardate ..." he was not recording an event. He was making an entry. That log entry was composed of a series of events recorded over a given time period.

Honeycomb looks for key-value input and shreds it into a proprietary store so searches on the keys are lightning fast. It is each of these JSON documents that Honeycomb defines as an event. This is a new way to think about the term as well.
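For illustration, an event in this sense might be a single key-value document like the following (field names invented for the example):

{
  "timestamp": "2018-01-31T23:53:31Z",
  "request_id": "a1b2c3d4",
  "method": "GET",
  "path": "/checkout",
  "status": 200,
  "duration_ms": 42
}

One request, one document: an event in Honeycomb's sense, a single entry in Captain Kirk's.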

Of the 75-minute presentation, this exchange took about 5 minutes. That is 5 minutes of time to define and agree on the meaning of the word event. If we made that investment only once, that would be fine. However, I knew walking out of the venue that night that the next time logging came up at Austin DevOps, we would be spending another 5 minutes agreeing on the same terminology.

Back in the meeting room: six people, all fully loaded at $100 per hour, is $600 per hour, or $10 per minute, so that 5-minute exchange just cost your company $50 to define a single term. If this happens 4 times per day you are paying $200 per day just to define terms, knowing you will have to spend another $200 on those same definitions in the future. Worse, the definition might even change the next time.

Structured vs. Unstructured

Charity explained that Honeycomb uses unstructured data. She suggested that applications should emit JSON in key-value pairs rather than raw text lines. She suggested that software engineers use linting tools to ensure it's good JSON.

This seems oxymoronic. The very fact that Honeycomb uses key-value pairs implies a structure. The fact that the log entry (event?) can be linted means there is predictable regularity (structure).

When asked what a linter might do, Charity suggested it could ensure the presence of keys and even do light type checking. This sounded very similar to XML's DTD schema definition. In other words, it was structured.

At an after-presentation social, I asked people if they were confused by this. Many were, and some realized that Charity's usage of the word "structure" was in a different context. She had explained, pointing out that Splunk limits the number of usable keys while Honeycomb does not. To her, the term structure meant the artificial limit placed on the important metadata. A valid meaning for the term.

I am not saying either definition of structure is correct or canonical. Charity is an expert and I am not. I am pointing out that this very confusion made it difficult for Charity to make her point: Honeycomb gives you immense power not found in other log aggregation systems.

I estimate this topic was covered three different times for about 10 minutes total. How much would it cost in your organization to define the word structure?

Give Your Engineer's Time Back

We cannot control people outside our organization. But we can limit the problem locally. As a leader in your company, pay attention to the act of clarifying terms during meetings.

  • Keep a small notebook with the terms, definitions and amount of time spent on coming to agreement.
  • Watch for confusion and miscommunication caused by these overloaded terms.
  • After a month, ask yourself, "Is it worth solving this problem?"

It is our experience at Victory CTO that the answer is, "Yes."

What can you do to recover your engineers' time? Ultimately, the solution is to standardize on a meaning for a given term. Your architecture team is the source of many standards. Standard usage of terms should be added to their duties. Typically they will:

  • Document the agreed upon meaning of a term
  • Broadcast that meaning in documentation and meetings
  • Enforce the meaning of the term
  • Recruit engineers to reinforce the usage of a term

At Victory CTO we recognize that leadership isn't only about vision. Leadership is also about identifying and removing obstacles for your teams. Taking the time to identify, measure and remove the right obstacles allows your teams to spend your most scarce and precious resource on the most important problems.

Continuous Integration for Laravel in AWS ElasticBeanstalk via Travis-CI

At Victory we use Laravel a lot. Having worked with every PHP framework with decent traction in the last 20 years, this one works well and makes the most sense: it solves the important problems a framework should solve without trying to solve everything.

When we start a new project (or a major renovation) pulling this code base shaves about half a day in setup. For this tutorial feel free to pull the codebase down and follow along.

Development

This sample environment is running on Laravel Homestead, and integrated into the codebase with some extra provisioners. This isn't ideal - we are working on an environment that is closer to AWS Linux and provisioned from Ansible. But, as of this writing it's not done. For the purposes of this article we'll be fine, but environmental parity is important for more advanced topics.

Branch Management

When starting off, enable a good workflow. Setting a CI and CD habit starts here. We typically use Gitflow. Generally for a new project we set the staging branch as the default and build and deploy that to a staging environment when code is merged to it. Master likewise will build and deploy to the production environment. Set these good habits now before you have real users on a production environment.
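As a sketch, on a fresh repository the initial branch setup is only a couple of commands (then flip the default branch to staging in your Git host's settings):

%> git checkout -b staging    # create the staging branch from master
%> git push -u origin staging # publish it and track the remote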

Laravel -> Travis-CI

In this example the following happen:

  • Travis builds and tests our code,
  • creates an artifact,
  • loads it to Beanstalk,
  • deploys it to the right environment.

Let's break that down a bit.

Testing

To test the code, we will need:

  • An environment as close to production as practical
  • A database with test data
  • An Elasticsearch index with test data
  • Composer packages
  • Javascript and CSS compiled

Note I didn't say anything about what sort of tests you plan to run, code coverage, etc. This solution allows you to take full advantage of the testing built into Laravel. In this codebase we have also incorporated PHPMD to check for things like cyclomatic complexity and code size (this keeps things readable).

Write tests!

Environment

This is where the example will fall apart a bit. ElasticBeanstalk runs on AWS Linux, which is a variant of RedHat / Fedora (as far as I can tell), while Travis likes Ubuntu and Homestead runs on Ubuntu. Theoretically you can set up Beanstalk to run on any AMI you like, but in practice it's a headache and violates Victory's value of simple is better than custom.

For our purposes we will be happy with ensuring that we're on the same version of PHP (7.1). For the rest of the environment talk, let's look at the .travis.yml file - I'll break it down into pieces in the post - feel free to go look at the one in the repo:

language: php
addons:
  apt:
    packages:
      - oracle-java9-set-default
php:
  - '7.1'
jdk:
  - oraclejdk8
services:
  - mysql
sudo: required
dist: trusty
group: deprecated-2017Q4

The above snippet determines:

  • we'll be running PHP 7.1,
  • installing Oracle Java 9 (we'll need that for Elasticsearch),
  • installing MySQL,
  • running on Ubuntu Trusty (14.04).

The group: deprecated-2017Q4 is a holdover as we had a little trouble with the latest environment.

In the real world, the above describes a container with the specified configuration where the rest of the build and test tasks occur.

A Database with Test Data

The next couple of lines in the .travis.yml file create the default database:

before_install:
- mysql -e 'CREATE DATABASE homestead;'

Things to know:

  • Travis serves MySQL with the username travis and no password.
  • If we wanted to load it with data, we could do that with the db:seed artisan command or create our own artisan command specific to the project (see the sketch below).

Just remember that anything you do here will cause the entire build to wait on the data to be loaded. Make sure you can test with a small subset of data.
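As a sketch, that seeding step would slot into the script section right after the migrate command (the TestDataSeeder class is hypothetical):

script:
  # ... earlier steps elided ...
  - php artisan migrate
  - php artisan db:seed --class=TestDataSeeder  # seed a small, fast subset of test data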

Your feedback loop should always be as short as possible.

The before script

As the name implies, these commands run before the build begins.

before_script:
- cp .env.travis .env
- composer install --prefer-dist --no-interaction
- php artisan cache:clear
- php artisan key:generate
- nvm install 7.7.1
- npm install -s npm@latest -g
- npm install -s -g jshint
- npm install -s

In this script:

  • we set up the .env by copying .env.travis to .env.
  • composer packages are installed
  • the app is completely set up.

Script

We are big Elasticsearch fans, not just for its use as a search engine but also for high-speed denormalized front end caching, so we include it by default in every build. On Travis we install version 5.5.3 for parity with the version we're using from Amazon's Elasticsearch Service.

So, the script section is where we do some final setup of the codebase. After that, testing occurs.

script:
- sudo apt-get purge elasticsearch
- curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.3.deb && sudo dpkg -i --force-confnew elasticsearch-5.5.3.deb && sudo service elasticsearch start
- wget -q --waitretry=1 --retry-connrefused -T 20 -O - http://127.0.0.1:9200
- php artisan es:indices:create
- php artisan migrate
- npm run production
- vendor/bin/phpmd app text codesize design naming unusedcode
- vendor/bin/phpunit --testdox --coverage-text tests

To ensure we get the right version of Elasticsearch, the first line ensures it is not installed already, and the second grabs the specific version we want and installs it. The third line gives the service some time to start up. After that:

  • the artisan command es:indices:create creates the indices,
  • the MySQL database is set up with the migrate command,
  • Javascript is compiled with npm run production,
  • tests are run with the last two commands.

If any of this fails, the build fails.

Laravel on ElasticBeanstalk

At this point the application is built and tested. We could stop here and have a nice CI pipeline. But - we want more. Specifically, we want this puppy to launch itself into Beanstalk.

So let's talk Beanstalk - specifically architecture. There are a few things to keep in mind:

  1. Servers are cattle, not pets. Don't get attached. Nothing that lives on the box should be needed for longer than a user session. Use S3 as your primary file store.
  2. The servers are hard to get to on purpose. Our setup is designed with emergency-only server access in mind. Don't expect to get logs easily. Services like Papertrail and Bugsnag are your friends.
  3. This setup is for production, so the routes and config files are cached. You can't use any closures in the route files, and you should be meticulous about running your environment variables through the config files: don't use the env helper in the code, only in config files (see the sketch after this list).
  4. If you want to use Laravel's Queues or Scheduled Tasks (both fantastic) you will need something on the box to run the workers. More on that below.
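As a minimal sketch of that last point (the key name here is hypothetical): once the config is cached for production, Laravel no longer reads .env at runtime, so env() calls outside of config files come back null.

// config/services.php - the ONLY place env() should appear
return [
    'api_key' => env('SERVICE_API_KEY'),
];

// anywhere else in application code, read the cached config value instead:
$apiKey = config('services.api_key');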

Note that we reason from the production setting back towards the development environment. That is because "production" means "to produce the money that is my paycheck." It must work in production first.

Travis -> Beanstalk

Moving down the .travis.yml file, we come to deploying the application:

Notifications

If you like notifications, Travis keeps you informed. The details are beyond the scope of this post. For our purposes we have Slack notifications coming to specific channels regardless of success or failure.

notifications:
  slack:
    rooms:
      secure: "REDACTED"
    on_success: always
    on_failure: always

before_deploy

This is where any final cleanup of your code happens before the artifact is created.

before_deploy:
- rm .env
- rm .env.travis
- rm .env.example
- touch .env
- export ARTIFACT_PRE=$(echo $TRAVIS_REPO_SLUG | sed  s_^.*/__)
- export ARTIFACT_NAME=${ARTIFACT_PRE}-${TRAVIS_BRANCH}-$(
  echo ${TRAVIS_COMMIT} | cut -b 1-8
  )-$(
  date -u +%FT%T%Z
  ).zip
- export ELASTIC_BEANSTALK_LABEL=$(echo $ARTIFACT_NAME | sed s_.zip__)
- zip $ARTIFACT_NAME -q -r * .[^.]*
- ls -la $ARTIFACT_NAME

In this case:

  • clean out the .env files to make sure we don't confuse Beanstalk,
  • create an artifact name and export it,
  • zip up the artifact.

Notice we make an empty .env file, that keeps Laravel from complaining.

branches

Optionally you can specify which branches should be considered for builds.

branches:
  only:
    - master
    - staging

For this example we set them to only build on a push to master or staging.

deploy

Travis is quick to say that Elastic Beanstalk support is in beta, so be ready for things to change. For the last 6 months we've been using it with little issue. If you look in the .travis.yml file you'll see two deployment providers - one of these is for staging and the other master - in a production site the master branch will go straight to production once tests have passed.

For the sake of space only one deployment provider is annotated.

deploy:
  - provider: elasticbeanstalk
    skip_cleanup: true
    zip_file: "$ARTIFACT_NAME"
    access_key_id:
      secure: REDACTED
    secret_access_key:
      secure: REDACTED
    region: us-east-1
    app: meetup-sample
    env: MeetupSample-env
    bucket_name: elasticbeanstalk-us-east-1-732770059798
    on:
      branch: staging

Here's what you see above:

  • deploy: - this is the start of the deploy section
  • skip_cleanup: this prevents Travis from resetting what it did to build the artifact
  • zip_file: - name of the artifact
  • access_key_id and secret_access_key - These are AWS keys - they have to be in the codebase so please encrypt them.
  • region - AWS Region
  • app and env - The Beanstalk information
  • bucket_name - All Elastic Beanstalk environments in a region share one bucket for artifact storage
  • on: branch: This limits this particular deploy profile to a specific branch
  • Optional - only_create_app_version - if you set this to true then the app version will be uploaded but NOT deployed

Elastic Beanstalk

Elastic Beanstalk provides us with preconfigured machines, auto scaling and other simplifying services for running an application in production.

EB Extensions

This example will work almost perfectly out of the box, but we will need to do a few things on the servers. This is where .ebextensions comes in handy. EB Extensions are the configuration management system for Elastic Beanstalk. Think Ansible, but more rudimentary and not idempotent.

They are YAML files which run provisioning on the server: simple stuff, but effective. A couple are included in the codebase to play with.

Order matters and EB Extensions run in alphabetical order. Adopt the idea of naming them with a leading number (and leave yourself some room in between).

  • 05_supervisor.config - This is a pretty complicated script that will install and set up supervisord for Laravel's queue system
  • 11_database.config - Initially this script just ran artisan migrate - hence the name - but now it also handles the general cleanup and optimization of the code. Note on the migrate command there's the inclusion of leader_only: true - this ensures that the script is only run on the first server to push code out.
  • 13_forcessl.config - This adds a file to the apache config that will read the headers from the load balancer and redirect to https:// when a user comes in insecurely.
  • 16_phpini.config - this is to update the php.ini - the only thing in there now allows for bigger file uploads.
  • 90_friendly_shell.config - this makes the shell of the servers a little cleaner, nice for debugging

If any of these items fail to run, the deployment will be rolled back. Debugging that is outside the scope of this article, but there's plenty of documentation on it.
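To give a feel for the format, here is a minimal sketch in the spirit of 11_database.config (the commands are illustrative, not the exact contents of the file in the repo):

container_commands:
  01_migrate:
    command: "php artisan migrate --force"
    leader_only: true   # run only on one instance per deployment
  02_config_cache:
    command: "php artisan config:cache"

The leader_only flag is what keeps the migration from running on every instance at once.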

Introductory Slides

Below is the conceptual introduction given during the Austin PHP Meetup. We'd like to thank them for the opportunity to present this material.

Vagrant 101 - Power of Disposability

Even in 2017, it is quite common to find the hand-built, artisanal, bespoke development environment. As a leader in your technology group you will recognize this as the first two weeks of a new hire's tenure, where they are not contributing any sort of value but are constantly interrupting their teammates to get to a point where they can.

Vagrant makes such a situation a thing of the past, but that is only the beginning.

Developers develop. They change things and they need the freedom to take risks with software component upgrades.

This tutorial is designed to introduce concepts and leave you with a working Python development environment. It assumes no knowledge of Vagrant or its virtualization concepts. Because much of the effort will be downloading large files, plan for it to take between 2 and 3 hours.

Terms and Concepts

By far the most important concepts to understand during this tutorial are those of workstation and guest:

  • Workstation - Your laptop. It will act as the host for one or more guest machines.
  • Guest - A virtual machine started with vagrant up.

These concepts are important to understand so that you have a firm idea of where the execution of a command takes place.

A couple of other important concepts are hypervisor and mount point.

  • Hypervisor - the software that allows a guest to run on your machine and handles much of the "plumbing" like networking and file systems.
  • Mount Point - A directory in the guest machine where files from the host are present on the guest as well.

And for the purposes of this article, the project root and project directory:

  • project root - A directory called ~/victorycto that will be the place where you can put numerous other tutorial project directories from Victory CTO.
  • project directory - A directory that is a child of ~/victorycto representing a tutorial. This would usually be the source directory of your own PHP, Python, Ruby, etc., project.

Prepping a Vagrant Workstation

To make sure your workstation is ready, you must install Vagrant and Virtualbox. You will also need to create a place to work.

Install Virtualbox

To install the latest version of Virtualbox, downloads are found on the Virtualbox site. You will need to run the install process.

Virtualbox is a free hypervisor that will allow you to run various guests on your workstation. You can think of it as the software that racks and cables your guest into your workstation.

Trust but Verify:

%> vboxmanage --version
5.1.26r117224

Install Vagrant

To install the latest version of Vagrant, downloads are found on the Vagrant site. It will take 15 minutes or more.

Gotcha!: Vagrant and Virtualbox should be kept as up-to-date as possible. Upgrading one can lead to problems with older versions of the other. This tutorial was written for:

  • Vagrant: 1.9.8
  • Virtualbox: 5.1.26

Trust but Verify:

%> vagrant --version
Vagrant 1.9.8

Create a Workspace

On your workstation open a terminal and create a workspace in your home directory and move into it:

%> mkdir -p ~/victorycto/vagrant-101 
%> cd ~/victorycto/vagrant-101

What Happened?: Paths will matter, so we want to use something less generic than "code". The name victorycto is unlikely to already be in your home directory. The name of our project is vagrant-101. Eventually this is where our little web app will live.

Explore Vagrant Guest Concepts

In the directory ~/victorycto/vagrant-101, create the Vagrantfile:

%> vim Vagrantfile 

Gotcha!: Note the file name is capitalized!

Copy the below into the Vagrantfile and save it.

Vagrant.configure("2") do |config|

  # The type of guest you are running. 
  #
  # In this case it is Ubuntu 16.04 LTS, 64 bit.
  config.vm.box = "ubuntu/xenial64"

end

Now start the guest server on your workstation. This could take up to 30 minutes, so please read ahead:

%> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'ubuntu/xenial64' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox

... <SNIP> ...    

    default: Downloading: https://vagrantcloud.com/ubuntu/boxes/xenial64/versions/20170830.1.1/providers/virtualbox.box
==> default: Successfully added box 'ubuntu/xenial64' (v20170830.1.1) for 'virtualbox'!
==> default: Importing base box 'ubuntu/xenial64'...

... <SNIP> ...    

==> default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
==> default: Forwarding ports...
    default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 127.0.0.1:2222
    default: SSH username: ubuntu

... <SNIP> ...    

    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM...
    default: The guest additions on this VM do not match the installed version of
    default: VirtualBox! In most cases this is fine, but in rare cases it can
    default: prevent things such as shared folders from working properly. If you see
    default: shared folder errors, please make sure the guest additions within the
    default: virtual machine match the version of VirtualBox you have installed on
    default: your host and reload your VM.
    default: 
    default: Guest Additions Version: 5.0.40
    default: VirtualBox Version: 5.1
==> default: Mounting shared folders...
    default: /vagrant => /Users/boyd.hemphill/code/vagrant-python

What Happened?: There are quite a few things going on here:

  • From the configuration you created in the Vagrantfile, an Ubuntu 16.04 image was downloaded by Vagrant and started within the Virtualbox hypervisor.
  • The ssh port for the guest was forwarded from 22 to 2222 (more on this later)
  • The machine comes online
  • The guest mounts the directory /vagrant which maps to the workspace we created ~/victorycto/vagrant-101

Let's make this more concrete by working with the guest a bit.

%> vagrant ssh 
ubuntu@ubuntu-xenial:~$ whoami
ubuntu

Pro Tip: The directory you are in on the workstation, ~/victorycto/vagrant-101 is the full context for this guest. You can have a different project in a different directory with a different Vagrantfile. When you issue the vagrant up command there, you will get a second guest on your workstation.

What Happened?: By using the vagrant ssh command you have used the ssh port forwarding to become the ubuntu user on the guest.

Pro Tip: Study that output as it is happening and work at understanding what it is telling you. If something is wrong with the guest, it is highly likely the answer to how to fix it is right there in the output!

Exploring Vagrant's Mount Point Concept

One of the biggest pay-offs with Vagrant is the separation of the run time environment from source development. This will be illustrated in great detail later. For now, let's explore the concepts that underlie this capability.

Create a file in the ubuntu user's home directory.

ubuntu@ubuntu-xenial:~$ touch file-in-ubuntu-home-directory
ubuntu@ubuntu-xenial:~$ ls -lah

What Happened?: Nothing you haven't done before most likely. You just created an empty file in the home directory on the guest. Note however that this file is completely invisible to your workstation.

Still on the guest, create a file in the /vagrant directory

ubuntu@ubuntu-xenial:~$ touch /vagrant/created-from-guest

Open a different terminal and see the file on your workstation.

%> ls -lah ~/victorycto/vagrant-101

Pro Tip: Use terminals of different colors to distinguish between your workstation and the guest. Below, black is the guest and red is the workstation.

vagrant-101-workstation-and-guest

On your workstation create a file in the project directory.

%> touch created-from-workstation

From your guest terminal see the file.

ubuntu@ubuntu-xenial:~$ ls -l /vagrant/

Gotcha!: If you are not seeing the files, ask yourself the following questions:

  • Am I on the appropriate machine (workstation vs. guest)?
  • Did I create the file in the correct directory (~/victorycto/vagrant-101 vs. /vagrant)?

Vagrant Provision

Provisioning a machine, in Vagrant terms, is the act of installing packages, users, and other configurations so that it is able to run an application.

Secret DevOps Sauce

It is the act of provisioning your Vagrant guest that allows you to separate your development environment (essentially a git project with your code in it) from your run time.

(servers as code) Further, the configuration of your machine is now described in code that is kept in source control with the project. This means that a developer who needs to add a package can do that without making the change to their workstation.

(disposability) Further still, if the developer makes that change and it turns out to be a bad one, they can simply destroy the guest with vagrant destroy, remove the package from the provisioning code, and launch a new guest with NO need to rebuild the workstation.
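In shell terms, that disposability loop is nothing more than (a sketch; the -f flag skips the confirmation prompt):

%> vagrant destroy -f    # throw away the broken guest
... edit the provisioning section of the Vagrantfile ...
%> vagrant up            # build a fresh guest from code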

(sharing) Further even still, if the change is good, then they push the project and ask teammates to pull and relaunch their own guests. Your team is working from the SAME RUNTIME.

The consequences of this on your team's productivity and creativity are substantial:

(servers as code) No more will it "work on my laptop" but not in the test environment. The code to build the development environment is the code for the test and production environments.

(disposability & sharing) Developers are free to try an upgrade of important libraries knowing they can easily back out any number of changes. What could have been days of risk is now minutes.

(security) Because the configuration is code, you can easily upgrade databases, languages, frameworks and even operating systems. A developer and operator (see the DevOps?) can collaborate in a couple of hours to show that all tests pass when moving from Ubuntu 14.04 to 16.04, as an example.

Provisioning the Vagrant Guest

Replace your Vagrantfile with the following:

Vagrant.configure("2") do |config|

  # The type of guest you are running. 
  #
  # In this case it is Ubuntu 16.04 LTS, 64 bit.
  config.vm.box = "ubuntu/xenial64"

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision "shell", inline: <<-SHELL
  	sudo apt-get update
  	sudo apt-get upgrade --assume-yes
  SHELL

end

On your workstation, issue the command

%> vagrant provision

Remember that my workstation is the red terminal.

vagrant-101-simple-provisioning-example

What Happened?: When Vagrant ran this time, it executed the statements in the shell provisioner to update the package lists and then upgrade older packages.

Don't let the simplicity of this example fool you. If you can script it, you can put it in the SHELL provisioner.

GOTCHA!: The shell provisioner is not idempotent. You can really only run it once; after that you will have to destroy the machine to try again. This gets very old very fast. Look for another article on how to use Ansible or Chef to provision your guest.
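As a teaser (the playbook.yml file is hypothetical and not part of this tutorial), switching to Vagrant's built-in Ansible provisioner is a one-stanza change to the Vagrantfile:

  # Replace the inline shell provisioner with Ansible, which is idempotent,
  # so repeated runs of vagrant provision are safe.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end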

Developing in a Vagrant Runtime

The rest of this tutorial is based on a simple Python Flask application. Don't worry if you are not a Python developer. This is truly simple stuff for anyone with a technical background.

On your workstation (as if you were a developer working with your dependencies) create the requirements.txt file to install dependencies:

%> vim ~/victorycto/vagrant-101/requirements.txt

Place the following in the file:

Flask
flask-login
flask-openid
flask-mail
flask-sqlalchemy
sqlalchemy-migrate
flask-whooshalchemy
flask-wtf
flask-babel
guess_language
flipflop
coverage

On your workstation, replace your Vagrantfile with the following

Vagrant.configure("2") do |config|

  # The type of guest you are running. 
  #
  # In this case it is Ubuntu 16.04 LTS, 64 bit.
  config.vm.box = "ubuntu/xenial64"

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get update
    sudo apt-get upgrade --assume-yes
    sudo apt-get install python-pip --assume-yes
    pip install -r /vagrant/requirements.txt
  SHELL

end

On the workstation, provision the Vagrant guest:

%> vagrant provision

Take a moment to read through what happened. You should see pip (Python's package manager) get installed by aptitude, and then it will install each of the dependencies.

Now consider what happened for a moment. You created a file on your workstation (requirements.txt) then used that file within the context of the guest by referencing it in the SHELL portion of the Vagrantfile.

Developing

On your workstation, create a simple Flask application:

%> vim hello-world.py

Place this code and save:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def main():
    return "Welcome!\n\n"

if __name__ == "__main__":
    app.run(debug=True)

Gotcha!: Are you in the directory ~/victorycto/vagrant-101? I will leave it to you to remember that from here on out.

Trust but Verify: On your guest, see that the hello-world.py file is there:

ubuntu@ubuntu-xenial:~$ ls -l /vagrant

Now it gets interesting.

On the Vagrant guest, start the application

ubuntu@ubuntu-xenial:/vagrant$ cd /vagrant
ubuntu@ubuntu-xenial:/vagrant$ python hello-world.py &
[1] 8693
ubuntu@ubuntu-xenial:/vagrant$  * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 190-841-149

What Happened?: The application started and was placed in the background. You can check this with the command ps -ef | grep [h]ello

Exercise the application on the guest:

ubuntu@ubuntu-xenial:/vagrant$ curl http://127.0.0.1:5000
127.0.0.1 - - [03/Sep/2017 02:18:11] "GET / HTTP/1.1" 200 -
Welcome!

ubuntu@ubuntu-xenial:/vagrant$

The next ticket in my developer queue says, "A user should be greeted to our site with the message: 'Hello from VictoryCTO!'"

Open the hello-world.py file on your workstation and change line 6:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def main():
    return "Hellow from VictoryCTO!\n\n"

if __name__ == "__main__":
    app.run(debug=True)

Don't forget to save it.

On the guest you will see the application restart. Now exercise it again.

ubuntu@ubuntu-xenial:/vagrant$ curl http://127.0.0.1:5000
127.0.0.1 - - [03/Sep/2017 02:32:48] "GET / HTTP/1.1" 200 -
Hello from VictoryCTO!

What you have seen is the separation of the development environment from the application run time. You can develop on your workstation with all the usual tools you would use, but when you exercise the application to test, it is happening on a guest machine configured like everyone else's on the team (and hopefully like those in test and production!).

But curl is Lame

Obviously we want to view our application in a browser on our workstation rather than from curl on the guest. In fact, we would like to interact directly with the guest as little as possible.

Let's prove this environment is truly disposable.

%> vagrant destroy

Now add to the Vagrantfile one more time:

Vagrant.configure("2") do |config|

  # The type of guest you are running. 
  #
  # In this case it is Ubuntu 16.04 LTS, 64 bit.
  config.vm.box = "ubuntu/xenial64"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  config.vm.network "private_network", ip: "192.168.33.10"

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision "shell", inline: <<-SHELL
    sudo apt-get update
    sudo apt-get upgrade --assume-yes
    sudo apt-get install python-pip --assume-yes
    pip install -r /vagrant/requirements.txt
    python /vagrant/hello-world.py &
  SHELL

end

The new lines in the file will assign the IP address 192.168.33.10 to your guest, reachable from your workstation.

Start your new guest.

%> vagrant up

This time when the machine comes up you will see it configure itself, adding the necessary apt and pip packages, and then it will start the application.

Now hit the site in your browser at http://192.168.33.10:5000.

It fails! But why? This is actually not a configuration or Vagrant problem. It's an application issue. By default the Flask app listens only on the local interface of the guest. We will have to change the app. Let's do that on our workstation:

%> vim hello-world.py

Change and save the file:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def main():
    return "Hello from VictoryCTO!\n\n"

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')

Now hit the site in your browser at http://192.168.33.10:5000 again, and you will see our welcome message as expected.

Finally, let's prove we don't need to interact with the run time by marking up the welcome message with heading tags.

%> vim hello-world.py

Change and save the file:

from flask import Flask
app = Flask(__name__)

@app.route("/")
def main():
    return "<h1>Hello from Agile Austin DevOps Sig!</h1>"

if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0')

Hit the site in your browser one more time and realize you can now develop in a runtime that is described entirely by code.

Conclusion

What we have shown in this article are the following concepts:

  • How to set up Vagrant
  • How to benefit from workstation directories mounted on the guest.
  • The separation of the development environment from the application run time.
  • How the developer can make changes to the configuration, dispose of the existing run time and bring up the new run time in minutes (not hours)

If there is a demand, we can go deeper into Vagrant with an Ansible provisioner and a more production-like runtime. Let us know at VictoryCTO.


This article was specifically prepared for the Agile Austin DevOps S.I.G. community. We'd like to thank them for the opportunity to present this topic.

3 Product Prototyping Tools to Launch Your Idea

Sometimes a nontechnical person like me has to create something that requires technical chops you can't get overnight from some YouTube video tutorials. Aside from some basic HTML'ing or mad WYSIWYG-building skills, what are some simple tools that can help you get your idea closer to a prototype, stakeholder buy-in, and development time?

I want to share some product prototyping tools that will help you create the visualization of your idea to share with others to secure the budget or land the funding. It's also the key to effectively working with an outsourced developer or development team. You can save a lot of time and heartache if you can better "show" what you are thinking, to help the development team guide you in actually building the idea.

Before I start diving into my "best of" product prototyping tools, a little from the design thinking toolkit.

Design Thinking for Product Development

Design Thinking

Design thinking is one of the many useful processes behind product development. The process entails distinct phases of observation, ideation, rapid prototyping, user feedback, iteration, implementation, and then back to the observation phase again. Round and round you go through the process, continuously innovating and iterating.

If you want to bone up on the entire process, the experts in the process, global design firm IDEO, has provided some lovely (free!) design thinking resources for you to dive deeper. Assuming you are up to speed on the process, you are ready to roll up your sleeves and get to prototyping. Here are some tools that don't require coding and will help you get your prototype further along:

Product Prototyping Tools

(1) Aha! Mockups

Aha-mockups

The product roadmap software at Aha! is there to help product folks get further along in their process. All good stuff. But I am most excited about their November launch of Mockups. While the tool is only free for a 30-day trial, that's long enough to take it for a drive and see if it helps you get your product prototyping farther.

The Mockups tool offers wireframing and diagramming with easy drag-and-drop functionality and a library of shapes and pre-made UI elements from Bootstrap, iOS, Android and more. You can export your mockup and share with sales and development teams to get feedback or plan dev cycles.

In addition to the wireframing goodness, you can build flow diagrams to help visualize the user experience. Create your user journey diagram and quickly work out the documentation of user flows and charts. A tool like this can get your idea from out of your head and onto paper. Sweet!

(2) Unbounce

po landingpage slider

Let's say you want to prototype a new product or service for your website. Creating a whole new product landing page, building it into your system architecture, and making sure it has all the right site mapping...well that's important, but not for prototyping. For now, you just want to show your idea and maybe test on a few users for feedback. How do you get the green light to invest in the full-on development?

Unbounce is another product prototype resource that you can check out for free with a 30-day trial. Without any coding skills, you can recreate a mock-up in under an hour. Take your campaign idea, and quickly create a landing page for your website using their templates or from a design you have in mind.

You can drag, drop, and position your page elements like text, images, forms, videos, whatever. Add your company colors and fonts to match your brand's look and feel and set up your lead capture form or call to action. After a quick little set up, your landing page can be live. Share with the masses or create a hidden URL that only invited people can view. That was easy!

(3) Wondershare Filmora

filmora

Sometimes what you need is a way to share the concept, not the actual product. Maybe you want to use strong storytelling for influence, or to show how you envision your product looking as it's used out in the wild.

For strong visual storytelling of a product or idea, use a simple, free tool that will get you there -- like the Wondershare Filmora video editor.

It just takes a couple of minutes to download. And with an easy-to-follow UI, you are off and running creating beautiful videos of your own. The editing tools are in the menu tray, ready for use. No need to spend hours in video editing tutorials. Instead, you can start adding, merging, splitting, and cutting video clips out of the gate. There are features to adjust the image quality (I could make my videos nostalgic or futuristic with some Instagram-worthy filters), the speed, add music, and so much more.

But you don't even have to have snazzy video editing skills to be a great storyteller or pitch your idea. My favorite example of super simple and scrappy storytelling came from the Sesame Street Elmo's Monster Maker iPhone App.

In 20 seconds, you can see what the mobile game would do, and the fun and prototyped style of the pitch makes it all the more charming.

Conclusion

Product prototyping isn't just for the engineers and developers. Without any coding, you can start prototyping your product or idea to get further along in the development process. All without getting weighed down by the technical mumbo-jumbo.

Ready to take your idea into the technical mumbo-jumbo realm? Time to hire Victory CTO and leverage their operational savvy, strategic and tactical experience to level-up your company or product.

Practical Application of a Convolutional Neural Network (CNN)

We had a case that required the classification of a large number of images into several distinct categories. Manual classification would be a monumental and prohibitively expensive process; however, that was the industry's state of the art at the time. Image or item recognition is an oft-used example of a good application of machine learning, so we were curious to see if we could apply machine learning to solve our specific challenge.

After several iterations, we designed a process that uses a convolutional neural network (CNN) that’s based on the Inception-v3 model. We used a pre-trained network, retrained it on our dataset using transfer learning, and after tweaking quite a few parameters we were able to achieve results that surpassed manual classification accuracy.
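While the production pipeline is proprietary, the core transfer-learning idea can be sketched in a few lines of Keras (the layer sizes and category count below are placeholders, not our actual parameters):

from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # placeholder for the number of distinct categories

# Load Inception-v3 pre-trained on ImageNet, minus its classification head.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # freeze the pre-trained convolutional layers

# Attach a new classification head and train only it on the domain dataset.
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=..., epochs=...)

Only the new head's weights are learned, which is why a relatively modest labeled dataset can go a long way.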

To operationalize our neural network, we designed a sophisticated queue and processing mechanism that allows us to scale capacity as needed through processor- and OS-independent parallelization. Depending on the requested turn-around time in the SLA, we can use expensive GPU instances to quickly rip through a large image backlog, or we can use cheap, commodity hardware to steadily work through the queue at a very low cost. The independence afforded by this design enables us to easily match speed to business and market factors, allowing the client's pricing strategy to stay responsive and very competitive.

DevOps Is Changing The Way We Do Business

Traditional IT management's primary role was to manage systems, application development environments and user support. The core of IT-Ops was a mix of systems automation techniques, application modeling, and integration. It was considered to carry an extremely high level of complexity requiring arcane skills and knowledge.

Nowadays the term DevOps is often used for all the techniques employed in today's big IT operations teams to make the work easier and more efficient. However, since many IT organizations still do not believe in it or have yet to apply it correctly, there is a misunderstanding of its value to an organization, much less of how to implement DevOps harmoniously.

One of the interesting aspects of the DevOps movement is how it combines old concepts and traditional infrastructure with all the new concepts of continuous automation, scaling and improvement.

The name DevOps does not mean what you think it does. Let's start with the Five Pillars of DevOps:

  1. Culture
  2. Automation
  3. Lean
  4. Measurement
  5. Sharing

For those who have worked in IT for a long time, you have the feeling that DevOps is the term constantly used for 'new & exciting', while IT/Infrastructure is used for 'expensive and outdated'. Yet if you look at the pillars listed above, they seem very similar to the goals of every IT professional I have come across and worked with.

You Can Take IT to a Whole New Level

In my opinion, IT needs to change. It is currently under-represented and under-supported by leadership in the broader corporate community. Additionally, it is a common concern that DevOps is replacing some of the more conventional ways of working, especially when it comes to IT projects. DevOps has led to significant changes in enterprise IT practices. DevOps teams are increasingly agile, in the sense that they do not always manage a project as though it were being run by a traditional team, process or set of constraints. DevOps helps in the process of reducing resource costs and the need for costly management systems.

DevOps elevates IT in the following manner:

  • It often involves collaborative planning and continuous integration.
  • It promotes agility and modularity and can be used in the production as well as the development environment to deliver more stable and scalable development systems in fewer days from start to finish.
  • It helps in the process of improving the efficiency of systems of measurement by reducing the amount of data required on a daily basis.
  • It leads to more efficient use of IT resources and thus reduces IT waste.
  • It has the potential to result in significantly lower technology investment and a lower IT cost curve.
  • It speeds IT investment decisions (reduction at the point of sale and less need for costly capital IT systems to be migrated to an application or framework), and has the potential to create economies of scale for IT in production environments.

As private enterprises are established, creating technologies, new markets and new jobs, we are faced with the need to scale. This means developing roll-your-own features, processes, systems and services. With so many technology as a service options available today, many businesses can pick and choose from low cost, easy to spin up platforms and hit the ground running. The question is, can these operations scale to become profitable?

The answer is varied and typically management is concerned about the risks involved if they can't. Obviously a way businesses can scale involves building some of the technologies and applications from scratch while integrating other business processes. Easier said than done, right?

At the end of it all we need to figure out how we decide where the money goes. Do we invest in the customer product or the infrastructure? What happens when the product suddenly outpaces the previous internal systems, applications and technologies chosen?

How is DevOps Different from Traditional IT?

DevOps is a philosophy, not an engineering discipline, and not just a technical tool. That's why most people don't understand it either. There are a lot of myths associated with DevOps, and they were all developed by people who were obsessed with IT, who really thought they knew everything about IT, and who always believed there was this secret tool that was going to change IT. But it didn't.

The big question is why would anyone want a DevOps environment? DevOps is the perfect change to a corporate environment for new hires, experienced developers, and teams with a desire to embrace and work in a rapid manner, using technology in an accelerated fashion with lower costs and rapid delivery.

Some people think that DevOps is an oxymoron and are trying to get around it by focusing on "DevOps as a technical discipline."

I disagree - the DevOps practice in an organization starts by defining the boundaries of what DevOps means and how it should be understood or implemented. In doing so, the DevOps practice can define a new way of managing operations, costs and customer delivery that the business leaders of today so deeply desire.

How DevOps Changes the Mindset of IT

In recent years, several news stories have shown that cloud servers, networking and architecture have affected the mindset of IT. The reality is that most IT practices are based on the principles of continuous improvement but are challenged by lack of budget, staff or support from the business or leadership. Yet DevOps principles can bring in fresh ideas and encourage learning new concepts if applied to IT.

This opens up the door for creating new practices in areas you don't necessarily think are a priority. As you can see, a focus on the DevOps philosophy has allowed for a huge improvement of the mindset of IT, especially during those critical time points as an organization is growing and competing in a rapidly changing technology-driven landscape.

DevOps brings IT to a place where the infrastructure and development teams can start using similar technologies to achieve the same goals in a rapid fashion. IT can leverage DevOps to improve budgets, staffing and standing within the company while leading to a much more transparent process. It's a process where you can observe changes in the technology, infrastructure and software development while measuring how those changes affect your project and the organization, in an almost 'living organism' view.

To Be Continued…

It's not all about the IT or the DevOps. IT and DevOps intersect in both innovation and operational development. They can support the same business challenges, and do so simultaneously.

Does DevOps influence or evolve the IT organization or will it remain the same?

Does that change the way these organizations think or are they still operating the same way?

Photo Explanation: Milky Way shot in Maui (Sigma 17–70mm F2.8, ISO 2000, 15s exposure). I figured it appropriate as an analogy to the sheer size of opportunity when looking at DevOps, IT and technology as a whole. That and don't forget to look up from time to time. It's a big big place out there.