If you want to build a simple but attractive-looking API documentation site, you can’t really go wrong with an open-source tool like Slate. Despite being created by a teenage developer during a summer internship, it has become an incredibly popular tool: the project has been forked more than 15,000 times, and well-known organisations including NASA, Best Buy, Monzo and Skyscanner all use it.
🎬 So what is Slate?
Slate is a Ruby-based tool that generates a great-looking, three-panelled API documentation static site from a set of markdown files. It was built by developer Robert Lord in 2013 when he was an 18-year-old intern at travel software company Tripit. He convinced his boss at the time to let him open-source the project and the rest is history.
In this example, I’m going to use the generic Swagger Petstore example which I have saved to my Desktop and called petstore.yaml. To convert this to Markdown using swagger-to-slate, open a terminal and run:
swagger-to-slate -i ~/Desktop/petstore.yaml
This saves a file called petstore.md in the same location as the .yaml file. Once you have this, you can get started with Slate.
🔨 Build your site using Vagrant
To build your API documentation site using Vagrant, follow these steps:
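As a rough sketch of that workflow (the fork name and file path are placeholders, and this assumes the default Slate setup described in its README), the commands look something like this:

```shell
# Clone your fork of the Slate repository (replace <your-username>)
git clone https://github.com/<your-username>/slate.git
cd slate

# Copy the generated Markdown over the default index page
cp ~/Desktop/petstore.md source/index.html.md

# Boot the Vagrant box and build the site (the first run downloads the box)
vagrant up
```

Once Vagrant finishes provisioning, the site is typically served locally at http://localhost:4567.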
Alternatively, if you want to run your Slate site locally, you can also use Bundler. To use this method you must have Ruby version 2.3.1 or newer installed. To check which version you have, run: ruby -v
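Assuming the default Slate setup (Slate is built on Middleman, so the local server command comes from its README), the Bundler route looks roughly like this:

```shell
# Check your Ruby version (Slate needs 2.3.1 or newer)
ruby -v

# Install Bundler and Slate's dependencies, then serve the site locally
gem install bundler
bundle install
bundle exec middleman server
```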
I recently gave a talk at the API the Docs conference in London where I was finally able to share some valuable advice about GraphQL documentation. My talk followed my journey from first being told that GraphQL was self-documenting and didn’t need documentation, to speaking to GraphQL co-creator Lee Byron in my quest for answers and receiving the words of wisdom that I was able to share at the conference.
After initially being told by a developer that GraphQL wouldn’t need documentation, I was pretty sceptical, but as I started researching I found numerous examples of developers advocating GraphQL’s self-documenting nature, with someone even declaring that it didn’t need documentation.
Although the majority of people were fairly positive about the self-descriptive features, one tweet from a developer who was unhappy with the GraphQL documentation he had encountered made me realise I might be onto something.
🤖 What does “self-documenting” mean?
I explored what is meant by self-documenting – something written or structured in such a way that it can be understood without prior knowledge or documentation – and highlighted how the PC Magazine definition came with this caveat about subjectivity:
“It’s very subjective. However, what one programmer thinks is self-documenting may truly be indecipherable to another.”
I investigated the risks of subjectivity and how homographs such as “second”, “number” and “subject” can have multiple meanings and might be interpreted differently. I also shared different opinions on self-documenting code, including the view that it is a myth and an excuse for developers to avoid writing documentation:
Self-documenting code is a myth perpetuated by those who hate to write documentation.
I also referred to a blog post by Write the Docs co-founder Eric Holscher who said self-documenting code was “one of the biggest documentation myths in the software industry”, adding that the self-documenting argument boils down to:
I made something with a specific use.
I gave it a name to represent that specific use.
It doesn’t need documentation now because it’s obvious how it works.
Holscher argued that people who believe in a world of self-documenting code are actually making it more difficult for normal people to use their software.
🔬 How intuitive is GraphQL?
To test some of these self-documenting claims, I stripped out the introductory documentation from the Github GraphiQL explorer and asked six of my colleagues (members of QA, development and documentation) to try and retrieve my name, location and avatar URL with a GraphQL query from just my Github login name.
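For context, the kind of query they were working towards looks something like this (the field names come from Github’s public GraphQL schema; the login here is just a placeholder):

```graphql
# Retrieve a user's name, location and avatar URL from their login
query {
  user(login: "octocat") {
    name
    location
    avatarUrl
  }
}
```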
The results were interesting, with almost all of them struggling with the syntax and encountering similar parsing errors. The amount of time it took them to formulate a query through trial and error proved to me that GraphQL isn’t actually that intuitive without some form of example query or hand-holding documentation to get you started.
Another common issue I encountered with some GraphQL APIs was developers either failing to add descriptions or using ‘self-descriptive’ as the description for queries and fields that weren’t particularly descriptive. Some of these relied on assumed knowledge, expecting the end user to have prior knowledge of the schema and the data it relates to.
After looking at the GraphQL spec, I found this line, which might explain why some developers are not including descriptions: “All GraphQL types, fields, arguments and other definitions which can be described should provide a description unless they are considered self descriptive.”
The issue is that these people are unintentionally making it difficult for others to use their APIs, whether they realise it or not. GraphQL co-creator Lee Byron spoke about the importance of naming at the GraphQL Summit in 2016:
“It’s really important not just to have names that are good but to have names that are self-documenting […] Naming things that adhere closely to what those things actually do is really important to make this experience actually work.”
🐴 Straight from the horse’s mouth
I thought this was pretty interesting but I still wanted a definitive answer about GraphQL documentation so I emailed Lee Byron, who also happens to be the editor of the GraphQL spec, asking him if he would answer some of my questions. To my surprise he agreed to an interview back in September. We spoke for about half an hour and he told me all about the history of GraphQL, his hopes for its development and we touched upon documentation. When I asked him about the importance of descriptions in GraphQL, he gave the following advice:
“APIs are a design problem, way more than they’re a technical problem and you know this better than anybody else if you’re working on documentation.
If there’s no documentation, it doesn’t matter how good the API is because so much about what makes an API work well is mapping well to your internal mental model for how something works and then helping explain those linkages and the details.
If you do that wrong, it doesn’t matter how good your API is, people aren’t going to be able to use it.”
“GraphQL doesn’t do that for you, it provides some clear space for you to do that.
There’s the types and the fields and introspection, you can add descriptions in places so it wants to help you but if you don’t put in the thought and you end up with a poorly designed API, that’s not necessarily GraphQL’s fault right?”
I asked Lee’s permission to use the video clip of him giving this advice during my talk, as I knew it would resonate with other API documentarians. Having one of the GraphQL co-creators validate what I’d set out to prove all along was a pretty awesome mic drop moment for me!
✍️ So how do we document GraphQL?
Lee Byron spoke about how GraphQL provides you with “clear space” for the documentation: the types, the fields, the descriptions and introspection. So by using self-descriptive names for the types and fields and by using good descriptions in your GraphQL schema, you make it a much more user-friendly experience for your end user. I have highlighted where descriptions appear in GraphiQL for the Star Wars API (SWAPI):
However, these descriptions will only get you so far because documentation generated dynamically from your schema is never going to be very human.
Former technical writer and developer Carolyn Stransky spoke about this issue and a number of other blockers she encountered while trying to learn GraphQL at the GraphQL Finland conference. These included “an unhealthy reliance on the self-documenting aspects of GraphQL”, unexplained jargon and assumed knowledge. She felt most of these issues could have been easily prevented if more care and consideration had gone into the documentation.
I wanted to see what other technical writers were saying about GraphQL documentation but given the technology is so new, my questions on the Write the Docs Slack channel and other forums went unanswered. However, I did find a couple of good resources.
Andrew Johnston, who works on the GraphQL documentation at Shopify, spoke about the importance of providing on-boarding or “hand-holding” documentation for people who are new to GraphQL and not just assuming your end users will know how to formulate queries and mutations.
Technical writer Chris Ward wrote a blog post about whether GraphQL reduces the need for documentation and concluded that while it “offers a lot of positives on the documentation front”, documentarians should treat it just like any other API. He wrote:
“Documenting API endpoints explains how individual tools work, explaining how to use those tools together is a whole other area of documentation effort. This means there is still a need for documentation efforts in on-boarding, getting started guides, tutorials, and conceptual documentation.”
So my conclusion was that GraphQL can be self-documenting, but only if you put in the effort to give your fields and types self-descriptive names, add good descriptions and provide adequate supporting documentation, especially for people who are new to GraphQL. Ultimately I think technical writers have a pretty important role to play in documenting GraphQL and ensuring the experience works. To repeat Lee Byron’s advice: if your API doesn’t have any documentation, people aren’t going to be able to use it.
I recently interviewed the GraphQL co-creator Lee Byron for Nordic APIs, an international community of API enthusiasts. It was a great opportunity to find out how GraphQL came about, why it was open-sourced and where he sees it developing in the future. We also touched upon documentation and the importance of descriptions in GraphQL, something I’ll share in a future post.
The Adventures of SuperGraph
Ten years ago Lee Byron was a graphics engineer designing interactive news graphics at the New York Times when a friend approached him to join a small social media startup based in San Francisco, California. The company was Facebook, which had only just surpassed MySpace as the world’s most visited social media website at the time, and four years later Byron would find himself managing the team working on the Facebook native iOS app when the first seeds were planted for what would later evolve into GraphQL.
“Right around then our mobile apps were built with HTML, they had native wrappers around them and they had suffered from real performance problems,” he said. “We made a bet on that technology thinking that Apple and Google would maintain really high quality web browsers and they didn’t so that didn’t really work out very well and we decided we needed to build a native app.”
“We started this little skunkworks project where two engineers from my team and two engineers who were relatively new to the company started building out what would become the native iOS app for News Feed.”
The team produced a high quality, working prototype but Byron spotted that News Feed stories were missing because they had used a three-year-old, unsupported platform partner API and he realised they would need to build a new one.
“That kind of sent things to crisis,” he said. “They thought they were almost done and it turned out they had a ton of stuff left to do so I started focussing in on those problems and I was like ‘Okay, I need to build a News Feed API somehow. Who are the people I need to talk to? How does that need to get done?’ A big problem is that the News Feed is incredibly complicated and typical API technology probably wouldn’t do quite the right job so I started sketching out what a good API might look like. It definitely wasn’t what GraphQL is now but it was sort of like really beginning inklings in that direction.”
“Meanwhile another one of the GraphQL co-creators Nick Schrock had just spent the last couple of years working on a bunch of data infrastructure on our server side and had spent a little bit of time exposing some of that over APIs, not GraphQL but a different kind of API, and had an idea about how this could be made much, much more simple so I credit Nick Schrock with the first prototype that really resembled GraphQL. He called it SuperGraph.”
A screenshot of an early GraphQL prototype that Nick Schrock called SuperGraph.
A member of Byron’s team introduced him to Schrock and Dan Schafer, hailed as the best News Feed engineer at Facebook at the time, and the trio started work on an initial version of GraphQL. “The three of us got to work trying to figure out how to build a better News Feed API and we just got super far down the rabbit hole,” Byron said. “I think just a month or two of iterative improvements on what started as a prototype enfolding all of our ideas ended up being the first version of GraphQL.”
The launch of the native iOS app, helped by the introduction of GraphQL, was a success and the excitement around GraphQL and its capabilities made other Facebook teams interested in using it. As a result, Byron and the early GraphQL team would go on to develop a whole ecosystem around GraphQL; how it integrated with the iOS and Android apps, how it integrated into the server and GraphiQL, the in-browser IDE.
“We were excited about it,” Byron said. “I mean sharing things with the community is always good but it would be a lot of work and we weren’t totally sure people outside of Facebook would even care or find value in it. We thought maybe this was something that only solves a Facebook problem and wasn’t a generic solution but the Relay team had us excited so we followed that path and I’m super happy that we did. GraphQL now has a really big community outside of Facebook.”
The release and adoption of GraphQL
The adoption of GraphQL took far less time than the team initially predicted. Speaking at the first ever GraphQL Summit in October 2016, Byron said he hoped GraphQL would be picked up by big companies within four years and reach ubiquity within five years. Byron laughed when he reflected on the accuracy of those predictions.
“I think I overestimated how long it would take for large companies to adopt it and underestimated ubiquity,” he said. “It’s probably because ubiquity is kind of vague but certainly I still talk to tons of people who work in the API space and at best they say ‘Oh GraphQL, I think I’ve heard of that before but I don’t really know what it is’. It’s certainly better this year than it was last year and better than the year before that.”
He added: “I remember going to an APIDays conference shortly after the first GraphQL Summit and literally there were zero talks on GraphQL. After the next one, there was a whole track talking about GraphQL. The one after that, GraphQL was featured in one of the keynotes and there wasn’t a specific track but GraphQL was scattered around. So it’s definitely picking up steam. I think there’s visible progress towards ubiquity, if we want to talk about ubiquity as knowledge. People are aware of the technology and what it does and why they should use it or not.”
One of the biggest surprises for Byron was seeing Github become one of the early adopters of the technology, particularly as he considers them an API leader.
“I was really surprised to see that within a year of GraphQL being open-sourced, Github decided that their public API would be GraphQL,” he said. “That was particularly significant because they kind of helped to popularise REST. You know REST has been around for a while but it wasn’t really the dominant, popular way to build APIs until Github decided to build their API and they used REST and they made a big deal about it and wrote a bunch of blog posts and everybody paid attention.”
He added: “I thought ‘Wow, this API is really well built, it must be because of REST’ and it was to a large degree but it’s also because the people at Github are really smart and they built a really great API. It’s really exciting to me that I consider Github to be sort of an API leader and they jumped on that first and they’re not the only ones any more.”
GraphQL and REST APIs can co-exist
Although GraphQL has been lauded as the natural successor to REST technology, Byron is modest about its capabilities and believes the two can co-exist.
“There are plenty of things that REST does well or does better than GraphQL and vice versa,” he said. “I’m a big believer in the more tools that we have, the more choices that we have to solve problems. I’m certainly not one of those people who think I’ve invented the silver bullet here and everything should be GraphQL and there’s no room for anything else. I think that would be a little unwise. I think REST is an amazing technology so I would be really sad to see it disappear.”
“I do think that as GraphQL continues to expand in scope we’ll see a much healthier balance between the two. My expectation was that public APIs would remain REST because that was simpler and more familiar where internal APIs, so to build your company’s own product, would use GraphQL because while it brought more complexity, it also brought some more expressiveness and capability.”
As GraphQL continues to grow, one of the things Byron is excited to see is more public APIs adopting the technology, like companies have done with REST.
“I think the space of public APIs or partner APIs is particularly interesting because I think the vast majority of GraphQL adoption so far has been for a company’s own internal projects. For example, Walmart use GraphQL but they use it for the Walmart app and I think it would be really interesting if GraphQL starts to be used for these public and partner APIs so that we have companies that are working with each other and then it’s not just about the API design and the mental model for within that company but between companies.”
“I think that could be really interesting because it could help start to build one conceptual graph of all information. I don’t think GraphQL is going to be the technology that gets us there but that’s one of the big dreams of the internet is that we could have the one data internet but we need to start having some serious conversations along that path if we ever want to get there. I think GraphQL could be a really useful stepping stone on that path.”
Hopes for the future of GraphQL
Despite being happy with its growing popularity and some of the open-source development going on around it, Byron hopes to see more growth in GraphQL tools and integrations.
“It’s kind of sad that there’s the Apollo Client for iOS and Android and then that’s kind of it,” he said. “There need to be many competing pieces there, and that’s true for any sort of technology that’s reached ubiquity: it has at least two, if not closer to a dozen, different options for how you would go about implementing it. If you wanted to build a web server, there are hundreds of ways to build a web server in dozens if not hundreds of languages, and that’s kind of where I want to get to with GraphQL as well.”
Byron left Facebook after a decade of service to become head of web engineering at fintech startup Robinhood earlier this year, citing the desire to work at a smaller company and its refreshing vision as some of his reasons for leaving.
“Robinhood’s roughly the same size today that Facebook was when I joined it and I really missed that and I realised that some of the best work that I did at Facebook was when they were a little smaller. Not that Facebook’s not a great place to work now, it’s just I really appreciated having the smaller work environment and was happy to have that back.”
“I’m also just kind of interested in finance in general so it’s a new space for me to learn which has been pretty fun and then they’ve got a bunch of really interesting technical challenges and people challenges. That’s my bread and butter. I really love technical problems and people problems, then the product problems I’m interested in but it’s new to me so there’s room to learn.”
On top of that, he is still the editor of the GraphQL spec and runs the working group meetings to ensure that GraphQL continues to improve while also maintaining stability.
“One of my goals for GraphQL is that it is stable because Amazon and Twitter and Pinterest and Airbnb and Facebook and Walmart and so many other companies have bet their future on GraphQL,” he said. “If GraphQL changes so rapidly that every year there’s like maintenance work to have to go in and improve all of those pieces of infrastructure, if I was an engineering director at those companies I’d feel shaken and I’d question the choice to use that technology. At the same time I want to make sure that there’s room for it to grow and improve and those improvements don’t have to come from me. I don’t think that I’m the smartest person in the room. I want to make sure that experiences of people from lots of different companies and environments can help influence that direction.”
He added: “GraphQL is still new. I’m really impressed with how much has been built by the open-source community and how much adoption has happened within the open-source community, especially the large companies. I mean, there’s a ton of large companies that are using GraphQL and that’s only three years out from open-sourcing, I think that’s pretty incredible but there’s always room to grow.”
If you Google ‘API trends’ or ‘the future of APIs’, one technology that crops up a lot is GraphQL. Developers rave about it being a more powerful and flexible alternative to REST. Not only that, but if you’re a technical writer like me, claims that it is self-documenting are particularly interesting. So what is GraphQL and is it really as self-documenting as people say?
What is GraphQL?
GraphQL is an open-source data query and manipulation language that was developed internally by Facebook for its mobile applications before being released publicly in 2015. Since then it has grown in popularity, with some people claiming it might replace REST APIs in the future.
Like REST, GraphQL operates over HTTP, with requests sent to retrieve or manipulate data. The key difference is that with REST you might need to send requests to multiple endpoints to retrieve a particular set of data, whereas GraphQL has a single endpoint, so one request can retrieve an object and all of its related objects.
For example, with this GraphQL schema and server wrapping SWAPI (the Star Wars API), you can retrieve multiple pieces of data through that single endpoint. In this case, finding out the species and home planet of Luke Skywalker is just a matter of adding more fields to the query:
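A query along these lines (the field names assume the public SWAPI GraphQL wrapper’s schema) fetches everything in one request:

```graphql
# One request: Luke Skywalker's name, species and home planet
query {
  person(personID: 1) {
    name
    species {
      name
    }
    homeworld {
      name
    }
  }
}
```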
“The self-descriptive nature of GraphQL”
There seems to be plenty of love for GraphQL on Twitter with developers praising its speed, flexibility and introspective nature. The other key attribute that crops up a lot is “self-documenting” or “self-descriptive”:
The self-descriptive nature of GraphQL and the visual, auto-completing, browser-based query builder "GraphiQL" is pure genius.
One developer even went as far as to say that GraphQL doesn’t require documentation at all. However, after playing around with GraphQL and experimenting with some of the public GraphQL examples out there, I’m not so sure I agree.
The key thing about GraphQL from a documentation perspective is the importance of naming. Lee Byron, one of the developers behind GraphQL, spoke about this in his talk “Lessons from Four Years of GraphQL” at the GraphQL Summit in November 2016: “Naming things is super important in GraphQL APIs,” he said. “An important question to ask when designing APIs is ‘Would a new engineer understand this?’ […] And that means no code names or server-side lingo.”
He continued: “Imagine that most of the engineers who are going to be using your API might not find it so easy to go and find out how that field maps to some underlying system. It’s really important not just to have names that are good but to have names that are self-documenting. Naming things that adhere closely to what those things actually do is really important to make this experience actually work.”
“An important question to ask when designing APIs is ‘Would a new engineer understand this?’ […] Naming things that adhere closely to what those things actually do is really important to make this experience actually work.” – Lee Byron
Despite Byron’s warnings, fields with poor or no descriptions were a common issue in the different GraphQL APIs I looked at. In the example below, taken from the GraphiQL documentation explorer, I had no idea what the ‘section’ query field did or what data it sent back because it had no description:
Apart from the documentation explorer, another way to see which query and mutation fields are available is GraphiQL’s auto-complete feature. Hovering over a field or type reveals its description, but this can be as useless as the description in the documentation explorer if all it says is ‘Self descriptive’, as this Twitter user found out:
I agree that GraphQL is self-descriptive: if you’re familiar with the query language and the schema, its introspective nature means it is easy to refer to the description of a field or type to find out what it does. Another advantage of GraphQL is that the API documentation is easy to keep accurate and up to date, as the descriptions are pulled directly from the code. In version 0.7 or above of GraphQL, this is as simple as adding a comment directly above the type or field in the code:
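As a sketch (the type and fields here are invented for illustration), comments placed above a definition in the schema become its descriptions:

```graphql
# A person within the Star Wars universe
type Person {
  # The name of this person, e.g. "Luke Skywalker"
  name: String
  # The planet this person was born on or inhabits
  homeworld: Planet
}
```

Tools like GraphiQL then surface these descriptions automatically in the documentation explorer and on hover.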
However, GraphQL is only “self-documenting” if the developer or a technical writer has given the fields adequately intuitive or self-descriptive names or has added decent descriptions for them in the schema code. If the names are obscure or the descriptions aren’t great, then your GraphQL API is as useful as a chocolate teapot, and there are already a few chocolate teapots out there from what I’ve seen. So I guess the good news for technical writers is that we still have a role to play in helping to document GraphQL; it isn’t a magical solution that renders us unnecessary just yet!
Back in 2013, developer Robert Lord, then an 18-year-old intern at travel software company TripIt, was challenged by his boss to create an API documentation tool. It took him several weeks, but the result was a beautiful, responsive API documentation generator called Slate. Five years later, it has grown into a popular open-source tool used by a number of global organisations and companies including NASA, IBM and Coinbase.
Lord said the Slate project grew out of a set of requirements the Tripit engineering team had at the time. He said: “I was interning at TripIt and my boss pointed me towards some two-column documentation pages and said ‘We’d like a page like this for our new API.’ They also had the requirement that their technical writer could make changes, and I think they didn’t want to write raw HTML. I made a generator that ended up being pretty generic to any documentation, and convinced them to let me open source it.”
How to Use Slate
Slate is simple to use: you fork the Slate Github repository and create a clone. Next you customise the code to meet your requirements, adding a custom logo, fonts and any additional CSS styling in the source folders, before adding your API endpoints and their descriptions in Markdown.
When you’re done, you start Slate and launch your API documentation site using Vagrant or create an image using Docker. The result is an attractive, responsive three-panelled API documentation site with code samples in multiple languages down one side and a smooth scrolling table of contents down the other. For more information on how to use Slate, follow the instructions in the Slate README.
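Condensed into commands (the fork name and file paths here are illustrative; the README is the authoritative reference), the workflow is roughly:

```shell
# 1. Fork the Slate repository on Github, then clone your fork
git clone https://github.com/<your-username>/slate.git
cd slate

# 2. Customise the logo, fonts and CSS under source/, then write your
#    API reference in Markdown in source/index.html.md

# 3. Preview the site with Vagrant (or build an image with Docker)
vagrant up
```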
Slate in the Wild
Today more than 90 people have contributed to Slate on Github; it has been forked more than 13,000 times and been given more than 23,000 stars. Some of the organisations and companies listed as users include NASA, IBM, Sony, Monzo, Skyscanner and Coinbase. There is a list of more than 90 companies that have used it on the Slate in the Wild sub-page of the repository.
Lord admits he still finds it “pretty surreal” that such large companies have adopted what he labels the “buggy project” he created as a teenager. “I really did not expect anybody else to see it or care about it,” he said. “Slate never really had a big rush of new users all at once, the growth in stars has been more or less linear over the years. No hockey sticks here. So there was never a single moment where suddenly a bunch of people were using it, it was a very slow process of discovering one company at a time.”
Life after Slate
Interestingly, a year after working at Tripit, Lord interned at Stripe, one of the leading API-first companies whose own API documentation inspired him when creating Slate. Stripe realised the value of their product hinged on people being able to read and digest their APIs. They invested a lot of time and effort in developing their own in-house API documentation tool and set the bar for the rest of the industry with the two-panelled design that has inspired so many other API tools.
Lord had plans to develop further API tools but decided to focus on other things. “Initially had some plans for similar tools,” he said. “But I think I realized I’m still early in my career, and would rather branch out and work on a variety of projects instead of focusing in on just one area.” Despite moving on to other projects and being fairly modest about the success of Slate, it’s an impressive piece of work for the young developer to put on his resumé. Indeed, one of the main reasons he asked TripIt to allow him to open source the project was so he could show future employers his work. “I mostly convinced them to open source it just so I could point future employers to this chunk of code I wrote,” he said. One company clearly took notice: Lord starts work on Fuchsia at Google in a few weeks’ time.
Earlier this year I stumbled upon Write the Docs, a global community of people who care about documentation, and through its Slack channel, I have learned so much from the advice and knowledge shared by its thousands of members. The discovery has been a real godsend for someone like me who has worked independently or in small teams for most of my technical writing career.
This month I was lucky enough to go halfway across the world to the annual Write the Docs conference in Portland, Oregon to meet some of the community in person and listen to some brilliantly insightful and entertaining talks from fellow technical writers. In this post, I’ll share my highlights of the conference, my favourite bits of Portland and offer some advice on how to get there.
DISCLAIMER: I didn’t attend every single presentation but all of the talks I listened to were great. I’ve highlighted a few memorable ones below:
Kat King from Twilio, who had the unenviable task of giving the first talk of the conference, delivered an entertaining and engaging talk about how she and her team were able to quantify and improve their documentation with user feedback.
Beth Aitman from Improbable spoke about how to encourage other members of your development team to contribute to the documentation. This is something I think we all struggle with and can relate to. It’s well worth a watch:
Bob Watson gave a great talk about strategic API documentation planning, with some interesting tips about your target audience and the different types of API doc consumer you might come across. These included the ‘Copy and Pasters’ and the ‘Bigfoot’, the rare developer who actually studies the documentation and applies the code!
As well as the main talks, there were some excellent Lightning Talks, five minute presentations given during the lunch breaks, that contained some real gems such as Mo Nishiyama’s resilience tips when dealing with Imposter Syndrome and Kayce Basque’s talk on improving response rates from feedback widgets:
If the talks aren’t your thing, there was also an Unconference where you could discuss topics such as API documentation, documentation testing, individual tools; whatever you want really. I just sat and talked with two technical writers about a documentation tool for half an hour!
Apart from the people, one of the best things about Write the Docs Portland was the venue, a striking 100-year-old ballroom with a “floating” dance floor that has played host to the likes of Jimi Hendrix, the Grateful Dead, Buffalo Springfield and James Brown. Also, if stickers are your thing then you could collect a load of stickers provided by the conference sponsors, hiring companies and Write the Docs themselves (see below):
Apart from its scenic surroundings and the views of the Tualatin Mountains, Portland has a lot to offer in the city itself. Some of my personal highlights included:
Doughnuts – Portland has a reputation for great doughnuts. We skipped the enormous queues outside Voodoo Doughnuts and went to Blue Star Donuts instead. The PB & J with habanero pepper was pretty unusual!
Coffee – Portland has developed a thriving yet relaxed coffee culture with more than 30 coffee roasters across the city. It goes without saying that the coffee here is good! Check out Heart or Barista.
Restaurants – The food in Portland was amazing. One of my favourite meals was at Life Aquatic-themed oyster bar Jacqueline in SE Portland. For sushi check out Masu on SW 13th Ave and for a relatively cheap but delicious lunch go to Nong’s Khao Man Gai thai food cart.
Washington Park – If you want to escape the sights and sounds, head to the 412-acre Washington Park which boasts a Japanese garden, a zoo, a rose garden, an amphitheatre and lots of trees!
Powell’s Books – No trip to Portland is complete without visiting the world’s largest independent bookstore. My only advice would be to pick up a map and have some idea of what you’re looking for, otherwise you’ll find yourself wandering the many colour-coded sections and aisles for hours.
How to get there
If you live in the US or Canada, it might be slightly easier to convince your boss to fund your trip to Write the Docs. If, like me, you’re based in the UK, it’s slightly more difficult, but there are a number of options:
1. Use your training budget – Ask if you can use your training budget for the trip. It cost me my annual budget but it was well worth it and I was able to combine it with a trip to my company’s head office in San Francisco.
2. Become a speaker – I met a few writers whose company paid for them to be there because they were speakers. It’s great exposure for you, your documentation team and your company.
3. Recruitment – If your company needs to grow its documentation team, you might be able to justify the cost of attending: there is a job fair and you have the opportunity to network and meet writers with a wide range of experience.
4. Exposure – Even if you don’t become a speaker, it’s a great way to raise your personal profile and that of your company. You never know when that visibility might come in handy in future.
5. Specific talks – Highlight a few specific talks from the schedule of the upcoming conference or a previous conference that may benefit you or your team. Write the Docs is a fantastic opportunity to learn from some of the best technical writers in the business!
If all else fails, see the sample email and other tips under the ‘Convince Your Manager‘ section of the Write the Docs website.
If you Google for quotes about tea, one of the top hits is from the philosopher Bernard-Paul Heroux who is attributed with this quote:
There is no trouble so great or grave that cannot be much diminished by a nice cup of tea.
The philosopher’s words of wisdom about tea are quoted in articles by the Telegraph, Reuters, the Guardian and numerous blogs and websites online. An image search also shows that the Tregothnan estate in Cornwall and American retailer Trader Joe’s use the quote on their packets of tea.
The former journalist in me wanted to find out more about the mystery man behind the famous phrase. Searches for the name Bernard-Paul Heroux return no Wikipedia listings and his name isn’t listed alongside other famous Basques or famous Basque philosophers. In fact, the only hit I got at all, aside from the quote, was that Heroux was a surname from the Languedoc-Roussillon region of France, a good six-hour drive from the Basque Country. However, the Heroux surname is not listed in any online database of Basque surnames, and trawling several sites of Basque births, deaths and marriages returned nothing. It’s as if Mr Heroux appeared at some point in the 1900s, made his famous quote about tea and then vanished into thin air.
Apart from the lack of evidence that he ever existed, my other major doubt about the authenticity of this phrase is the fact that the Basque Country, like other parts of northern Spain, has had much more of a coffee-drinking culture for centuries. It’s just a fact: the Basques and the Spanish are traditionally coffee drinkers – not tea drinkers.
So is Bernard-Paul Heroux’s quote fact or fiction? Was he a real man or just a figment of the imagination, dreamed up as part of an elaborate piece of marketing? I don’t want to make a storm in a teacup, but until someone can prove otherwise, I think it’s probably the latter.
A critic labelled the ‘text speak’ of the 1990s as “penmanship for the illiterates” but the latest threat to written English is the emoji, said to be the fastest growing language in the UK. While ‘text speak’ saw words shortened and abbreviated, emojis have replaced text altogether, harking back to the dark ages of cavemen and hieroglyphics, when pictures formed the basis of communication.
The rapid spread of emojis into modern communication has seen a translation company hire the world’s first emoji translator, a restaurant launch in London with an emoji menu and the Emoji Movie arrive in our cinemas. So, where did they come from and is there a place for them in modern communication and technical writing?
🎬 Origins of the Emoji
The emoji first appeared on mobile phones in Japan during the late 1990s to support users’ obsession with images. Shigetaka Kurita, who was working for NTT DoCoMo (the largest mobile-phone operator in Japan), felt digital communication robbed people of the ability to communicate emotion. His answer was the emoji – which comes from the Japanese ‘e’ (絵) meaning “picture” and ‘moji’ (文字) “character”.
The original emojis were black and white, confined to 12 x 12 pixels without much variation. These were based on marks used in weather forecasts and kanji characters, the logographic Chinese characters used in Japanese written language. The first colour emojis appeared in 1999 and other mobile carriers started to design their own versions, introducing the smiling yellow faces that we see today.
Speaking to the Guardian, Kurita admitted he was surprised at the popularity of emojis. “I didn’t assume that emoji would spread and become so popular internationally,” he said. “I’m surprised at how widespread they have become. Then again, they are universal, so they are useful communication tools that transcend language.”
“I’m surprised at how widespread they have become. Then again, they are universal, so they are useful communication tools that transcend language” – Shigetaka Kurita.
However, Kurita doesn’t believe emojis will threaten the written word. “I don’t accept that the use of emoji is a sign that people are losing the ability to communicate with words, or that they have a limited vocabulary,” he said. “Some people said the same about anime and manga, but those fears were never realised (…) Emoji have grown because they meet a need among mobile phone users. I accept that it’s difficult to use emoji to express complicated or nuanced feelings, but they are great for getting the general message across.”
💹 Emojis in Marketing
It is this ability to get the message across very simply that has resulted in companies using emojis more and more in marketing, particularly on platforms like Twitter and via email. It has become a way for brands to humanise themselves, have a sense of humour or put across a message that a younger audience can relate to.
One example of emoji marketing is a Tweet sent by Budweiser which was composed entirely of emojis to celebrate the 4th July this year:
Meanwhile, Twentieth Century Fox took emoji-based humour to whole new level with posters and billboards bearing two emojis and a letter (💀💩L) to announce the release of Deadpool in 2016:
✍️ Emojis in Technical Writing
A number of tech companies, especially those with a younger (20s–40s) target audience like Slack and Monzo, have also embraced the use of emojis in their technical documentation and the software itself.
Slack use them sporadically in the product, often as the punchline of a joke or message when you’ve read all unread threads (see screenshot above).
Emojis also appear in their help system, with emoji flags for the chosen language and to highlight bullet points (see below):
Startup bank Monzo also embraced emojis early on, designing an emoji-rich interface aimed at a younger client base than typical banks attract. Emojis are automatically assigned to transactions and you’ll find them incorporated in the Monzo API documentation and the app’s Help screen:
Speaking to brand consultants Wolff Olins, CEO Tom Blomfield explained how they also use machine learning to pair your transaction’s spending category with a relevant emoji. For example, it will display the doughnut emoji 🍩 when you shop at Dunkin’ Donuts. He said: “There’s no business case for the emoji donut, but people get ecstatically happy when seeing it and go on social media to share the moment.”
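Monzo hasn’t published how that pairing works, but the basic idea can be sketched as a lookup from spending category to emoji. The category names and mapping below are illustrative assumptions for the example, not Monzo’s actual data or code:

```python
# Illustrative sketch: pairing a transaction's spending category with an
# emoji, in the spirit of Monzo's feature. The categories and emoji here
# are assumptions, not Monzo's real mapping.

CATEGORY_EMOJI = {
    "eating_out": "\U0001F354",    # 🍔 hamburger
    "groceries": "\U0001F6D2",     # 🛒 shopping trolley
    "transport": "\U0001F68C",     # 🚌 bus
    "entertainment": "\U0001F3AC", # 🎬 clapper board
}

def emoji_for(category: str) -> str:
    """Return an emoji for a spending category, falling back to a receipt."""
    return CATEGORY_EMOJI.get(category, "\U0001F9FE")  # 🧾 receipt fallback

print(emoji_for("eating_out"))  # 🍔
```

In a real system the interesting part is classifying the merchant into a category in the first place; once you have the category, the emoji itself really is just a lookup like this.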
☠️ Risks of using Emojis
While emojis might work for some tech companies, giving them a way to humanise their brand and relate to their target audience, I think there are several risks that come with their use as well.
The first risk is alienating users who don’t relate to emojis, or even dislike them. Although most of my office use them to react to each other’s Slack posts, a number of people refuse to, and because we have a lot of nationalities with different cultural references, the same emoji can be used in different ways. For example, in Japan the poop emoji (💩) is used for luck, while English usage is a lot more literal. Similarly, the folded hands emoji (🙏) means ‘thank you’ in Japan, while it is more commonly used to convey praying or saying ‘please’ in English.
Secondly, if emojis are just a fad like the Kardashians, Pokémon GO and Tamagotchi, then you face the unpleasant task of replacing them all when they become unpopular, are considered annoying or are phased out. If you have saturated both your product and documentation with emojis, this task will take you and your team a lot of time and effort.
Thirdly and finally, studies have shown that emojis can get lost in translation: they are incredibly subjective, so the meaning and intended emotive message can often be misinterpreted. This has only become more muddled as different vendors and browsers design their own versions of the Unicode emoji characters. A study by the GroupLens research lab found evidence of misinterpretation in emoji-based communication, often stemming from emojis appearing differently on different platforms.
The grimace emoji (😬) is said to cause the most confusion, with researchers finding that 70% of people believed it to be a negative reaction while 30% thought it conveyed a positive emotion.
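Part of the confusion is that an emoji is a single Unicode code point with one official name, while the glyph you actually see is artwork supplied by each vendor. A quick check in Python shows that, to the standard, the grimacing face is just U+1F62C; everything beyond that is the platform’s interpretation:

```python
import unicodedata

# The grimacing face emoji is one Unicode code point with one official name.
# Apple, Google, Samsung etc. each draw their own glyph for it, which is why
# the same character can read as negative on one platform and positive on another.
grimace = "\U0001F62C"

print(f"U+{ord(grimace):04X}", unicodedata.name(grimace))
# U+1F62C GRIMACING FACE
```

So the text you send is identical on every platform; it’s only the rendering, and therefore the perceived emotion, that differs.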
On the whole I don’t dislike emojis or think they’re a threat to the written word. They definitely have a role to play in social interaction, can humanise communication and even add humour to it. However, I still feel there are too many risks, too many different cultural interpretations which mean they simply won’t work in a multinational business. Technical writing is all about choosing the clearest form of communication, the shortest, most simple words that cannot be misunderstood. I’m just not convinced there’s a place for emojis in documentation yet, at least not while there is still room for things to get lost in translation.
Etymology and the origin of English language have always fascinated me, partly because so many of the words we use every day represent remnants of history; artefacts left behind by the Roman Empire, the Vikings and the Norman conquest. Although words relating to computing and technology are much younger, some are just as quirky and steeped in history as those from the past.
Like a Moth to a Flame
The origin of the word ‘bug’ in the computing world is often mistakenly credited to computer scientist Grace Hopper. The story goes that while working on the Harvard Mark II computer in 1947 she discovered a dead moth stuck in a relay. It was removed and taped into a logbook where she wrote “First actual case of a bug being found” (see picture below), which suggests that the term was already in use at that time.
While this might have been the first literal case of ‘debugging’, there is evidence that ‘bug’ had been used in engineering for many years before that.
Scarecrows, Bugs and Bogeys
The most accepted origin of ‘bug’ is the Middle English word ‘bugge’ or ‘bogge’ (n.), which meant a scarecrow or a scary thing. One of the first iterations of the word came in John Wycliffe’s English translation of the bible (circa 1320-1382): “As a bugge either a man of raggis in a place where gourdis wexen kepith no thing, so ben her goddis of tree.” (As a scarecrow or a man of rags in a place where gourds grow guards nothing, so are their gods of wood.)
As language evolved, another off-shoot of ‘bugge’, the scarecrow, was ‘bogey’, an evil or mischievous spirit. This gave rise to a family of other ghost and hobgoblin names including ‘bogeyman’, ‘boggart’, ‘bogle’ and ‘bugaboo’, while the archaic form of ‘bugbear’ is another hobgoblin figure. In general these all carry the same negative connotation of things to avoid that cause fear or irritation. The direct descendant of these words is ‘bogey’, which still survives in modern English: in aviation, where a ‘bogey’ is an enemy aircraft; in golf, where a ‘bogey’ is one over par (a bad score); and a ‘bogey’ (UK) or ‘booger’ (US) is a piece of nasal mucus.
By the middle of the seventeenth century, the word ‘bug’ no longer meant scarecrow and had come to mean ‘insect’, which makes sense as many people consider them to be alien and scary. The earliest references to ‘bugs’ meaning insects often related to ‘bedbugs’, supposedly because when someone woke up covered in bedbug bites, it was as if they had been visited by something scary during the night.
Thomas Edison’s Bugs
By the 1870s, the meaning of bug had changed once more and perhaps made its first appearance in technology when American inventor Thomas Edison referred to what he called a ‘bug’ while developing a quadruplex telegraph system in 1873. He also mentioned ‘bugs’ in a letter to an associate:
“It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that “bugs” — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached.”
They were mentioned once again in an article in the Pall Mall Gazette in 1889:
“Mr. Edison, I was informed, had been up the two previous nights discovering ‘a bug’ in his phonograph – an expression for solving a difficulty, and implying that some imaginary insect has secreted itself inside and is causing all the trouble.”
Another early example of ‘bugs’ being used to refer to technology was with the release of the first mechanical pinball machine, Baffle Ball, which was created by David Gottlieb in 1931. It was advertised with the strap-line “No bugs in this game!” (see poster below):
So it seems fair to assume that the word ‘bug’ came from ‘bugge’, the Middle English for scarecrow, which led to ‘bogey’ and all the similar words meaning an obstacle, a source of dread or something to be feared. In modern times the word ‘bug’ has become a verb meaning to vex or irritate, while the noun form has become a synonym for disease-causing germs, crazily enthusiastic or obsessive people (e.g. a firebug is a pyromaniac), concealed recording devices used by spies and perhaps, thanks to Edison, an error in technology.
After a tumultuous and slightly short-lived affair with Sharepoint, I was introduced to Confluence and I was quickly won over by its simplistic UI and text editor. However, three years later I’m starting to feel disillusioned and frustrated with it. Here are some of the reasons why:
Confluence has become bloated. I’m not sure if it’s a result of popularity or customers’ demands for new features, but the feature set has ballooned while the basic functionality is neglected. It’s like a pet dog that has become fat and lazy from too many treats.
Any frequent user of Confluence will be aware of the numerous bugs that seem to go unfixed for long periods of time. We encountered a bug last week where images were breaking when copying a page (we later discovered this was caused by the image name containing a colon).
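Until it’s fixed, flagging risky attachment names before anyone copies a page is easy to script. This is a minimal sketch: in practice you would fetch the list from Confluence’s REST attachment endpoint, but here it is hard-coded sample data so the check itself is clear:

```python
# Minimal sketch: flag attachment filenames likely to break page copies.
# In practice you'd fetch the attachment list from Confluence's REST API
# (GET /rest/api/content/{id}/child/attachment); the hard-coded sample
# below stands in for that response.

def risky_attachments(attachments):
    """Return titles of attachments whose names contain a colon."""
    return [a["title"] for a in attachments if ":" in a["title"]]

sample = [
    {"title": "diagram:final.png"},  # colon in the name broke our page copy
    {"title": "screenshot.png"},
]

print(risky_attachments(sample))  # ['diagram:final.png']
```

Run something like this over a space before a big reorganisation and you can rename the offending files up front rather than discovering broken images afterwards.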
Another common bug, which has caused me grief in the past, relates to being unable to export pages as PDFs for various reasons. This case, first reported in 2014, is still affecting customers two years later: https://jira.atlassian.com/browse/CONF-34275
To do anything useful or practical with the vanilla version of Confluence you need to install expensive plugins. Want to use versioning? You need to buy a plugin. Want to translate your content? You need to buy a plugin.
Apart from the additional costs, my main issue with this is that only a handful of plugins are built and maintained by Atlassian, so you either take the risk of using a free plugin that may break in the future or you have to rely on a third-party developer to continue supporting a paid one so that it works with newer versions of Confluence.
4. Missing Basic Features
The basic text editor in Confluence, the thing at the heart of the software, is still pretty poor and even things like basic formatting are a chore unless you manipulate the CSS.
Off the top of my head, the things that annoy me include: you can’t insert certain macros directly after another macro or a table because they will break or mess up your formatting; you can’t create a table without borders (unless you have Source Editor); you can’t choose different fonts or font sizes (unless you import them in the CSS); you can’t change the background colour; you can’t justify your text; and you can’t remove historical attachments that were uploaded to a page in the past. These are all things I’ve come to accept as Confluence-isms – quirks that Atlassian aren’t going to fix any time soon.
Despite all these things, Confluence is not cheap. If you’re a company with 100 or more employees, the Cloud version will set you back 3,000 dollars (£2,419) each year:
On reflection, it’s pretty scandalous how much they are charging when so many bugs still exist, basic text-editing functions are missing and most companies will need to install and pay for further plugins to meet their requirements. Unfortunately, until someone comes up with a decent alternative, I don’t see things changing.
Have you found a decent alternative that can be used for wiki content or software documentation/online help? If so, please let me know!