I often get asked by coworkers and folks in the community “How do I get to speak at a conference?”, “How do I write a good talk abstract?”, “How come I keep getting rejected?” or “What does the conference submission process even look like?” Because this is a complex and recurring topic, I thought I’d share some of my insights in writing.
The simple answer is that you must write an abstract that is relevant to the conference and its attendees, is easy to understand, and is more interesting or more relevant than other good abstracts. But the details get a bit muddy, so let’s dive into it.
First of all, it’s helpful to understand the conference review process. I’ve now reviewed session proposals for 12 distinct events, and all of them work in a similar way: the conference holds a public call for proposals – or call for papers, call for presentations, or call for speakers; the terminology varies, but they’re typically called CFPs for short.
During the CFP process the conference has a public way for people to submit information on their talk. This information includes:
It’s important to note that all information in the proposal aside from the additional notes field will be visible to attendees should your talk be accepted, so you write it primarily for the attendees, even though the conference must review and select it.
The CFP process is usually open anywhere from a week to a month. Once the CFP closes submissions are not allowed and the review process begins.
Review processes vary from event to event, but they sometimes will involve a preliminary “weed out” round to remove abstracts that are not well formed, did not follow event instructions, or are not well suited to the conference.
The purpose of this “weed out” round is to reduce the conference’s workload before serious review and ranking begins. Most of the time you don’t need to worry about the “weed out” round, but I’ve seen a few friends get rejected by conferences for not reading CFP instructions with special conditions like “Don’t refer to yourself by name in the abstract”, or because they submitted 1-hour talks for a 30-minute timeslot.
The “don’t refer to yourself by name in the abstract” condition might sound weird, but more and more conferences are doing either entirely blind CFP reviews, where reviewers don’t know who submitted a talk, or reviews where the first round is blind and final rankings are made once speaker information is available. The purpose of this is to reduce biases that might be preventing women and minorities from getting conference speaking slots, or to reward relative unknowns who are submitting great talks and competing with established speakers who are starting to coast.
During the actual review process, a panel of community reviewers – typically senior tech professionals in the area who are trusted by the conference – reviews each session and either gives it a numeric rating directly or ranks it relative to a few other random sessions from the conference. Reviewers will do that for every talk in a specific technology area, such as web development, databases, or data / AI.
Once all reviewers have reviewed sessions, the conference organizers have a body of numeric ratings for each talk they’re reviewing. Conferences then tend to take the topmost ranked sessions overall for the conference, or the top X talks from every track of the conference to form their agendas.
If a track has a lot of talks on a specific topic, it’s possible that a talk on an already-covered topic that ranked third might be skipped over for a lower-placed talk that deals with a different technology.
Additionally, conferences can only cover so much travel and hotel costs for speakers so many conferences like to select multiple talks from the same speaker in order to reduce schedule impact. This is part of why you should submit multiple talks to the same conference.
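To make the selection mechanics concrete, here's a rough Python sketch of the "top X talks per track" approach described above. The field names, rating scale, and sample data are all invented for illustration; real conference tooling varies widely.

```python
from statistics import mean

# Toy model of conference selection: average each talk's reviewer
# ratings, rank all talks, then cap how many are taken per track.
# Field names and the rating scale are illustrative assumptions.
def select_sessions(talks, per_track=5):
    ranked = sorted(talks, key=lambda t: mean(t["ratings"]), reverse=True)
    agenda, track_counts = [], {}
    for talk in ranked:
        track = talk["track"]
        if track_counts.get(track, 0) < per_track:
            agenda.append(talk)
            track_counts[track] = track_counts.get(track, 0) + 1
    return agenda

talks = [
    {"title": "Intro to Woodpecker.js", "track": "web", "ratings": [4, 5, 4]},
    {"title": "Advanced SQL", "track": "databases", "ratings": [3, 4, 4]},
    {"title": "CSS Grid Deep Dive", "track": "web", "ratings": [5, 5, 4]},
]
for talk in select_sessions(talks, per_track=1):
    print(talk["title"])  # CSS Grid Deep Dive, then Advanced SQL
```

Note how the second-best web talk loses out even though it outranks the databases talk overall, which is exactly the "skipped over for a lower-placed talk" effect described above.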
So, why do people like me rate sessions poorly?
There are a number of reasons I might rate a talk poorly:
In conference selection you usually get one shot to select speakers, and you want to select someone who’s going to give your conference and its attendees the attention and preparation they deserve. That means someone who’s going to follow directions and instructions, read emails from organizers, and put in the time and effort to deliver a great talk.
I do note length of abstract in a few of those points above along with the talk title. We’ll come back to those topics later.
For now, keep in mind that most of the time I don’t actively reject talks.
More often I’m comparing 3 talks to each other based on their merits and talks simply don’t rank as high as others because the other talks are more compelling.
So, what makes an abstract compelling?
I think a good abstract accomplishes a few key objectives:
You may notice that my focus here is almost entirely on the attendee and not on the session reviewer. This is because you write your abstracts for the attendee, not for the conference.
The conference’s job is to determine which abstracts will best serve their attendees. They often look at multiple competing abstracts on the same topic and can only accept a few, if any. Yours needs to more clearly communicate its unique approach and value to attendees in order to win among multiple competing abstracts.
Sometimes you’ll submit abstracts on niche topics that the organizers or attendees may not be familiar with. In those cases you need to communicate the basics of the technology, where it fits into people’s existing workflows, and why it’s worth exploring.
For example, I produce a lot of content on a technology called Polyglot Notebooks that lets .NET developers perform data science and data analytics experiments in Jupyter Notebooks using .NET.
Most .NET devs don’t know to look for this technology, so I have to “sell” the technology as something that helps dev teams communicate their APIs in interactive ways and allows them to perform regular analytics tasks in a shareable way.
Unique and niche topics aren’t bad to submit to conferences, but conferences can only accept so many non-mainstream topics, so your abstract will need to be especially polished and understandable – particularly since your session reviewers won’t be familiar with what you’re speaking on.
Now that we’ve covered the details of the CFP process, let’s walk through the structure I’ve found for writing my abstracts.
First, you start with a title. Consider the technology and approach you’re taking and come up with 3 titles. Then write those down in a document and come up with 12 more.
You need to consider a lot of different titles to come up with your best possible title.
For the sake of argument, let’s say that you want to share with the community a new JavaScript AI technology that magically detects and eradicates bugs in your code as you write it. We’ll call this fictitious technology Woodpecker.js.
Our first three titles might be:
All of these mention the core technology and the key benefit and all of them have a humorous bent to them.
We can also ask a large language model for ideas if we get stuck.
I sent Bing Chat the following prompt:
Give me 12 titles for a tech conference talk on a JavaScript framework called Woodpecker.js that detects and removes software bugs as you write your code. Titles should include Woodpecker in the title and be interesting to conference attendees. Sample titles include “Tired of bugs? Let woodpecker eat them as you code!”, “Woodpecker.js – It eats bugs so your users don’t have to.” and “Getting pecked to death by bugs? Woodpecker.js can help!”
Bing Chat gave me the following response:

Here we see 12 decent titles for the same abstract as well as 2 links to a similarly named CI tool called woodpecker-ci, an article on JavaScript conferences (that we could potentially submit this talk to), and a link to writing good abstracts.
I’m rather partial to “From Bugs to Breakfast: Woodpecker.js Devours Code Issues” so we’ll go with that and hope the conference doesn’t schedule us just before lunch or dinner when attendees are hungry.
Now that we have our title, let’s start our actual abstract.
I like to begin my abstracts with an introduction to the problem and technology. Focus on a need or challenge attendees have and introduce your topic as a potential solution.
For our abstract, our opening paragraph might read as follows:
Tired of bugs slipping through and reaching production? Is your AI copilot asleep at the helm? Woodpecker.js might be able to help. Woodpecker.js is a revolutionary technology that gleans insights by looking at multiple alternate realities in which your code went out as written, learns from the bugs that occur in those alternate worlds, and silently applies the fixes for those bugs to your code.
While this technology sounds too good to be true (and is, since it doesn’t exist), the paragraph establishes a need or challenge: bugs are reaching production and developers want to focus on development and not debugging.
The paragraph also assumes readers may not be familiar with the technology and briefly explains what it is and how it works at a very high level. This orients your readers to reading the rest of the abstract and doesn’t assume much knowledge for attendees.
Now that we’ve oriented the reader, our next task is to help them understand what we’ll cover and how we’ll cover it.
I usually do this with either a single middle paragraph or a short sentence and a list of 3 – 5 bullets. Let’s do the latter here:
In this talk we’ll show an interactive demo of Woodpecker.js and cover the following topics:
- Woodpecker.js in action: bug-proofing complex time zone and validation code
- How Woodpecker.js works
- Getting started with Woodpecker.js
- Securing your application by anticipating future breaches
- Overcoming key challenges, including explaining alternate realities to your CTO
While this clearly is a ridiculous technology (that I want – please someone invent this), this list of bullet points helps attendees and conference organizers imagine your talk before you give it. This is important because I’ve seen more than a handful of abstracts get rejected for reasons like “There’s no way you can cover all this in 50 minutes” or “This talk looks like it has about 10 minutes of content. I’m not making this a half-day workshop”.
Additionally, if attendees are already familiar with a topic, they might be on the fence about attending your session in the hopes that you’ll cover more advanced material. By outlining your high-level approach, you ensure attendees won’t be playing roulette on what you’ll cover and can make informed decisions. This protects you a little from people complaining about your session not meeting their needs or expectations.
Your final piece to the abstract is a short summary or call to action that recaps the key benefit and need of the talk.
In our case, that’s going to be as simple as the following short paragraph:
Come see how Woodpecker.js can help you leave bugs to the birds and focus on what truly matters: delivering value to your organization through secure bug-free code that meets the needs of your users.
With that in place, your abstract is complete and ready to submit.
After you send in your abstract (which you definitely did well before the last day of the CFP process, right?) the waiting begins.
For me, this is often the hardest part as I’m frequently checking my inbox as time goes on for those acceptances or rejections.
My best advice is to keep yourself busy with another project during the CFP review process, and to have a backup plan for if you’re rejected.
Preparing an accepted talk takes a lot of time and effort, so think about what you might do with that time if you weren’t selected for a talk.
In my case, I had a heartbreak a few years ago where a conference I really wanted to get into rejected a talk I really cared about and it stung.
I decided instead to spend that time building a small course on a topic of interest to me, and I sent out a tweet announcing both the disappointment and the new course project.
This brings up an important point: be gracious. It’s natural to want to blame organizers or be mad or vent, and these things are understandable, but I advise you to do them privately if you need to do them at all. Session reviewers and conference organizers are connected to their communities and tech is a small world sometimes. If you leave a bad taste in someone’s mouth from a post on socials, they might remember next year and it could be a deciding factor in reviewing your abstract.
Remember, conference organizers want reliable speakers who will do a good job.
Publicly complaining is unprofessional, but understandable, and may negatively impact your chances. In my case, it ended well because the conference had a number of speakers cancel and I was chosen as a back-up speaker to fill those slots. I graciously accepted, scrapped my planned course (which I’m now building a few years later, oddly enough), and delivered a talk I’m still proud of.
My closing thought for you is that even if you do everything correctly, you may not get picked. Things you view as critically important may be seen as less important by organizers, or you may be competing with a lot of other related topics.
Rejection happens, so learn from it, grow from it, and submit a diverse set of abstracts to give conferences many options for content of yours to consider and accept.
Finally, if you happen to be around the Columbus, Ohio area and want to give a talk at a user group, send me a note; I run one of our user groups, know most of the other organizers, and would love to help connect you.
The post Submitting conference abstracts that get accepted appeared first on The New Dev's Guide.
Previously I wrote on what it's like to write a tech book. In that article I deliberately glossed over the process of getting to the agreement to write the book. Let's talk about that process.
There are a very limited number of scenarios in which people talk to a publisher about a book project:
The first case is what people frequently think of, and what I recently went through this fall as I successfully pitched a book idea (more on this soon). In this scenario, you have an idea, you think people will like it, you've decided you don't want to self-publish, and you now want a publisher to work with you.
The second and third cases involve an acquisition editor reaching out to you about an opportunity they believe exists in the market that you might be able to write about. This is how I got to write my first technical book, Refactoring with C#.
Even if a publisher reaches out to you, you usually will have to fill out a formal book proposal.
In a future article I may expound on cases 2 and 3 more (or self-publishing for those who have interest in what I've found there), but for now, let's focus on pitching an idea to a publisher who isn't expecting you to do so.
The first step in pitching a book is to figure out who you want to pitch it to.
As many other writers have pointed out, take a look at the books on your bookshelf (or in your e-reader) and see who is publishing them. There's a good chance that publisher would be interested in your idea.
If you want to see your book on shelves in a bookstore, your list of publishers decreases significantly as most tech sections of bookstores focus either on mass audience tech books (non-programmers) or only shelf established hits from certain publishers.
Once you've identified 3 or so key publishers, I recommend you look over their websites for a "become an author" link. From there, you can reach out and pitch your idea. Oftentimes this will be a simple contact form where you can briefly identify yourself and your idea and ask if they'd like to hear more. Other times publishers will request that you fill out a full proposal and submit it for review.
If you are connected in the tech industry, you may know authors who have published with the publisher before. For example, I speak at a number of regional and local conferences and have gotten to know many speakers as a result. I also am active in technical circles online, which further develops these relationships. If you have enough trust and familiarity between you and an established author, you can ask them if they'd be willing to connect you to an acquisitions editor at the publisher they worked with. This can give you another avenue to get established and get additional information before submitting a formal proposal.
My general message when reaching out to publishers was something like the following:
Hi [Acquisitions editor name], I'm Matt Eland, an AI Specialist and consultant with several decades of experience in the industry and an established presence in the industry. I'm just finishing up my first technical book, Refactoring with C#, and I have a strong urge to do this again and was wondering if my interest areas might align with your needs.
I'm currently exploring options for my next project and would like to know if [publisher] has interest in discussing partnering on a book on X, Y, or Z in the near future.
This was friendly enough, gave them enough information without overwhelming them, and showed them my qualifications by giving them a set of areas I was focused on.
This in turn helped me identify needs that the acquisition editor was most interested in, which helped me craft the right book proposal to send to that publisher.
The tech book proposal differs from publisher to publisher but addresses the following key points:
The exact template and format of this will vary from publisher to publisher, but these are the major points you'll need to cover.
Keep in mind that publishers know what they've published before, what other publishers have published, and what projects are currently underway. If you pitch a project to a publisher that they published a few years ago, it will not go well unless you have a compelling reason why your project is different from that project.
The process of filling out this document will take you a significant amount of time, and the finished proposal will probably be 6 - 10 pages long or so, depending on the level of detail the publisher requests. Do not cut corners here or copy / paste from other documents you've filled out. Publishers will be able to recognize the competing format and it doesn't inspire confidence (I've heard horror stories from publishers at conferences before).
Some publishers will also ask for a writing sample, such as a link to blog posts you've written or even a sample chapter. I've personally not encountered the sample chapter requirement, so this may be a requirement that's going away, or this may be something that people ask for when they can't find any samples of your writing style online.
Once you've submitted a proposal, you'll need to wait some time before you hear back. This is part of why I recommend initially reaching out to a small handful of publishers, not just one; there's a lot of waiting in the process, and that's fine.
When you ultimately do hear back from a publisher, the response may be a flat decline, a request for clarifying details, a request for modifications, or an approval.
The declines sting, particularly if you were invested in a publisher. I still vividly remember the not-so-nice phrases that entered my mind when I got a simple "No thanks" reply from a publisher in response to an 8-page document I created just for them.
More generous "no" replies will tell you why they're declining the project (typically because they have a competing title or it's outside their focal areas), but you're not guaranteed to get this.
Sometimes a "no" will also come with a request for other ideas or, more frequently, an inquiry if you're able to write about a specific topic of interest to the publisher. I even had a no come in the form of a rejection of the target audience for the book and a request to retool the book for a different audience. That project was promising, but I ultimately decided that my ability to write at that level of expertise on an advanced topic wasn't a good fit for me at that time and decided to pursue things more likely to yield a good result.
When acquisition editors ask for more details it's usually because they didn't understand something in your proposal or because the proposal reminds them of a different book that struggled in a specific area or market.
If an acquisition editor doesn't ask you questions on your project, watch out! It could be that they don't care about it, aren't planning on offering you a fair contract, or are looking to get as many books out as possible.
Sometimes acquisition editors will ask you to add something to your proposal. This is usually either to improve search engine optimization (SEO) or to stay competitive with other books on the topic. In either case, you are allowed to counter with reasons why you didn't include that material or didn't feel it was relevant to your topic. Keep in mind that every chapter or section you add will mean additional weeks or months of writing, revising, and reviewing your content.
Most acquisition editors don't come from programming backgrounds, but these individuals know their areas well. Acquisition editors research things they see in book trends and in the industry, and they research things they encounter in book proposals. While your acquisition editor might not have written a line of code, they still deserve your respect as a researcher and SEO optimizer, but you may need to clarify misconceptions or assumptions they have about your areas of expertise.
On the topic of expertise, you do not need to know everything that is going to be in your book to submit the proposal. You need to know what is and isn't possible with technology, but you are allowed to deepen and broaden your knowledge while writing a book. I would propose that you should have somewhere between 85% and 98% of the knowledge you need to write the book during the proposal stage. The rest comes when you build out individual chapters and stretch into new areas.
For Refactoring with C#, I knew almost all of the material up front as it was from my own journeys as an engineer and engineering manager. However, I hadn't built a Roslyn Analyzer myself at the time of writing the book proposal. I'd studied a few and knew what they looked like and how they worked, but knew those chapters would be a learning journey for me. In this particular case, I wanted to learn this to develop my own skill and also fill a gap I saw in the market as someone who had searched for books on that topic before in the past.
Ultimately an acquisition editor is trying to find a market fit for your book and improve the proposal to the point where it can be approved during an editorial review meeting and green lit for production. Once that happens it moves on to the scheduling and contract phase.
Each publisher handles scheduling a little differently. Many publishers are interested in specific milestones such as when the first chapter will be done, when the first 3 chapters will be done, when the book reaches halfway, or when the book is fully complete.
Packt operates a little differently where they request page estimates during the proposal process and then extrapolate your writing speed based on the page count and hours per week to build a candidate schedule. They ask you to factor in work, vacation, travel, and other commitments, and that helps determine how long each specific chapter will take to reach draft phase and accepted phases at a per-chapter level.
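As a rough illustration of that extrapolation, here's a back-of-the-envelope sketch. The writing rate and weekly hours below are invented placeholder numbers, not Packt's actual figures.

```python
import math

# Rough sketch of extrapolating a draft schedule from page estimates.
# pages_per_hour and hours_per_week are assumed placeholder values;
# a real schedule would also subtract vacation, travel, and review time.
def chapter_weeks(page_estimate, pages_per_hour=1.5, hours_per_week=8):
    hours_needed = page_estimate / pages_per_hour
    return math.ceil(hours_needed / hours_per_week)

chapter_pages = [18, 24, 30, 22]  # hypothetical per-chapter estimates
total_weeks = sum(chapter_weeks(p) for p in chapter_pages)
print(f"Estimated draft schedule: {total_weeks} weeks")  # 9 weeks
```

The per-chapter granularity matters: rounding up chapter by chapter builds in a little slack, which mirrors how a publisher tracks draft and accepted milestones at the chapter level.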
Whatever you and your publisher determine, this will go into your contract. This sounds scary (and is a little scary), but the clauses it sits next to are generally related to reducing your advances on royalties if you're over 30 days late or potentially giving them the ability to cancel the project entirely if you're not making progress. Usually your publisher is more interested in maintaining active communication with you than strictly adhering to a plan.
It's important to note that each contract is different and I cannot offer you legal advice, so it is important to seek out someone who can.
Most book contracts will include a post-sale royalty percentage, an advance on royalties, and a schedule for when that money is distributed. Your percentage is likely to be between 10 and 20% from what I've observed. Some publishers also do sliding windows of percentages based on sales, so books that don't sell in high volume net you lower percentage rates than ones that sell more.
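To make the sliding-window idea concrete, here's a toy royalty calculation. The tier thresholds, rates, and per-copy net price are all invented for illustration; real contract terms vary widely.

```python
# Toy sliding-window royalty calculation. Tier thresholds, rates,
# and the net price are assumed example values, not real contract terms.
TIERS = [(5000, 0.10), (10000, 0.12), (float("inf"), 0.15)]

def royalties(copies_sold, net_price_per_copy):
    total, prior_threshold = 0.0, 0
    for threshold, rate in TIERS:
        if copies_sold <= prior_threshold:
            break
        copies_in_tier = min(copies_sold, threshold) - prior_threshold
        total += copies_in_tier * net_price_per_copy * rate
        prior_threshold = threshold
    return total

# 12,000 copies at a $20 net price:
# 5,000 @ 10% + 5,000 @ 12% + 2,000 @ 15% = $10,000 + $12,000 + $6,000
print(royalties(12000, 20.0))  # 28000.0
```

Running the numbers this way for a few sales scenarios is a useful sanity check before signing: a low-volume book sitting entirely in the bottom tier earns noticeably less per copy.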
Be wary of contracts on the lower scale of royalties and contracts that do not offer you much of an advance on sales. Having an advance not only guarantees a minimum income from the project, but it usually protects you from the publisher deciding to cancel the project without compensating you (they'll rarely want to do this if you're on schedule, but technology and markets change).
Watch out for clauses that talk about when they won't compensate you, such as if a book is bundled, heavily discounted, or used on another platform.
You have a right to have a lawyer look at your contract. You have a right to raise concerns with the publisher on clauses or overall compensation. You also have a right to walk away entirely.
This is a right I exercised a month ago when a publisher I previously respected offered me a contract starting at 10% royalty on a title without any advance and without compensation when bundling the book with others or discounting it. They also wanted to purchase the ability to sell the book through a subscription service without compensating me further for it.
I ultimately walked away from that deal, found another publisher, revised the abstract based on their feedback and based on new developments in the industry during that time period, and signed a much better deal with a publisher I trusted.
Ultimately, your book is going to take a lot of your time. You won't be fairly compensated for it at an hourly rate no matter what given the industry, but you should sign with a publisher you feel comfortable working with, because you'll be working with them for some time.
Keep in mind that self-publishing is entirely viable and doesn't carry the risks it once did with newer print-on-demand publishing models.
However, I've found that publishers are very valuable in terms of understanding the market and optimizing for it, providing valuable editing, proofreading, typesetting, and cover design services, and just generally being an accountability partner to make sure the book is progressing.
I know this was a longer post, but my hope is that it helps future authors out there. If you found something particularly interesting or have a question, please let me know and I'd love to chat with you.
The post Pitching a Tech Book to a Publisher appeared first on The New Dev's Guide.
The post Writing a Book with Packt appeared first on The New Dev's Guide.
This summer and fall I had the opportunity to write "Refactoring with C#", a technical book on C# development, through Packt Publishing.
When I considered doing this project in the spring there wasn't much out there on what it's like to be an author with Packt, so I thought I'd share my experience writing the book for those considering it in the future. Please note that this is focused just on the writing process. I'm deliberately not covering the proposal and outlining phases as I plan to write a separate article on that topic.
Also, you may be coming into this article with a negative opinion of Packt Publishing. I read a lot of technical books, including many from Packt, and can attest that they hit a slump a while ago with quality issues in their published books. I have to say that my experience with Packt was almost entirely positive - provided that you make the assumption that you as the author own the quality level of your own book. We'll talk more about this as we go on.
The drafting process for the book starts a week or two after the book's proposal and outline are approved and the contract is signed.
Packt provides a Word template that they expect you to follow. This template uses special format markers that match Packt's styles and formatting. These styles include special coloration to help communicate the style during the editing process and result in your markup looking very colorful:

Because Packt wants you to use these styles, you can't just use Ctrl + B to bold or Ctrl + I to italicize. I was surprised by how much this slowed me down until I found a workaround.
I eventually edited Packt's template to include keyboard shortcuts for the formats I used frequently. I then took those complex keyboard shortcuts and created buttons for them in my Stream Deck setup:

I then mounted the Stream Deck next to my monitor so it was always in reach and in my peripheral vision as I was writing:

This effectively "gamified" formatting and helped me format things as I went without needing to page through different Word settings to find the style I was looking for.
Before I found this productivity improvement I wrote my chapters in Markdown and then ported them over to Word. I stopped doing this after the first few chapters once I realized that adapting the formatting from Markdown to Packt's styles took longer than expected.
When I built the schedule for writing, I assumed that I'd be writing two or three days a week for two or three hours a session. I figured that if I focused on writing the code for a chapter first then the content for the chapter would flow easily from there.
This assumption largely turned out to be true, but the biggest thing that changed from my plans to the actual writing process was that I found I couldn't just write 2 or 3 nights a week. I found that the book crept into all of my creative processes and things flowed better if I worked on it at least 5 days each week.
This change had the positive effect that I was able to keep momentum more easily from night to night, but it also heightened the negative effects of writing a book as well. I found myself missing working on other learning projects or just playing around with interesting technologies. I also largely stopped playing games during this time, though I still found ways of resting and re-energizing myself.
Usually I was able to get a first draft of a chapter done every 10 days or so. The book topic was one I was well-acquainted with and didn't really require research, so when I was happy with the direction of a chapter's code, the rest of the chapter flowed easily around it.
The exception to this would be three of the early chapters in the book. I knew I had 3 large chapters focused on various refactoring techniques, but what made sense to me during the outline phase of the book didn't make as much sense when I went to write the book. I found myself constantly moving things between these three chapters. I eventually wound up moving and renaming the chapters themselves. The content I intended to put into the book made it, but the sequence and flow of that content changed.
Because I saw the content flowing between these three chapters, I told Packt I'd be delivering the three chapters together, instead of one-at-a-time as each chapter was initially drafted. This wound up being one of my smartest decisions in writing the book as it gave me greater freedom in making these chapters as good as I could. Also, to Packt's credit, they were very accommodating on this need and the resulting structural changes to the book versus what I had planned and what we agreed on during the outline phase.
The other thing I noticed about writing was that my estimations of page length for chapters and the actual page length were rarely very close. Images and code listings took up a lot of page height, resulting in higher page counts. Additionally, many of my chapters were built to be step-by-step tutorials with very explicit steps which added to the page count. I believe this was the right decision for early chapters, but it did increase the overall page count.
Adapting working code to code listings in a book is a non-trivial skill. You have a very limited width per line to work with and you also need to keep the overall length of the listings down so it can fit on a page and content can flow freely. Choosing short variable and method names that are clear to the reader is a skill. I also found myself adopting a different curly brace style than I normally program with in order to make an economical use of page space.
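To illustrate what I mean by brace style (this is a hypothetical example of mine, not a listing from the book), moving from the Allman style that's conventional in C# to a K&R-like style saves a line for every opening brace on the printed page:

```csharp
// Allman style - conventional in C#, but each opening brace costs a line
public decimal CalculateTotal(IEnumerable<decimal> prices)
{
    decimal total = 0;
    foreach (decimal price in prices)
    {
        total += price;
    }
    return total;
}

// K&R-like style - same code, four fewer lines on the page
public decimal CalculateTotal(IEnumerable<decimal> prices) {
    decimal total = 0;
    foreach (decimal price in prices) {
        total += price;
    }
    return total;
}
```

In a 17-chapter book full of listings, those saved lines add up quickly.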
Once you complete a chapter, you can email the Word document to your editor team or upload it to their author portal (which is what I did).
From there, Packt usually got first-draft feedback with edits back to me within 3 to 10 business days. These revisions often came in waves, with several arriving from the editor within a day or two. Having edits waiting for you while you're trying to draft a later chapter can add significantly to your stress level. However, when it came down to actually responding to edits, it was far easier than authoring the chapter to begin with.
Much of the editing feedback I got was on consistency of use of Packt's styles, consistent casing, and working on improving transitions between sections and chapters of the book. There was also some very valuable feedback in there on run-on sentences, overused words, and unnecessary adverbs.
The entire Packt team that I worked with was scattered across India. This meant that my US English writing style didn't always translate to Packt's editorial team. This helped me identify phrases that work in programming settings in America but may not translate well to international readers.
On working with an international team, I actually really enjoyed it. I live in US Eastern and wrote each night from 9:30 PM to midnight. Every Sunday night I'd give Packt a weekly report with plans for the next week. When I needed to I'd email them more frequently, but usually at the end of a night. I'd then start winding down for the night, and I'd often have a reply before I went to bed.
Often I'd wake up with messages from Packt in my inbox, which gave me something to think about as I started my day. I found this was a very effective way to work with my editor and it was often a treat for me to wake up and see what progress had been made on the book overnight.
Once a chapter is approved from editorial review it gets sent on to the technical reviewers. In my case I recommended a number of my friends from the Ohio technical community as technical reviewers for the book. This was one of the best decisions I made for the book's overall quality because I knew going into the project that these people were stellar. I knew my technical reviewers understood the topic and had great perspectives on .NET development.
Knowing my technical reviewers knew what they were talking about and were invested in making the book as good as it could be made them incredible allies in writing the book. They flagged things that were unclear, technologies I could mention, and occasionally things that didn't work for them or places where I wasn't fully correct.
The technical revisions section is also the last place you can freely make significant edits, so I used this in many cases to add in extra polish and make the chapter more cohesive with other chapters. The technical reviewer feedback was very helpful in this regard and I frequently found myself removing, adding, and modifying content in response to the suggestions from my reviewers.
I don't know what went wrong in Packt's poorer-quality books of a half decade ago, but I imagine one of the key things they were missing was world-class technical reviewers like I had in Calvin, Brad, Sam, Matthew, and Steve.
Once you pass off the book to Packt and all chapters hit technical review, the book enters the final phases.
During this phase Packt has copy editors and proofreaders look for issues in the chapters. If they have questions, your editor will get in touch and ask for clarifications on areas of confusion.
Once this process is complete, the chapter gets laid out by a typesetter and you'll get a PDF of the chapter in its potential final state.
I read this PDF on my tablet and marked it up in digital ink when I found issues. You will definitely find issues during this phase. The issues may involve readability, inconsistencies, casing or formatting, or the typesetting process itself.
I also found a few places where the copy editors made incorrect assumptions about the intent of a sentence or section and wound up accidentally inverting my meaning. This is why I say you have to own the quality of your book. Spend time proofreading it and be explicit in the changes you want to see based on what you found.
As I wrap up this article on writing a book with Packt, let's talk about what I wish was different.
I loved the process of writing the book and would gladly do so again. However, there were a few things that I would want to change about the process.
First, Packt doesn't have any sort of early access program like Manning or PragProg offer with their titles. Under an early access program, readers can read your content while it is still in development. This system allows you to get early feedback on a book, detect issues early, and start to make sales and gain a following while the book is still in development.
In fact, I didn't have a page I could link anyone to until the book had been finalized and its Amazon pre-release page and Packt's page for the book came out. This meant that I was speaking at conferences and user groups and doing other activities that would make it easy to market the book, but there was no action I could ask anyone to take if they were interested. I estimate that I talked to about five hundred people over the months I was writing the book, and there was nothing any of them could do but follow me for future updates.
The next major thing I'd change would be having a process that could generate chapter PDFs early on in the development cycle. Every time a chapter draft gets submitted, I'd like a PDF to be created for ease of reviewing and reading. This would make it easier to read the overall flow of the content instead of getting fixated on Packt's formatting markers.
This is also an idea I found in LeanPub's systems. I'm currently using markdown to write and self-publish a small book on Computer Vision on Azure through LeanPub. One of the things I love about this process is that I can write content in markdown and version it in git, then push it to my cloud repository and LeanPub automatically generates a PDF of the entire book for me to preview.
Having git-based version control for content would also have made it easier to catch content issues like copy edits that inverted my meaning.
Ultimately, I'm very glad I wrote this book and I'm glad I went with Packt. I plan to write with Packt again in the future, though I am curious about workflows at other publishers and have even explored some ideas with other publishers for content in the new year.
Stay tuned for more articles on writing Refactoring with C# as we approach the book's release on November 24th.
The post Writing a Book with Packt appeared first on The New Dev's Guide.
I'm pleased to share that I have completed my first technical book: "Refactoring with C#" with Packt Publishing. The book is in final preparation for publishing and goes live on Amazon on November 24th.

I've been a developer for several decades, I've worked with .NET since beta 2, and I've had the privilege of teaching .NET to new learners for the past 3 years. To me, this book was the perfect opportunity to continue to invest in those people as their careers progressed from junior to mid and beyond.
During my time in the workforce, I've worked in less-than-optimal codebases (and even created a few myself). As part of stabilizing and improving those organizations, I learned to safely refactor code in a reliable and repeatable way. Refactoring with C# is my opportunity to share these techniques and strategies with engineering leaders responsible for maintaining old codebases and budding new developers who find themselves working in these areas.
The book provided an opportunity to talk about the latest and greatest .NET and C# features as well as to share specific, actionable tips around libraries like Shouldly, Scientist .NET, Snapper, and more to give you the safety net you need to make major changes and improve the stability and maintainability of your code.
The book is broken up into 4 major parts:
The book is 17 chapters long and I had a blast writing it.
I wasn't alone in writing this book. In addition to Packt's editorial team, I had four amazing technical reviewers in Matthew Groves, Calvin Allen, Sam Gomez, and Brad Knowles. These individuals helped expand and contract the scope of each chapter, found a few issues in samples, and helped support the overall approach of the book.
Finally, Steve "Ardalis" Smith joined on as a foreword writer, but also provided a comprehensive technical review of the book with his own suggestions.
These individuals are all part of my regional tech community and share the same commitments I do in mentoring new developers and equipping seasoned developers with the tools and techniques they need to succeed.
Refactoring with C# releases on November 24th (Black Friday in the States) on Amazon in paperback and digital formats. It can also be found on Packt's website as well. The book was written against C# 12 and .NET 8 using Visual Studio 2022.
I hope you enjoy the book and it teaches you valuable new approaches to a critical aspect of software development. Please let me know what you think by leaving a review or getting in touch.
The post Announcing “Refactoring with C#” appeared first on The New Dev's Guide.
Polyglot Notebooks is a great way of running interactive code experiments mixed together with rich markdown documentation.
In this short article I want to introduce you to the #!time magic command and show you how you can easily measure the execution time of a block of code.
This can be helpful for understanding the rough performance characteristics of a block of code inside of your Polyglot Notebook.
In fact, we’ll use this to explore the programming concepts behind Big O notation and how code performance changes based on the number of items.

This article builds upon basic knowledge of Polyglot Notebooks so if you are not yet familiar with that, I highly recommend you read my Introducing Polyglot Notebooks article first.
Let’s start off here showing how the #!time magic command can be used to measure the execution time of a cell.
Before we do that, let’s define a number of items variable in a C# code cell:
// The number of items for our loop
int numItems = 100;
We can then reference this variable in another cell:
// This is constant Time O(1)
Console.WriteLine(numItems);
When we run this cell, we’ll see 100 printed out below the cell.
However, if you wanted to instead see the amount of time the cell took to execute, you could add the #!time magic command to the beginning of the code block as shown below:
#!time
// This is constant Time O(1)
Console.WriteLine(numItems);
Now when you run this cell you'll see the time it took the .NET Interactive kernel to run that cell. If you run the cell multiple times you'll likely see some variance in the results. These are a few values I saw:
There are a few things I note about this.
First, the first run tends to be slower than subsequent runs. That's normal in .NET in general due to just-in-time (JIT) compilation. I'm not sure if that's a factor under the hood for Polyglot Notebooks / .NET Interactive, but I wouldn't be surprised if it was.
Secondly, the wall time reported below the cell is typically different from, and more precise than, the cell execution time the notebook itself records.

This is where the real value of the #!time magic command comes into play: it records just the time of your .NET code and returns the time with a greater degree of precision than the cell's user interface displays.
Finally, note that any cell output is still rendered below the cell. The wall time indicator is just an additional piece of information available to you.
Now that we have the basics of the #!time magic command down, we can use it to explore the performance characteristics of different types of algorithms in code.
This article isn’t intended to be a broad introduction to Big O notation, and a proper exploration of Big O requires a class or two from a computer science curriculum, but here’s Big O notation in a nutshell:
Big O notation is a way of describing how the duration of an algorithm scales as the number of items the algorithm is acting upon increases.
The earlier code that called Console.WriteLine(numItems); is Big O(1), or constant time. It doesn't matter how large numItems is; it should take roughly the same amount of time to write its value whether numItems is 1 or 2.1 billion.
Here’s the full table of results I observed for various item levels:

As you can see, there are some natural fluctuations, but the performance level stays more or less the same as the number of items increases. This variation is natural in programming due to small shifts in CPU and memory utilization between runs.
We can represent this on a chart as a more or less constant performance level that doesn’t shift as the item count grows:

Let’s take a look at Big O(N) or linear time. This type of algorithm will have its time grow in a predictable and linear manner as numItems increases.
A simple example of Big O(N) or linear time is a simple for loop as shown below:
#!time
long sum = 0;
// Calculate the sum of all numbers from 0 to 1 less than numItems
for (int i = 0; i < numItems; i++)
{
sum += i;
}
Here our time measurements should follow a roughly linear line that increases as the number of items increases:

While this data has a certain degree of noise to it, once we pass the natural variation levels a linear trend emerges: the more items we process, the more the time taken grows, in a fairly linear and predictable manner as shown below:

Note: the charts in this article do not strictly match the data but represent typical Big O notation curves
Sometimes you need to have loops nested inside of other loops. This causes your execution time to grow at a quadratic rate.
We refer to this performance characteristic as Big O(N²) or quadratic time.
We generally try to avoid quadratic time if we can, but it’s not always possible. For example, sometimes you really need to look at combinations of every item with every other item.
Here’s some O(N²) code:
#!time
long sum = 0;
for (int i = 0; i < numItems; i++)
{
for (int j = 0; j < numItems; j++)
{
sum += i;
}
}
Note that for every item we are looping over all items again inside of its for loop. This results in the following performance characteristics:

When plotted on a graph, this quadratic level of growth quickly becomes clear:

As you can see, quadratic time should be avoided when possible, because even though it may perform faster than other routines at low item counts, large item counts will quickly impact its performance.
Logarithmic time, or Big O(log n), algorithms are more complex and typically involve dividing a problem in half recursively until a solution is reached.
This approach has a higher degree of complexity, but scales far better at larger levels than even Big O(N).
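Binary search over a sorted array is the classic example of this divide-in-half approach. It isn't one of the notebook cells from this article, but a minimal sketch looks like this:

```csharp
// Binary search halves the remaining search range on every iteration,
// so the number of iterations grows as O(log N)
int BinarySearch(int[] sorted, int target)
{
    int low = 0;
    int high = sorted.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2; // avoids overflow vs (low + high) / 2
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target)
            low = mid + 1;  // discard the lower half
        else
            high = mid - 1; // discard the upper half
    }
    return -1; // not found
}
```

Doubling the array size adds only one more iteration, which is why logarithmic algorithms scale so well.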
The following code doubles i every time through the loop, which should exhibit Big O(log N) performance.
#!time
long sum = 0;
for (int i = 1; i <= numItems; i *= 2)
{
sum += i;
}
This results in the following performance characteristics:


As you can see, the duration of the operations increases as the number of items grows, but it does so at a less aggressive rate than linear or quadratic time.
However, the inherent complexity of Log N operations may make them a less ideal choice than other algorithms if the item count is guaranteed to be small.
Big O is an intimidating concept for many new learners, but looking at simple code examples and being able to explore their performance using the #!time magic command in Polyglot Notebooks helps it become much more manageable in my experience.
I wouldn’t use the #!time magic command to get mission-critical measurements or comparisons of code - I'd use a profiling tool like Visual Studio Enterprise or DotTrace for that.
However, the #!time magic command is really handy for gaining quick insights of how your code performed at the time the cell was executed.
Using the measurements I was able to gather using the #!time magic command, I gathered enough data to be able to plot a fairly representative graph of Big O performance characteristics.

I don’t expect myself to use this command frequently, but when I need it, it’s nice to have it.
The post Exploring Big O Notation in Polyglot Notebooks appeared first on The New Dev's Guide.
My students frequently ask me how to get started in Unity game development. While I have many recommendations I give based on this question, including Unity's fantastic learning pathways, I don't often have a single book recommendation for them.
Recently I had the chance to take a look at Fanatical's Unity Game Development 3rd Edition book bundle and I wanted to cover the books that are in this bundle (because I've read most of them at this point) and some overall thoughts on the bundle itself.
Full disclaimer: I was provided the bundle for free, but am not otherwise being compensated by Fanatical, Packt, or the authors for my writeup here.
This is the minimal tier you get if you pay at least $1. This is likely worth it just for the first two books alone.
This book presents a guided project in building a simple game from the concept and design document stage to the final touches. Along the way it introduces most of the major pieces of the Unity 2021 editor, before concluding with a somewhat out-of-place chapter on augmented reality. Overall, I think this book is a good help for someone wanting to follow a step-by-step guide, though I wonder if a learning pathway or video course might be better for such an approach. Still, if you learn best by following guided steps and prefer books, this book will be a good fit for you.
This is the book I'd recommend to someone who hasn't programmed or hasn't done much programming before and wants to learn both Unity and C# programming. It covers the basics of the C# language and then transitions into discussions of major features of the Unity editor. This book was written for beginners and should be a good "first step" for many learners.
As you build your first game you will inevitably reach a point where you ask "How do I do X?". This book aims to help you with that by giving a broad set of categorized guides to common tasks, from moving to an object when the user clicks it, to animating specific body parts, to implementing a slight delay before playing a sound effect. This book can't address everything that you'll get stuck on, but it has a large volume of wisdom for the types of problems that disrupt many projects.
If you're looking at improving the graphical polish of your game, this book is a good fit as it walks through the processes of creating and customizing your own effects and shaders. This is a specialized discipline and this cookbook approach to common tasks will help you develop your skills in these areas while solving common tasks related to shaders and effects.
This tier includes everything in tier 1 and a few more key resources.
This is a book on programming design patterns applied to game development in Unity. I would recommend this book to anyone who feels their code is not structured well enough and is struggling with the complexity of what they've written, or for someone looking to grow their knowledge as a software developer in general and wanting to use Unity as a gateway for this.
I've read both the 3rd and 2nd edition of this book. This is a lovely book for someone trying to figure out why their game is slow or how their game's performance could be improved. Unity projects often seem fine early on and then struggle at scale as you have a larger number of objects and AI agents active in your game world. This book aims to give you tools to understand, diagnose, and resolve common performance problems (including problems with art assets) and is a good quick source of specialized knowledge.
A few years ago I was thinking about taking the Unity Certified Programmer exam and got this book as a reference resource. While I ultimately decided that the certification didn't suit my career goals, particularly given the renewal process, this book would have been my primary resource while preparing for the exam.
Even if you're not looking to get the Unity Certified Programmer certification, this book will help expand and deepen your Unity programming knowledge and help you move to the next level of comfort and experience.
If you're looking to do mobile development or support players using your games on mobile, this is a good targeted resource at the things you need to think about while doing Unity development for mobile devices. This is not an area I've explored much on my own, but the book covers Android and iOS development, mobile input, notifications, resolution concerns, and mobile-specific concerns such as ads and in-app purchases.
This is the 2020 edition of the book included in Tier 1. I don't much see the point of reading this instead of the newer one, but perhaps it's useful if you want to work with Unity 2020 instead of Unity 2021.
This is another prior edition of a book currently in this bundle. It's useful if you want to use Unity 2020 instead of Unity 2021, but otherwise I'd prefer the newer content.
This tier is the full bundle and includes tiers 1 and 2 as well.
Working on user interface development in Unity is radically different than most other types of development in Unity. I've personally found times when I needed to move from active game development to working on my user interface and I've witnessed how that took the wind out of my sails and brought me to a standstill. This book looks to help you solve common user interface development tasks in Unity. The book covers common user interface components and interactions as well as more advanced things such as mobile user interfaces and effects in the user interface and representing your user interface in your 3D game world.
Unity is perhaps best known for its use in 3D game development, but the engine is a capable one for building 2D games as well. I read the first edition of this book a while back and enjoyed it, and the 2nd edition seems to have grown and improved since then. The book follows a role playing game project, which is a fairly common use of a 2D game engine and should be helpful for a variety of people. Along the way it covers the user interface, effects, animations, and systems like a shopping cart and combat.
This book is an interesting collection of game prototypes from a variety of projects. It's a very helpful book for understanding how you might structure a new project and the major challenges you'll need to overcome in building your new game. It also carries two very helpful chapters on letting users save their games and understanding Unity's new user interface.
If you understand the basics of working with Unity and know the basics of the C# language, this book will help you deepen your knowledge by showing you ways of working with Unity-specific tasks such as cameras, scenes, and text assets. This book isn't necessarily for an advanced C# developer, but should help you get from beginner to intermediate. I particularly liked the chapter on artificial intelligence.
This is similar to Mastering Unity Scripting, but more task-focused on things you want to achieve. There are some chapters in here that are really interesting to me such as save game management and programmatically managing game music and ambiance. This is a relatively short book, but it is to the point and focuses on legitimate areas of interest to most developers instead of focusing on the C# language or scripting in general in Unity.
Overall, I really like this bundle. I do wish it had maybe a bit more on AI in Unity or Unity multiplayer, but I think this bundle of books is useful for someone taking their projects from the beginner to the intermediate level and getting past most common obstacles. At the cost of a bit over $15 at the moment, this is a bundle that should be worth it to purchase, even if you think you'll only get value from a book or two.
The post Unity GameDev Book Bundle Review appeared first on The New Dev's Guide.
In this article we'll take a look at how Mermaid.js can help you transform simple markdown into state diagrams suitable for illustrating a finite state machine, hierarchical state machine, or the standard complexities of software systems.
A year or two ago I built a small game prototype that featured a boss fight with a crab monster that was powered by a finite state machine. This monster waited for the player to enter its arena, then descended from the ceiling, roared a challenge, and began fighting the player.
The monster was only damageable after it finished descending. Taking enough damage would make the monster react in pain before it could attack again. Hurting the monster enough caused it to die.
So what is a finite state machine (FSM)?
A finite state machine is a set of inter-related states that reacts to events by moving between different states in a controlled manner.
In this example, the states the boss could be in included descending, attacking, reacting to pain, and dying.
This boss fight could be represented by the following Mermaid state machine:

In this state machine we start at the leftmost dark circle, move to the Descending state, and then move between states until we reach the Dead state and the double circle at the right edge of the diagram. Once we reach the final circle, the state machine terminates and is not evaluated further.
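Before turning to the diagram markdown, it may help to see how small a state machine like this is in code. This is a simplified, hypothetical C# sketch of the boss logic, not the actual prototype code:

```csharp
enum BossState { Descending, Attack, Pain, Dead }

class CrabBoss
{
    public BossState State { get; private set; } = BossState.Descending;

    // React to a damage event by moving between states in a controlled manner
    public void OnDamaged(int remainingHealth)
    {
        if (State == BossState.Descending) return; // not damageable while descending
        State = remainingHealth <= 0 ? BossState.Dead : BossState.Pain;
    }

    // Descending and Pain both return to attacking once their animations end
    public void OnAnimationFinished()
    {
        if (State == BossState.Descending || State == BossState.Pain)
        {
            State = BossState.Attack;
        }
    }
}
```

The enum lists the states and the methods encode the allowed transitions; anything not handled is simply ignored, which is what keeps the machine "controlled."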
You can build a state machine like the one above fairly easily with Mermaid.js and markdown.
To use Mermaid.js, you need a mermaid-compatible environment such as GitHub markdown, Polyglot Notebooks, the online live editor, or Obsidian. Once in that environment, you can begin a code block, specify the programming language as mermaid, and then enter markdown like the following:
stateDiagram-v2
[*] --> Descending
Descending --> Attack
Attack --> Pain
Pain --> Attack
Attack --> Dead
Pain --> Dead
Dead --> [*]
This markdown generates the following Mermaid.js Finite State Machine diagram:

Here we declare that we want a state diagram by specifying stateDiagram-v2.
Next we declare the various transitions between states by writing the name of the state and the state it can transition to. States may transition to multiple other states. For example, the Attack state may transition to Pain or to Dead.
The first state is represented by using [*] to the left of the arrow and the last state is represented by [*] to the right of the arrow.
Note that this diagram is identical to the one we saw earlier, except that it is arranged from top to bottom instead of from left to right. If you want to generate a left to right Mermaid.js finite state machine diagram, you can add the line direction LR after the stateDiagram-v2 line.
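For example, adding direction LR to the earlier markdown produces the horizontal layout:

```mermaid
stateDiagram-v2
    direction LR
    [*] --> Descending
    Descending --> Attack
    Attack --> Pain
    Pain --> Attack
    Attack --> Dead
    Pain --> Dead
    Dead --> [*]
```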
If you want to be explicit about the reasons for transferring from one state to another, you can add optional descriptions to each transition by adding a : and then the additional comments to the right of the relationship as shown with the following markdown:
stateDiagram-v2
[*] --> Descending : Player entered arena
Descending --> Attack : After roar animation
Attack --> Pain : Hurt a lot
Pain --> Attack : Finished animation
Attack --> Dying : Ran out of health
Pain --> Dying : Ran out of health
Dying --> [*] : After death animation

These labels tend to produce busier diagrams, but the extra text can add valuable information as well.
One problem with traditional finite state machines is that you can get an almost combinatorial explosion of relationships between states the more you add new states to your finite state machine.
To combat this, you can nest state machines inside of other state machines to create a hierarchy of sorts.
Due to the hierarchical nature of these state machines, we call these nested state machines hierarchical finite state machines or HFSMs for short.
Nesting finite state machines can make the different states much easier to manage while also making larger transitions more apparent as shown in the diagram below:

Declaring hierarchical finite state machines in Mermaid.js is somewhat straightforward, though the syntax involved is a bit different:
stateDiagram-v2
direction LR
state intro {
[*] --> Descending
Descending --> Roar
Roar --> [*]
}
state combat {
[*] --> Attacking
Attacking --> Pain
Pain --> Attacking
}
state defeated {
[*] --> Dying
Dying --> Dead
Dead --> [*]
}
[*] --> intro
intro --> combat
combat --> defeated
defeated --> [*]

Here we declare 3 large root-level states named intro, combat, and defeated.
Inside of each state we list the various states inside of that larger state and how they transition between each other.
We also list how the three states relate to one another at the bottom of the markdown. In this case the three states form a sequence, but states will often cycle between each other more frequently than this.
In the diagram above, each state linked to the state next in sequence, but you can also link to states inside of a parent state by mentioning them explicitly as shown in the following markdown:
stateDiagram-v2
direction LR
state intro {
Descending --> Roar
Roar --> Attacking
}
state combat {
Attacking --> Pain
Pain --> Attacking
}
state defeated {
Dying --> Dead
}
[*] --> Descending
combat --> Dying
Dead --> [*]
note left of combat: The boss is damageable in this state

Here the various states appear significantly simpler because we're relying less on [*] nodes to communicate state entry and exit and more on direct transitions between states.
This diagram does still have a transition from the entire combat state to the dying state within defeated. This is to indicate that any state inside of combat can transition directly to dying if it needs to.
Also note that you can declare a note to the left or right of any state to annotate things that need special attention.
Finally, it is possible to use hierarchical finite state machines and messages for transitions between states in the same diagram, though the result gets a bit messy:
``` mermaid
stateDiagram-v2
    direction LR
    state intro {
        Descending --> Roar : Movement Finished
        Roar --> Attacking : Animation Finished
    }
    state combat {
        Attacking --> Pain : Took Enough Damage
        Pain --> Attacking : Animation Finished
    }
    state defeated {
        Dying --> Dead : Animation Finished
    }
    [*] --> Descending : Spotted player
    combat --> Dying : Took enough damage
    Dead --> [*] : AI Stopped
    note left of combat: The boss is damageable in this state
```

I think that Mermaid.js finite state machine diagrams are pretty interesting and help convey the possible states a system or agent might be in.
State machine diagrams in Mermaid.js can do more than just emulate finite state machines and hierarchical finite state machines and I'd encourage you to read the Mermaid.js documentation for features such as decisions, forking, and even concurrency.
If you like some of the features of these diagrams but want additional flexibility, you may want to check out Mermaid.js flowcharts instead.
As for me, I plan on using Mermaid.js for a state diagram the next time I design an AI agent or system with enough complexity in its states and state transitions.
The post Diagramming Finite State Machines with Mermaid.js appeared first on The New Dev's Guide.
Mermaid.js is a powerful diagramming library built on JavaScript that can convert simple markdown into full diagrams. While it supports many common diagram types such as sequence diagrams, mind maps, and entity relationship diagrams, it also supports a few I've rarely seen before, including the SysML Requirement Diagram.
In this article we'll explore using Mermaid.js to create a basic SysML Requirement Diagram and also introduce requirement diagrams in general as we go.
Unlike the other articles I've written on Mermaid.js, I should note that I've not had the chance to use SysML Requirement Diagrams in the workplace, but after investigating them a bit more, I see some interesting potential in certain scenarios.
Because of my limited experience, my advice in this article will focus more on how to use the Mermaid.js tooling than on how to apply requirement diagrams to your work. If you'd like a more in-depth exploration of requirement diagrams in general, I recommend this article from Requirements Engineering Magazine.
The first thing we'll need to do to create a requirement diagram is start in an editor that supports Mermaid.js markdown. Polyglot Notebooks, GitHub markdown, Obsidian, and the Mermaid Live Editor all support requirement diagrams as of this writing.
Next, we'll start a diagram out with a single requirement for a dark theme:
``` mermaid
requirementDiagram

requirement dark_theme {
    id: 1
    text: We need darkness
    risk: low
    verifymethod: inspection
}
```

Here dark_theme is a requirement in the diagram and has an id, text, risk, and verifymethod associated with it.
id should be something that uniquely identifies the requirement across your various documentation.
text is additional contextual information about the requirement beyond its simple name.
risk represents the risk the requirement poses and must be one of low, medium, or high.
verifymethod governs how you plan on knowing the requirement is correctly fulfilled and is one of the following values:
- Analysis - analysis will determine that the requirement was correctly fulfilled. For example, traffic volumes & bounce rates.
- Demonstration - we should be able to demonstrate the requirement to a product owner or other stakeholder
- Inspection - detailed inspection of the requirement in its functional state should be able to mark it as correct or incorrect
- Test - a testing process can reveal flaws or correctness in the fulfilled requirement

Additionally, note the word requirement before the name of our requirement. This governs which type of requirement the element is.
Supported requirement types include:
- Requirement
- FunctionalRequirement
- InterfaceRequirement
- PerformanceRequirement
- PhysicalRequirement
- DesignConstraint

Now that we've covered how to compose an individual requirement, let's take a look at adding elements to our requirement diagrams in Mermaid.js.
In requirement diagrams you will often want to list specific implementations of something to associate them with various requirements they must meet and constraints they must satisfy.
These implementation parts are called elements and can be defined in a requirement diagram with slightly less syntax than we used for a full requirement:
``` mermaid
requirementDiagram

interfaceRequirement dark_theme {
    id: 1
    text: Dark Themes Rule!
    risk: low
    verifymethod: inspection
}

element revised_skin {
    type: css
    docRef: theme.css
}
```

Here we have an element named revised_skin that has only a pair of properties: type and docRef. These properties are plain text and can be whatever is appropriate to your solution.
Note that this code also changed dark_theme from a standard requirement to an interface requirement. This change isn't necessary, but it is more accurate for this example.
Now that we've shown how to create requirements and elements, let's take a look at how we can relate them to each other.
To relate requirements and elements to each other, you declare their names with a descriptive arrow between the two items as shown below:
revised_skin - satisfies -> dark_theme
This creates a relationship on the requirement diagram and uses the satisfies label to describe that relationship.
In Mermaid.js SysML Requirement Diagrams you must choose a label and that label must be one of the following options:
- contains
- copies
- derives
- satisfies
- verifies
- refines
- traces

Putting it all together and adding a number of shapes in the process, we get the following more complex requirement diagram:
``` mermaid
requirementDiagram

interfaceRequirement dark_theme {
    id: 1
    text: Dark Themes Rule!
    risk: low
    verifymethod: demonstration
}
performanceRequirement load_time {
    id: 2
    text: 200ms or less
    risk: medium
    verifymethod: test
}
functionalRequirement accessibility {
    id: 3
    text: Contrast
    risk: low
    verifymethod: inspection
}

element revised_skin {
    type: css
    docRef: theme.css
}
element perf_test {
    type: unit test
    docRef: LoadTest.cs
}

revised_skin - satisfies -> dark_theme
revised_skin - satisfies -> accessibility
revised_skin - satisfies -> load_time
perf_test - verifies -> load_time
```

Here we see that Mermaid.js lets us map out networks of requirements and elements. This lets us illustrate how we are verifying and fulfilling functional, design, and performance requirements in our software systems.
Requirement diagrams in Mermaid.js are interesting, but I'm not sure how much I personally want to use them, for a few key reasons:
Mermaid.js requirement diagrams are very opinionated about the properties each requirement can have, what is displayed, and what values are acceptable. This limits your ability to customize these charts to fit your organization's needs.
Additionally, Mermaid.js requirement diagrams frequently overflow the bounding box of the requirement rectangle as shown in a few places on the last diagram above. This results in diagrams that don't look very professional.
While I love the idea of a requirement diagram in Mermaid.js, I have a lot of trouble seeing how this would regularly fit into my workflow as opposed to representing requirements in a flowchart or even class diagram.
However, that's just my own opinion and I'd love to hear what you think about Mermaid.js SysML Requirement Diagrams.
The post Creating SysML Requirement Diagrams in Mermaid.js appeared first on The New Dev's Guide.
Earlier this week I wrote about using Mermaid.js to create Gantt charts that help you visualize tasks or major phases in a larger project. This can be great for detailed task analysis, but sometimes you just want to look at a high-level view of what's going on in a time period. Mermaid.js gives us Timeline Charts to help with that.
Timeline charts allow you to generate visuals like the following timeline showing the releases of .NET over the years:

In this article I'll walk you through the process of building this chart, step by step.
One note before I get into this article, however: at the time of this writing, Timeline charts are one of the newer features of Mermaid.js. That means that many tools won't yet be on a version of Mermaid.js that supports timelines.
For now, I recommend you use the Mermaid.js live editor to generate these charts until various tools update to be on Mermaid.js version 10.0.0 or later.
First, let's start by defining the major sections of time that exist in our timeline.
We can do this by adding a timeline root node that tells Mermaid.js to create a timeline chart and then adding a new line for every range of time that we want to exist as a column.
The timeline of .NET is fairly involved, and so I'm choosing to represent several ranges of time together in certain columns using the following markdown:
``` mermaid
timeline
    2000 - 2005
    2006 - 2009
    2010 - 2015
    2016 - 2017
    2018 - 2019
    2020
    2021
    2022
```
This generates a simple timeline with the time periods I defined:

It's important to note that Mermaid.js doesn't see these values as years or time ranges or anything else. These are just text categories that we can use to describe each column.
Next, let's add the raw entries to our timeline chart.
If you only have one entry in a timeline column, you can add it on a single line, such as:
2021 : .NET 6
Subsequent entries should be separated by a : so you could define several items on a single line like this:
2022 : .NET 7 : .NET Framework 4.8.1
However, I vastly prefer separating out each entry onto its own row for readability. This produces the following markdown and chart:
``` mermaid
timeline
    2000 - 2005
        : .NET Framework 1.0
        : .NET Framework 1.0 SP1
        : .NET Framework 1.0 SP2
        : .NET Framework 1.1
        : .NET Framework 1.0 SP3
        : .NET Framework 2.0
    2006 - 2009
        : .NET Framework 3.0
        : .NET Framework 3.5
        : .NET Framework 2.0 SP 1
        : .NET Framework 3.0 SP 1
        : .NET Framework 2.0 SP 2
        : .NET Framework 3.0 SP 2
        : .NET Framework 3.5 SP 1
    2010 - 2015
        : .NET Framework 4.0
        : .NET Framework 4.5
        : .NET Framework 4.5.1
        : .NET Framework 4.5.2
        : .NET Framework 4.6
        : .NET Framework 4.6.1
    2016 - 2017
        : .NET Core 1.0
        : .NET Core 1.1
        : .NET Framework 4.6.2
        : .NET Core 2.0
        : .NET Framework 4.7
        : .NET Framework 4.7.1
    2018 - 2019
        : .NET Core 2.1
        : .NET Core 2.2
        : .NET Framework 4.7.2
        : .NET Core 3.0
        : .NET Core 3.1
        : .NET Framework 4.8
    2020
        : .NET 5
    2021
        : .NET 6
    2022
        : .NET 7
        : .NET Framework 4.8.1
```

This timeline is already useful, but the colors don't convey much other than a gradual progression forwards in time.
You can group together multiple columns into a section to help convey meaning or relationships.
In our timeline, for example, .NET has really had three major phases in its life: the original .NET Framework, the cross-platform .NET Core era, and modern .NET (.NET 5 and beyond).
We can add section nodes to convey this in our diagram:
``` mermaid
timeline
    section .NET Framework
        2000 - 2005
            : .NET Framework 1.0
            : .NET Framework 1.0 SP1
            : .NET Framework 1.0 SP2
            : .NET Framework 1.1
            : .NET Framework 1.0 SP3
            : .NET Framework 2.0
        2006 - 2009
            : .NET Framework 3.0
            : .NET Framework 3.5
            : .NET Framework 2.0 SP 1
            : .NET Framework 3.0 SP 1
            : .NET Framework 2.0 SP 2
            : .NET Framework 3.0 SP 2
            : .NET Framework 3.5 SP 1
        2010 - 2015
            : .NET Framework 4.0
            : .NET Framework 4.5
            : .NET Framework 4.5.1
            : .NET Framework 4.5.2
            : .NET Framework 4.6
            : .NET Framework 4.6.1
    section .NET Core
        2016 - 2017
            : .NET Core 1.0
            : .NET Core 1.1
            : .NET Framework 4.6.2
            : .NET Core 2.0
            : .NET Framework 4.7
            : .NET Framework 4.7.1
        2018 - 2019
            : .NET Core 2.1
            : .NET Core 2.2
            : .NET Framework 4.7.2
            : .NET Core 3.0
            : .NET Core 3.1
            : .NET Framework 4.8
    section Modern .NET
        2020 : .NET 5
        2021 : .NET 6
        2022 : .NET 7
            : .NET Framework 4.8.1
```

This now quite clearly segments the 3 major phases of .NET into sections - at least at the header levels.
Our Mermaid.js timeline chart is doing quite well, but adding a title would help orient readers to what they're looking at.
We can do this in Mermaid by adding a title row to the beginning of the markdown as shown below:
``` mermaid
timeline
    title Major .NET Releases
    section .NET Framework
        2000 - 2005
            : .NET Framework 1.0
            : .NET Framework 1.0 SP1
            : .NET Framework 1.0 SP2
            : .NET Framework 1.1
            : .NET Framework 1.0 SP3
            : .NET Framework 2.0
        2006 - 2009
            : .NET Framework 3.0
            : .NET Framework 3.5
            : .NET Framework 2.0 SP 1
            : .NET Framework 3.0 SP 1
            : .NET Framework 2.0 SP 2
            : .NET Framework 3.0 SP 2
            : .NET Framework 3.5 SP 1
        2010 - 2015
            : .NET Framework 4.0
            : .NET Framework 4.5
            : .NET Framework 4.5.1
            : .NET Framework 4.5.2
            : .NET Framework 4.6
            : .NET Framework 4.6.1
    section .NET Core
        2016 - 2017
            : .NET Core 1.0
            : .NET Core 1.1
            : .NET Framework 4.6.2
            : .NET Core 2.0
            : .NET Framework 4.7
            : .NET Framework 4.7.1
        2018 - 2019
            : .NET Core 2.1
            : .NET Core 2.2
            : .NET Framework 4.7.2
            : .NET Core 3.0
            : .NET Core 3.1
            : .NET Framework 4.8
    section Modern .NET
        2020 : .NET 5
        2021 : .NET 6
        2022 : .NET 7
            : .NET Framework 4.8.1
```

And there we go. That's a nice and compact visual that illustrates the major releases of .NET over the last 20 years.
As you can see, Mermaid.js timeline charts are fairly simple, but can be useful for creating high-level timelines that break things down by buckets of time.
However, I could easily see timelines being used for other things, such as representing work items by status, resource assignments, or other categorical variables.
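Because the columns are just text categories rather than true dates, a status board is a natural fit. As a hypothetical sketch (the statuses and work items here are invented for illustration):

``` mermaid
timeline
    title Sprint Work Items
    section Planned
        To Do : Fix login bug
              : Update documentation
    section Active
        In Progress : Refactor auth service
    section Complete
        Done : Release v1.2
```

The same column and section mechanics apply; only the labels change.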
Whenever you want to organize things by sequential columns and just need a simple card to display, a Mermaid.js Timeline chart might be worth considering.
While Mermaid.js Timeline charts aren't supported everywhere yet, I encourage you to look into their documentation and watch as more integrations support these powerful little charts.
The post Creating Timeline Charts with Mermaid.js appeared first on The New Dev's Guide.
Mermaid.js is a powerful JavaScript library that can build a variety of charts and diagrams from a specialized flavor of markdown. While Mermaid.js supports many common and uncommon types of charts, perhaps the most frequently used type of chart it supports is the lowly flowchart.
Flowcharts are simple, flexible diagrams that connect different shapes together with arrows to paint a visual picture.

Flowcharts are often used to illustrate logical flows or decision-making processes and are frequently used in software engineering to show data or communication flows in software systems.
To show you what Mermaid.js can do with flowcharts, let's take a look at building out a simple flowchart illustrating a REST request that flows from the client to the server, is fulfilled by the database, and then returns back to the client.
We'll start first by creating a flowchart in markdown and defining the three shapes we'll want on our flowchart.
The markdown for this looks as follows:
``` mermaid
flowchart
    Client
    Server
    Database
```
When rendered in a markdown viewer that supports Mermaid.js, this markdown displays the following diagram:

There are a growing number of places that support Mermaid.js diagrams including GitHub markdown, Polyglot Notebooks, and the Mermaid.js live editor. You can also import Mermaid.js to transform markdown on your webpage into diagrams.
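That last option deserves a quick sketch. Assuming a plain web page, embedding Mermaid.js looks roughly like this (the CDN URL and pinned version are an assumption; use whatever version and source fit your project):

```html
<!-- Diagram source goes in an element with the "mermaid" class -->
<pre class="mermaid">
flowchart LR
    Client --> Server
</pre>

<script type="module">
  // Load Mermaid as an ES module from a CDN (hypothetical pinned version)
  import mermaid from "https://cdn.jsdelivr.net/npm/mermaid@10/dist/mermaid.esm.min.mjs";
  // startOnLoad finds "mermaid" elements and replaces them with rendered SVG
  mermaid.initialize({ startOnLoad: true });
</script>
```

With that in place, any markdown-style diagram text on the page is rendered client-side when it loads.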
Because a flowchart without any relationship lines is fairly useless, let's see how Mermaid.js allows us to define relationships.
With Mermaid.js you simply define a relationship in markdown with --> between the two connected shapes and the diagram takes care of the rest.
Here's our flowchart with a pair of relationships:
``` mermaid
flowchart
    Client --> Server
    Server --> Database
```

This is getting there, but most client / server communication diagrams are arranged from left to right. Mermaid.js lets us make that tweak by stating flowchart LR:
``` mermaid
flowchart LR
    Client --> Server
    Server --> Database
```

Great! Now we have the communications from the client to the server and from the server to the database, but it'd be nice to represent the responses as well.
With Mermaid.js you can have multiple connections to each shape if it makes sense. Additionally, Mermaid.js has a variety of connector styles, including -.-> to represent a dotted arrow instead of a solid arrow.
``` mermaid
flowchart LR
    Client --> Server
    Server --> Database
    Database -.-> Server
    Server -.-> Client
```

Now the communications are a lot more explicit, but it'd be nice to label the contents of each message.
Note: if your primary focus is communications between systems, you may want to check out using Mermaid.js for sequence diagrams. Alternatively, if your focus is the data, you should investigate entity relationship diagrams.
Mermaid.js allows you to provide text over each relationship line if you'd like by typing the text in the middle of the connector arrow.
Additionally, because Mermaid.js treats every distinct spelling as a distinct shape, inconsistent spelling silently creates duplicate nodes. To guard against this, you may want to define your shapes at the beginning of a flowchart and give them aliases to reduce typing in your markdown.
The code below aliases the client as c, the server as s, and the database as db. Additionally, each relationship is now given text to illustrate what flows between systems.
``` mermaid
flowchart LR
    c[Client]
    s[Server]
    db[Database]

    c -- HTTP GET --> s
    s -- SQL Query --> db
    db -. Result Set .-> s
    s -. JSON .-> c
```

This diagram is now significantly more helpful and if we wanted to rename a shape, we only need to rename it in the line that defines the shape initially.
Mermaid.js supports more than just solid and dashed arrows.
Below are a variety of connector types that Mermaid.js currently supports. Additionally, I have a line that shows you how you can define multiple relationships in a single line via chaining:
``` mermaid
flowchart LR
    Base --> Arrow
    Base ==> Heavy
    Base -.-> Dotted
    Base --- Line
    Base --> You --> Can --> Chain --> Relations --> On --> One --- Line
```

Note: Line is mentioned twice on the last two lines of the markdown, which explains why two connectors point to it.
Our client / server communication diagram looks pretty good, but most programmers I've worked with draw their databases as drums on diagrams.
Mermaid.js allows you to customize shapes if you'd like and helpfully includes a database shape via [(Name)] syntax.
The code below customizes the shape of the database element on its definition line:
``` mermaid
flowchart LR
    c[Client]
    s[Server]
    db[(Database)]

    c -- HTTP GET --> s
    s -- SQL Query --> db
    db -. Result Set .-> s
    s -. JSON .-> c
```

As you might expect from a flowchart library, Mermaid.js offers a large number of custom shapes including these below:
``` mermaid
flowchart
    a[Default]
    b([Rounded])
    c[(Database)]
    d[[Subroutine]]
    e((Circle))
    f>Note]
    g{Decision}
    h{{Hexagon}}
    i[/Parallelogram/]
    j(((Double Circle)))
```

For a more complete list of shapes, I recommend you view the Mermaid.js flowchart documentation.
While we already have a perfectly usable diagram, we could refine it further by illustrating where each part of our application is hosted.
In Mermaid.js you can add sections or groups to your flowcharts to visually group related elements.
Mermaid.js calls these groupings of elements "subgraphs" and allows you to define them by stating subgraph Graph Name to start the group and end to end it.
Here's our graph that illustrates that the client is hosted on Netlify and the server and database are running on Azure:
``` mermaid
flowchart LR
    subgraph Azure
        s[Server]
        db[(Database)]
    end
    subgraph Netlify
        c[Client]
    end

    c -- HTTP GET --> s
    s -. JSON .-> c
    db -. Result Set .-> s
    s -- SQL Query --> db
```

The diagram above makes this look easy, but I had to try a number of different orderings before Mermaid.js organized the groups the way I wanted.
Once you use subgraphs, you will likely need to do some additional tweaks to help your diagram layout meet your needs.
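One such tweak worth knowing is that a subgraph can declare its own direction, which sometimes coaxes the layout engine into a better arrangement. A minimal sketch, reusing the shapes from the diagram above:

``` mermaid
flowchart LR
    subgraph Azure
        direction TB
        s[Server]
        db[(Database)]
    end
    c[Client] --> s
    s --> db
```

Here the overall chart flows left to right while the nodes inside the Azure group stack top to bottom.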
Like Mermaid.js mind maps, flowcharts can use icons to improve their readability or visual appeal.
If you (or the tool you are using) have already imported Font Awesome, you can specify an icon for each shape via the fa: prefix followed by the name of your icon, as shown below:
``` mermaid
flowchart LR
    subgraph Azure
        direction LR
        s[fa:fa-code Server]
        db[(fa:fa-table Database)]
    end
    subgraph Netlify
        c[fa:fa-user Client]
    end

    c -- HTTP GET --> s
    s -- SQL Query --> db
    db -. Result Set .-> s
    s -. JSON .-> c
```

While I don't view these icons as particularly helpful in the above example, I certainly could see other diagramming cases that could benefit from adding iconography.
Mermaid.js flowcharts are simple, functional, and efficient.
Beyond helping you generate visuals, one of the key advantages of Mermaid.js flowcharts is that they are easy to embed in markdown documents as raw markdown. This lets others see your diagrams and easily make additions or corrections as systems inevitably change.
Finally, since these diagrams are stored as markdown, they are trivially easy to store in a version control system.
I personally plan on using Mermaid.js flowcharts for quick, high-level system architecture diagrams going forward. They may not have the full visual polish you might want for a formal presentation, but they are simple, accessible, and powerful.
The post How to Make Flowcharts with Mermaid.js appeared first on The New Dev's Guide.