Pedro Piñera Open and human tech craftsmanship from Berlin https://pepicrft.me/images/avatar.jpeg 2025-05-12T00:00:00+00:00 Pedro Piñera [email protected] https://pepicrft.me/ Navigating anxiety https://pepicrft.me/blog/2025/05/12/navigating-anxiety 2025-05-12T00:00:00+00:00 2025-05-12T00:00:00+00:00 <![CDATA[

I can’t pinpoint exactly when and how it started happening, but I’ve been having bursts of anxiety that I’m still learning how to navigate.

I think the first event that led to this was the layoff at Shopify. I had been trapped in the climb-the-ladder model, had invested a lot of time and emotional capital, and had gained momentum that was suddenly interrupted. As I processed those emotions, I realized that I had been approaching burnout at the company, so on the positive side, I’m glad it happened before it was too late.

The second piece was starting a company. It’s a very vulnerable position: you are a tiny fish in an ocean of business whose dynamics are nothing like what you might have read in books or on the internet. Programming is more logical: you’ve got a problem, so you write software to solve it. In business, you are invaded by questions and insecurities: Is this the right decision? Should we be jumping on that trend? What if the business doesn’t take off? Logically, I can answer those questions. Emotionally, I’m not prepared to process them, which makes things uncomfortable at times. Other emotions, like disappointment with some untold realities, blend in in weird ways. I’d love to eventually write a book about this transition, because as a developer you are barely told about many of the realities that unfold.

The last piece, which I did not expect, was an accident while running that led to torn ligaments and, worst of all, damage to the peroneal nerve, which left one of my feet disabled. I spent one of the most vulnerable moments of my life navigating the German health system, trying to find a doctor who could explain the problem and provide answers, and at times being treated in not very humane ways, in some cases being made to feel responsible for the accident because of my weight. It was rough, and even though I eventually got the surgery (I had to pay for it), fears about whether this will recover keep coming back, because every day I see my foot not responding to my instructions. If you’ve ever had a nerve injury, you probably know what I’m talking about.

So these three things, mixed with a sprinkle of social-network dynamics that favor competition, radicalization, and overall negative emotions, leave me ending my days anxious and mentally and emotionally exhausted.

To fight back, I’m trying to slow down. I’m minimizing mental concurrency and the time I spend on social networks like LinkedIn, whose dynamics I’m not comfortable with. I’m also learning to be okay with not being able to chew everything that comes my way, and giving myself time and space to be present with myself: to go out and do analog activities that have nothing to do with business or coding, like taking photographs. I’m trying to lean on the human and social side of things, being supportive rather than competitive. Being human at Tuist is what makes us so unique, and I believe it’s what will lead to the type of company we want to shape. I’m also accepting that life sometimes sucks, and that some dynamics, even though I disagree with them, like how the layoffs happened at Shopify or the accident while jogging, are simply part of life; what I had experienced until then was the unusual part. It takes time, but I’m getting there. I just came back from the gym, made myself a coffee, and gave myself some time to write these lines. How is your Monday starting?

]]>
<![CDATA[My personal journey through anxiety after experiencing layoffs, entrepreneurship challenges, and a serious injury - and how I'm learning to cope.]]>
Compiler optimizations or speed of delivery https://pepicrft.me/blog/2025/05/06/compilers 2025-05-06T00:00:00+00:00 2025-05-06T00:00:00+00:00 <![CDATA[

Compilers turn code into binaries, optimizing it for the target architecture. In ecosystems like Apple’s, this can mean the difference between a MacBook’s battery lasting an hour—or several. But the compiler can also determine whether we ship code quickly or slowly. And in today’s world, where everything moves fast and businesses must keep up, the compiler can easily become a bottleneck.

Talk to engineers, and you’ll hear excitement about new language capabilities—features that, while powerful, often add build-time complexity and degrade performance. Swift Macros are a perfect example of this. It’s exciting to see the language explore new directions, but if that comes at the cost of slower compilation and unreliable incremental builds, it feels wrong. Executives feel this too. If you maintain a large Xcode codebase and compare your team’s velocity to a web team’s, the difference can feel like an order of magnitude. So it’s no surprise when leadership asks engineering teams to explore React Native. When an app is merely a native presentation layer, abstracting the platform to gain speed becomes a necessity.

Now, add to the compile-time slowness the strong coupling between the compiler and a macOS host, and you’ve got a recipe for disaster. New LLM-based app development experiences are surfacing this problem more visibly than ever. Developers are expecting Apple to respond—but enabling meaningful improvements takes a long-term vision and multi-year effort. That’s something Apple seems to have lost in recent years.

They forked the web ecosystem and actively hindered it—yet the web kept building momentum. Now that momentum is paying off. I don’t know if we’ll end up “vibe coding” our apps with AI prompts, but it’s clear the way we build apps is going to change. And we can’t afford the slow compilation and feedback cycles that Apple imposes. For many new projects, React Native will be the default. You type a prompt, get your skeleton, and you’re coding in seconds.

I love Swift, and I enjoy being close to the platform—it means fewer abstractions. But I fully understand why organizations crave abstraction. And I’d love to see Apple respond to that. They have the resources to make it happen.

I wish they would acknowledge that they’ve lost their north star—but that they’re working on a strategy. Right now, it feels like they’re trying to push Swift in every possible direction while the core problems remain unresolved. How did it take so many years to address frequent merge conflicts? Will it take another decade to automatically copy dynamic binaries to the right destination in the build graph?

Or will they finally define and communicate a plan for the future of build tooling? Are they even thinking about this? If so, why not talk about it publicly? I’d love to follow along.

All of this presents great opportunities for Tuist. But I’d much rather our work felt like we were extending a solid platform—instead of hacking our way around its shortcomings.

]]>
<![CDATA[The Swift compiler optimizations are becoming so costly that Apple needs to rethink its approach to the build system.]]>
Remote macOS Containers as a Service https://pepicrft.me/blog/2025/05/05/commoditization-of-macos-containers 2025-05-05T00:00:00+00:00 2025-05-05T00:00:00+00:00 <![CDATA[

I find it somewhat ironic that mobile CI companies that dominated the mobile automation space, amassed VC capital, and built the muscle to manage macOS environments see squeezing every dollar from their customers as their only way forward, while the rest of the industry could push for innovation if the management of macOS containers became cheaper and easier. There’s never been a better example of The Innovator’s Dilemma than this one.

That’s why I’m so excited about all the creative energy going into the space with projects like Tart, Lume, or Curie. Sure, there’s still a long way to go, but incremental steps are being taken towards that commoditization. Even Cloudflare announced Linux containers as part of its Workers platform. Ideally, we move to a world where getting a virtualized environment is just one API call away.

I remember that during my time at Shopify, when we had calls with the MacStadium team, my recurring feedback was: you should productize your offering and make it developer-friendly. Many developer tools in the space, including Tuist, would pay for some kind of “Remote macOS Containers as a Service.” Many developer tools require teams to lean on their CI/CD pipelines to build and push products. Imagine if you didn’t have to do that. With Tuist, we want to move to a world where you can create a preview or a release right from your phone. And for that, we need quick and cheap access to macOS environments.

I’d be lying if I told you that we aren’t already thinking about this and starting to put some work into making it happen. We are, and we want to offer that investment to our users so that organizations can reduce their costs down to just the infrastructure costs. It’s simple: you provide your AWS or Scaleway API keys, and we provision CI runners, automatically or manually, and plug them directly into your GitHub or GitLab organizations. Those are the same environments we can reuse to build your releases and previews and sign your apps. No more YAMLs and complex automations. This is the world we want to enable with Tuist.
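As a sketch of what that bring-your-own-cloud flow could look like: everything below is a hypothetical illustration, and none of these names, providers, or fields are a real Tuist API.

```python
# Hypothetical sketch of the bring-your-own-cloud flow described above.
# The function, provider names, and fields are illustrative assumptions,
# not a real Tuist API.

def plan_runners(provider: str, instance_type: str, count: int) -> dict:
    """Describe CI runners to provision inside the customer's own cloud account."""
    supported = {"aws", "scaleway"}
    if provider not in supported:
        raise ValueError(f"unsupported provider: {provider}")
    return {
        "provider": provider,
        "instance_type": instance_type,
        "count": count,
        # Once provisioned, runners would register against the org's
        # forge of choice, e.g. GitHub or GitLab:
        "register_with": "github",
    }

plan = plan_runners("scaleway", "mac-m2-pro", count=2)
print(plan)
```

The point of the sketch is the shape of the experience: the customer supplies only credentials and a desired capacity, and everything else (provisioning, registration, reuse for releases and signing) happens behind one call.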

BuddyBuild was moving in that direction until Apple acquired them and decided that what the industry needed was CI embedded into Xcode. Once we complete the current cycle, we’ll start exploring this, along with content-addressable stores. This is a muscle we need to build, because it’ll bring the ecosystem closer to the developer experiences other ecosystems already have access to. And we’ll advocate for any open-source solution that gets us closer to enabling that future.

This post was written by a human, with its grammar reviewed by Grammarly.

]]>
<![CDATA[If we want more innovation in the mobile developer tooling space, we need to commoditize macOS containers.]]>
~AI~Vibe-first businesses https://pepicrft.me/blog/2025/04/30/vibe-first-businesses 2025-04-30T00:00:00+00:00 2025-04-30T00:00:00+00:00 <![CDATA[

Businesses are about maximizing the tradeoff between revenue and cost. They generate revenue by producing value in a market, and the tech industry, like no other, can do so at much lower cost thanks to the close-to-zero marginal cost of software.

However, markets have a ceiling, and even though software has close to zero marginal costs, your software business might require humans, who are expensive—very expensive. And this is something hyper-growth business models are incompatible with.

Executives at major tech companies have found the perfect tool to work around the problem: AI. “We are AI-first companies,” they say. Often it’s used to eliminate human costs, such as support teams. In other cases, like Duolingo’s, it’s used to break through the market ceiling. “We need AI to scale learning,” they said. What they really meant is that they need AI to dropship slop courses and keep exploiting people’s feeling of learning progress as they complete streaks.

If you think you are indispensable in this new mission of AI-first companies, let me tell you something: it is mostly storytelling. You are digging your grave because you are a cost to the company, and your next performance review will be in the hands of LLMs. “Let us know why you can’t do this with AI” is the new mantra. This continues with a “don’t worry, this is about empowering you” so that you don’t wake up from the dream of the company’s mission. Once again, the ultimate goal of a company is to maximize profit; the ultimate goal of a hyper-growth company is to do so at all costs, including your empowerment or well-being.

Good luck to those who want to create companies where humans are dispensable. The challenges we face require more humans and more collaboration, not less collaboration that benefits only a few. And yes, that means we can’t work 24 hours a day, 365 days a year. So what? If this is the new trend, I’m not in that camp, sorry.

This post was written by a human, with its grammar reviewed by Grammarly.

]]>
<![CDATA[In this blog post I share my thoughts on the trend of AI-first companies, and what I believe they really mean by it.]]>
Full-time entertainers https://pepicrft.me/blog/2025/04/29/full-time-entertainers 2025-04-29T00:00:00+00:00 2025-04-29T00:00:00+00:00 <![CDATA[

Social networks turned me into a full-time entertainer and a full-time entertained person. They managed to detach me from the real world, keeping me busy thinking about how to entertain people or starving to stay up to date, because boredom became uncomfortable. This led to peaks of anxiety I had never experienced before, and a lack of joy in things that used to be enjoyable, because I started doing them just to talk about them. I tried to escape many times, but I always fell back, the way addicts do with drugs. I became addicted to those dynamics.

The thing is, I’m aware that this is unhealthy. I don’t like it. But there’s this idea that I’ve built a brand, either mine or Tuist’s, that would fade if I’m not active where everyone is. It’s a bit nonsensical, because on those platforms we have no control over how we are presented. A brand might even deteriorate there, because you might end up morphing it into a meme-posting machine. Still, that thought is recurrent, and I believe it’s the trigger that brings me back, along with the idea that people are talking about the new thing. So what? Five new vibe-coding tools, a handful of add-AI-to-X tools, and a couple of new JS frameworks... again, so what? I don’t want to sound cynical. I like tech, but a big part of the industry is hype cycles, because the tech industry is technology embedded in capitalism, and therefore it needs to keep reinventing itself. Today you are generating Studio Ghibli images, tomorrow you’ll be generating your TikTok dances, the day after you’ll be an expert in the process of electing the new Pope, and in two weeks you’ll show your expertise in power outages in Spain.

I’m happy we decided this is not the game we want to play with Tuist. It’s stressful, tiring, and takes the focus away from the joy of the craft, which is what we love. It feels uncomfortable at times. Should we be doing what the others do? But at the end of the day, we don’t control those algorithms. They control us and make us part of the exploiting machinery, ensuring there’s stuff to feed people with. We are placing our focus on our blog: writing from passion for what we love, and going deep into those topics. Embracing standards so that people can pull the content when and how they like. If we are present on any social platform, we try to be on the ones we think haven’t “enshittified” yet.

I’m trying again. I want to be present. I want to remain mentally healthy. I want to shift the focus from the telling to the doing. I don’t want to feel like a zombie hamster starving for “new” stuff, or feel that I have to contribute to keep the other hamsters fed too.

]]>
<![CDATA[I'm tired of social network dynamics, and I'm working towards jumping off the train (again).]]>
Inviting ecosystems https://pepicrft.me/blog/2025/04/08/inviting-ecosystems 2025-04-08T00:00:00+00:00 2025-04-08T00:00:00+00:00 <![CDATA[

I’m a firm believer that a diverse ecosystem surrounding a particular platform correlates with its level of innovation, and the web is a prime example of this.

From highly collaborative apps to technologies that transform web browsers into virtualization platforms, there are countless people building for the web and pushing it in myriad directions. This diversity can sometimes be a drawback—especially if you’re trying to build a company—but it lays the groundwork for unprecedented innovation. People feel invited and inspired to tinker with new ideas.

Apple, too, built an ecosystem—an ecosystem of apps—and invited developers to join. They provided tools, frameworks, and a language, creating a foundation upon which developers could build their apps. Innovation flourished at the application level. This approach worked well for many years while Apple focused on expanding its hardware offerings and, more recently, its services. However, it never felt truly inviting, and it still doesn’t.

First, it excludes those who can’t afford the hardware and prefer platforms like the web, which are universally available and accessible from anywhere. Second, it alienates developers who don’t want their capabilities limited by proprietary tools, frameworks, and even an open-source language that remains tightly controlled. They prefer platforms offering greater freedom. Finally, it distances those who don’t align with Apple’s mission and view contributing to its open-source components as indirectly supporting Apple itself. They don’t see these building blocks as true commons.

This is creativity left untapped—creativity that could have advanced Apple’s build system, making it more deterministic and optimizable. It’s people who might have better understood the issues with LLDB and proposed solutions to ensure developers can effectively debug their apps, or who could have taken SwiftUI to other platforms by integrating it with native rendering technologies. While we’ve seen sporadic efforts from individuals trying to drive change, these initiatives often fade because they ultimately depend on Apple’s approval—a “yes” or “maybe” that requires internal discussion and rarely materializes.

I believe Apple’s current struggle to keep pace with innovation is a direct consequence of this missing creative energy. Their ecosystem can no longer ignore the outside world or the advancements happening beyond its walls, and developers naturally expect a response from Apple that never arrives.

Some suggest that throwing money at the problem might be a solution, and while that could yield short-term gains, Apple would soon find itself back in the same position. The momentum of more inviting ecosystems compounds over time in ways money alone can’t replicate.

I firmly believe Apple needs a mindset shift to make its environment more welcoming to this creative energy. If they succeeded, people would not only work to advance the ecosystem but also become stewards of it, much like Apple itself. Recent job postings from Apple seeking engineers to focus on Windows are a promising first step, but I’d love to see an approach centered on inspiring people to take Swift to Windows. What motivates Windows developers? What drives them to contribute to Swift in the context of Windows every day?

Microsoft nailed this with VSCode and TypeScript. They built an ecosystem that sparked countless ideas and laid the foundation for the AI-based code editors we enjoy today. It took recognizing that open source is an aspiration for many and that showing you care can profoundly shift how people perceive you.

This creative energy is absent from Apple’s ecosystem due to its closed, proprietary nature and the tight coupling of its toolchain, paired with Apple’s firm control over the direction of its projects. We also lack a visionary—someone to oversee and communicate where things are headed, assuring everyone that Apple cares about Swift breaking free from its ecosystem. Imagine someone saying, “Yes, SwiftUI should have been open source, and we’re taking steps to make that happen. It’ll be open source because we want people to take it to new platforms.”

This is a long-term investment that’s hard to justify because the returns are distant in both time and impact. A brilliant idea born from a more open toolchain might emerge years from now, and its connection to that openness might not be immediately obvious. Will it happen? Who knows—it’s Apple. But I believe it will require new leadership that embraces a more open, ecosystem-building approach. A leadership that shares a vision for their technologies beyond Apple devices.

]]>
<![CDATA[In this blog post I talk about how Apple's non-inviting ecosystem has hindered innovation.]]>
Tuist's plans https://pepicrft.me/blog/2025/04/02/tuist-plans 2025-04-02T00:00:00+00:00 2025-04-02T00:00:00+00:00 <![CDATA[

Since we decided to turn Tuist into a business to ensure we could continue supporting teams and individuals, we’ve thought a lot about what we want Tuist to look like in the future.

With codebases growing larger and the speed of business demands increasing, we believe Tuist should position itself as a productivity tool for app developers. The “app developers” part is crucial. While some challenges extend beyond app development, we believe app development is where we originated, where we understand the challenges thoroughly, and where we can deliver the most value. Other ecosystems already have established productivity solutions, so venturing into them would be counterproductive.

We believe we should cover the entire lifecycle, from the early stages of an app to its distribution and the scaling of its development. Apple provides some basic tools, but they are insufficient. Developers must handle initial plumbing themselves and eventually delegate it to what companies call “platform teams.” We want to be that virtual platform team and eliminate all plumbing work. Tuist needs to feel “plug & play.” Its adoption should be as simple as installing a GitHub App in your repository. The developer experience we aim to enable already exists in other ecosystems. The web has Vercel. React Native has Expo. We want to play a similar role in native development.

You might think you don’t need this—that you can grab the tools yourself and wire everything to build your own experiences. I understand. Many people felt the same about managing their servers and using Kubernetes to orchestrate their own deployments. But let’s be honest: that’s fun to build, but not fun to interact with every day. All the energy spent maintaining such a stack is energy not directed toward solving problems. I like to say that in app development your building momentum is frequently interrupted. There’s something magical about doing a git push and seeing a URL to preview your changes shortly after.

Now the question is how to enable that future. We are limited in resources and capital. Some might see this as a limitation, but I see it as an asset. Limits force us to think creatively, and they’ve helped us realize that we have the most valuable asset in the developer ecosystem: a community. Communities take significant time and human capital to build, and at Tuist, we’ve invested heavily in that since we wrote the first line of code in 2017.

I touched on the role of communities in the commoditization trap. The long-story-short version is that a community is the best asset a company can have these days. For instance, mobile CI companies are plateauing because they failed to build communities. Every company builds community differently. We want to grow ours on the same foundation that gave birth to it: open source.

We had to close-source part of Tuist because the project was at risk of free-riding, but we want to return to everything being open source, and we have a plan in place for it. But first, let’s talk about open source for a bit.

We want to continue developing open-source technologies. We are shifting the focus away from project generation, so expect innovation in other app spaces. These are gifts to the community to foster innovation. We’ll then figure out how those technologies can be extended by leveraging the capabilities of a server to build a revenue source for the company. A server brings three elements:

  • A remotely accessible database
  • A public HTTP interface to interact with
  • Background jobs

We envision a model where the client-side technology can work independently without the server. You can build your own server using our client-side open-source components if you want—they’ll be permissively licensed. We’ll support any open-source efforts that align with this direction, like Lume’s virtualization solution.

We’ll also make part of the server open source with a permissive license. This means you could self-host the server too and not pay a dime for it. You might wonder how we’d make money then, and that’s something we need to address. Experience has taught us two key lessons:

  1. The needs of large enterprises differ greatly from those of small and medium companies.
  2. Large enterprises are the ones with significant capital to invest in these tools.

Instead of trying to capture 100% of a market—which traditional companies attempt, only to end up building vendor-locked ecosystems—what if we figure out how to capture value from the 20% of large companies that could bring 80% of our revenue (Pareto principle at work)? We could treat the remaining 80% as companies contributing other types of capital:

  • Feedback
  • Word-of-mouth marketing
  • Code contributions

Business owners often view everything through a financial lens because it’s measurable. But there are other forms of capital that make a company valuable, which we continue to struggle to quantify. The companies that value these are the most likely to thrive.

For Tuist, we’ll place some features or capabilities under a different license that large companies would need to pay for, while everything else would be MIT-licensed with a ready-to-deploy Docker instance. Note that over time, hosting Tuist will become more complex—not because we’ll force complexity, but because it will naturally evolve, especially as we invest in capabilities like low-latency cache servers and features based on running ephemeral builds. At that point, even that 80% might be inclined to pay for us to host it.

This isn’t a new model. You see it in GitLab, PostHog, Cal.com, and Grafana. It’s just not common in the app development productivity space because companies there often have a strong sales and business profile. We’re bringing a different take on business.

There’s one more strength to the open-source model: besides marketing being more effective, because you build a loyal community that spreads the word organically (as opposed to throwing money at marketing, which is expensive and doesn’t guarantee results), you open yourself to a diverse pool of talent and ideas that contribute to the project. Let me tell you, you can’t beat a product built by a community. We learned that with Linux. I’m surprised we haven’t seen more of this in this space. That’s why we’re enabling it with Tuist.

So, picture a world where Tuist is open source, with some enterprise-related features or capabilities under a different license.

To enable that vision, we are focusing strongly on the following pillars:

Foundational blocks

We are investing heavily in automation and establishing the right foundations so people can build with them easily. Contributing to Tuist should feel like playing with LEGO. This is why we’re investing in a design system for the CLI and the web, Noora. Developers without much design experience can contribute features and design them themselves.

Standardizing data and making it accessible

Apple’s development environment is well-known for its proprietary formats. We’re going to standardize them and make them accessible via the web. Want to access the results of your last build? It’s just one request away. The web is a tool for accessibility, so we’re embracing it fully. As part of Tuist, we’re building an API—currently used only by the CLI—but we plan to productize it so people can build their own tools with their project data. The API will be part of the product, not just an implementation detail.
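To make the “one request away” idea concrete, here is a hypothetical sketch of what fetching a build’s results over such an API could look like. The endpoint shape, fields, and sample response are assumptions for illustration, not the actual Tuist API.

```python
import json

# Hypothetical endpoint shape -- the real Tuist API may differ.
def build_results_url(org: str, project: str, build_id: str = "latest") -> str:
    """Construct the URL for fetching a build's results."""
    return f"https://tuist.dev/api/projects/{org}/{project}/builds/{build_id}"

# Imagine the server answering with standardized, web-accessible JSON
# instead of a proprietary on-disk format (fields are made up):
sample_response = json.loads("""
{
  "id": "abc123",
  "status": "succeeded",
  "duration_ms": 184000,
  "cache_hit_rate": 0.87
}
""")

print(build_results_url("my-org", "my-app"))
print(f"Build {sample_response['id']} {sample_response['status']} "
      f"in {sample_response['duration_ms'] / 1000:.0f}s")
```

The interesting part is not the code but the contract: once build data lives behind a plain HTTP interface with standard JSON, anyone can build dashboards, bots, or internal tools on top of their own project data.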

Virtualization capabilities

We’ll invest in developing the ability to run virtual ephemeral builds—first, to enable web-driven workflows like releasing an app or creating a preview with a button click, and second, to break the strong vendor lock-in organizations face with CI providers, allowing them to take control of their data and infrastructure.

Top-notch DX

Developer experience (DX) is front and center in Tuist. We can’t afford to degrade someone’s experience compared to Vercel, Supabase, or Linear. Tuist needs to be a joy to use—something whose value is conveyed through its design. We don’t want it to be a tool made by developers trying to be designers. No, we need to think deeply about design, crafting workflows and visual hierarchies that are delightful to navigate.

I’ve never been this excited about Tuist’s plans. I firmly believe this strategy will pay off in the long run. As mentioned, it requires significant human and time capital investment. But while other companies come and go, outcompeted by better open-source alternatives, we’ll keep rowing, building the best and most open development productivity tool for app developers.

]]>
<![CDATA[This is a stream of thoughts about the future of Tuist.]]>
The commoditization trap: why software needs community to thrive https://pepicrft.me/blog/2025/03/31/software-commoditizes 2025-03-31T00:00:00+00:00 2025-03-31T00:00:00+00:00 <![CDATA[

Whether you like it or not, software tends to commoditize—including yours. This is a key reality to understand if you’re building a software company.

Imagine a new market, say, one based on AI. You might be among the first companies to dive into AI-generated UI. Everyone’s impressed by this novel need you’re addressing, and it’s tempting to believe you’re uniquely positioned to dominate the market. But you’re not. Your company lacks an economic moat. You’re easy to replicate.

As I mentioned in If a business can be open source, it’ll be open source, an open-source solution is likely to emerge. It might take longer, but when it does, it will probably limit your ability to capture value—most likely due to its fairer pricing and superior product. Open source can afford to do this with far less capital involved.

Some companies try to defend against this by expanding their offerings. There’s no better example than CI (continuous integration) companies, which now provide a suite of development tools—from caching to test analysis—to stay competitive. But once again, many of these solutions are easily replicable in open source. Even more surprisingly, some don’t require a server at all. This expansion can be a tricky move: you risk drifting from what your users know you for, reframing yourself as something more comprehensive. That shift can take years for people to grasp. In fact, it’s a challenge we’re currently facing at Tuist, where people still see us as just a project generation tool.

Another approach companies take is capturing value through infrastructure. The software has value, but without well-managed infrastructure, that value isn’t fully realized. This depends on the type of software. For a native app with self-contained value, it’s trickier—though people are also less incentivized to open-source it in the same form. Your advantage lies in maximizing platform capabilities. You’ll always have an audience there, but don’t expect endless growth. Just ask Sketch. Betting on something other than the web might give you an early edge, but it can leave you at a disadvantage in a world that favors collaborative solutions over individual ones. People crave connection—they love the Figmas, Notions, and Slacks of software.

That said, providing value through infrastructure is easier to replicate if you’re up against a cloud giant like Amazon. All they need is enough incentive to target your market. That’s why Google acquired Firebase and Fastlane—they saw an opportunity in the mobile space, though it didn’t fully pan out as expected. I wouldn’t be surprised if Firebase gets sunsetted in the coming years. A strong example of capturing value through infrastructure is Supabase, which we use at Tuist. The software has value, but there’s even more in managing and scaling your database—something they handle for you. It’s not something we’d ever consider doing ourselves because we’re focused on building our product.

A better model is creating an ecosystem and a community around it—one built on long-term incentives, not short-term gains. This distinction matters because it’s easy to create the illusion of an ecosystem or community by throwing money at it. But the best ones take time to nurture. You can tap into basic human needs and desires—like the pursuit of higher status—but that only works until people realize meritocracy is a dystopian mirage. Take the “indie developer” dream, for instance. We all fantasize about making a living from our software. Many products target these communities, feeding the illusion that their tool is the key to success, just like the stories they’ve heard. But as YouTube has shown, it’s more complicated than that. It might work briefly, but it can quickly turn into deception.

In my view, the best model is the one GitHub, Strava, and, to some extent, Spotify have mastered. These platforms amass social capital that makes leaving emotionally unthinkable. A GitHub profile is a developer’s CV. A Strava profile showcases a healthy life and connections with others. Spotify’s profile curates your musical tastes with recommendations no one else can match. This model is incredibly hard to replicate today. It’s not just about a critical mass of users—you need a product designed to capture that value fast, before anyone else does. Years ago, who’d have thought our profiles on these platforms would become so valuable?

In a tech world moving at breakneck speed, where society grows more individualistic despite collaboration yielding better outcomes and more happiness, I believe open source—and its community component—offers a sustainable alternative. The economic moat lies in its community value, rooted in the idea of building shared commons, rather than struggling to stay afloat in an industry that commoditizes itself.

]]>
<![CDATA[Software commoditizes fast. Without an economic moat, open source and giants like Amazon can erode value. Build a community-driven ecosystem to endure.]]>
Mobile CI is plateauing https://pepicrft.me/blog/2025/03/25/mobile-ci-is-plateauing 2025-03-25T00:00:00+00:00 2025-03-25T00:00:00+00:00 <![CDATA[

We are considering solving some problems at Tuist that require virtualizing macOS environments. As part of this, I’ve invested some mental energy into understanding the finances and technology stack of the mobile CI landscape. What I’ve discovered is that the landscape is plateauing, CI companies are responding to this shift, and we might be on the verge of either a devaluation of CI companies or a revolution. To better understand the situation, let’s dive into the stack:

The stack

Technologies can be broken down into layers, and CI is no exception. Starting from the bottom:

  • Machines: We need environments—physical or virtual—where a set of steps can run. This is already offered as a service by most cloud providers. The availability of Apple hardware has been somewhat limited, but this is changing as more players enter the space.
  • Virtualization: CI services run builds in disposable, virtualized environments to prevent data leakage across builds. In the Linux world, this is highly commoditized with tools like Docker or Podman. For macOS environments, Apple took the first step toward commoditization by releasing the Virtualization Framework. Tart followed with a source-available, Docker-like solution, while Lume and macvm joined the game with open-source-licensed alternatives.
  • Orchestration: For Linux environments, where provisioning new environments is fast, orchestration is typically handled by the cloud provider. However, macOS is a different story. Since macOS images are not lightweight, you need a system that can provision a fleet of Apple hardware, configure the environments, load them with the appropriate images, and make them available for use.
  • User Layer: At the topmost layer are the features users interact with directly (beyond just the UI). This includes viewing logs, retrieving them along with build artifacts, and parsing and executing pipelines.

If you use a CI service, you probably don’t think about this structure. But once you understand how it’s organized, it’s striking to realize how close we are to full commoditization. Let me highlight some key developments that led me to this conclusion.

The commoditization of the space

In 2019, GitHub released GitHub Actions, which introduced hosted runners alongside the concept of “bring your own runners.” You could either provide your own runners or use partners that integrate directly into your GitHub organization. This meant GitHub would handle the user layer and the entire stack—unless you chose to bring your own.

Let’s be honest: it’s hard to compete with GitHub’s user-layer experience. It’s embedded where collaboration happens, and its proximity enables features that CI providers can’t replicate—such as declaring permissions for the exposed GITHUB_TOKEN. GitHub also built a rich ecosystem of reusable actions to base your workflows on.
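To illustrate that proximity advantage, scoping the GITHUB_TOKEN is a single block in the workflow file. The scopes below are examples, not a recommendation:

```yaml
# Example only: restrict what the automatically issued GITHUB_TOKEN may do.
permissions:
  contents: read        # read the repository contents, nothing more
  pull-requests: write  # allow the workflow to comment on pull requests
```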

This shift gave rise to companies like Cirrus Labs, Cirun, and Depot, which handle the runner provisioning for you. From the layers above, they manage orchestration, while GitHub takes care of the rest. The adoption process is remarkably straightforward, and there’s no need to migrate pipelines from one proprietary format to another.

GitHub isn’t alone in this trend—GitLab and Forgejo also support bringing your own runners.
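As a sketch of the “bring your own runners” flow, a job targets self-hosted machines by label. The labels and build command here are assumptions; they must match whatever runners you actually registered:

```yaml
name: Build
on: push
jobs:
  build:
    # Hypothetical labels: they must match your registered self-hosted runners.
    runs-on: [self-hosted, macOS, arm64]
    steps:
      - uses: actions/checkout@v4
      # Example build step; replace with your project's scheme and destination.
      - run: xcodebuild -scheme App build
```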

Another recent change is in virtualization. Tart brought Docker-like concepts to Apple’s Virtualization Framework, but now it faces competition from permissively licensed alternatives like Lume and Curie. Virtualization is getting cheaper. While Tart may still lead in capabilities, open-source projects have a knack for catching up quickly due to community contributions. I believe it’s only a matter of time before they’re on par.

What’s keeping people with CI providers?

I ask myself this question daily, and I think the answer is straightforward: vendor lock-in. By design, users are tied to platforms they chose years ago through proprietary YAML formats that are costly to migrate and ecosystems of steps that tightly couple their automation to the service.

But people are waking up. Dagger is leading the charge by proposing that automation shouldn’t be tied to a single company. Pipelines should be portable, just like OCI images. Absolutely! Dagger builds on a foundation that doesn’t yet seamlessly support macOS-dependent builds (due to its approach to virtualizing steps), but there are still many ways to make automation portable. I wrote about this in Tuist’s blog. Your automation should belong to you, not a company.

CI providers know they must offer more to differentiate themselves from GitHub Actions. However, they often lack the expertise to meet developers where they are. Instead, they double down on vendor lock-in with serverless solutions that could simply be open-source CLIs. This confuses users—CI companies were supposed to focus on CI, but now they’re tackling signing, release management, and security promises that large enterprises obsess over. Meanwhile, solutions like Runway are emerging, focusing on doing one thing exceptionally well and easily capturing customers.

Plateauing

Looking at the layers above, orchestration is the next to commoditize. We’re one open-source project away from a service where you input an AWS or Scaleway key, install a GitHub app, and you’re set. This is already happening in other domains, like app hosting and database hosting. If an orchestration layer goes open-source and invites companies to collaborate on building the best plug-and-play solution for your Git forge, CI providers could lose market share quickly.

I predict this will happen. Orchestrating virtualized macOS environments will become cheaper, potentially even offered as a foundational service for others to build upon. This could spark more innovation in the space—an area where mobile CI has lagged behind the web, largely due to innovation being locked in proprietary systems. That’s why I’m excited about this shift. We need more players thinking creatively, and commoditization enables that accessibility.

At Tuist, we’ll ride this wave of commoditization and explore how virtualization can solve some of our needs and challenges while delivering a better developer experience.

]]>
<![CDATA[In this blog post I share how we might be on the verge of a revolution in mobile CI.]]>
If a business can be open source, it'll be open source https://pepicrft.me/blog/2025/03/24/open-source-business 2025-03-24T00:00:00+00:00 2025-03-24T00:00:00+00:00 <![CDATA[

If you’ve been following me for a while, you might already know that I’m a huge advocate of open source. My relationship with it has evolved as Tuist gained popularity, most recently becoming the foundation upon which we are building an open core company. In this post, I’d like to share what’s so special about it and why I believe it’s the foundation for building the most thriving and long-lasting companies. Let’s dive right in.

What is open-source?

Open source is a philosophy of releasing software that embraces a set of principles to grant users more freedoms than their closed-source counterparts. The definition of open source is maintained by the Open Source Initiative (OSI), which determines which licenses comply with it. These licenses include a series of clauses that the distributed software must follow. Some are very permissive, like the MIT license, which only requires you to credit the project, while others, like the AGPL, fall into a copyleft category, requiring your distributed software to adopt the same license.

Much of the software and infrastructure we build today runs on open source. From the many Shopify stores powered by Ruby on Rails to the countless servers in data centers running Linux-based distributions, it’s everywhere. From a business perspective, it’s a tool to reduce costs and attract developers. For developers, it’s an altruistic endeavor and a way to build their resumes.

Open source is unique because when people gather to create something for fun, wonderful things can happen—especially when contributors join from around the world. Linus Torvalds explored this in his book Just for Fun. The challenge, however, is that doing something for fun often requires a financial safety net, which many lack. If you’re fortunate enough to have spare time after work, you might dedicate some to it. Doing so as part of your full-time job is rare, and it can even backfire—you might be hired for the reputation your open source project brings, only to be told you can keep working on it in your free time. I’ve seen examples of this already, but I’ll save that for a future post.

If your project succeeds, you might find yourself grappling with the economic dynamics of open source. This can feel uncomfortable, as if you’re betraying your community by not initially sharing that it could become the foundation for a business. Money is something everyone needs to make a living, yet it’s a topic we often shy away from—especially in the context of a project that was open and “free.” But in many cases, monetization is necessary to avoid burnout or letting the project stagnate.

As a developer, business is often framed around sustainability, but what about from a founder’s or business-oriented perspective? What makes open source companies so unique?

Building thriving & long-term businesses

In a traditional business—tech or otherwise—the production and capture of value strongly correlate with your investment in your workforce (human or AI) or marketing campaigns. Moreover, the diversity of ideas is limited to the diversity of talent within your organization. For global companies, this can be complicated, so many restrict themselves to the country where their legal entity is based, significantly limiting diversity. This is why so many founders are keen on creating a “network effect”—a topic I’ll explore in future articles. Value capture often balloons due to social dynamics, as we are status-seeking creatures.

Open source changes that. The talent pool becomes the entire world. GitHub laid the foundation for this by offering free services to open source projects and recently even made GitHub Copilot agents available to review code, further reducing the investment needed to maintain the open source layer.

This has several implications. First, it diversifies ideas. Friction is removed from proposing solutions and sharing problems—it’s just an issue away. Second, developers enjoy contributing to open source, and that excitement naturally evolves into a network effect. If you don’t believe me, look at how people gather worldwide to discuss the open source project Supabase. Finally, your brand can outshine competitors through developers’ appreciation of your contributions to the commons, fostering more innovation in your business domain.

Rethinking business with open-source

If you come from a traditional business background, opening up one of your assets might seem daunting. But trust me—despite how scary it sounds, you can dominate a market with it. Look at Grafana if you need proof.

This clicked for me recently after watching a talk by the CEO of Penpot, Pablo Ruiz-Múzquiz, who discussed the business model they’re embracing for their open source project. Businesses are about producing value and capturing some of it back. Closed-source companies focus on capturing as much value as possible—especially when they hit a ceiling and struggle to produce more. This is why many are excited about AI: it opens new opportunities to create and capture value. Producing more value requires creativity and innovation, which can stall if you’re limited to your internal talent pool. That’s why you see CEOs pushing their teams to experiment with AI. It’s more involved than that, though—it requires a culture of innovation and experimentation, something open source communities excel at.

In contrast, open source captures little to no value compared to closed source. However, it can produce significantly more. Even if you capture less, the net gain is often higher than that of closed-source counterparts. Plus, your ability to produce new forms of value is far more agile. This is why more businesses are embracing open source foundations with curiosity, challenging well-established industries.

Capturing value requires drawing a line—what will people pay for? If you study open source companies or watch the Penpot talk, you’ll see several models. Choosing one depends on your product’s nature and the company you want to build. Some charge for services; others, like Sentry, charge for hosting because self-hosting is complex. Most people prefer paying for expertise—and supporting the project—over managing it themselves. This model often leads to fairer pricing, as exorbitant rates would drive users to self-host or spawn competing hosting services, undermining your business.

The 20/80 principle

The Pareto principle can help you decide where to draw the line. In companies, 80% of revenue often comes from 20% of customers. This is why many products eventually focus heavily on B2B offerings—large enterprises have the capital and willingness to pay for tailored solutions. With this in mind, you can use the principle to determine what should be paid: the needs of large enterprises.

Where that line falls depends on your customer. At Tuist, we’re still figuring ours out. Some draw it between simple and advanced features, like GitLab, which offers community and enterprise versions. Paid features might include single sign-on, for example. Cal.com follows a similar model, often placing enterprise features in a separate directory (e.g., /ee) with a different license. It’s still open, but hosting it requires payment—otherwise, you’d face legal liability.

In other cases, value lies not in the software but in the infrastructure. Take Supabase: the complexity is in running and scaling databases automatically, so you don’t have to think about it. This only works if your project’s nature aligns—for instance, Tuist is easy to host, so this model would jeopardize our business.

Some opt for licenses outside OSI’s definition. Sentry, for example, championed Fair Source licensing, which restricts competition to protect against free-riders. While complex hosting might deter individual users, it’s trivial for giants like AWS or Google Cloud, who could outscale you. We nearly faced this with Bitrise offering caching for Tuist as a service. The downside? Non-OSI licenses—or projects requiring a Contributor License Agreement (CLA)—may deter developer contributions, as they feel less “open.”

One intriguing model is Penpot’s Open Nitrate Model, which embraces “charging the controller.” Instead of gating advanced features, they believe everyone should access them, and you only pay for a certain level of control. It’s called the Open Nitrate Model because a feature becomes paid if you can answer “No” to these three questions (forming a “NO3” element):

  1. Will this capability limit new users from discovering and using the product?
  2. Will this capability particularly benefit advanced users?
  3. Is this capability relatively trivial to build?

The paid version is a closed-source extension of an extensible open source foundation—first by you, then by anyone else interested.

Let’s talk about the 20%

What about the 20% who self-host and don’t pay? That might feel uncomfortable, right? But here’s the thing: they wouldn’t have paid anyway. Instead, they contribute differently—sharing ideas, reporting bugs, fixing issues, and spreading the word. Not all contributions are financial, and these are incredibly valuable. They’re what set you apart from competitors. For those from traditional business backgrounds, this is the hardest shift. When you view everything through a financial lens, non-paying users seem illogical. But in tech, intangible contributions often outweigh everything else.

Some things you should keep in mind

Regardless of the model you choose—which depends entirely on your context—here are a few key considerations.

First, over-communicate your thinking about the open source side, the community, and the business. The more transparent you are, the better your decisions will be received. At Tuist, we waited too long to socialize our maintenance struggles. By the time we proposed building a business to sustain the project, some didn’t like it. Even with clear communication, there’ll be churn—and that’s okay. Free will always be more appealing, but a dedicated team improving the project daily is even more valuable.

Second, own your trademark. In an open source company, much of your intangible value ties to your brand and community. Protect this asset to prevent free-riders from exploiting it. While code can be forked, building a brand and community takes years and can’t be easily replicated.

Closing words

In the coming years, I believe we’ll see more open source businesses offering alternatives to closed-source incumbents. They take longer to establish, requiring community and brand-building through human interaction—no shortcuts there. Supabase is an impressive example of rapid community growth, but it’s an outlier. Ever heard the saying, “If something can be built in JavaScript, it’ll eventually be built in JavaScript”? I think the same applies to open source: if a company can be open source, it will be. The longer you delay embracing it in your domain, the more disruptive the open source alternative will be when it arrives.

]]>
<![CDATA[In this blog post I share why open source businesses are the most thriving and long-lasting companies.]]>
Nightly builds are the wrong solution to the right problem https://pepicrft.me/blog/2025/03/19/nightly-builds 2025-03-19T00:00:00+00:00 2025-03-19T00:00:00+00:00 <![CDATA[

Nightly builds in app development are the wrong solution to the problem. We keep cargo-culting them because:

  1. People are familiar with the term.
  2. It spares us from going back to first principles.

This happens not just in the context of app development productivity, but everywhere else. Someone uncovers a new problem or need, for which they build a solution—not necessarily the best one today, but the best one at the time, considering the constraints then. Then, you see a stream of companies jumping into the new markets. Eventually, everything commoditizes, and we get stuck with something that doesn’t feel right.

Rinse and repeat.

This is how nightly builds feel to me. The sunsetting of App Center and everyone’s rush to profit from it with yet another App Center-like solution smells fishy to me.

But what would be the alternative? If we go back to first principles, nightly builds exist to test changes and gather feedback. However, nightly builds are detached in time and space from where the changes originate, often pull requests. Pull requests are where conversations around changes happen. Trying to do something distant from that creates friction and a foundation for even more complexity.

Feedback needs to happen in PRs.

And that requires creating builds (which in Tuist we refer to as previews) quickly, when needed, and from anywhere. Let’s break that down:

  1. Fast: You don’t want to wait half an hour for a preview because, by the time you get the build, the PR might have already been merged. That’s why we built binary caching, and we plan to invest in it further.
  2. When needed: You should have control over when you want a preview and of what, because, sadly, macOS computing resources are still expensive. You don’t want to depend on some pipeline being configured to get what you need.
  3. From anywhere: By posting a comment on a PR, sending an email, asking your LLM of choice, or tapping a button in a mobile app.

We are actively working on #1, and we are building technology and infrastructure to enable #2 and #3.

Are previews nightly builds? No, they are not. They have in common that the token of exchange is an installable build. The former aligns with collaboration expectations, while the latter makes collaboration less enticing.

Does it suck from a business perspective? Oh yeah! We need to market a new concept and bring people back to the problem that originates the need for something like nightly builds. But this is the type of company we are building—one that focuses on the problems, strives to build the best solutions, and challenges the status quo, adapting as the environment changes. A lot has changed since nightly builds were first proposed, and it’s time for something different.

]]>
<![CDATA[In this blog post I share why today nightly builds are the wrong solution to the problem, and the alternative that we are proposing.]]>
Setting up Docker DinD in Forgejo Actions https://pepicrft.me/blog/2025/03/16/docker-dind-forgejo-actions 2025-03-16T00:00:00+00:00 2025-03-16T00:00:00+00:00 <![CDATA[

I spent a fair amount of time today trying to get Docker DinD working in Forgejo Actions, so I thought I’d share the steps for my future self or anyone running into a similar need.

If you want to build a Docker image from Forgejo Actions when using Docker as a runner, you’ll have to use Docker-in-Docker.

Steps

  1. The first thing you’ll need to do is enable privileged mode when launching task containers. This is done by setting the attribute container.privileged to true in your runner’s config.yml file.

    Note: This has security implications, so use it with caution. Before running actions for external contributions, ensure that they are not malicious.

  2. Once you’ve made this change, restart the runner.

  3. The next step is configuring your pipeline to start a sidecar service with Docker DinD:

name: MyProject
on:
  push:
    branches:
      - main
  pull_request: {}
jobs:
  build-image:
    name: Build image
    runs-on: debian-latest
    container:
      image: node:20-bookworm
      options: >-
        --privileged
      env:
        DOCKER_HOST: "tcp://docker:2375"
        DOCKER_TLS_CERTDIR: ""
    services:
      docker:
        image: docker:24.0.5-dind
        options: >-
          --privileged
        env:
          DOCKER_TLS_CERTDIR: ""
    steps:
      - uses: actions/checkout@v3
      - name: Install Docker
        run: |
          apt-get update
          apt-get install -y ca-certificates curl gnupg
          install -m 0755 -d /etc/apt/keyrings
          curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
          chmod a+r /etc/apt/keyrings/docker.asc
          # Add the repository to Apt sources:
          echo \
            "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
            $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
            tee /etc/apt/sources.list.d/docker.list > /dev/null
          apt-get update
          apt-get install -y docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
      - name: Build image
        run: |
          docker build -t my-project .

Note: The above pipeline is designed for Debian as the OS, so you might need to tweak it according to your specific requirements.

That’s all you need. The --privileged flag is one of the most important elements - without it, things simply won’t work.
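For reference, the runner-side change from step 1 is a single attribute in the runner’s config.yml. A minimal sketch, with all other fields omitted:

```yaml
# Forgejo runner config.yml (fragment)
container:
  # Launch task containers in privileged mode so the DinD sidecar works.
  # Security note: privileged containers weaken isolation; vet external PRs.
  privileged: true
```

Remember to restart the runner after changing this file (step 2).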

]]>
<![CDATA[Learn how to use Docker from your Forgejo Actions pipeline.]]>
Dogfood if you can https://pepicrft.me/blog/2025/03/12/dogfooding 2025-03-12T00:00:00+00:00 2025-03-12T00:00:00+00:00 <![CDATA[

I’ve noticed that when using various products, I frequently encounter frustrating bugs that seem obvious - issues I would have fixed immediately had it been my own product. This is precisely why I’m drawn to open source software: the ability to directly implement fixes and submit pull requests to repositories.

When developers regularly use their own products (dogfooding), they naturally identify and address pain points. Without this practice, teams must rely solely on empathy for users’ challenges. In our increasingly individualistic society - partly influenced by current social media dynamics - genuine empathy is becoming less common.

Entire ecosystems suffer from this disconnect. Take localization tools, for instance - using them often leaves me wondering if their creators have ever experienced them from a user’s perspective. Investigation typically reveals they don’t localize their own tools, which explains the subpar user experience.

While dogfooding can be challenging in some contexts, leadership should integrate it into company culture whenever possible. During my time at Shopify, though developers rarely maintained their own stores, they utilized a tool called “tophat” that simplified testing new changes. This led to more meaningful code reviews based on actual usage experience rather than just inspecting code or confirming passing tests.

At Tuist, we build our CLI and app using our own tools, using them daily. This creates a natural alignment between the improvements we identify and the needs of our users. While users typically only report major blockers, daily usage reveals numerous small opportunities for enhancement that collectively create exceptional products no competitor offers.

For example, we recently began improving our CLI’s user interface, not because of user complaints, but because our own daily usage showed clear opportunities for enhancement. This approach to continuous refinement is what makes products like Linear superior to alternatives like Jira.

Developer tools are uniquely positioned to benefit from this practice. Developers at GitHub use GitHub; developers at GitLab use GitLab. Companies that create developer tools without using them internally are missing a crucial opportunity, and their products inevitably reflect this disconnect.

]]>
<![CDATA[If you can dogfood your product, do it. It's a great way to build a better product.]]>
Lynx is an invitation for UI frameworks to support mobile development https://pepicrft.me/blog/2025/03/10/lynx 2025-03-10T00:00:00+00:00 2025-03-10T00:00:00+00:00 <![CDATA[

A week ago, ByteDance announced the release of Lynx, a technology for building mobile apps using Web technologies. ByteDance had been using it to power many of their apps, and they decided to package it up and open-source it. Having Lynx enter the space with a new approach is great news for the community.

Something that slightly bothered me about the alternatives to native development is their strong dependence on a single company: Google in the case of Flutter, or Expo in the case of React Native, which Meta recommends as the framework to use because it deemed that part not its responsibility to address for the community. These companies became dominant players in their respective spaces, and that came at the cost of killing the competitiveness that fosters innovation.

Lynx challenges the status quo, in the same way AI is challenging Apple’s ability to catch up with the rest of the industry. And it does so by presenting a technology that’s UI framework agnostic. It comes with support for React, but people are already working on adding support for Vue, and I expect SolidJS, Svelte, and others to follow soon.

In other words, Lynx paved the way for other frameworks to support mobile development, unlike React Native, which coupled its solution to React as a framework. It also comes with a fast Rust toolchain, so technologies building upon it don’t have to re-invent that part. This is an area where Meta said, “We use different tooling at our scale, so we are not going to bother to support a toolchain,” and if you ask me, that’s a shame. Lynx recognized that both go hand in hand, and that’s amazing.

It’s still early days for Lynx, so the future is a bit uncertain, but if I had to make a bet, I strongly believe the fact that it allows other UI frameworks to build upon it will create a unique ecosystem of tools, resources, and libraries that React Native won’t be able to compete with.

It’s a matter of time, but if ByteDance does well at building a thriving community, I think it can happen. The React Native ecosystem could have gone a similar path, owning an integrated toolchain, and providing the ecosystem with the resources to build great packages, but instead sold it to Expo. Expo is amazing. I think having only Expo is terrible.

With Tuist transforming into a platform, I want to play with Lynx to see where we could bring value on the server-side, for example easing the distribution of previews, providing sign-in to the bundling process, or conceptually compressing mobile intricacies. Or maybe helping them break the dependency on CocoaPods.

The future of mobile development is bright.

]]>
<![CDATA[The appearance of Lynx is an invitation for UI frameworks to support mobile development.]]>
AI as a tool to reduce OSS maintenance costs https://pepicrft.me/blog/2025/03/09/ai-oss-cost 2025-03-09T00:00:00+00:00 2025-03-09T00:00:00+00:00 <![CDATA[

GitHub recently introduced GitHub Copilot as a reviewer tool for pull requests. I gave it a shot with low expectations and had a pleasant surprise. The reviews you get could easily be mistaken for those from a human reviewer.

This made me think about the role that AI could play in reducing the cost of maintaining OSS. As you might know, the classic challenge of open source is the cost of maintenance. This is, in fact, the reason why we started working on evolving Tuist into a business.

Issues and pull requests pile up, you don’t have the time to review them all, and you have to process the emotions that come with feeling like you’re lagging behind or not meeting the community’s expectations. But if most of this can be delegated to AI, that changes everything.

I believe we, open source maintainers, should invest in having the right CI checks to ensure the code that is merged is of the quality we want to ship, along with rules to guide GitHub Copilot to do its best work. How those rules are provided to AI agents is still an evolving territory, but once it settles down, I think it makes sense to invest in providing agents with that context.

Who knows… maybe we’ll get to a point where GitHub Copilot can also reject PRs that don’t align with the direction/vision of the project, or merge others automatically. I think that would be fantastic.

In the context of open source companies, a subject I’ve been thinking about a lot lately, this approach gives you access to a developer workforce that’s not limited by your location or your number of employees. It means you can move much faster than your competitors, at a much lower cost, allowing you to focus on building revenue from large enterprises while innovating and watching your community foundation grow.

I don’t know about you, but I think this is a unique strength. When I think of companies working in the same space as Tuist, none are approaching business this way, so I believe we’re in a good position to take advantage of this opportunity.

]]>
<![CDATA[How AI can help you maintain your open source project.]]>
Own your community https://pepicrft.me/blog/2025/03/06/own-your-community 2025-03-06T00:00:00+00:00 2025-03-06T00:00:00+00:00 <![CDATA[

For years, many tech companies have made us dependent on their platforms through the value that we created within them. The moment we signed up, we agreed to terms of service that made them owners of our activity: our photos, messages, files… We trusted corporations. But as time has shown, they are not held accountable for what they do with all of that data. You gave them the rights, so they simply do what’s necessary to align with incentives they don’t disclose, which are far from the classic “make the world a better place.”

Note this isn’t true of all companies, but it’s really hard to find one that stays true to a set of values. Personally, I don’t care about the personal data they’ve collected from me, or the value I’ve generated on platforms like X or GitHub. Years ago, I had a different connection with the idea of reputation, but breaking dependency on that was very liberating. I don’t share anything of value in those places, and if something has value, I make sure I use technology that ensures I’m the owner of my data—like Obsidian or Git forges where I can take a Git repo elsewhere if the platform becomes enshittified. Seeing X, Reddit, Strava, and many others building walls was a wake-up call for me. I still use them, but I don’t engage with them beyond posting or reading small things here and there.

With Tuist, it’s a different story. We are generating a lot of value—not just through the code that we bundle and release as a CLI or a publicly accessible server, but also through resources, ideas, and conversations. For many years, we didn’t think much about where the value resided, or whether we were owners of it. Similarly, we had a bit of a wake-up call. @marek is more thoughtful in that regard. We moved our conversations from Slack to Discourse. We haven’t fully completed the transition, but we encourage more discussions to happen there. For some people, this is uncomfortable. We are breaking the fast-paced conversation cycle that companies have conditioned us to expect, but the slower pace and the ownership of data and community experience are priceless.

I see many open source projects betting on Discord, and I’d wager we’re not far from witnessing an enshittification similar to what began with Slack. If I were to start a community again, I would absolutely build it on something like Discourse. Someone might argue that it’s costly, but it really just requires getting a server, connecting to it via SSH, and running a command. Sure, it’s not a one-click experience, but you’re choosing between surrendering to corporate control versus nurturing a community who will appreciate—years from now—being able to search for something and find what they’re looking for. If you care about your community’s long-term health, you should care about owning the value that community produces.

We’ve applied this to other areas too. We’re close to moving off Google Workspace. We self-host many of the services we use and keep the data in our own databases. It’s amazing how many quality open source projects exist that we can contribute to. It’s about choosing long-term investment over the short-term convenience offered by platforms whose future you don’t control. I don’t regret for a moment the move we made, and we’ll continue to invest in this direction.

If you’re just getting started and don’t have the resources, it might make sense to initially be a “product” (which usually means not paying), but eventually flip the switch and become the owner of your digital assets. You’ll be thankful for having made such a move.

]]>
<![CDATA[You'd better own your community.]]>
China is succeeding at what everyone else is failing at: localization https://pepicrft.me/blog/2025/03/05/china 2025-03-05T00:00:00+00:00 2025-03-05T00:00:00+00:00 <![CDATA[

I remember when I was young, everyone had this stereotype in mind that China copies what everyone else is doing. I’m not a fan of stereotypes, but I think they usually reveal something deeper.

In the case of China, I think we are witnessing the reality behind the stereotype unfolding. They are proving to be amazing at localization. When one thinks about localization, it’s tempting to think about languages. But localization goes beyond that. It involves understanding the culture that you are building for and adapting your solution to fit the expectations and needs of the target audience.

This takes a lot of empathy that I believe we are losing in what people usually refer to as “the West.” Here it’s all about you: your success, your entrepreneurship journey, your ability to raise capital, every quantifiable aspect of your life and business. So without empathy, the awareness of the importance of localization is lost.

China has that, perhaps due to their political system. And as open source has shown—with its community component and the empathy embedded in it—you can easily outcompete solutions that lack empathy, close their systems, and try to solve everything by throwing capital at the problem.

I wouldn’t say the Chinese copy; they learn and adapt. We build models and impose them on everyone. If we look around, we see plenty of examples of this unfolding—from DeepSeek open-sourcing their models to, most recently, ByteDance open-sourcing Lynx, a technology for building mobile apps.

The last one is particularly worthy of analysis. Meta decided it wasn’t their concern to package the technology into a ready-to-use solution for the community, so ByteDance went ahead, bundled the framework and the tooling around it, and shipped a beautifully productized solution.

They understood that wealth is concentrating in “the West” and that people don’t want to feel they are losing their purchasing power, so they adapted many of their products to make sure people continue to feel they can purchase the same or more than before.

The perception of China and Chinese products is changing, and “the West,” particularly North America, can do little about it. Or maybe they’re localizing when it’s too late, like Amazon presenting a Chinese-style storefront, Amazon Haul.

The argument they’ve used for years and still use to this day—that the Chinese are bad—is getting weaker and weaker, especially as we see the real interests of Donald Trump’s administration. It’s two political regimes with different stories but similar goals.

So I dare to say the war with China has already started. The US is like Leonardo DiCaprio sinking in an ocean that the Chinese have become familiar with and have learned to navigate better. They can try to tax the world, but that will only prompt the world to reconsider its dependency on the American system.

]]>
<![CDATA[China has learned to localize their products and services to fit the needs of their target audience.]]>
From open source to open core business https://pepicrft.me/blog/2025/03/04/tuist-mental-models 2025-03-04T00:00:00+00:00 2025-03-04T00:00:00+00:00 <![CDATA[

Tuist started as a pure client-side tool. Through project generation, it made Git conflicts less frequent and modular Xcode projects easier to manage. That work was the foundation on which we built a strong brand and a community. But it also made clear that a purely client-side approach isn’t the most suitable from a sustainability perspective, which is why open-sourcing client apps is rarer than open-sourcing server-side ones. Still, a closed-source CLI was out of the question.

For context, project generation continues to be the reason why many projects come to Tuist—most of which, surprisingly, are trying to find a developer experience that’s better than SwiftPM’s.

Once you reach the inflection point of having many people using the tool (and therefore demanding energy from you) while you’re treating it as a side gig, you have to do something about it if you don’t want to burn out. In many cases, developers don’t do anything, and projects stagnate. Look around; I’m sure you’ll find some examples. It was once suggested that we move Tuist to a foundation, but foundations don’t solve sustainability. So in our case, we decided to evolve Tuist into an open core business.

Transitions like this are not easy. You are going from open source and free to partially open source with some paid features. It’s natural to think, “But you didn’t tell me about this,” but we didn’t know either that Tuist would end up being widely adopted. So it’s either we do something or we let it die. I know it sounds harsh, but it’s the truth.

The month that followed came with a lot of learning and questioning around what we’d like the company to be like. Open source is in our DNA, so as you can imagine, that influenced how we are shaping the company.

The first conclusion we reached is that we had to start solving problems that require a server. Capturing value through a server (in other words, asking people to pay for it) aligns better with people’s mental models. It’s still tricky to this day: because people interface with Tuist through the CLI, they find it strange to need a paid subscription bound to the account they authenticated with through the CLI. Yet I believe this will shift as we continue to make Tuist server-centered. Solutions like cache, selective testing, previews, or analytics were born to solve server-solvable needs.

Our limited capital and the cost of getting a business off the ground led us to make the server closed source to avoid free riding until we could better understand the market and how to monetize it. We did so contrary to our principle of keeping things open, and as you can imagine, the idea of making it open source again is something we’ve been pondering ever since.

When I tell this to developers and other business founders, they look at me and think, “Are you crazy? Opening the sources? You won’t build a business like that.” But after a lot of reading, discussing, and learning about the models of companies like GitLab or PostHog, I believe more than ever that this model is feasible. Let me unfold that for you.

But first things first: the motivations. We believe the best products are built by communities of people from all over the world. Despite your efforts to hire diverse teams, you are limited to your team of 10, 20, or 100 people to build a product that addresses everyone’s needs. Our work in the CLI taught us that. Once you experience it, you want to apply it everywhere. The human force you get access to through this model is unbeatable by throwing capital at the problem. Look at Wikipedia, Discourse, or Grafana. Elon Musk is annoyed by Wikipedia, but you can’t compete with a group of people motivated by a common goal.

Second, it’s important to understand one thing: businesses generate most of their revenue from large enterprises. I bet it follows the Pareto Principle, where 20% of your customers generate 80% of your revenue. So with that in mind, the question is how to have the server open source while still capturing the revenue from the 20% of customers. Well… this is what we are trying to understand. There are many good references out there, like GitLab, which charges for advanced features, or Penpot, which charges for control. Our hunch is that our model will be a mix of charging for advanced features or control and for hosting a complex infrastructure.

And here comes an interesting twist: what about the 80% of users? This is where traditional business people get uncomfortable. When you look at this from a monetary angle, they might feel like you are losing an opportunity to capture value there. But what if the value they contribute back is not monetary, but marketing and contributions to the product, helping make it better for everyone? When I first arrived at this idea on my own, it blew my mind. I could finally understand why something like Discourse or Grafana is so successful and why few to none can compete with them. Most of the people in that 80% group wouldn’t have paid anyway.

Let’s take all this framing further.

We are focusing on shifting value to the server, understanding the needs of the 20% of our customers, and building a mental model around where the line can be drawn. But not just that—in that future where Tuist becomes a community effort, we need to make ourselves as dispensable as we can be. In other words, can we shape a world where we can openly iterate on the product with a minimum cost for us? Definitely.

So we are focused on putting the right foundational pieces in place to ease contributions. If you look at our work in the past weeks, part of our focus has been on building a design system foundation for the CLI and for the web app, such that contributing to Tuist feels like LEGO. You’ve got an idea for a new dashboard feature? Sure! Go ahead, prototype something, and open a PR. Our designers will play more of a reviewer role than an active contributor one. And what’s even better is that GitHub is doubling down on tools that leverage LLMs to review PRs. A few tests that we’ve done yield impressive results, so our work will gear toward providing the right rules so agents can do a great job and we can be confident that the work does what it’s supposed to do and aligns with the practices of the repo.

Besides what this means for the community, we’d build the first open source productivity platform for app developers. Suddenly, it positions us—with limited capital but with a world of contributions—to expand Tuist in two directions:

  • Into other app development lifecycle phases, for example, release automation.
  • Into other app development technologies, like React Native or Android.

In a traditional model, the above is done by raising endless capital, but that usually comes at the cost of potential misalignment of incentives and products “enshittifying” themselves. We don’t want that for Tuist. Hence, we are embracing a model that allows us to create an innovative development environment where the community is an intrinsic part of it, with a world of opportunities to help teams build better apps.

I’m damn excited about this model!

]]>
<![CDATA[In this blog post I unfold the journey of Tuist from an open source tool to an open core business.]]>
A Mise formula for Swift Package development https://pepicrft.me/blog/2025/02/25/mise-formula-for-swift-packages 2025-02-25T00:00:00+00:00 2025-02-25T00:00:00+00:00 <![CDATA[

If you develop Swift packages on macOS and also support Linux, you might be interested in the following formula to easily build and test your packages both on the host OS (macOS) and on Linux.

Whenever I create a new package, I create two Mise tasks, mise/tasks/build.sh and mise/tasks/test.sh. Then, using usage annotations in the scripts, I can easily add a --linux flag to target a different OS:

#!/usr/bin/env bash
#MISE description="Build the project"
#USAGE flag "-l --linux"

set -eo pipefail

if [ "${usage_linux:-false}" = "true" ]; then
    # Prefer Podman when available; fall back to Docker otherwise.
    if command -v podman &> /dev/null; then
        CONTAINER_ENGINE="podman"
    else
        CONTAINER_ENGINE="docker"
    fi

    # Build inside a Linux container, mounting the package root and using a
    # dedicated build directory so Linux artifacts don't clash with macOS ones.
    $CONTAINER_ENGINE run --rm \
        --volume "$MISE_PROJECT_ROOT:/package" \
        --workdir "/package" \
        swiftlang/swift:nightly-6.0-focal \
        /bin/bash -c \
        "swift build --build-path ./.build/linux"
else
    swift build --configuration release
fi

And that’s it. You can now do:

mise run build # Build in the host (macOS)
mise run build --linux # Build in Linux
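
If you reuse the container-engine detection across several tasks, it can be factored into a small helper. This is just a sketch of that refactor, not part of the original formula:

```shell
#!/usr/bin/env bash
# pick_container_engine prints "podman" when it's installed,
# falling back to "docker" otherwise.
pick_container_engine() {
    if command -v podman >/dev/null 2>&1; then
        echo "podman"
    else
        echo "docker"
    fi
}

pick_container_engine
```

Each task can then do CONTAINER_ENGINE="$(pick_container_engine)" instead of repeating the if/else.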
]]>
<![CDATA[Learn how to easily build and test your Swift packages in the host OS, macOS, and Linux.]]>
If Swift app development teams had the data... https://pepicrft.me/blog/2025/02/25/missing-insights 2025-02-25T00:00:00+00:00 2025-02-25T00:00:00+00:00 <![CDATA[

When chatting with organizations that build native Swift apps with Xcode, a common denominator I’ve found is their limited insights into their development environment.

As you might know, solutions start with understanding. Without it, solutions are chosen blindly and based on assumptions.

The organizations that can afford it throw more engineering resources at the problem, hoping everyone will become more motivated and productive. Others might suggest, “Let’s adopt React Native because I’ve seen company X successfully implementing it.” Or one engineer might propose swapping the build system completely with something like Bazel.

But the data is there. It’s just not pleasant to obtain and work with.

Want to know more about .xcactivitylog and the .xcresult schema? Search for it on your preferred search engine. You’ll most likely end up finding some parser on GitHub. The community has to reverse-engineer these formats, just as we did with the .pbxproj format to build XcodeProj.
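
To give a sense of what extracting this data looks like in practice, here is a hedged sketch using Apple’s xcresulttool, which ships with Xcode (the exact subcommands vary across Xcode versions, and the bundle path below is hypothetical):

```shell
#!/usr/bin/env bash
# Hedged sketch: export an .xcresult bundle as JSON with xcresulttool.
# The bundle path is hypothetical; on non-macOS hosts this skips gracefully.
RESULT_BUNDLE="./Build/Result.xcresult"
if command -v xcrun >/dev/null 2>&1 && [ -d "$RESULT_BUNDLE" ]; then
    xcrun xcresulttool get --path "$RESULT_BUNDLE" --format json > result.json
    echo "exported result.json"
else
    echo "xcresulttool or bundle unavailable; skipping"
fi
```

From there, the JSON can be pushed to a server and aggregated over time, which is exactly the kind of plumbing nobody should have to rebuild from scratch.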

There’s an unfounded obsession with keeping everything close to Xcode, as if development didn’t extend beyond your local macOS machine and your Xcode.app instance. Look at Xcode Cloud—it’s built right into Xcode. Beautiful. But the capabilities of CI services haven’t fundamentally changed. We’re just doing the same things, or even worse, but now it’s official and native.

It feels as if we’re stuck on this idea of “it’s native” while watching our builds take an insane amount of time, or our flaky tests blocking people without really knowing how serious the problem is.

So Tuist is jumping in because no one else is:

  1. We’re collecting the data for you
  2. We’re making it accessible
  3. We’re helping you act on it
  4. And potentially, we’ll be the ones acting on it (e.g., auto-disabling flaky tests)

For the first point, we’re learning about those proprietary formats to collect the data and push it to a server. Yes! A server. We love native apps too, and we’ll build a native interface to the Tuist platform, but that should be an interface, not the core. A server also means a database, so you can understand how the data evolves over time, and the possibility to interact with other services, integrating with things that happen outside of developers’ environments and Xcode. We can’t disregard that there’s a lot happening out there, because otherwise we end up with a developer experience that’s lagging behind in a world where LLMs are challenging how we code.

We’re standardizing, documenting, presenting, and making the data available to you via an API. Your project, your data.

It’s that simple.

We’re just a group of developers passionate about this domain. It’s not our aim to monetize your data. Our business is helping you solve your problems. Period.

With all our learnings from past years and current interactions with customers, we’ll codify that knowledge into a dashboard and a set of tools to make development enjoyable.

I don’t understand why this hasn’t happened before, but on the upside, it’s a great opportunity for us to build something different—something focused on developer experience.

The data is there. We’re just going to use it.

]]>
<![CDATA[What are we waiting to help Swift app development teams be productive?]]>
CI is commoditizing https://pepicrft.me/blog/2025/02/24/ci-is-commoditizing 2025-02-24T00:00:00+00:00 2025-02-24T00:00:00+00:00 <![CDATA[

I remember coming across the idea of Mobile DevOps and wondering: what are they talking about? Isn’t Mobile DevOps just Fastlane? Why do we need a new term?

I later learned that it was just a marketing term, and a vendor-locking tool to prevent customers from switching to other tools in a market where there are already many options available. The same model is repeating across other CI companies that are rushing to build more than just CI. For example, have you noticed that they all rushed to present an App Center alternative?

Many tech companies, including those in the CI space, typically follow this pattern. They run the infinite game of venture capital. They follow the loss-leader script: first attracting and locking customers into a reasonably priced solution, and then squeezing as much money as possible from them. Product and innovation become secondary as they focus on the value-cost trade-off.

As someone who likes to see new ideas flourishing and long-standing challenges addressed, I find it sad.

But software and communities that might form around them can be catalysts for change, and I believe we are entering a bit of a wake-up call in this space. Let me unfold that for you.

GitHub Actions started changing the game. First, they built a solution where developers could build and share steps (i.e., GitHub Actions) with others. Sure, other platforms have this too. But think about it for a second. Would you rather have an action in a repository that’s part of your CV on GitHub, or commit it to a centralized repository that’s more distant from your GitHub profile? It’s a subtle but important difference.

They then integrated the UI (e.g., logs) very tightly into the platform. You don’t need to leave the platform to see the execution and results of your CI builds. It’s right there, in the same place where your code is. And because it’s so close, annotating PRs from your builds is as easy as following some conventions in your logs (e.g., workflow commands).
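
As a concrete illustration of those workflow commands (the file paths and line numbers here are made up), a build script only needs to print specially formatted log lines and the runner turns them into PR annotations:

```shell
#!/usr/bin/env bash
# GitHub Actions workflow commands: lines in this format become warning and
# error annotations on the pull request. Paths and lines are hypothetical.
echo "::warning file=Sources/App.swift,line=12::Consider handling this error"
echo "::error file=Sources/App.swift,line=40::Force unwrap of an optional"
```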

And last but not least, they embraced the freedom to bring your own hosts. This not only gave companies and developers the freedom to customize their setup, but also allowed companies like CirrusLabs or Depot to provide hosted runners.

So if your Git forge, most likely GitHub and GitLab, already provides a much better CI experience than other players, gives you access to a marketplace of steps (which, by the way, Dagger is commoditizing and making CI-platform-agnostic), and many companies are emerging to provide hosted runners, what are you left with as a CI company?

I’ll tell you… You need to sell the illusion that you provide more than what you really do, and hope that the years of investing in vendor lock-in will prevent companies from leaving you. Ahem, Mobile DevOps. But this model is falling apart. It’s a matter of time. And we’ll see more companies rethinking the idea that they are a CI company. Give it some time…

How are we doing things differently at Tuist? First, we are designing a company that fosters innovation over seeking value-cost tradeoffs. Innovation is in our DNA. And yes, this will mean we’ll make bets that won’t bring any return, but that’s fine. We want to bring some innovation to the CI space, and you’ll see that in the following months.

Second, we are embracing people’s freedom to choose. We are not about capturing as much as we can from the value that we generate. It’s about generating way more value than traditional businesses. And we are going to do that through open source, so we expect to open-source our server and everything that makes Tuist possible. We’ll charge for maintaining the services up and running, and the community will be able to contribute to the project, from the CLI to the server. This will make Tuist stand out over competitors that look at openness with fear. We believe this is also the path to get to Android, React Native, and Flutter.

And last but not least, we are placing a strong bet on making everything collaborative. From the moment we have an idea for something we’d like to build, it becomes a shaping process where the community participates. We want people to feel that Tuist is as much theirs as it is ours.

In the following months, we’ll see a lot of changes, not just in the CI space, but in app developer tooling, and we are going to drive the change at Tuist.

]]>
<![CDATA[It's time to rethink the CI market.]]>
Science macht besser https://pepicrft.me/blog/2025/02/19/surgery-day 2025-02-19T00:00:00+00:00 2025-02-19T00:00:00+00:00 <![CDATA[

It’s 7:00. In six hours, I’ll undergo a five-hour surgery. I’m excited, but at the same time, emotionally exhausted. I just want it to happen already and finally recover from the accident I had while jogging on September 21st.

I’ve spent all this time visiting countless specialists, many of whom were in Germany. I’d like to believe I was just unlucky, but truthfully, I felt abandoned. From MRIs that never happened because they were deemed “unjustified” to doctors telling my wife to take care of me so that I could lose weight and strengthen the muscles that might compensate for my torn ligaments.

“Natur macht besser,” a neurosurgeon said, suggesting I let a nerve—which hasn’t shown any signs of recovery—heal on its own.

Another doctor seemed more surprised by the fact that I had painted nails rather than my wife. And let’s not even talk about the dismissive comments:

  • “Forget about running again—only in the swimming pool.”
  • The casual bro-like chitchat about a German footballer who played most of his career without ligaments.

I would have appreciated a simple, honest response like:

“This is urgent, but I don’t have the resources, experience, or willingness to try a solution.”

That would have been completely fine. No one is an expert in everything.

But was it racism? I doubt it. I’d say it’s more about being on the conservative side of the medical spectrum, leading to a loss of surgical expertise. There was also a bit of the classic “I’ve never seen this before” mental short circuit. So what? The human body is complex. Interestingly, the people who couldn’t admit they didn’t know something were male doctors.

And then there’s the specialization issue—“I do muscles,” “I do bones.” Welcome to the inefficiencies of the system. As if dealing with the pain and the looming possibility of a nerve that won’t recover wasn’t enough, I also had to coordinate everything myself.

I’d be lying if I said this whole experience hasn’t made me question what I’m doing in Germany. Sure, I had noticed bureaucratic inefficiencies before, but when it comes to health, the stakes are much higher. My taxes feel worthless—at least on the healthcare front.

Here in Spain, everything feels so much easier. Yes, it helps that I’m a native speaker, but more than that, they guide you through the system until they find a solution. They take responsibility for coordinating your care. I’m going through the private system here—since it was the only way to find a specialist capable of handling my case—but the public system works the same way.

People actually want to help. They generally understand what you’re going through and do their best to make the process more bearable. This is how I always imagined healthcare should be, but Germany’s system failed me.

Wish me luck. I’ll be fasting for the next six hours, and if everything goes well, I’ll update you tomorrow.

Take care of yourselves.

]]>
<![CDATA[My experience with healthcare in Germany.]]>
Tuist’s vision for an open, cross-platform future https://pepicrft.me/blog/2025/02/14/open-source-business 2025-02-14T00:00:00+00:00 2025-02-14T00:00:00+00:00 <![CDATA[

A business model is the balance between producing and capturing value. Since open source is not a business model, it focuses entirely on producing value. In fact, it can generate significantly more value, and at a faster pace, than proprietary alternatives. However, as an open-source project gains popularity, maintenance costs increase, forcing projects to find ways to capture value in order to sustain themselves. That was the case with Tuist.

We started as a client-side, value-production-only solution, but as maintenance demands grew, we had to explore ways to capture value to sustain development. Due to the nature of our technology—where most of the value resided in the client—and the lack of a registered brand, we suffered from free riding. To protect the project’s future, we had to draw a line and partially close-source some of the client-side value. We have never regretted that decision, but we do miss being fully open source. Now, we’re going back.

Our goal is to become the go-to productivity platform for app development—not just for the Apple ecosystem, but also for Android, React Native, and Flutter. Achieving this with limited capital and resources is nearly impossible. However, by turning it into a community-driven effort, we can build a unique solution in the ecosystem, much like Grafana has for dashboards, Supabase for databases, or PostHog for analytics.

But where do we draw the line? You might wonder. The complexity of Tuist will lie in hosting—not because we’ll deliberately add complexity, but because it will be an inherent aspect of the platform. This complexity will become evident when we introduce remote execution and content-addressable storage (CAS). Additionally, we may differentiate our product by offering certain features exclusively for large enterprises, similar to GitLab.

Indie developers and small companies won’t concern themselves with hosting—it’s not their focus. Moreover, they won’t pay unless they use the platform extensively, in which case their payment will be proportional to the value they receive. Large enterprises, on the other hand, might opt for self-hosting but will be encouraged to purchase a license for access to enterprise-only features like SSO.

We don’t just want to build a developer productivity platform for app developers—we want it to be the most valuable solution. And let’s be honest: only open source can get us there.

]]>
<![CDATA[Open source is the only way to build the most valuable developer productivity platform.]]>
The missing commoditization https://pepicrft.me/blog/2025/02/06/missing-commoditization 2025-02-06T00:00:00+00:00 2025-02-06T00:00:00+00:00 <![CDATA[

Have you noticed that other ecosystems have surpassed Apple’s app development experience in terms of developer tooling? Take Vercel, for example—you can preview a website and add comments in just a few clicks. Or look at Replit, which lets you build and deploy a web app using LLMs in seconds. I can’t help but wonder: why isn’t there more innovation in this space? To be honest, bringing fresh ideas and doing things differently is what drives me at Tuist.

We’re still stuck with the same CI providers we used years ago, dealing with YAML pipelines that are a nightmare to debug, and locked into vendor ecosystems that make it hard to escape predatory sales practices. And what about all those old Ruby scripts no one dares to touch? “They work, so just leave them,” people say. Remote automation and CI are strongly coupled: you automate, push, and check if it works. Want something more intuitive, like a simple “click to release” feature? Not happening. Instead, you build in your CI environment and push the artifacts elsewhere. Need better insights into your projects? Here’s a scattered set of libraries to glue together, a server to maintain, or a build system to replace. Or even worse—an app that locks the data away, when all your manager really wants is a simple link to track team progress.

It’s too much plumbing. It’s not fun. And while Apple has made some progress here and there, they’re largely preoccupied with business priorities.

Ironically, the companies best positioned to enter uncontested markets—thanks to their IP and financial capital—are the ones suffering from the innovator’s dilemma. They’re too distracted trying to fit their products into the AI narrative. But real innovation starts by looking at problems from a fresh perspective, challenging the status quo, and reimagining what already exists.

At Tuist, we want to be a catalyst for innovation. Our approach? Commoditizing a foundational piece we believe will unlock new ideas: remote virtualization of development environments.

I know, that sounds abstract. But think about Linux containers or web browsers. You can launch software in a browser using WebAssembly or in a Linux machine with Docker. But macOS? Forget about it. We’ve wasted years reinventing virtualization—something every CI provider has had to solve. Yet, most see virtualization as just an enabler for CI services, rather than a tool to build new developer experiences.

Now, imagine if the cost of running virtualized environments dropped significantly, thanks to open-source technology and a simple system for orchestrating a pool of hardware. This isn’t a new concept—cloud providers already solved it for running apps and serverless functions. But no one has applied it to improving developer experiences.

Just yesterday, Vercel introduced a new compute model called Fluid. That's the kind of experience we're after: spinning up virtualized macOS environments seamlessly.

And then, the magic happens:

  • You can release an app with a single click from the list of commits on your Tuist dashboard.
  • You can run any workflow locally or remotely with just a flag.
  • You can sign your app remotely, letting us handle all the complexities for you.

We’re assembling the pieces to make this future a reality. I can’t wait to see Tuist help the Apple ecosystem catch up with the web. We’re making Tuist the go-to CLI for teams, building the best, most powerful, and beautifully designed dashboard that people love using. We chose Elixir for its robust runtime, which will be crucial in simplifying our solution without compromising power. And we’re developing a few missing technologies, which we’ll also open source.

We’re almost there. We just need to rethink the problem and bring down some costs to make it happen. Will it be expensive to develop? Oh yeah—especially for a small company like us. But once we release it, I bet Tuist will dominate these new spaces.

]]>
<![CDATA[I talk about the lack of innovation in the Apple ecosystem and how Tuist is reimagining developer experiences.]]>
Where does the value lie in open-source businesses? https://pepicrft.me/blog/2025/02/01/open-source-business 2025-02-01T00:00:00+00:00 2025-02-01T00:00:00+00:00 <![CDATA[

I’ve always enjoyed listening to or reading about open-source projects that find a path to sustainability. This morning, I listened to a podcast (The Business of Open Source) about xWiki, an open-source knowledge management tool for companies. It got me thinking about how open-source businesses require constant reflection on their core purpose and value proposition.

At Tuist, we believe that the best way to build a productivity platform is through openness. Openness fosters accountability, which keeps us aligned with our mission and pushes us to deliver our best work because everything we do is out in the open. However, openness also presents a challenge: where do we create business value, and how do we monetize it to ensure we can continue doing what we love?

Years ago, Tuist’s value was primarily in its project generation feature, which we now consider a commodity—a gift to the community. Building on that foundation, we introduced optimizations like selective testing and binary caching, which span both the client and server. This marked an inflection point for the business, as server-side value opened up new revenue opportunities. We’ve discussed eventually open-sourcing the server as well, but in its current state—where hosting is simple by design (a necessity given our small team)—organizations might choose to self-host without seeing the value in supporting us financially.

There are interesting models to consider, such as the Fair Source Licenses, but this would distance us from a pure open-source model. Another option is dual licensing, like GitLab, where a community edition is extended with paid features. However, I have mixed feelings about this approach because it adds complexity to a product that’s already pioneering a new model in the ecosystem. For instance, Tuist started as a CLI tool, but now there’s a community server and potentially an enterprise server? It feels like too much.

My hope is that, eventually, the value will lie in hosting, maintaining, and scaling the business. At that point, we could even adopt a free software license for the server. Large enterprises often perceive self-hosting as a risk or too complex, making them more inclined to pay for a managed solution. For smaller companies and indie developers, we could offer the service for free up to a certain usage threshold and charge only when usage exceeds that limit. There will always be a group of users who prefer to self-host, and that’s fine—they contribute to improving the product and even help with marketing.

xWiki took the path of monetizing through support, but we are a product-oriented company. Our goal is to find a balanced SaaS solution that maximizes openness while ensuring sustainability. The exact model will evolve as we learn and adapt to shifting value propositions. What’s clear to us is that we want to continue contributing open-source tools to the ecosystem while presenting an alternative model for building sustainable open-source businesses.

]]>
<![CDATA[I talk about where we think the value of Tuist lies, and how that's evolving towards a sustainable business model.]]>
Communities https://pepicrft.me/blog/2025/01/28/communities 2025-01-28T00:00:00+00:00 2025-01-28T00:00:00+00:00 <![CDATA[

A community is the greatest asset a company can have. Communities consist of people who are genuinely excited about what you’re building—so much so that they tell others about it, creating a contagious effect. The most successful tech companies today share one thing in common: they all have a strong community.

You don’t need to be open source to build a community. In fact, communities can form around a database hosting service or a subscription SDK. The key is to find something that brings people together. Whether it’s contributing to a shared cause, being part of a movement of indie developers, or simply rallying around a tool that directly impacts their bottom line, like the SDK they use, communities thrive on shared passion and purpose.

Few marketing books emphasize communities as a marketing tool, but let me tell you this: it’s the most powerful marketing tool you can have. Recognizing this requires courage—especially when the easier route is to spend money plastering your company’s logo on newsletters or conference banners. The moment you stop spending on those efforts, the marketing impact disappears. While paid promotions might increase brand awareness, they’re nowhere near as impactful as having a community that’s genuinely excited about what you’re doing and talking about it organically.

Paying people to talk about you isn’t effective marketing. It’s marketing from a position of privilege, where money is used to try to buy influence. A far better approach, inspired by the constraints of limited capital, is to invest your human capital into building a community of people who are genuinely excited about what you’re creating. It’s about being human, fostering connections, and creating something people truly care about.

At Tuist, we’re constantly exploring ways to grow and nurture our community because we firmly believe it will be the key to building a lasting product that stands the test of time. Open source contributions are one of the pillars of our community, but we don’t want that to be the only one. We’re considering developer experience (DX) excellence as another pillar, much like Linear has done with their focus on UI and UX craftsmanship. This is why we’re investing in a design system. Additionally, we’re committed to supporting the Swift ecosystem by contributing to shared resources and initiatives—like our “Swift Stories” newsletter—without expecting anything in return.

Many people may not know about us yet, but that’s a feature, not a bug. We believe building a strong community takes time, and we’re convinced that taking shortcuts would only harm the health and authenticity of the community we’re striving to create.

]]>
<![CDATA[Communities are the best tool to build long-lasting companies.]]>
Tuist is evolving https://pepicrft.me/blog/2025/01/26/tuist-is-evolving 2025-01-26T00:00:00+00:00 2025-01-26T00:00:00+00:00 <![CDATA[

As we continue to build a company around Tuist to build more open source sustainably, we are shifting mental models that had solidified in developers’ minds. It’s a natural part of evolving a product, but it might be confusing if you are not close to the evolution. Here’s a summary of what’s happening:

  • Tuist is not just a project generator. Project generation is one of several solutions we offer. We see it as a component, a commodity that we might extract from Tuist once we have the resources to execute on it.
  • The CLI is becoming more of a frontend to a development platform, but it won’t be the only one. There’s a web-based dashboard and a macOS app, and thanks to our documented REST API, other frontends might emerge.
  • We are removing the dependency on generated projects to benefit from Tuist’s solutions. Our long-term goal is that you can plug Tuist into your Xcode projects or Swift packages as they are.
  • Our focus continues to be on development productivity, although there are early thoughts about supporting teams more directly with the quality of the apps they build. We are also expanding our focus to support developers from the moment they have an idea until they publish it on the store.
  • We started with a strong focus on the Apple ecosystem, but the problems we solve span other ecosystems, so we might expand to Android, React Native, and Flutter in the near future.
  • There continues to be an appetite to make the server code source available. We might execute on that if the business thrives. We believe this model will lead to the best and most diverse solution in the space.

Tuist will look more like Vercel or Expo. Because developers will spend a lot of time in the dashboard understanding, interacting with, and optimizing their development, we deem it crucial to provide them with the best experience there. Therefore, we are designing a beautiful design system, Noora, that all the features will build upon. We want to build the best DX and the most beautiful developer tool for all developers, while building a company around it that embraces innovation and openness in ways that we haven’t seen before in the ecosystem.

]]>
<![CDATA[In this blog post I talk about how Tuist is evolving and how some mental models are transitioning.]]>
Whom do you trust? https://pepicrft.me/blog/2025/01/26/whom-do-you-trust 2025-01-26T00:00:00+00:00 2025-01-26T00:00:00+00:00 <![CDATA[

I’ve become somewhat disillusioned with the tech industry. I conceive of technology as a tool to support us, humans, whether by giving us a tool to write down our notes, or a framework to create a website like the one I’m writing on. But when embedded in capitalism, which is what the tech industry is, it can become the least human thing. This has become more evident to me, especially with the recent events in the US. It’s all about extracting as much monetary value as possible from people. We are seen as exploitable resources.

The most obvious example of this, which I’m extremely annoyed by, is the dopamine addiction that social networks have created. It’s a silent pandemic. When are we going to stop that? Another one is the ad industry. Yesterday I was watching a movie on one of those streaming platforms where I have a subscription, and despite paying for it, I was bombarded with ads. Then there’s this whole AI hype. I get it, LLMs are useful for summarizing, but the environmental cost for society is something we can’t ignore. Or most recently, local apps, like note-taking tools, that suddenly become subscription-based, or use proprietary formats to lock you in.

It doesn’t need to be like that, but those patterns are becoming more common. I strongly believe we can reverse that trend. For a long time I thought open-source was the answer, but after seeing what WordPress is turning into, I’m not so sure anymore. Governance and openness are two different things. I use Obsidian to write my notes, which is a closed-source tool. I get why they develop it that way (something I learned through Tuist): if you make a purely client-side tool open source, there’s no way you can monetize it to make its development financially sustainable. But what I like about them is that by design, and through their embrace of standards, they give you the freedom to leave. And that’s the thing: you really can build a business around a technology and respect people’s freedom and rights at the same time.

So, whom do you trust? It’s hard to say. It’s not an easy question because you don’t have all the information. But before betting on a particular technology, I do a bit of research to understand its design principles (e.g. standards over proprietary technology), its openness, its governance model, and its funding structure (the most difficult to find). And then, based on that, I make a decision. I sometimes wish governments did that for you. I can do that because I have the privilege of having learned about these things through my close relationship with the tech industry. But if you don’t, you should know that you might end up mentally sick if you use Instagram and TikTok without limits, in the same way you are reminded that alcohol and tobacco are harmful. But as I said earlier, if anything, with Trump’s presidency this is becoming even more unregulated.

I’ll do what I can in my circles and with Marek through our work on Tuist. We don’t need to be like everyone else and jump on the bandwagon. We can build human and long-lasting technology, and not put people in the position of exploitation.

]]>
<![CDATA[It's hard to trust solutions in the tech industry these days. In this blog post I talk about my disillusionment with the tech industry and how I make decisions about the technologies I use.]]>
A convention for frictionless reproduction projects with Mise https://pepicrft.me/blog/2025/01/11/mise-reproduction-projects 2025-01-11T00:00:00+00:00 2025-01-11T00:00:00+00:00 <![CDATA[

When reporting issues to FOSS projects, developers are often asked to provide a minimal reproduction project. These usually come as a pair: a tar file and a set of steps to reproduce the issue, some of which might involve installing and activating specific versions of dev tools (e.g. “I was using Rust version 1.82.0”). Maintainers then have to pull the tar file, extract it, and go through the steps, ensuring they have the same environment as the reporter. Not only does this add friction to the reproduction process, it also makes it prone to version inconsistencies, which can lead to additional back-and-forth between the reporter and the maintainer. Imagine we could simplify the process down to a single command:

mise run reproduce

Yesterday, while tinkering with some Mise features, I realized Mise has the ingredients to make this possible:

  • The capability to install and activate specific versions of dev tools.
  • An API to define and run tasks.

Let’s see how we can leverage Mise to create a convention for frictionless reproduction projects.

Creating a reproduction project

  1. Create a new directory: mkdir project.
  2. Create a .mise.toml file with the following content:
[tools]
# Add your tools here

[hooks]
enter = "mise install"
  3. Create a .mise/tasks/reproduce.sh file with the reproduction steps:
#!/usr/bin/env bash
# mise description = "Reproduce the project"

# Mise tasks: https://mise.jdx.dev/tasks/file-tasks.html
# Note the root directory is denoted by the env. variable $MISE_PROJECT_ROOT
# Your reproduction steps go here

Make the task file executable with chmod +x .mise/tasks/reproduce.sh, since Mise only picks up executable file tasks. And that’s all you need to create a reproduction project with Mise. The maintainer can pull the tar file, extract it, and run mise run reproduce to reproduce the issue.
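The steps above can also be scripted. Here’s a minimal sketch; the directory name mise-repro and the placeholder echo step are illustrative, not part of any convention:

```shell
#!/usr/bin/env bash
# Scaffold a Mise-based reproduction project following the convention above.
set -eu

mkdir -p mise-repro/.mise/tasks

# Pin the dev tools the issue depends on; the enter hook installs them on cd.
cat > mise-repro/.mise.toml <<'EOF'
[tools]
# e.g. rust = "1.82.0"

[hooks]
enter = "mise install"
EOF

# File task encoding the reproduction steps.
cat > mise-repro/.mise/tasks/reproduce.sh <<'EOF'
#!/usr/bin/env bash
# mise description = "Reproduce the issue"
# Your reproduction steps go here, e.g.:
echo "Reproducing from $MISE_PROJECT_ROOT"
EOF

# Mise only picks up executable file tasks.
chmod +x mise-repro/.mise/tasks/reproduce.sh

echo "Done. Tar up mise-repro/ and ask the maintainer to run: mise run reproduce"
```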

A ready-to-use gist

I’ve created a gist with a script that automates the steps above. You can easily run it with the following command:

curl https://gist.githubusercontent.com/pepicrft/12a2fc6433338489888d660d66d8d0b1/raw/907f33c43287c19162c79b2426015c516d4c3cd7/reproduce.sh | sh
]]>
<![CDATA[I just realized Mise has the ingredients to make reproducing issues a breeze, and this is how you can leverage it.]]>
Reverting some recent social behavioural and emotional patterns https://pepicrft.me/blog/2025/01/04/reverting-social-behaviours 2025-01-04T00:00:00+00:00 2025-01-04T00:00:00+00:00 <![CDATA[

I’ve noticed that my interactions and publications on social networks these days tend to be pessimistic, accompanied by anger, frustration, and competitiveness. Overall, they seem to trigger primitive behaviors in me that don’t feel natural when I’m outside that context. Unfortunately, this behavior has started to leak into the offline world.

It wasn’t always like this. I used to be quite positive and enthusiastic—a friend to friends, supportive in my interactions, and always leaning toward collaboration and doing things together. I don’t think that part of me is gone; it’s just overshadowed by these other behaviors and more primitive feelings.

It’s hard to pinpoint where this shift began. I believe social networks’ optimization for extracting these behaviors from people might have played a role. My previous working environment, which was filled with certain toxic patterns, also influenced me because I didn’t know how to set boundaries at the time. Then, of course, there was the pandemic, which isolated all of us, and let’s not forget the many winters in Berlin—those probably didn’t help either.

But here’s the thing: I believe this is something I can change, and I’ll start by leaning into positivity and constructivism. Sure, the world isn’t perfect, and there are things we’d rather not think about, but if we all collaborate more and push back against the individualism that’s been ingrained in us, I think beautiful things can happen. I want to be a catalyst for that change.

Bear with me—it’ll take time—but I really want to do this because it’ll improve my overall well-being. Being a grumpy person on the Internet hurts me deeply in my personal life, and I don’t want that anymore.

]]>
<![CDATA[I reflect on the negative behavioural and emotional patterns social networks have instilled in me, and how I plan to revert them.]]>
From CLI to platform https://pepicrft.me/blog/2024/12/30/from-cli-to-platform 2024-12-30T00:00:00+00:00 2024-12-30T00:00:00+00:00 <![CDATA[

One of the aspects I enjoy most about our work with Tuist is how we continuously evolve the toolchain as we deepen our understanding of the problem space and incorporate feedback from the community. It’s an infinite game—a journey of constant learning and improvement.

Tuist has gone through three major phases:

  1. Xcode Project Generator: What we now call Tuist Projects.
  2. Xcode Project Manager: Introducing CLI commands like tuist run and tuist graph.
  3. Xcode Project Optimizer: Featuring tools such as Tuist Cache and Tuist Selective Testing.

Now, we are entering a new phase. Tuist is becoming a platform—a cohesive story that ties all the pieces together. Imagine a Vercel or Expo for Swift app development. A platform that guides you from the spark of an idea to scaling your app for millions of users. A trusted partner empowering you to build the best apps faster.

We aim to streamline the toolchain by peeling back years of accumulated layers of indirection. Our ultimate vision reduces the stack to just GitHub (or any Git forge), Tuist, and Xcode. Tuist will act as a platform integrated into your repository, making the magic happen. And by magic, we mean pure joy:

  • Signing? We handle it seamlessly—it just works.
  • Remote builds? Done in our environments, hassle-free.
  • Analytics? Integrated with ease. Grafana? Plug it in.
  • Releases? Managed through a user-friendly UI.

Think of Tuist as an extension of Xcode that bridges the gaps and connects the community with Apple’s ecosystem. Together, we’re building the future of app development.

]]>
<![CDATA[We are evolving Tuist from a CLI to a platform and in this blog post I share some thoughts on how we are doing it.]]>
Learnings from logging in a Swift CLI https://pepicrft.me/blog/2024/12/24/clis-ui 2024-12-24T00:00:00+00:00 2024-12-24T00:00:00+00:00 <![CDATA[

When building CLIs, it’s common to conflate UI and logging. That’s something we did at Tuist, most likely because both deal in text. So it’s natural to think that they should be the same thing, or that UI is a subset of logging. However, as we worked towards improving the developer experience of the CLI, we realized that it’s better to treat them as separate elements, even though they both work with text and there might be some overlap.

Logging is useful to debug the execution of the program, especially in situations where an invocation didn’t yield the expected result. Although often those logs are forwarded through the standard pipelines, they don’t need to be. They can be forwarded to OSLog or to a file in the file system. Apple’s Swift Log package is designed with this in mind and allows setting up what they call logging backends or handlers.

At Tuist, we dynamically plug in a backend based on the user’s preferences at invocation time. The preferences are modeled on two variables: how quiet or verbose they want the logs to be, and where they want them to go.

This leads to the following scenario. The default logging configuration pipes out logs (except the verbose ones) through the standard pipelines, so developers have a sense of progress without too much noise. The problem? If things fail, developers need to run the command again, opting into verbosity with --verbose. But what if the issue is not easily reproducible, and that was the only opportunity to capture what happened? Well, the opportunity is gone. Plus, having to run the same command again just to see it fail with more detailed logs is not the best experience.

I think we should approach this differently. The default is right. You want to see a concise output that indicates how things are progressing throughout the execution. However, we should also have a second handler/backend that forwards the most verbose version of the logs to OSLog and to a file in the file system. Why? Because on completion, you can point people to the logs, and they can use the filtering tools provided by the Console app to get what they need. By doing that, once it completes, if you need the logs for anything—for example, to debug a failure—you have the link to the file right there, so you don’t need to run it again.
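With Apple’s swift-log package, this dual-backend setup could be sketched roughly as follows. Note that swift-log does not ship a file-backed handler, so the FileLogHandler below is a hypothetical minimal implementation, and the labels and messages are illustrative:

```swift
import Foundation
import Logging // Apple's swift-log package, an assumed dependency

// A simplified file-backed LogHandler; a real one would handle
// metadata, timestamps, and buffering.
struct FileLogHandler: LogHandler {
    let fileHandle: FileHandle
    var logLevel: Logger.Level = .trace // capture everything, including verbose logs
    var metadata: Logger.Metadata = [:]

    subscript(metadataKey key: String) -> Logger.Metadata.Value? {
        get { metadata[key] }
        set { metadata[key] = newValue }
    }

    func log(level: Logger.Level, message: Logger.Message,
             metadata: Logger.Metadata?, source: String,
             file: String, function: String, line: UInt) {
        fileHandle.write(Data("[\(level)] \(message)\n".utf8))
    }
}

// Write the verbose log to a temporary file.
let logFileURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("tuist-\(UUID().uuidString).log")
FileManager.default.createFile(atPath: logFileURL.path, contents: nil)
let handle = try FileHandle(forWritingTo: logFileURL)

// Bootstrap both backends: concise output on stdout, full verbosity on disk.
LoggingSystem.bootstrap { label in
    var console = StreamLogHandler.standardOutput(label: label)
    console.logLevel = .info // keep the terminal quiet by default
    return MultiplexLogHandler([console, FileLogHandler(fileHandle: handle)])
}

let logger = Logger(label: "dev.tuist.cli")
logger.info("Generating project")        // reaches the terminal and the file
logger.debug("Resolved 42 dependencies") // only reaches the file
print("Full logs: \(logFileURL.path)")
```

On completion, the CLI can print the path to the verbose log so a failed run never has to be repeated just to capture details.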

Is that enough? I don’t think so. I started the blog post mentioning that we conflate logging and UI, but that they should be different things. When I think of logs, I think of traces that tell a story of how things are being executed. But where does an interactive prompt fit into it? It’s not a trace. You use the terminal capabilities with cursors to make it feel interactive, but from the logging perspective, you are only interested in two things: something is being prompted, and the user responded to the prompt. So in non-interactive CLIs, you might just merge UI into logging, but in more interactive CLIs like Tuist’s, I think it’s better to treat it as something independent.

UIs are for developers using the tool. Logs are usually for developers debugging the tool. The needs are different. When you design the text to be an output for the user, there are traits like formatting and spacing that are very important. These traits are not relevant in the context of logs. All you care about in logs is understanding the sequence of events. Therefore, separating the two things forces you to think more deeply about the presentation layer of your CLI. I like to say that text output is the UI layer of CLIs—SwiftUI, if you will. The UI is something you might also be interested in testing with snapshot testing techniques, in the same way you do with your SwiftUI views. It’s tightly connected to the DX, and you don’t want it to be an afterthought.

So at Tuist, we are correcting the course. We are drawing a line between logging and UI, plugging our verbose logs into OSLog, and revisiting the UI of every component to ensure the experience of each command, from the UI standpoint, is the most consistent and beautiful that we can ship.

]]>
<![CDATA[Some thoughts on how to treat logging and UI in a Swift CLI.]]>
Us or them https://pepicrft.me/blog/2024/12/19/us-or-them 2024-12-19T00:00:00+00:00 2024-12-19T00:00:00+00:00 <![CDATA[

Continuing with my thoughts on building a company, I want to discuss a pattern I’ve noticed among companies and influencers, and how we are shaping Tuist to do things differently.

Social networks have made us more individualistic. Much of the content shared online revolves around “me”: the lessons I’ve learned, the products I’ve built, the places I’ve traveled to… It’s as if these platforms have hit the narcissism button, and we’ve forgotten the value of collaboration. Interestingly, research shows that collaboration leads to better outcomes. However, when the primary goal is to feed an algorithm designed to capture attention, collaboration might feel like a wasted effort.

Unfortunately, this pattern is also apparent in how many tech companies operate today. Months are spent working behind closed doors on what could be the next big success, only to emerge and hope for “product-market fit” to materialize. It’s like throwing a dart and hoping it sticks. They buy expensive domains, design flashy websites, hire influencers to promote their product, and hope that self-promotion will eventually lead to success. Even small wins are often packaged into narratives to attract more capital—much like how chickens are overfed to grow rapidly. The result is either building a big business or failing fast.

What surprises me is how secondary the actual problem—and the people experiencing it—has become in this equation. Many companies are busy building “serverless X” without truly addressing what it means or whom it serves. It’s reminiscent of the clickbait culture on YouTube, where influencers use catchy thumbnails and titles to reel you in. Companies employ similar tactics: “Pay for what you use.” Sounds appealing, doesn’t it? Yet, you might end up paying more without even realizing it. This lack of ethics is becoming normalized—anything goes as long as it stays within legal boundaries.

Building a different kind of company requires only a small yet significant shift: focusing on people instead of the company itself. But this isn’t an easy shift. It demands overcoming ego and suppressing narcissistic impulses. It requires listening, becoming a platform for others, and enabling their success. It’s costly because it introduces a social component to the business, which might feel unfamiliar to more logical thinkers. Yet, the value it brings far outweighs the costs.

When I think of one of Tuist’s key strengths, it’s our strong focus on people. These values stem from our open-source roots and permeate how we shape the business. Every decision we make prioritizes what the community wants, rather than what we want to sell. For example, in 2025, we’re launching a newsletter, Swift Stories, to curate ideas from the community that might otherwise go unnoticed. We open-source components like XcodeGraph and XcodeProj to empower the community and foster innovation, rather than keeping them private as competitive advantages. Another example is our localization efforts—seemingly small initiatives that make our project more inclusive and accessible, but carry significant meaning.

When you focus on people, remarkable things happen—things that aren’t often talked about.

  1. You don’t have to search for “product-market fit.” By creating a safe space, people naturally share what they need and what they’d pay for.
  2. Customers gravitate toward you because they’re inspired by your values and prefer supporting you over competitors.

Every customer we’ve gained so far has come from our community. Some have even told us they’d prefer a CI service from us over others simply because of our approach. This shift in focus has a profound impact. For example, our most recent blog post from Trendyol happened without us prompting them. When things occur organically, they carry a unique energy that money can’t replicate.

If you’re building a company, I encourage you to put people first. It’s challenging and may feel like swimming against the current, but believe me—people value connection and collaboration. They’ll appreciate your authenticity and the unique way you do things.

]]>
<![CDATA[Focusing on others over self-promotion is a powerful way to build a company.]]>
The paradox of OSS https://pepicrft.me/blog/2024/12/16/oss-paradox 2024-12-16T00:00:00+00:00 2024-12-16T00:00:00+00:00 <![CDATA[

Building open-source software (OSS) is a privilege. You need a very precious asset that’s scarce in the world: time. Some people have it, for whatever reason—that’s outside the scope of this post. Others don’t, yet from an unprivileged position, they still manage to squeeze out some time to contribute to OSS. They often do so because they believe in its value and the freedom it brings to the world.

Open-source is better than closed-source software. We’ve learned that story throughout the history of software. Yet we get irritated when open-source maintainers mention the word “sustainability”. Let’s pause for a moment.

If software is closed-source, which comes with high risks for the person or company using it, people are comfortable not only with the idea of paying for it but also with being a product of the company. But if the software is open-source, which significantly reduces those risks, people are rarely comfortable with the idea of paying for it.

The paradox of OSS

Let’s break down this paradox in the context of developer tools.

You are willing to pay for a service to bring CI/CD to your project. You come across a closed-source service that everyone talks about. They don’t list the price on their website, and after some back and forth with the sales team, you end up paying an expensive subscription. Your product leans on proprietary design decisions because you are a product of the company.

Your entire company has migrated to this service, and two years down the line—in the spirit of meeting their VCs’ expectations and knowing that moving away would be extremely difficult—they multiply the price by 10.

Back then, you had the alternative of paying for the cloud service of an open-source project. They offered a cloud plan to sustain the development of the project. But you could not wrap your head around the idea of paying for something whose source code is available. Not many people were talking about it at the time, so it felt more natural to pay for that flashy closed-source service that could afford to pour millions into marketing and associating their brand with the open-source community.

Now you find yourself having to decide between paying 10 times more for the same service or stopping business development for months to migrate to a service you could have paid for initially.

Supporting OSS Sustainability

If you come across an open-source project figuring out sustainability, offer them a hand. Sustainability can take many shapes:

  1. It can be an AGPL-3.0 license with a dual-license for enterprise. And despite what you’ve heard, AGPL-3.0 is not a viral license. Businesses want you to believe that, but a license only enters into effect when you distribute the software, which is not the case when you use it internally. In case of doubt about what “distribution” means in this context, the developers behind the project are usually happy to clarify.
  2. Sustainability can also mean building cloud services, like we are doing with Tuist. If we were privileged enough to have the time to build open-source without worrying about money, we’d happily do it, but unfortunately, that’s not the case.

We need to draw a line between what’s free and what’s paid. We do this for three simple reasons:

  1. We want the project to thrive and continue supporting users.
  2. We want to do more open-source work to advance the Swift ecosystem.
  3. We want to move the ecosystem away from closed-source tools.

Making informed decisions

When choosing a tool, consider the following:

  1. Open-source, even if not fully, is better than closed-source. You eliminate risks for your business.
  2. If you can afford it, pay for the open-source tools you use. It’s a way to support the developers, the ecosystem, and even your brand.
  3. If you can’t afford it, contribute to the project in other ways. Report bugs, write documentation, or spread the word about it.

We are not privileged enough to have the time to build open source without worrying about money. But we hope to reach a point where we don’t have to think about finances, so we can gift more open-source tools to the Swift ecosystem and foster innovation through the commoditization of the tools we use to build software.

]]>
<![CDATA[In this post, we discuss the paradox of open-source software and how you can support its sustainability.]]>
If it can be open-source, it'll be open source https://pepicrft.me/blog/2024/12/11/it-ll-be-open-source 2024-12-11T00:00:00+00:00 2024-12-11T00:00:00+00:00 <![CDATA[

Whether businesses like it or not, open and collaborative approaches to building technology are the foundation for creating long-lasting solutions. In other words, if a business’s value lies solely in closed-source software and nothing else, chances are it will be disrupted by a competitor leveraging open-source software to build a better product.

I emphasized “solely” because some businesses provide value beyond their software, either through infrastructure or connections to external systems that are difficult to replicate. For example, an open-source e-commerce platform that competes with Shopify can be built. In fact, it exists. But what about card readers? Payment processing? Taxes?

On the other hand, Calendly? Done. DocuSign? Done too. GitHub? Absolutely—here’s one example, and another. Even CI, telemetry, and analytics have open-source counterparts.

What do all these successful open-source products have in common? Something money can’t easily buy: stronger communities and better products. When you embrace open collaboration and welcome ideas from diverse perspectives, your product becomes more inclusive and innovative.

The alternative? Designing what you think is the best product from a conference room in San Francisco, assuming it solves problems for people in South Korea. Spoiler alert: it probably doesn’t.

And here’s the best part: AI is reducing the cost of maintaining open-source software, a cost that has traditionally been higher than developing behind closed doors.

At Tuist, we want to set a precedent in the Apple developer tooling space. Too often, closed-source and proprietary solutions emerge,
plastering “contact sales” buttons on their websites and finding shady ways to lock developers into their ecosystems. But it doesn’t have to be this way.

We’re committed to building open solutions for scaling challenges and making them accessible to everyone. We’ll continue to commoditize tools—not just for accessibility but to push the ecosystem to innovate. Because we believe the time for closed-source developer tools is over.

]]>
<![CDATA[Open source business models are the future of software development.]]>
Programming languages are just tools https://pepicrft.me/blog/2024/12/02/programming-languages-are-tools 2024-12-02T00:00:00+00:00 2024-12-02T00:00:00+00:00 <![CDATA[

I came across You Have One Voice, and it made me think about how my relationship with technology is evolving.

When I was younger, I was very passionate, sometimes even religious, about Swift. You know when you try to take a language everywhere? That was me. However, as I’ve grown older and worked in other technology stacks, I’ve realized that Swift is just a tool. I built CLIs with Ruby, then with JavaScript and Node.js, and tinkered with web technologies like Ruby on Rails in between.

This distancing from Swift felt at times like a betrayal and a loss of identity, and at other times like a liberation. My wife loves to say that learning spoken languages opens your mind, and I think it applies to programming languages as well. I have my opinions and preferences, and I love to learn from the communities around them, but I’m not married to any of them. I see them as tools to solve problems.

The mistake that I sometimes make, which is mentioned in the blog post and which I should work hard to stop making, is comparing languages and commenting on how one is better than another. It’s just a subjective opinion, and it’s not helping anyone. All programming languages have their pros and cons, and they are surrounded by social and technical contexts that make them unique.

The world is better off if we all work on making our tools better and cross-pollinate ideas between communities. So I’ll stop comparing languages and commenting on how one is better than another. If I love something about Elixir, I’ll say it out loud, because I think it’s important to share the joy, but I won’t do so at the expense of another language.

]]>
<![CDATA[I'm working on seeing programming languages as tools to solve problems, and not engaging in comparing and commenting on how one programming language is better than the other.]]>
Growth, growth, growth https://pepicrft.me/blog/2024/11/28/growth-growth-growth 2024-11-28T00:00:00+00:00 2024-11-28T00:00:00+00:00 <![CDATA[

We’ve become obsessed with growth in building businesses. It has turned into the metric that most aspire to: more users, more revenue, more investment. If you’re not growing, it’s seen as a sign you’re doing something wrong.

But when growth becomes the ultimate goal, the people you’re building for often become an afterthought. Instead of creating for them, you use them as tools to achieve growth. This dehumanizes them, leading to strategies that often feel exploitative.

For example, some businesses publicly criticize others’ work to exploit “schadenfreude.” They actively look for people making mistakes to capitalize on the audience’s curiosity for negativity—using it as a marketing tool. How crazy is that? Sure, anything is possible in business, but come on—it’s perfectly fine to grow more slowly if it means contributing to a better, healthier society instead of making it worse.

Another practice I find exploitative is what I call “open-washing.” Some companies, desperate for shortcuts to growth, think they can buy everything with money—including the intangible assets of open-source projects, such as their brands and communities. They treat these assets as mere marketing tools. They throw a few crumbs in the form of donations or sponsorships and then loudly claim to be “an open-source company.” It’s not much different from brands sending cheap promotional items to influencers, hoping for exposure. This is capital-rich companies taking advantage of the human-centered work of others who truly know how to build and nurture communities.

This insatiable appetite for growth shows up in many areas—not just marketing. Products are riddled with “contact sales” buttons, proprietary technologies designed to create vendor lock-in, and intentionally opaque pricing models that exploit customers’ negotiation skills. The list goes on. Sadly, when you search for companies that define success differently, the options are few and far between. These companies exist, but they’re rare.

We could follow these same exploitative paths ourselves, but we refuse to fuel those strategies. In fact, we’re actively working to reverse these models. We believe we need healthier, more human ecosystems—where individualism gives way to community, proprietary technology is replaced by open solutions, and people’s agency to choose the best tools for their needs is respected rather than manipulated.

This shift is only possible if we prioritize people’s joy over growth—and that’s exactly what we’re doing at Tuist.

]]>
<![CDATA[Growth is a metric many businesses aspire to, but it's not the model that we align with at Tuist. We believe in prioritizing people's joy over growth.]]>
Setting up a Forgejo runner in Hetzner https://pepicrft.me/blog/2024/11/24/forgejo-runner-in-hetzner 2024-11-24T00:00:00+00:00 2024-11-24T00:00:00+00:00 <![CDATA[

Weeks ago, I wrote about configuring a Hetzner server with Woodpecker to run CI jobs. At the time, I was not aware of the existence of Forgejo actions, which are integrated right into the open source Git forge that powers Codeberg, where I’m currently hosting my personal website and code crafts. After giving it a try, I decided to switch to Forgejo actions for my CI/CD needs. This blog post is the counterpart to the previous one, where I’ll show you how to set up a Forgejo runner in Hetzner. Let’s dive in!

The first thing you’ll need is a Hetzner server, although any provider that gives you a Linux machine with SSH access will work. In my case, I selected the x86 (Intel/AMD) CX22 machine in Germany with Ubuntu 24.04. Note that the steps that follow assume Ubuntu as the operating system.

Once the machine is up and running, SSH into it using its IP address and the root user. Once in, install Docker, which Forgejo runner can use to virtualize the execution of CI/CD pipelines:

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

You can verify that Docker is installed correctly by running:

sudo docker run hello-world

Then run the following command to ensure the engine starts on boot:

systemctl enable docker

Install the Forgejo runner

The next step is installing the Forgejo runner.

sudo apt-get install -y curl jq wget
RUNNER_VERSION=$(curl -s https://code.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq -r '.tag_name' | cut -c 2-)
wget -O /usr/local/bin/forgejo-runner "https://code.forgejo.org/forgejo/runner/releases/download/v$RUNNER_VERSION/forgejo-runner-$RUNNER_VERSION-linux-amd64"
chmod +x /usr/local/bin/forgejo-runner

Once the runner is installed, you’ll have to register it. To do so, you’ll need a registration token, which you can get from your user or organization’s settings on Codeberg under the “Actions” tab. Then you can run /usr/local/bin/forgejo-runner register, which will guide you through the registration process.

INFO Enter the Forgejo instance URL (for example, https://next.forgejo.org/):
https://codeberg.org
INFO Enter the runner token:
xxxxxxxxxx
INFO Enter the runner name (if set empty, use hostname: ci):
ci
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm):
ubuntu-22.04:docker://node:20-bullseye
INFO Registering runner, name=ci, instance=https://codeberg.org, labels=[ubuntu-22.04:docker://node:20-bullseye].
DEBU Successfully pinged the Forgejo instance server
INFO Runner registered successfully.

The registration creates a .runner file in the current working directory, which, unless you’ve changed directories, will be /root. The file contains the runner’s registration configuration.

You can then run the following command to create a configuration file to configure the runner’s runtime behavior:

/usr/local/bin/forgejo-runner generate-config > config.yml

You might want to modify the runner.capacity attribute from the configuration file to specify the maximum number of jobs the runner can handle concurrently.
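For reference, the relevant section of the generated file looks something like the excerpt below (a minimal sketch; the capacity value is an example, and the rest of the generated defaults should be kept):

```yaml
# Excerpt of /root/config.yml generated by `forgejo-runner generate-config`.
runner:
  # Maximum number of jobs this runner executes concurrently.
  capacity: 2
```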

The last step is to configure the runner as a systemd service, which allows it to start automatically when the machine boots up. You’ll have to create the following file at /etc/systemd/system/forgejo-runner.service:

[Unit]
Description=Forgejo Runner
After=network.target

[Service]
WorkingDirectory=/root
ExecStart=/usr/local/bin/forgejo-runner daemon --config /root/config.yml
Restart=always

[Install]
WantedBy=multi-user.target

Once the file is created, you can enable and start the service by running:

systemctl daemon-reload
systemctl enable forgejo-runner
systemctl start forgejo-runner

After completing all the above steps, you should see the runner showing up in the “Runners” tab of your Codeberg organization or user’s “Actions” settings.
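To try the runner out, add a workflow to one of your repositories. Forgejo Actions use a GitHub Actions-compatible syntax; here is a minimal sketch (the file path follows the Forgejo convention, and the runs-on value assumes the label registered above):

```yaml
# .forgejo/workflows/ci.yml
on: [push]

jobs:
  hello:
    # Must match one of the labels the runner was registered with.
    runs-on: ubuntu-22.04
    steps:
      - run: echo "Hello from the Forgejo runner"
```

Push a commit, and the job should appear in the repository’s “Actions” tab and be picked up by your runner.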

]]>
<![CDATA[Learn how to configure a Hetzner server as a Forgejo runner to run CI/CD jobs for your projects hosted on Codeberg.]]>
Tuist is a product https://pepicrft.me/blog/2024/11/22/tuist-is-a-product 2024-11-22T00:00:00+00:00 2024-11-22T00:00:00+00:00 <![CDATA[

I listened to a podcast interview with Solomon Hykes, and I liked it a lot. Many of the points he touched on are things we’ve experienced or discussed in shaping the Tuist product. It was great to hear and learn from the experiences of someone else.

Solomon Hykes, the founder of Docker and now Dagger, shares our love for building developer products. Both Docker and Dagger are open-source businesses, meaning their core revolves around open source, community, and open discussions—even when those discussions lean more toward the business side of things. I believe this is how the best and most enduring companies are built. However, as he points out, achieving this is not easy. Why is that?

He differentiates between products and components. Components are open-source, standardized pieces that become industry commodities. They often evolve from a product once they’ve matured. It makes sense for such components to belong to a foundation once they’ve reached critical mass. Examples of components include Kubernetes and containerd. Typically, their licenses and trademarks, which are usually managed by foundations, are permissive. This allows businesses to use them as enablers or even integrate them as part of their offerings.

However, Dagger and Docker are not components—they are products. What’s the distinction? Products are fully integrated developer experiences that deliver value, often have ecosystems around them, and monetize certain features to fund further development. This funding often extends to investments in components. Solomon emphasizes this distinction as critical for building great developer tools, and we agree.

Some organizations may dislike this approach because it prevents them from leveraging communities and streams of capital to outcompete you and extract value from the community. They often argue, “You’re not open enough,” as Bitrise did when they suggested that Tuist should be part of the Mobile Native Foundation. It was amusing to see them use the same tactics that cloud providers once used against Docker. We don’t have anything against the foundation; in fact, we’re considering extracting Tuist’s generation logic into a component. However, Tuist itself is not a commodity—it’s a product, most of which is open source. This distinction is subtle but important.

Since Tuist is a product, it has features like the login command to authenticate with Tuist itself, the vendor behind the product. We see being a product as a critical distinction with significant implications. For example, it’s the reason we’re gradually transforming Tuist into an integrated extension of Apple’s tooling. This evolution wouldn’t have been possible if Tuist were a commodity, which would have led to stagnation. Simply put, we don’t want Tuist to stagnate.

We expect some organizations to dislike this model, and that’s fine. But there are many others who understand the need for this approach—especially from a financial perspective—and are eager to participate in shaping Tuist and its community. Like Docker in its early days, this is where our focus should be. Only by focusing here can we create a truly great product.

]]>
<![CDATA[Tuist is a product, not a component, and that distinction shapes how we build it.]]>
Whom do I trust? https://pepicrft.me/blog/2024/11/19/whom-do-i-trust 2024-11-19T00:00:00+00:00 2024-11-19T00:00:00+00:00 <![CDATA[

Once while reading the book “Hacking Capitalism,” I read a sentence that struck me: “The tech industry is technology embedded in capitalism.” I had never thought about it that way, but it made total sense. The book focuses on that fact from the perspective of a worker in that system and how to “hack” it to have a healthy relationship with it, yet I believe there’s also another angle worth reflecting on: the perspective of the people who consume technology. I don’t like to call that group users, because I believe it dehumanizes them.

I always conceived of technology as an extension of our lives to make certain things more convenient. For a long time, I assumed good intentions from the makers of those technologies, from the executives of Facebook to those from Apple. Isn’t it amazing to have the tools to be connected to so many people? Or hardware like iPhones that lower the entry barrier to technology? “We aim to make the world a better place.” I also bought into this story when I joined Shopify and stayed there for many years. Their version of making the world a better place was making e-commerce better for everyone. I’d call my 20s a phase of illusion with technology. How naive I was…

Fast forward to today, what I feel is a lot of disappointment with the tech industry. Seeing the leaders whom I long respected celebrating Trump becoming the winner makes me sad. Years ago, they didn’t dare to talk about such things publicly. But we’ve normalized a reality where they feel comfortable doing so. It’s all about money, and which millionaire isn’t happy with their taxes being cut? And the worst part is, many of us are so exhausted, confused, and driven by primitive emotions whose buttons were pressed by these same people, that we are unable to think clearly. This is not the tech industry I want to be part of. But whom do I trust?

If anything, I think this whole experience has taught me whom to trust and why. First and foremost, I trust the tools that don’t push solutions that could work offline into a service behind a subscription for the sake of monetization. For instance, I’m on board with paying a license for a macOS note-taking app. But a subscription? No way. Recently, I came across an app that had note features built in and forced you to pay once you went above x notes. Sorry, not for me.

Then from those solutions, I prefer the ones that choose standard formats and/or expose APIs to enable interoperability. I believe someone should stay with a service because they like it, not because the service made it hard for them to leave. As we are building Tuist, we see that pattern in the developer tooling space, and we are working on changing it. We believe the right to freely choose the solution to one’s problems should be respected. When a service respects that, it signals what values they stand for. Sure, this can change, but once you stand for certain values, changing them is unlikely because it would damage your reputation. This is why I choose tools like Logseq or iA Writer that use plain markdown files, which I can store in a Git repository for synchronization.

If the solution is open source, even better. I’ll gladly support the developer or the organization behind it financially. This often means a drop in UX compared to the closed-source alternatives, but I’ve learned to appreciate that instead of being negative about it. I care much more these days about the values than the presentation of the solution itself. Note that open source doesn’t necessarily mean longevity. But if they steer the direction of a popular tool poorly, it’s likely that the community will fork and continue its development. I can also contribute my own ideas and code to improve the tools that I use, which I think is amazing.

I’m fine if the tool is VC-backed as long as the investment is reasonable relative to the value the tool is generating and its market. When a tool has been thrown onto a hyper-growth path, I can tell because their desperation to meet investors’ expectations is often reflected in the tool becoming a Swiss army knife, releasing one feature after another. Alternative models are possible, but rare. The reason why I’m fine with tools that receive investment is that there are creators who are not financially privileged enough to quit their jobs and start their own businesses without initial funding. Investment solves this, and good investment can yield amazing outcomes. Penpot and Zed are examples of that, and I’m optimistic they’ll get off the ground business-wise and become references in their space.

Choosing the right technology that’s future-proof, respects people’s values, and is fair is getting trickier, but I now have my own framework, and it has been working great for me. It often means I don’t jump on the hyper-marketed trains that come and go, but I’m fine with that. I’m no longer excited by the shiny technology but by the human one.

]]>
<![CDATA[How I choose the technology I use]]>
I am out of X https://pepicrft.me/blog/2024/11/17/i-am-out-of-x 2024-11-17T00:00:00+00:00 2024-11-17T00:00:00+00:00 <![CDATA[

I’m completely out of X. I still keep my account there so no one takes the handle or in case someone wants to reach me. We’ve done the same with the Tuist account.

For my personal account, leaving was easier than I expected. Sure, my account held value through the connections and reputation I’d built over the years—but so what? In the end, that value is abstract and can evaporate overnight. The constant pressure to build a reputation and stay active for the sake of “growing a brand” was seriously affecting my mental health. I was barely present in my real life because most of my mental energy was spent figuring out what to post next. These past weeks off X have been the healthiest and most present moments I’ve had in a long time.

Tuist’s account was harder to step away from. Every company in the developer tooling space seems focused on building their reputation and brand by throwing money at the problem—buying badges or paying for visibility in someone else’s feed. Since we’re working to get a business off the ground, the thought that haunted us was: Will we be forgotten? But the more we reflected on it, the clearer it became that there are other, better ways for people to discover us. In fact, those ways align more closely with the values we want our company to represent. A side benefit is that we’re not contributing to a platform that’s causing harm to society. Sure, as a business, making money is important—but we believe moral and social responsibility are equally important and should hold the same weight.

So, we decided to do the same for the Tuist account. We’re now active on Bluesky, Mastodon, and other platforms, as well as our community forum. These alternatives are SEO-friendly and eliminate barriers, making it easier for anyone to find our content and ideas without being manipulated by algorithms. We’ve noticed some players in our space adopt tactics similar to insurance companies, pressing the buttons of fear and other primal emotions. We’ve chosen a different path: putting out good content, being open about the problems we’re solving, and inviting people into the process. We trust that approach—and the people it reaches—to spread our message naturally. We don’t need an X account for that.

Personally, I plan to use this blog more actively too. I’m considering building a small Phoenix publishing tool to create internet “digital gardens”—simple, personal spaces that support syndication and other formats, like photos or save-for-later articles. It would be fun to work on and a great way to share ideas.

The internet is amazing without walls. We’ve decided we won’t contribute to building them or making society worse.

]]>
<![CDATA[I find it a hostile place for my mental health and society and I don't want to contribute to it.]]>
Set up a Woodpecker CI in Hetzner server for your Codeberg account https://pepicrft.me/blog/2024/11/05/woodpecker-ci-for-codeberg 2024-11-05T00:00:00+00:00 2024-11-05T00:00:00+00:00 <![CDATA[

Lately, I’ve been reducing my reliance on closed-source platforms and replacing them with open-source alternatives. One of those platforms is GitHub, which I’ve used for years. While GitHub will remain the home for projects like Tuist—since we’ve built a community there—I don’t really need it for my personal projects, which at this point mainly include my website, the one you’re reading right now.

So, I thought, why not move it to an open-source Git forge backed by a non-profit organization? That’s exactly what I did. The website is now hosted on Codeberg.

Unfortunately, as you might expect, these transitions come with some costs. Platforms like GitHub have invested heavily in providing a high level of convenience and free services, such as CI for open-source repositories. Losing those perks is frustrating, right?

On Codeberg, I have issues, pull requests, and a file explorer—but no CI. So, how am I going to continuously deploy my website on every commit to main? #shade

My nerdy self couldn’t resist exploring what a self-managed CI solution integrated with Codeberg would look like. This blog post documents the setup I came up with, which I expect to be useful to my future self and to anyone else who might be in the same boat.

Woodpecker CI

One of the best aspects of open-source is that there are powerful alternatives for nearly everything. Codeberg recommends Woodpecker CI, a robust CI solution written in Go. Sure, the UI might not be its strongest feature, but it gets the job done incredibly well. It even includes advanced features like auto-scaling, which might interest more experienced users.

Our setup will involve a publicly accessible server running the Woodpecker server, the Woodpecker agent, and CI workflows within Docker containers. Additionally, we’ll configure a reverse proxy to handle TLS termination with Let’s Encrypt certificates.

Installing Docker on the Host

Having obtained a server that’s accessible via SSH, which in my case is one of the cheapest options from Hetzner with the x86_64 architecture and Ubuntu pre-installed, you can install Docker by following the official documentation. To verify that Docker is up and running, you can run the command:

sudo docker run hello-world

If the command succeeds, Docker is installed correctly.

Note about the connection: You’ll need to have SSH access to the server. In my case, I use VSCode’s SSH capabilities not only to open a terminal session but also to edit files directly on the remote server.

Creating a Docker Compose File

To orchestrate the launching of all the services, we are going to use Docker Compose. Create a file at /opt/woodpecker/docker-compose.yml and add the following content:

services:
  traefik:
    image: "traefik:v3.1"
    container_name: "traefik"
    command:
      - "--log.level=TRACE"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entryPoints.web.address=:80"
      - "--entryPoints.websecure.address=:443"
      - "--certificatesresolvers.ci.acme.httpchallenge=true"
      - "--certificatesresolvers.ci.acme.httpchallenge.entrypoint=web"
      - "[email protected]"
      - "--certificatesresolvers.ci.acme.storage=/letsencrypt/acme.json"
    ports:
      - "443:443"
      - "80:80"
    volumes:
      - "./letsencrypt:/letsencrypt"

  woodpecker-server:
    image: woodpeckerci/woodpecker-server:v2.7.1
    container_name: woodpecker-server
    ports:
      - 8000:8000
    labels:
      # Web secure
      - "traefik.http.routers.woodpecker-secure.rule=Host(`ci.pepicrft.me`)"
      - "traefik.http.routers.woodpecker-secure.entrypoints=websecure"
      - "traefik.http.routers.woodpecker-secure.tls.certresolver=ci"
      - "traefik.http.routers.woodpecker-secure.tls=true"
      - "traefik.http.services.woodpecker-secure.loadbalancer.server.port=8000"
      # Web
      - "traefik.http.routers.woodpecker-http.rule=Host(`ci.pepicrft.me`)"
      - "traefik.http.routers.woodpecker-http.entrypoints=web"
      - "traefik.http.routers.woodpecker-http.middlewares=redirect-to-https"
      # Redirect middleware
      - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    volumes:
      - woodpecker-server-data:/var/lib/woodpecker/
    environment:
      - WOODPECKER_ADMIN=pepicrft
      - WOODPECKER_OPEN=true
      - WOODPECKER_HOST=https://ci.pepicrft.me
      - WOODPECKER_FORGEJO=true
      - WOODPECKER_FORGEJO_URL=https://codeberg.org
      - WOODPECKER_FORGEJO_CLIENT=client_id
      - WOODPECKER_FORGEJO_SECRET=secret
      - WOODPECKER_AGENT_SECRET=agent_secret
      - WOODPECKER_SERVER_ADDR=:8000

  woodpecker-agent:
    image: woodpeckerci/woodpecker-agent:latest
    command: agent
    restart: always
    depends_on:
      - woodpecker-server
    volumes:
      - woodpecker-agent-config:/etc/woodpecker
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WOODPECKER_SERVER=woodpecker-server:9000
      - WOODPECKER_AGENT_SECRET=agent_secret
      - WOODPECKER_MAX_WORKFLOWS=8
volumes:
  woodpecker-server-data:
  woodpecker-agent-config:

We now have three services: Traefik, the Woodpecker server, and the Woodpecker agent. Traefik acts as a reverse proxy and TLS terminator for the Woodpecker server. One cool thing about Traefik is that its configuration can be passed through CLI arguments or Docker labels, which makes it easy to manage per-service configuration. Note that we are mounting the /opt/woodpecker/letsencrypt directory to store the Let’s Encrypt certificates and reuse them across runs.

The second service is the Woodpecker server. It starts the HTTP service that orchestrates the CI/CD pipelines and provides a web interface to manage the workflows. In my case, I’m using Codeberg, which runs the Forgejo Git forge, but you can use GitLab, GitHub, or any of the handful of other supported forges. To authenticate, you’ll need to create an OAuth application in your Git forge account and use the client ID and secret in the WOODPECKER_FORGEJO_CLIENT and WOODPECKER_FORGEJO_SECRET environment variables, or the respective ones for other Git forges.

You’ll also need to create an agent secret and use it in the WOODPECKER_AGENT_SECRET environment variable. You can do so by running the following command:

openssl rand -base64 32
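If you want to sanity-check the generated value (a small sketch, assuming openssl is installed), note that base64-encoding 32 random bytes always produces a 44-character string:

```shell
# Generate the agent secret and check its length:
# 32 bytes -> ceil(32/3)*4 = 44 base64 characters
secret=$(openssl rand -base64 32)
echo "${#secret}"   # prints 44
```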

Last but not least, you’ll need the agent service, which is the one that runs the CI/CD pipelines. Note in the environment variables that we set WOODPECKER_SERVER to instruct the agent to connect to the server we just started, and WOODPECKER_MAX_WORKFLOWS to limit the number of workflows that can run concurrently.

One thing that’s important to call out is the following volume that we mount:

- /var/run/docker.sock:/var/run/docker.sock

If your workflows don’t need Docker, you can skip this volume. However, if you plan to use Docker from your workflows, you’ll need to mount the Docker socket. In my case, I need it for the deployment pipelines because I’m deploying the website using OCI images built using Docker.

Note that with that line you are escalating the permissions of the woodpecker-agent service by giving it access to the Docker socket, which runs on the host machine and is effectively root access. So I’d recommend configuring workflows triggered from branches opened by external contributors to require approval from maintainers before they run.

Systemd Service

To run the services we just created, we can create a Systemd service. Create a file at /etc/systemd/system/[email protected] and add the following content:

[Unit]
Description=%i service with docker compose
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/%i
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=default.target

The service will start the Docker Compose file we created at /opt/%i/docker-compose.yml and remove the containers when the service is stopped. Note that the systemd service is generic, which means you can use it with other Docker Compose files by just changing the directory.

To start the service, run the following command:

sudo systemctl start docker-compose@woodpecker
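The unit is templated: systemd expands `%i` to whatever follows the `@` in the unit name, which is how WorkingDirectory=/opt/%i resolves to /opt/woodpecker. A small sketch of that expansion (the `enable` command in the comment is the standard way to make the service survive reboots):

```shell
# systemd expands %i to the instance name after '@'
instance="woodpecker"
echo "unit: docker-compose@${instance}"
echo "workdir: /opt/${instance}"
# To start the instance automatically on boot, you would additionally run:
#   sudo systemctl enable docker-compose@woodpecker
```

The same unit file could drive a second stack, say /opt/blog/docker-compose.yml (hypothetical name), via docker-compose@blog.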

And voilà! You should now have a fully functional CI/CD service running on your server.

DNS

Remember to point a DNS A record to the Hetzner server’s public IP address. In my case, that was ci.pepicrft.me.

]]>
<![CDATA[A guide to setting up a self-managed Woodpecker CI on a Hetzner server for continuous deployment of a website hosted on Codeberg.]]>
If system APIs were awaitable https://pepicrft.me/blog/2024/10/25/if-system-apis-were-awaitable 2024-10-25T00:00:00+00:00 2024-10-25T00:00:00+00:00 <![CDATA[

Swift’s async/await concurrency reminds me a lot of when a similar pattern was introduced in JavaScript. Syntactically, it was a game-changer, making the code much easier to reason about. But performance-wise, it didn’t meet the expectation that sprinkling async and await statements everywhere would make code faster.

Many APIs back then remained synchronous, wrapped or not in an awaitable promise—similar to how some Foundation APIs, like FileManager, are still synchronous today. Wrapping calls to FileManager in something awaitable doesn’t actually make a difference since FileManager will still block the system thread until it finishes. This means you can’t reuse that thread to do something else while FileManager is busy.

In other words, to truly take advantage of async/await, the underlying APIs need to be designed to leverage it. Otherwise, you’re just wrapping synchronous code, gaining syntactic benefits without performance improvements.

This is why we began replacing FileManager with NIOFileSystem, which is designed with concurrency in mind. Unfortunately, its design has some flaws that lead to worse performance than FileManager’s blocking API. Plus, it doesn’t shield consumers from the limited number of available handles.

I’d love to see Apple invest resources in revisiting foundational APIs, so code built on top of them can fully utilize hardware capabilities without requiring too much additional effort.

]]>
<![CDATA[Swift’s async/await concurrency is a game-changer, but to fully leverage it, foundational APIs need to be designed with concurrency in mind.]]>
Concurrent work in non-concurrent brains https://pepicrft.me/blog/2024/10/19/concurrent-brains 2024-10-19T00:00:00+00:00 2024-10-19T00:00:00+00:00 <![CDATA[

I’ve noticed that I often try to hold more in my mind than I can handle.

When I have an open PR waiting for review, I start working on something else, but that PR still lingers in my thoughts. If, while working on the new task, a Slack message comes through, I pause what I’m doing to respond, all while keeping track of the pause so I can return to it. If an idea pops into my head, I let it simmer for a bit. On days when I let my brain juggle multiple things like this, I end up feeling mentally exhausted.

So, what am I doing to manage this? I’m practicing holding fewer things in my mind by using a queue system. When I finish a task, like reviewing a PR, I add it to the queue. If a new idea comes up, I put it in a queue for later. If a support request comes in, I queue that as well. Instead of a single queue, I have multiple ones, and I allocate specific time slots for each. My email inbox is a queue that I only process once a day. The same goes for support issues and PRs. I don’t let them distract me throughout the day.

The downside to this approach is that it can feel like working on a production line—doing one thing after another until the day is over. What about the creative work our brains thrive on? I make sure to leave space for that too. The chain-like tasks are important but often not the most exciting. Creative work, on the other hand, is the most exciting but doesn’t always have an immediate impact on the project at hand. Balancing these two types of work is key to maintaining my mental well-being, and I’m continuously working on finding that balance.

What about you? How do you manage your mental input and workload?

]]>
<![CDATA[Trying to process concurrent tasks in a non-concurrent brain can be exhausting. Here's how I manage it.]]>
Small but sexy https://pepicrft.me/blog/2024/10/19/small-but-sexy 2024-10-19T00:00:00+00:00 2024-10-19T00:00:00+00:00 <![CDATA[

When observing the tech industry today, one can’t help but notice an obsession with hyper-growth. This trend is often fueled by the expectations of venture capitalists who seek high returns in a short timeframe. However, this obsession frequently leads to inferior products, where design decisions are driven by sales and marketing rather than user experience. It results in short-sighted technical choices that prioritize flashy, trendy technologies over battle-tested, future-proof standards.

Recently, I’ve examined the dashboards of various tools in our space and was astonished by the neglect evident in their design. They bombard users with banners urging upgrades to higher plans, overwhelming amounts of competing information, and a persistent push to contact sales. While I understand the need to generate revenue, such an obsession often sacrifices the craftsmanship of the product.

At Tuist, we refuse to compromise on quality. This commitment is deeply rooted in our principles. We prioritize creating a product that sparks joy, which is why we’ve invested in hiring a product designer, even within our limited budget. We believe that aesthetics are crucial for evoking positive emotions when using our tool. As a result, our growth may not match the rapid pace of other players in the industry, but that’s not our goal. We are focused on building for the long term, emphasizing slow and steady growth driven by product quality, our open-source contributions, and ongoing community support.

Our relationship with open source is genuine; it’s not merely a marketing strategy like brands paying influencers to promote their products. We are committed to leveraging open-source as a means to drive innovation in the industry, dedicating our time and resources to make it a reality.

We are turning Tuist into a commercial business so that we can do more MIT open source for the Apple ecosystem.

This mindset also shapes our technical decisions. As a small team, we recognize the need to keep our technology stack simple, as we cannot afford the complexity of maintaining a convoluted system. There’s a crucial distinction between a system that is inherently simple and one that merely appears simple because a third-party service manages the complexity. For this reason, we’ve chosen Elixir, and we couldn’t be happier with our decision.

Embracing simplicity has significant advantages, particularly for our on-premise customers who need to self-host the server. The requirements are minimal: a server, a PostgreSQL database, and S3 storage. If scaling is necessary, it’s as easy as adding more cores and memory to your instance. Furthermore, we prioritize standards across the board. Our stack doesn’t rely on a build tool; we write vanilla CSS and JavaScript—yes, no Tailwind. This approach is liberating, as the web platform is designed for longevity, and we intend to leverage that durability.

Do we worry about breaking changes in Next.js? Not at all. What about Tailwind or TypeScript? We remain unconcerned. While others may spend time updating their toolchains, we focus on building useful features with our straightforward CSS and JavaScript. Our productivity has never been higher, and once we establish our design system using web components, creating new features in Tuist will feel as effortless as playing with LEGO.

Eventually, we plan to open-source our server, allowing you to contribute and extend it as you wish. We are steadily building momentum that will become unstoppable.

Will we reach every corner of the app development ecosystem? I doubt it. However, we will gradually convert teams into believers in our craft and contributors to a common vision that advances the industry. Will this take years? Absolutely—so what? We are designing our company to maximize momentum with minimal resources while ensuring that everyone involved in making Tuist possible enjoys the best professional experience of their lives as they build our product.

]]>
<![CDATA[The tech industry is obsessed with hyper-growth, but at Tuist, we prioritize quality over quantity. We are committed to building a product that sparks joy, investing in design, and embracing simplicity. Our focus on standards and open-source contributions drives our long-term growth.]]>
Brains are complex https://pepicrft.me/blog/2024/10/15/brains-are-complex 2024-10-15T00:00:00+00:00 2024-10-15T00:00:00+00:00 <![CDATA[

Brains are complex. I don’t understand mine. I sometimes try to understand what happens in it, and I can’t. It reaches its limits, and that’s OK, so I’m trying to be nice to it.

Trying to find reasons why it sometimes feels exhausted is pointless. When it happens, I take breaks. Especially now, when, as part of building Tuist, I feel I’m constantly switching between different areas of the brain.

When I’m responding to emails, my brain enters a mode that isn’t inspired to code. When I’m coding, it’s not inspired to be social. And if I force it, it doesn’t feel natural, so I exhaust it and end up frustrated. Tricky, isn’t it?

Impostor syndrome invades me a lot too. I sabotage myself. What if Tuist doesn’t work? What if we don’t reach the point of financial sustainability we aspire to? What if we disappoint the organizations that are betting on us? What if I don’t have the mental clarity to pull this off?

And social networks don’t help here. I’d isolate myself from the world and focus on the craft, which brings me a lot of joy, but then how would people know about Tuist? Yet if I spend too much time on them, envy and comparison start to creep in. Are we doing enough? Should we do more?

I don’t have answers. But I know what makes my brain feel good, so I’m giving it what it needs. I enjoy creating things. I enjoy writing. I enjoy going for long walks. I enjoy sleeping a good siesta. So I’m prioritizing those things throughout my day, because otherwise, I’m not good to anyone.

Brains are complex, treat yours well. I didn’t listen to mine for a long time, and it’s time to change that.

]]>
<![CDATA[In this blog post I reflect on the complexity of the brain and how I'm trying to be nice with mine.]]>
Makers and takers https://pepicrft.me/blog/2024/10/13/makers-and-takers 2024-10-13T00:00:00+00:00 2024-10-13T00:00:00+00:00 <![CDATA[

I recently came across this piece by the creator of Drupal, Dries Buytaert, and it helped me articulate, through mental models, how I’ve been thinking about open source sustainability.

Open source and free software are non-excludable and, contrary to what many believe, rivalrous—because the resources available to maintain and improve them are limited. This makes them a common good, a concept I first encountered while reading the book Working in Public.

However, I hadn’t fully realized that, from the perspective of open source companies like the one we are building at Tuist, open source is also rivalrous in another way: the shared resource is the customer, who cannot be shared by two companies at the same time.

Tuist has become a common good. And like any common good, it began to experience the tragedy of the commons—sustaining the project became increasingly difficult. When you reach that point, you must consider how to balance the makers and the takers. Otherwise, the project will die. This is something I’ve observed in many open source projects in the ecosystem where Tuist operates: they struggle to keep up with demand and slowly fade away.

In his blog post, and drawing on ideas from past research, Dries shares three patterns to address this issue, which we have considered at Tuist:

  1. Self-governance: This is unfeasible at a large scale where many takers have conflicting interests. Making this work would require most of the limited resources available to be spent on governance.

  2. Privatization: This is the approach we are currently exploring at Tuist, similar to what companies like Mozilla have done. Through our paid server features, we gain a commercial advantage over takers, while still creating a positive social impact for all users of the open source project, including the takers. In other words, privatization allows for a win-win scenario.

  3. Centralization: This approach mirrors how governments manage common goods (e.g., highways). In open source, we see this in foundations that govern projects. The challenge with this model lies in the accuracy of monitoring and the effectiveness of rewarding (or sanctioning). As Dries notes:

    Because Open Source contribution comes in different forms, tracking and valuing Open Source contribution is a very difficult and expensive process, not to mention full of conflict. Running this centralized, government-like organization also needs to be paid for, and that can be its own challenge.

It’s reassuring to see that what we are experiencing at Tuist is not unique. The imbalance created by having more takers than makers is a common problem in open source, and there are ways to address it. We’ve discarded self-governance and centralization for now due to the costs involved. Instead, we are exploring privatization as a way to bring in funding to support continued open source development.

Not long ago, a new group of licenses emerged: fair. These licenses look promising as a way to explore open-sourcing the innovations we are bringing to the server.

]]>
<![CDATA[In this blog post I reflect on Dries Buytaert's piece about balancing makers and takers in open source, and how it relates to Tuist.]]>
Monkey brain https://pepicrft.me/blog/2024/10/08/monkey-brains 2024-10-08T00:00:00+00:00 2024-10-08T00:00:00+00:00 <![CDATA[

The other day, while watching a talk by DHH, he mentioned a term that stuck with me: monkey brains. He used it to describe our brain’s limited capacity to hold multiple things at once and emphasized the importance of conceptually compressing complexities to reduce the mental load.

Why bring this up? Because I think I’m stretching my brain to hold too many programming languages and paradigms. Over the past year, I’ve gone from being highly proficient in Swift to learning and familiarizing myself with Ruby, JavaScript (in the context of Node.js), and most recently, Elixir. I’ve also explored Rust and, more recently, Zig. There’s something valuable about staying informed—you gain new ideas and consolidate concepts across languages. For instance, many concepts in Swift are rooted in Rust, and learning Rust helps deepen your understanding of Swift. But this juggling act is becoming overwhelming for my “monkey brain.”

What makes it especially draining is that much of my mental energy is now focused on understanding the complexities of building a company. I’m asking my brain to absorb information like it did 10 years ago, which often leads to days of mental exhaustion—a situation I’m not happy about. I need to become more comfortable focusing on the essential knowledge required to build the company, while maintaining a sense of curiosity without the pressure to master everything. It’s simply not feasible.

At the same time, Tuist has the potential to make a significant impact in the Swift ecosystem, particularly in developer tools and packages. With more focus, we could capitalize on that opportunity. For example, I want to create a design system for the CLI, which would elevate the Tuist user experience and lay the groundwork for other CLIs. Or I could dive into Swift’s concurrency updates and develop a resource that benefits both us and the broader community. Unfortunately, I haven’t been able to pursue these goals because I’ve been preoccupied with learning other technologies. It’s time for a change.

Moving forward, I’ll concentrate on three areas: building the company, Elixir, and Swift. Going deep in these areas will enable me to make meaningful contributions to Tuist.

]]>
<![CDATA[My brain is juggling too many programming languages and paradigms. It's time to focus on the essentials.]]>
Licenses, governance, and trademarks in the open-source world https://pepicrft.me/blog/2024/10/05/oss-thoughts 2024-10-05T00:00:00+00:00 2024-10-05T00:00:00+00:00 <![CDATA[

You might have heard about the recent controversy between WordPress and WP Engine. As someone who has gone through a small-scale version of that, I’ve been reflecting a lot on the case. My conclusion is that while there are aspects we could blame on Automattic, some of the blame, though fair, arises from misunderstandings about the open-source world. To grasp this fully, we need to talk about three key things: licenses, trademarks, and governance.

Licenses

Licenses define what you can and cannot do with a piece of software. Some, like the MIT, Apache 2.0, or GPL, are approved by the Open Source Initiative. Others, like the group of Fair Source licenses, aren’t, yet they remain equally valid and enforceable.

It’s important to highlight three things:

  1. A license grants you rights over the code.
  2. However, it doesn’t give you the right to decide the project’s direction (often misunderstood).
  3. The existence of a trademark prevents you from using the project’s name freely (often overlooked).

Many conflicts stem from the mistaken belief that having rights over the software also grants rights over its direction, which is entirely incorrect. Companies and developers should remember that while disagreements over a project’s direction are inevitable, unlike with closed-source software, open-source projects allow you the freedom to fork the project and take it in the direction you desire. Though it’s often easier to lobby maintainers to change their stance, it’s not the only path. This is why companies like Shopify or GitHub ensure they have a voice in the direction of Ruby and Ruby on Rails or join foundations like the Rust Foundation.

The world is diverse, and different developers and companies have different visions for a project, which is perfectly fine. Expecting a single direction to please everyone is utopian. Disagreements over the project’s direction are normal and valid, and expressing them publicly is encouraged. However, depending on the project’s governance model, you might have more or less entitlement to influence its direction. Let’s delve into governance.

Governance

The governance of an open-source project refers to the rules and processes that define how decisions are made. Many projects lack formal governance, often resulting in the benevolent dictator model, where the project’s creator or a select group of maintainers have the final say. This is the model that Rails follows. I’m not here to debate which model is better, but I will say that the absence of governance is a problem and often the root of many conflicts.

At Tuist, we currently lack a formal governance model, so we implicitly follow the benevolent dictator approach. We are working to change that. Bitrise could have publicly raised concerns about our lack of governance, and that would have been a fair point. Due to this lack, they wrongly assumed our intentions:

“The maintainers are showing more interest in extracting revenue from their community than in making decisions that are best for the project and its end users.” — Zach Gray, Bitrise

This is completely false. Our focus has always been on making the project sustainable. The work we’ve done, not only since committing the first line of code but even after that blog post, is a testament to our true incentives. We closed off some parts of the code to prevent an unsustainable imbalance they were unwilling to help us solve.

Here’s the thing: When you’re just starting out, you don’t spend time crafting the perfect governance model or considering the legal implications of your licenses and trademarks. But as your project grows and garners community value, you’re inevitably drawn into capitalist dynamics you never wanted to be a part of. Suddenly, you need to learn and adapt to those dynamics quickly, with far fewer resources than the companies challenging you.

More democratic models are rare in open source. One exception is the Berlin-based Git forge Codeberg, which has a non-profit foundation backing it, with a board that makes decisions. Anyone can join the board and vote on matters. But this is unique—most projects follow the benevolent dictator model, and that’s okay. While the dictator may act democratically at times, they still have the final say.

If having a say in a project’s direction matters to you, consider this when choosing open-source software. If not, accept the risks of misalignment. You can voice your concerns publicly and try to lobby for changes, but don’t expect maintainers to comply.

To summarize:

  1. Governance determines how decisions are made in a project.
  2. The absence of governance usually implies a benevolent dictator model.
  3. Pure democracy is rare in open source. Benevolent dictators may adopt some democratic practices, but they have the final say.

Trademarks

Lastly, let’s talk about trademarks. Open-source projects develop a brand around their name. It’s an intangible asset that represents the project’s values and quality. Registering it is essential to protect it from misuse.

The creator of Docker acknowledged the mistake of not registering the trademark earlier and took a different approach with Dagger, registering the trademark early and publishing guidelines on its use. We did the same with Tuist. The Tuist trademark is owned by Tuist GmbH, and we’ve published guidelines for its usage. Without these protections, anyone could use the Tuist name to create a fork, misleading users into thinking it was the official project, potentially damaging the project’s reputation and years of work.

But note this: The trademark must be registered under a legal entity or person. If there’s no company or foundation behind the project, the creator must own it. The creator of WordPress initially registered it under his name, transferred it to the WordPress Foundation, and later granted rights to Automattic for commercialization.

Problems in this area often arise from the subjectivity around trademark usage. I can somewhat understand Matt’s concerns about the WordPress trademark’s use by WP Engine. Based on his comments, it seems he had been in talks with them for over a year, expressing these concerns. This reminds me of conversations we had with Bitrise about Tuist’s sustainability (we didn’t have a trademark at the time). They forked Tuist, directed their customers to use it, and had full freedom to do so, misleading their customers into thinking it was the official project. Naturally, customers didn’t want to rely on a fork not maintained by the original team, and Bitrise didn’t want to work with us to make the project sustainable. It’s a bit like the issue between Automattic and WP Engine, but at a smaller scale. Automattic wanted a share of WP Engine’s revenue or more active contributions to the project; we only wanted to make the project sustainable. We could have reached good terms: they’d have benefited from partnering with us, and we’d have benefited from their contributions. But since they were not only unwilling to help us, but also marketed their service as much better than ours, we had to act to protect our project.

In my opinion, having a trademark is crucial. If you maintain an open-source project, I recommend registering the trademark under a legal entity or person as soon as the project gains traction. You’ll thank me later.

To summarize:

  1. A trademark is an intangible asset that represents the project’s values and quality.
  2. Without it, misuse of the project’s name can harm its reputation.
  3. The trademark owner controls its usage.
  4. Guidelines for trademark use can be subjective, leading to conflicts.

Closing Words

Licenses, trademarks, and governance models are the three pillars of open-source projects.

As a maintainer, you shouldn’t neglect any of them. Define these aspects early, communicate them clearly (where we could have done better at Tuist), and ensure they are respected. Most importantly, iterate on them as necessary, as the environment and the needs of your project evolve.

As a consumer of open-source software, define what matters most to you. Choose projects that align with your values. If direction matters to you, find a way to influence it. Often, in a benevolent dictator model, contributing actively can give you that influence. If you don’t care about the direction, accept the risks and costs of potential misalignment. You can voice your disagreement, but don’t expect maintainers to change their minds. Remember, you often only have entitlement to the code, not to the direction.

Finally, open source is still better than closed source. You may disagree with the direction of an open-source project, but you can fork and maintain it yourself. You can’t say the same about closed-source software—companies providing critical services can shut down anytime, and you won’t get access to the code.

In my view, we need more open source and more education and awareness around these topics.

]]>
<![CDATA[I reflect on the recent WordPress and WP Engine controversy and how it relates to the open-source world.]]>
A non-concurrent design in a concurrent world https://pepicrft.me/blog/2024/09/26/non-concurrent-design-in-a-concurrent-world 2024-09-26T00:00:00+00:00 2024-09-26T00:00:00+00:00 <![CDATA[

As you might know, concurrency is a hot topic in the Swift ecosystem. Everyone is trying to make their code “data-race safe”. Thanks to Mike’s exceptional work enabling strict concurrency in Tuist’s main repositories, I haven’t had the chance to tinker with it enough to form an opinion on it. Yet, when I see the discussions on Mastodon, the feeling I get is that it’s a complex topic that everyone is trying to figure out.

When I encounter something complex, I like to go deep into understanding where the complexity comes from, because only then can I make an informed decision on how to approach it. Through Erlang, I realized that modelling a problem space the right way can make layers of complexity disappear. Erlang does it through its concept of processes. So my question about Swift and data-race safety was: “Is its complexity avoidable?”

Swift is in a bit of an unfair position to make it avoidable. First, it had to reconcile Objective-C’s OOP paradigm, which is known for embracing mutability, the culprit of data races. Wouldn’t it have been awesome if they hadn’t had to support compatibility with Objective-C? Definitely! But Apple and many developers couldn’t have afforded that.

Moreover, Swift was pushed beyond the Apple ecosystem, more specifically to the server side. Server-side applications are known to be highly concurrent, which means data-race issues become more apparent. This explains Apple’s recent push for the actor model and structured concurrency, which you need in order to propagate the cancellation of work when, for example, a request is cancelled. All this makes sense when looked at from the angle of Swift on the server, but from the perspective of an app developer, or in our case a CLI developer, it feels somewhat unnecessary.

Because Swift’s origins had a strong focus on Apple platforms, which are not as highly concurrent as servers, some language design patterns were not optimized for concurrency. Erlang started with processes and built everything from there. Imagine if Swift had started with a similar concept and built all the frameworks on it. The story would likely have been different.

So Apple has done a great job navigating how the language has evolved and reached new environments. However, the debt is accumulating and taking the shape of language complexity. Or at least that’s my perception. Perhaps in a few years we’ll all become so familiar with it that we won’t perceive it as complex anymore. This poses a very interesting, somewhat philosophical question about the future of Swift: Is Swift trying to do too much?

I understand why Apple is trying to take Swift to many places. And it’s quite impressive to see what the community is achieving with it. For example, the PointFree folks coming up with patterns for cross-platform UIs, compiling Swift to Wasm and potentially leveraging that soon to expand Swift macros, or running Swift on the server with projects like Vapor or Hummingbird. It’s truly inspiring. Yet, I can’t help but feel that Apple is only partially supporting the idea that Swift can be a language for everything. I sense a bit of fear in involving the community more in the governance of the language and finding incentives to make it more community-driven. Some frameworks remain closed-source, and the language evolution is still very much driven by Apple’s needs.

The question is: is Apple going to change that and throw themselves fully into taking Swift everywhere? Or will we forever be in this “let’s try Swift here and there” mode, adding language features as needed, without a big picture of how the language should evolve?

]]>
<![CDATA[I reflect on the complexity of Swift's concurrency model and how it could have been avoided.]]>
Should I join the show? https://pepicrft.me/blog/2024/09/25/should-i-be-part-of-the-show 2024-09-25T00:00:00+00:00 2024-09-25T00:00:00+00:00 <![CDATA[

When I open X these days, it reminds me of Facebook, which I moved away from long ago. The feed is full of crap, with posts formatting their text in bold to grab your attention. People step on each other in a macho-like manner: “Hey! You think you know about this topic, but let me tell you that I know more than you.” It feels like a show where people feed their narcissism, and X tries to get everyone’s eyes on debates. Not everything is like this, but most of the feed the algorithm designed for me is.

As someone who devoted many years to X, I see it as an opportunity to promote the company I’m helping build, Tuist. Yet it is challenging in such an unhealthy and noisy environment. People are getting used to attention-seeking content, next to which you sound boring. And so you wonder if you should be doing what the others do. “What are we doing here? We don’t belong to this game” is often our answer. But then “Are we putting building a thriving business at risk?” follows. To which I often conclude: “I doubt it.” So, is my time there over?

I’ve been thinking a lot lately about running sprints versus marathons. These days, running sprints and feeding the platform are most of what’s encouraged. Running marathons is not encouraged at all. Quantity matters more than quality; keep feeding the system. But this feels so out of alignment with our values and the type of company we want to build with Tuist. We prize quality over quantity. We want to move slowly but steadily, because that gives us the perspective to have great ideas. We don’t want to capture people’s attention, because attention would become our goal. And that, as a goal, leads to Frankenstein tools that mimic everyone’s attention needs: solutions shaped not by people’s needs but by what they’d talk about. I call this Attention-Driven Product Design.

X brought me great things but also kindled the worst aspects of my personality. I experienced envy, unhealthy competition, anger, and grief, among others. Many days, I felt terrible at the end of the day. Also, there were days when I wasted my mental energy figuring out a catchy thing to share. It felt as if X had kidnapped my mental agency in a way.

“You need to find a good balance,” people say. Yet why would I be there when the Internet is such a fantastic place with much better alternatives? Sure, Mastodon or your own website won’t have as much reach as an attention-seeking post on X, but so what? The content on Mastodon, a forum, or an RSS-subscribable blog will stand the test of time. Who knows, tomorrow someone might find something useful in a post I wrote five years ago; the same is very unlikely on X. The content will be accessible to anyone with a web browser, even without JS enabled. I embrace the web in its most essential form, placing the focus on the content. It’s beautiful and liberating to peel away the layers that people created and embrace the Web in its rawest form.

But it takes time and conscious effort. There are many patterns I need to unlearn, and many reminders about my values. It helps a lot to see everything as a marathon. When I do that, I’m mentally relieved, happier, and left with a stronger sense of fulfillment.

]]>
<![CDATA[This blog post is a reflection on my relationship with a social network, X. I share my thoughts on the unhealthy environment the platform has become and how it contrasts with the values of the company I'm building, Tuist. I also reflect on the benefits of the web in its raw form and the importance of running marathons over sprints.]]>
The iPad as a creativity device https://pepicrft.me/blog/2024/09/19/ipad 2024-09-19T00:00:00+00:00 2024-09-19T00:00:00+00:00 <![CDATA[

As a kid, I used to paint a lot—mostly with oils. It was the one activity that could keep me still. Unfortunately, over time, I let it slip away, replaced by more logical pursuits like programming and my studies, which took over the space that creative work once held.

In an effort to balance building a company and coding with activities unrelated to work, I’ve recently returned to painting—this time, using an iPad. I was genuinely surprised by the power of these devices, which I had previously used only for reading. It once seemed almost pointless to own one, but now I understand why creatives gravitate toward it, and why Apple markets it to them. It’s a remarkably powerful and portable tool for creative expression.

I’m even considering exploring video editing with it, using some short clips I recorded in Berlin—just to see how far I can stretch my creativity. It feels like there’s a part of my brain that’s been dormant, waiting to be reawakened. There’s also something truly beautiful about engaging in these activities without any expectation of outcome, unlike coding, which has become more outcome-driven now that there’s a company to build. If there’s one thing I’ve learned, it’s that my mind thrives when it has room to roam. To code for the joy of coding, to paint for the joy of painting, to walk for the joy of walking…

Anyway, I’m on a flight to Madrid to attend NSSpain, and I thought I’d share some thoughts on iPads after spending some time drawing on mine.

]]>
<![CDATA[“I’d never thought that I could use my iPad beyond reading with it.”]]>
On taking shortcuts to build communities https://pepicrft.me/blog/2024/09/15/communities 2024-09-15T00:00:00+00:00 2024-09-15T00:00:00+00:00 <![CDATA[

Many companies aspire to build communities around their products, as these communities are often formed by true believers—people who have intrinsic motivation to contribute, such as by evangelizing the product wherever they go. However, I’ve noticed that many companies, especially in the developer tooling space, attempt to take shortcuts by throwing money at the problem to build these communities. Things don’t work like that.

Money can buy a false sense of community. People may gather because you send them freebies or because they follow what an influencer, paid by you, says about your product. But this engagement fades when the money stops flowing. Their motivation is purely extrinsic, and there’s nothing in your product that makes them stick around. You’re just a sales-oriented company. If you truly cared about community, you’d build your company around it from the start, not the other way around.

Building a community takes time. It requires putting people first, being open to them, and involving them in how your product is shaped. Interestingly, involving them also brings a diversity of ideas, which can improve your product. It also means building for the community without expecting anything in return. Using your privileged position to commoditize the space reveals a lot about your values. When you care about advancing the ecosystem and inviting others to participate, everyone wins—the community, the ecosystem, and ultimately, you. But when you only focus on your own gain, greed may become your worst enemy, whether that happens down the road, or after you’ve exited or gone public and can claim you’ve succeeded.

Many challenges faced by tech organizations aren’t technical, but social. It’s not just about how big or crowded a market is or how feasible a technical solution might be, but about how willing you are to focus on your community and gain their trust by building for them without expecting anything in return. I’ve noticed that many companies aren’t willing to do this, and that’s why they fail in the long run. It’s inconceivable for them to do something without an immediate return.

And this doesn’t necessarily mean going open source. Look at Fly.io—they offer incredibly valuable resources for free, which they create themselves, rather than paying for low-quality content like I’ve seen other companies do. They also hired and provided a safety net to maintainers of popular projects, like Phoenix in the Elixir ecosystem, indirectly supporting the community and the broader ecosystem. You can also earn a community’s respect and trust by betting on open standards instead of locking users into your product. This is especially important when building for developers, who understand the significance of standards and their role in the long-term health of the ecosystem. If your solution is walled off, developers might use it out of necessity, but sooner or later, someone will come along with a solution built on different values, and your entire platform could fall apart.

For seven years, we were privileged to have jobs while working on Tuist on the side. Our focus was on solving problems, advancing scalable app development, and building tools that others could build upon. Now, we are working on turning this into an open business that embraces the same values. It requires setting the right boundaries in certain areas, but if we get it right, we’ll continue building these tools while supporting the communities we serve. This isn’t about trying to dominate every corner of the app development space. It’s about building a healthy business that supports the communities we’re building for.

We are embedding seven years of open-source experience into how we’re shaping this company. Though our progress might be slow at first, we see this as a marathon. We’ll keep running and building for the long term.

]]>
<![CDATA[This post is about building communities around products and how many companies take shortcuts by throwing money at the problem. It's not about money, but about building for the community without expecting anything in return.]]>
The missing narrow waist in CI https://pepicrft.me/blog/2024/09/08/ci-narrow-waist 2024-09-08T00:00:00+00:00 2024-09-08T00:00:00+00:00 <![CDATA[

Did you notice how much development power CI companies have wasted on creating solutions that flood the market with nearly identical offerings? It’s a waste of talent that could have been directed toward innovation. The closest we’ve come to innovation—at least in the app development space—has been the concept of Mobile DevOps, if you can even call that innovation. In simple terms, they’ve attempted to create vendor lock-in through a proprietary automation layer built on a foundation of community steps, which developers have little incentive to maintain. It’s 2024, and the space is in desperate need of fresh ideas.

Remember the revolution that containers brought to the shipping industry? Suddenly, transportation methods adhered to a standard, allowing innovation at different layers. There are many examples of this on the internet, and the phenomenon is called the “narrow waist.” Narrow waists shift focus from redundant efforts to new forms of innovation.

What we are missing in automation is the equivalent of containers or narrow waists. Fortunately, Dagger is building just that. No more vendor lock-in through proprietary pipeline formats or walled-off automation experiences built into SaaS products. A brighter future for automation is on the horizon, and Tuist is betting on it.

Mobile DevOps? Free Automation Ops is better. It’s free because it gives organizations the freedom to choose and move across services with minimal costs. As a result, it forces existing CI providers to innovate. My guess is that many will fall victim to the sunk cost fallacy, throwing money at marketing their Mobile DevOps solutions. Good luck! We need to bring freedom back to organizations.

Dagger will not only be instrumental in gaining freedom but also in transitioning automation from languages like Ruby or bash scripts to languages such as Swift, Kotlin, or any other language of choice. Yes, you read that right: transitioning away from Fastlane is possible. Fastlane is fantastic, but when you blur the line between automation and CI—allowing workflows to run locally or remotely, as needed—the developer experience becomes truly magical. Developers will also have access to a community-driven ecosystem of steps from Dagger.

Tuist will enable Dagger pipelines to run in macOS environments, triggered by a CLI command or a GitHub event. It’s simple:

tuist workflows run test --remote xcode-15

This command means: “Take my project, run the workflow in a remote environment with Xcode 15, and make it feel like it’s running locally by forwarding the standard pipeline events.”

Want to see a running build? Easy:

tuist workflows logs 1234 --tail

There’s no need to build web-based terminal experiences when developers already spend their time in terminals.

In 2024 or early 2025, Xcode developers will be able to do this and choose a CI provider that maximizes their freedom.

]]>
<![CDATA[It’s time for innovation to happen in the CI space through the narrow waist that Dagger brings to the table.]]>
A different approach to building a software company https://pepicrft.me/blog/2024/09/06/building-a-different-company 2024-09-06T00:00:00+00:00 2024-09-06T00:00:00+00:00 <![CDATA[

Since Tuist’s inception as a business, it became clear that we were building a different type of company: one born from the love of crafting great software that people and companies would willingly pay for. We’ve proven that there’s still space for this type of company in an industry where businesses typically view success as a zero-sum game, enabled by streams of capital with the sole end goal of going public.

Building a company this way has both positive and negative aspects, depending on your perspective: we are limited in capital. This constraint influences some early decisions that we believe will have a significant impact on the company’s success in the market.

Technology Choices

We select technologies that are simple by default and embrace the standards of the platforms on which they run. This is why we chose Elixir and Phoenix over JavaScript web frameworks, and vanilla CSS over build or runtime abstractions of the language semantics. Choosing these technologies validated our belief that our industry might have gone too far with layers of abstractions, most of which are sustained by VC capital or struggling with OSS sustainability challenges. Thanks to this approach, we don’t need to spend days or weeks of development dealing with dependency updates that crash our app at run or build time, or because a framework maintainer decides to embrace a new trend and ship breaking changes. We use all that time to build Tuist.

Scalability and Cost-Effectiveness

Moreover, we make decisions that scale well and cheaply. When people say that every technology scales, they’re right, but they usually forget to complete the sentence with: “if you throw enough money at it.” Since we don’t have that money, we choose technologies that scale cheaply. Elixir is a prime example of this approach: you can go a long way by adding more cores and CPUs to your machines. Will we ever need a different scaling method? Maybe, but we defer that decision as long as possible. We also stay away from “serverless” platforms that promise “pay-what-you-use” pricing, which often translates to “charge-even-more-than-with-a-single-server-running.” We view these concepts as new monetization strategies wrapped in attractive “go to the edge” narratives.

Open Source and Openness

Another key element in staying relevant without streams of capital is our commitment to open source and openness. While many players see it as a marketing tool, we embrace open source as a means to build more diverse software and foster new solutions that can build on new layers of commodities. We plan to take some of the innovation that has happened behind the walls of our competitors—often intentionally designed to vendor-lock customers—and offer it for free to the community. Because we don’t need to “exit” Tuist, we can afford to give organizations back some of the freedom and agency that was taken from them. We’ll monetize by integrating all these tools into a service that we’ll maintain and run for them. The underlying technologies will all be open source with permissive licenses, similar to Grafana’s approach.

Building on Open Source

We also build on open source solutions. We look for alternatives to mainstream services where we often end up being the product. We use Chatwoot over Intercom, Nextcloud over Google Drive, Documenso over DocuSign, and Penpot over Figma. This approach not only supports the ecosystem of companies that embrace openness as a core value but also minimizes the risks of depending on closed-source services, which often make it difficult to export your accounts and data, limiting your freedom to move to the best solution for your business. We also support these projects, either by paying for their hosted services or by donating to the people and organizations behind them.

Achievements and Future Outlook

With just one full-time developer, a part-time developer (myself), and a talented designer joining soon, we are proud of the quality and quantity of solutions we’ve been able to produce. Although mentally taxing at times, being limited in capital was key in embracing decision-making principles early on that will have a huge impact on our ability to stay relevant, discover new market opportunities, and respect our customers’ rights and freedoms.

Tuist is here to shake up the game.

]]>
<![CDATA[Tuist is doing things differently in the tech world. We’re building great software people actually want to pay for, without chasing big investor money. By keeping things simple, open-source, and scalable, we’re proving you don’t need millions in funding to make cool stuff that respects users’ freedom. It’s challenging, but we’re shaking up the game one line of code at a time.]]>
About mental health https://pepicrft.me/blog/2024/09/04/about-mental-health 2024-09-04T00:00:00+00:00 2024-09-04T00:00:00+00:00 <![CDATA[

It’s not the first time I’m talking about mental health, and it won’t be the last one. I do so to structure my thoughts and leave my story in the hope that people can avoid falling into the same traps that I did.

I’d describe my 20s as the decade of professional success: get a higher salary, climb the ladder, and get recognized in the industry. The dream was suddenly spoiled, and I was overwhelmed by many feelings. I was disappointed by the reality of the tech industry and how businesses work, which I hadn’t grasped before. I also found myself with an identity toxically intertwined with my professional self. My work defined me. Coding had evolved from a pure craft I loved into a tool for being recognized and successful in an industry filled with entrepreneurs and indie builders. I ought to be like them, I thought many times: work, work, work, and work. I made my health, and the Pedro outside of work, secondary. As bad as it sounds, I forgot about myself. I became addicted to working.

“Hacking Capitalism” by Kris Nóva started to open my eyes. It helped me better understand capitalism and the tech industry, and shift the focus toward myself and my passion for the craft. However, we needed to turn Tuist into something we could make a living from, so I inevitably threw myself back into the industry I had come out of disillusioned. On one side, I saw it as an opportunity to codify our values and a healthier approach to doing business with technology, one that values openness and empowers both the people who use our tools and the people who make them possible. However, you still need to navigate the unfair practices of the industry: people saying you have no chance to succeed with two technical founders, companies refusing to reciprocate the value they get through financial contributions, or feeling tiny against giants that have endless streams of capital to outcompete you, or that use their privileged position to shit on you. It has been challenging to navigate, but I’ve learned from it. Perhaps I’m still too attached to work?

Another book further opened my eyes here: “Technofeudalism” by Yanis Varoufakis. It talks about the idea of cloud capital and how it tore down the walls that gave us freedom in this capitalist world. The freedom to be someone else outside of work. The freedom to think without wondering whether something should be posted. The freedom to take a nap without thinking about meetings. The freedom to have a timeless hobby. All of that is gone in the world that technofeudalism is shaping, and I fell victim to it. This was not only due to the amount of thinking that went into work, but also the peripheral X doom-scrolling and the ongoing feeling of having to work on my and Tuist’s brand to succeed in building a business. I succumbed to cloud capital, feeding it with my attention and time.

I think establishing limits there is necessary for my mental health, and my mental health is, in fact, necessary to build a healthy company, so I’m not doing myself or Tuist a favor by trying to be always online and giving up on investing in myself. So I started to change that. It won’t be easy, because I have to unlearn many patterns I acquired over the years, but it’s a must to remain mentally healthy. I uninstalled Slack and removed my work email address from my phone. I’m also limiting my time on social channels and have stopped using X; more and more, it’s becoming a very hostile environment. I like it on Mastodon, and while the reach of what we post there is smaller, I’m becoming comfortable with that. It’s not my role to capture people’s eyeballs with shiny posts on X, but to share humane stories that people can read at their own pace if they feel like it. This aligns more with our values, though it feels odd after having ridden the Twitter and X train for so many years.

I’m also making space for things outside of work: resting, walking, or exercising. I’m unlearning the idea that my life has to feel productive. I’m not a machine, and therefore I shouldn’t treat myself like one. I have emotions, a complex brain I need to take care of, a family to love and spend time with, and a life to live and enjoy instead of spending it on the hamster wheel of becoming successful, whatever that means.

I want to live a simple life with the people I love. Really.

]]>
<![CDATA[This post is a follow-up on my previous post about mental health. I talk about how I'm trying to establish limits to protect my mental health and how that's needed to build a healthy company.]]>
An intrinsically motivated person in an extrinsically motivated environment https://pepicrft.me/blog/2024/08/24/intrinsic-motivations 2024-08-24T00:00:00+00:00 2024-08-24T00:00:00+00:00 <![CDATA[

Have you listened to Lex Fridman’s most recent interview with @levelsio? If not, I’d recommend giving it a listen.

I’ve been following Pieter for some time, and I really appreciate his ability to cut through the noise and focus on what truly matters. This is a skill I find difficult to acquire, partly due to the many years I’ve spent perfecting software craftsmanship, which often leads me to analysis paralysis. You know, before you build that tech piece, you have to find the perfect technology, set up continuous integration, choose the right framework, and by the time you have everything set up, you’ve lost the motivation to build the actual product. I’ve been there many times, and if you’re a software craftsman like me, I’m sure you’ve experienced it too.

You can’t imagine how many times I’ve wondered if I should be more scrappy in my work, focusing on making money quickly and monetizing the skills I’ve acquired over the years. I even feel guilty at times for not doing so. Am I foolish or what, Pedro? You might be like those people who are always waiting for the perfect moment to start a business and capitalize on the frenzy of the ecosystem, because if you don’t, someone else will. All that open-source work I’ve done is great, but it doesn’t pay the bills. That’s a mental struggle I often go through, especially when I see indie hackers like Pieter Levels discussing their MRRs and how they’re making a living from their work.

But you know what? My conclusion is often the same: I’m not like them. My motivations are intrinsic, and this is something I can’t ignore. When I hear them talk about people as mere tools for making money, I can’t help but feel disgusted. I know we live in a capitalist society and money is important, but I view it as a result of the work I do in supporting others, not as the sole goal.

The best technologies we’ve seen were created by people who worked for fun and were intrinsically motivated: Linux, Git, Wikipedia, Ghost, Ruby, and many others. If you ask me, that’s the kind of technology I want to build—one that leaves a lasting impact on the world and inspires others to do the same. The challenge is achieving this in a world that increasingly sells us the idea that we should be obsessed with money and fame. Technology born from intrinsic motivation has a unique momentum that can’t be stopped; Microsoft knows it well.

So yeah, it’s tricky being an intrinsically motivated person in an extrinsically motivated environment, but the happiness of doing what you love is worth it.

]]>
<![CDATA[I admire Pieter Levels for focusing on what matters. My motivation is intrinsic, and while money is important, it’s not my sole goal.]]>
Open design https://pepicrft.me/blog/2024/08/20/open-design 2024-08-20T00:00:00+00:00 2024-08-20T00:00:00+00:00 <![CDATA[

I’ve said it a few times: Working in the open is fantastic. You open yourself to diverse ideas you wouldn’t otherwise have access to.
Sadly, other ecosystems don’t enjoy the foundational tools that we developers have access to, like GitHub or GitLab, which early on put the focus on making coding a social activity.

Ideas like code reviews through pull requests, with their sophisticated diffing interfaces, or LICENSE.md files, which make it possible to bind contributions to a specific license, are rare in design. Figma and Penpot are trying to lay out the pieces, but we still have some way to go. Imagine if designers built curricula through open design projects, the counterpart to open source but in design. They could review each other’s work before merging it into a “branch” (or the equivalent of that concept in design) and license their work under a specific license.

We were eager to explore this idea in Tuist, so we joined some communities, and we were lucky to find Asmit Malakannawar, who immediately connected with us, the ideas behind Tuist, and our vision for a fully open platform to extend the capabilities of native tools. Like us, Asmit turned out to be a huge advocate for open source. He is involved in the GNOME community, where he made significant contributions in the past. Marek and I could not believe we crossed paths with him, but this is the beauty of open source. You cross paths with very talented people who place a strong focus on the people and the craft, and they can impact the software in very distinct ways.

It’s been a few weeks since he took on some ownership from us, exploring ideas that had been in our backlog for some time and assuming the lead design role in the Tuist project. As part of that work, we’ll explore the idea of open design, making our designs available under a permissive license so that anyone can use them, including the developers who will be able to extend the platform in the future. We’ll also contribute design-related pieces of technology to the Elixir ecosystem, a language we’re passionate about. We want to inspire more designers to get involved in open source and make our small contribution to making “open design” a reality.

I won’t stop repeating this: being open, and balancing that with building a thriving business that can support working in the open, is a unique strength in a space where people have become secondary in how we do business. We are placing people at the forefront of our business and inviting a diversity of ideas to give existing native tools the superpowers they need to be used at scale.

Welcome, Asmit, to the team!

]]>
<![CDATA[Working in the open fosters diverse ideas. At Tuist, we're embracing open design and welcoming Asmit Malakannawar to help shape our vision.]]>
Fair Source: Sustainability with no customer risk https://pepicrft.me/blog/2024/08/13/open-tuist 2024-08-13T00:00:00+00:00 2024-08-13T00:00:00+00:00 <![CDATA[

As you might know, we aim to make Tuist a fully open project: a project with a thriving business behind it that protects organizations’ and developers’ freedom and minimizes any risks of betting on us.

We achieve this by:

  1. Building on and embracing standards over proprietary formats.
  2. Exposing and documenting programmatic interfaces so developers can access the data from Tuist’s domain.
  3. Opening the code that powers the platform.

It sounds fantastic on paper, but in practice, point 3 poses the most significant challenge: how do we avoid putting Tuist at risk?

Open source permissive licenses like MIT and Apache bring freedom but don’t protect the business. Companies can benefit from your work in a predatory way without maintaining a healthy balance that benefits all stakeholders. Some even dare to complain publicly when you spoil their plans in order to protect Tuist.

Other organizations seek protection by adopting AGPLv3, which many companies have policies against, and selling dual licenses and enterprise features. Those businesses are often referred to as Open Core. However, companies are still facing litigation risks, which we do not want for our customers. So what’s the alternative?

I think the answer for Tuist is Sentry’s new license concept: Fair Source, particularly their Functional Source License (FSL). The license beautifully strikes a good balance between the freedom of Open Source and the protection of the business.

The code is available (Customer freedom). You can check out the code, use it, extend it, contribute to it, and host it as long as you don’t try to compete with the business (Business protection) so that the project can thrive and benefit everyone. And after two years, the code becomes MIT, so if the business dies, anyone can take it over and continue to thrive.

David Cramer puts it very well in his blog post Open Source is not a Business Model:

“If they choose AGPL they’ll deter all but the most determined competitors, but given its a GPL-based license, it might also scare off certain customers. If they choose MIT they might as well rely on thoughts and prayers, as nothing protects them from predatory companies. So what do they do? Well, most of them choose Closed Source.”

So I think a fully open Tuist platform could unfold this way:

  • We expand our currently narrow TAM. The milestone here will be releasing Tuist Workflows, a continuous integration solution that blurs the lines between CI/CD, a build system, and local and remote environments, opening Tuist up to many ecosystems. Swift will remain the ecosystem where we started and where we deliver the best developer experience of the Tuist platform.
  • We open the server code under the FSL, allowing companies to self-host, while we charge for hosting the software ourselves, which we’ll be experts at.

Not everything needs to be FSL. Some components like the Tuist CLI or technologies we’ll develop to enable this vision can and will be MIT. They’ll be our gifts to inspire others to build thriving open businesses.

Marek and I are still sleeping on these ideas, but we intend to make them happen. We can only build a best-in-class productivity platform if we embrace openness.

]]>
<![CDATA[In this post, I explore the idea of a fully open Tuist platform.]]>
DX vendor locking https://pepicrft.me/blog/2024/08/05/dx-vendor-locking 2024-08-05T00:00:00+00:00 2024-08-05T00:00:00+00:00 <![CDATA[

Today, many internet companies have taken control of our content and data, placing it behind paywalls and subscription models to lock us into their ecosystems. Our photos are in the cloud, our music is behind a Spotify subscription, and our photo editing tools are accessible only through Adobe Cloud services. Isn’t this peculiar, especially considering the increasing power of our hardware? It becomes even more concerning when they use our data to train AI models to further maximize their profits.

This pattern, which originated in the consumer space, has also infiltrated the developer tooling space. Companies argue that runtimes and ecosystems like JavaScript are too complex, so they create coupled abstractions for their services. Tasks that can be solved client-side are often pushed to the server to sell services. While some problems genuinely require server-side solutions, those that can be handled on the client side should be, in my opinion.

Need a database? Use a remote one we manage for you, with a CLI to interact with. Implementing authentication? There’s a remote service for that too. Running your app in the same environment as production? Use a proprietary runtime accessible through the company’s CLI. Want to avoid environment setup? Use the browser, and we’ll provide an environment for you.

This not only causes a loss of control over your data and content but also over your development experience, costs, and the ability to craft software offline. I can’t subscribe to that model.

Elixir and Erlang might not be trendy, but they don’t create unnecessary complexity like JavaScript does with its “Complexity Abstraction As A Service” (CAAAS). The problems that proprietary abstractions claim to solve can often be addressed with a few lines of Elixir. Long-running server processes? Mainstream solutions suggest serverless, but serverless doesn’t handle background jobs well, so there are additional services for that. You’d think production uses the same NodeJS runtime you run locally, right? Not exactly; you need the platform’s proprietary environment. And the list goes on.

If this is the future of software development, I’m out. I’ll invest my time in learning and using tools and standards that allow me to craft software the way I want.

]]>
<![CDATA[In a world where companies are increasingly locking us into their ecosystems, it's important to invest in tools and standards that allow us to craft software the way we want.]]>
Where do I see myself in 10 years? https://pepicrft.me/blog/2024/08/05/where-do-i-see-myself 2024-08-05T00:00:00+00:00 2024-08-05T00:00:00+00:00 <![CDATA[

Where do I see myself in 10 years? That’s a question my therapist asked me a week ago. My answer was being healthy, living a simple life, and having a business where people feel inspired and creative.

In recent years, I’ve learned through life circumstances that a simple life brings a peace of mind that’s priceless. This is hard because the environment sometimes pushes you to seek happiness through material possessions, but those, in the end, bring additional problems that I wouldn’t have had otherwise. For example, I wouldn’t have felt angry at Apple for wanting to charge me €250 to fix an AirPods Max that they broke with a software update if I hadn’t bought them. Now I use old wired earphones, and not only do they last longer, but I don’t have the responsibility of charging them. I don’t have a car, and I don’t plan to buy one. I fell into the trap of buying an Apple Watch, and now I have to remember to charge it every night. I don’t know… life is simple, and we are pushed to make it more complicated. I think that complication puts us in a state of mental exhaustion, which is the perfect ground for hyper-capitalists to keep pushing more stuff on us. Note that I say hyper-capitalists to differentiate from other forms of capitalism that bet on circular economies.

I never felt the need to buy to show off. If I was lucky enough to get a bit of extra money through my work, I saved it or took my parents on holiday in return for all they invested in me over the years. Their happiness is priceless and something that I’d pay for endlessly.

When I scroll through X or LinkedIn, a lot of what I see makes me allergic to those platforms. There is a definition of success based on how many impressions your posts got or how large your last round of investment was. They push us to redefine happiness and success around monetary and fame goals, and it’s very common to fall into that trap and pursue the same things, only to realize that you might end up more miserable despite all those impressions. I feel allergic to that, but I must admit I fell into it a few times in the past. Those platforms were designed for that.

And now that we are building a company, it feels like rowing in a different direction, but it’s a lot of fun doing something based on our own intuition and definition of happiness. Companies build proprietary walls that we tear down. We make others angry by protecting a piece of software to ensure every organization can continue using it for as long as they can. We step into established domains like CI with the energy to do things differently. We share what and how we do it because we’d like to inspire others to do the same and build trust through transparency.

So yeah, a simple and social life optimizing for fun is what I see myself doing in 10 years. And I’ll push away anything that tries to steer my life in a different direction.

]]>
<![CDATA[This blog post reflects on the importance of living a simple life and the impact of social media on our definition of success.]]>
Continuous releases with automated changelog generation https://pepicrft.me/blog/2024/08/04/continuous-releases-with-automated-changelog-generation 2024-08-04T00:00:00+00:00 2024-08-04T00:00:00+00:00 <![CDATA[

As part of our work on Tuist, we open source small pieces of technology that aim to commoditize the development of Swift tools. An example of that is XcodeProj. Because those projects are not actively worked on, it was important to have a process that automates the release of new versions when releasable changes are merged.

To achieve that, I used a tool that I discovered recently through Mise, git-cliff, which automates the generation of changelogs based on the repository’s local and remote history (e.g., GitHub pull requests). Once the tool is installed, something that you can achieve easily with Mise:

[tools]
"git-cliff" = "2.4.0"

You can initialize it by running:

git cliff --init github

The argument github instructs the initialization command to use the vendored GitHub template. You can check out this list of other templates. The command generates a cliff.toml with a default configuration, which in my case I left as is.

The workflow

We use GitHub Actions, so the first thing we’ll need in the release workflow at .github/workflows/release.yml is the configuration to run for every commit in the main branch:

on:
  push:
    branches:
      - main

As one of the first steps after checking out the repository, we’ll need to check whether a release is necessary. We can do that by comparing the persisted CHANGELOG.md, which I had previously generated with git cliff -o CHANGELOG.md, and the one that would be generated with the bumped version, which I can obtain with git cliff --bump:

- name: Check if there are releasable changes
  id: is-releasable
  run: |
    bumped_output=$(git cliff --bump)
    changelog_content=$(cat CHANGELOG.md)
    if [ "${bumped_output}" = "${changelog_content}" ]; then
      echo "should-release=false" >> $GITHUB_ENV
    else
      echo "should-release=true" >> $GITHUB_ENV
    fi

Note that I set git.filter_unconventional = true to only consider releasable those commits that follow the conventional commit format.
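For reference, that setting lives in the [git] section of cliff.toml. Here’s a minimal sketch of the relevant excerpt (the surrounding options in our actual configuration are the template defaults):

[git]
# Parse commit messages according to the conventional commits specification
conventional_commits = true
# Skip commits that don't follow the conventional format when
# generating the changelog, so only they count as releasable
filter_unconventional = true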

From there, we can obtain the next version (note that we skip if we shouldn’t release):

- name: Get next version
  id: next-version
  if: env.should-release == 'true'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: echo "NEXT_VERSION=$(git cliff --bumped-version)" >> "$GITHUB_OUTPUT"

And the release notes:

- name: Get release notes
  id: release-notes
  if: env.should-release == 'true'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    echo "RELEASE_NOTES<<EOF" >> "$GITHUB_OUTPUT"
    git cliff --unreleased >> "$GITHUB_OUTPUT"
    echo "EOF" >> "$GITHUB_OUTPUT"

The remaining steps are just updating the CHANGELOG.md, committing the changes tagged with the version, pushing the changes upstream, and creating a release on GitHub.

- name: Update CHANGELOG.md
  if: env.should-release == 'true'
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: git cliff --bump -o CHANGELOG.md
- name: Commit changes
  id: auto-commit-action
  uses: stefanzweifel/git-auto-commit-action@v5
  if: env.should-release == 'true'
  with:
    commit_options: '--allow-empty'
    tagging_message: ${{ steps.next-version.outputs.NEXT_VERSION }}
    skip_dirty_check: true
    commit_message: "[Release] Command ${{ steps.next-version.outputs.NEXT_VERSION }}"
- name: Create GitHub Release
  uses: softprops/action-gh-release@v2
  if: env.should-release == 'true'
  with:
    draft: false
    repository: tuist/Command
    name: ${{ steps.next-version.outputs.NEXT_VERSION }}
    tag_name: ${{ steps.next-version.outputs.NEXT_VERSION }}
    body: ${{ steps.release-notes.outputs.RELEASE_NOTES }}
    target_commitish: ${{ steps.auto-commit-action.outputs.commit_hash }}

I’m quite happy with the result, which you can check out completely in this file.

]]>
<![CDATA[Automate the continuous release of releasable changes with git-cliff and CI.]]>
Exploring commercial OSS https://pepicrft.me/blog/2024/08/02/commercial-oss 2024-08-02T00:00:00+00:00 2024-08-02T00:00:00+00:00 <![CDATA[

How and when we can make Tuist fully open source end to end is an ongoing conversation between Marek and me. We conceive open source as a key element in building the best platform to create better apps faster. Open source is freedom; it’s diversity of ideas, it’s accountability, and it’s empowering others to explore new ideas. However, doing this while remaining financially viable is where the challenge lies. In this blog post, I’d like to share with you our learnings, the mental models that we built around sustainability in open source, and how we see the future of Tuist.

If your software has low packaging, distribution, and scaling costs, you’ll have a hard time building a source of revenue for the project—at least with an open-source license. Examples of those are CLIs or client-side desktop or mobile apps. Anyone can repackage your app or CLI and throw resources you might not have into monetizing it. We’ve seen many examples of this, from storage solutions like MongoDB to Tuist itself. It’s such a known phenomenon that it’s gotten a name—being “Jeff’ed”—which comes from the idea of Jeff Bezos packaging your software and selling it as part of his cloud offering. I said earlier that this model emerges when the cost of scaling is low, but if you are an Amazon, that’s not really a problem, because you have a large pool of resources to figure out scaling. They might be able to offer a better and cheaper solution, and there’s no way you can compete with them. Tuist was close to being Jeff’ed. Thanks to that, we learned an important lesson: companies can embrace this model without feeling any need to support the underlying pieces of technology. If this had happened at a different moment in the lifecycle of Tuist, we wouldn’t have minded, but at the time, we were very limited in resources, something we had been open with them about, and the additional bump in support and feature load would have put us at risk of burnout. So we had to put a pause on our idea of everything being open source.

There are some models to prevent this. Some embrace what’s called “source available”: they choose a license that’s not approved as an open-source license by the Open Source Initiative, like the Functional Source License or Fair Source. The code is available in those cases, but the license limits what companies and developers can do with it, usually focusing on financially protecting the project. For example, Tart limits which users may use the software under its license. I personally like the balance that this model achieves, because small companies or indie developers can use it in their projects, but if they grow, which usually means they have the financial resources to support the project, they can pay for another license. It’s called dual licensing. There’s one caveat to this model: you might get fewer contributions, because you have to require developers to sign a contributor license agreement (CLA) granting you the rights to their contributions, which is what allows you to dual-license that code. Those same rights can be used to close the source code, change the license down the road, or sell those rights to another company. With an open-source license and no CLA, in scenarios like those, the community has the freedom to fork the project and continue its development in a different direction, most likely under a different name. For example, when Terraform changed their license, limiting the freedom of the software, a bunch of companies, with the support of the Linux Foundation, forked the project under the name OpenTofu.

Soon after nearly being Jeff’ed, we learned about the importance of trademarks through the creator of Docker, Solomon Hykes. He mentioned in a podcast that we developers tend to put a strong focus on the license and forget about trademarks, despite the latter playing a crucial role in situations like the one described above. When someone builds a project, there’s value in the code, but also in the brand that emerges and the community around it. Terraform and OpenTofu share a history up to a point in time, but the brand of the former carries value and support that the latter does not. That’s why forks either happen with enough support from a big portion of the community and companies, or they have a difficult time gaining traction. Imagine if AWS had to sell Redis under a different name. Do you think people would be more inclined to get it from there rather than from the organization that created it? A brand is something that takes many years to build, and shortcuts with money don’t usually yield good results, especially if the people you build for are developers.

We have a trademark now in Europe and the US, but we didn’t have one back then, so a company took the freedom to fork the tool and use the brand to market an integration with their service. An open conversation with them about our concerns went nowhere, so the question of whether trademark guidelines would have nudged this in a different direction remains. I highly doubt we’d have reached an agreement, but at least we could have pushed them to stop using the Tuist name under their umbrella of services. In hindsight, I think close-sourcing part of the code was the right move. It was not an easy decision emotionally, but we felt it was the right thing to do, and I’m proud of the job we’ve done to navigate that and to think about a future where we can go back to everything being open source.

There are open-source projects that fit the above profile and remain under a permissive license, in some cases with no trademark. In those cases, the maintainers have a full-time job at a company and treat working on open source as a gift to a community, something they do in their free time. I respect that model and admire maintainers who are able to do it sustainably, especially if the project is successful, because a maintainer’s limited attention and time are impossible to scale. Hence the many cases of burnout. And at the end of the day, the project itself also suffers, because such projects tend to stagnate, often in the hope of not further increasing the load of requests the maintainers receive. Tools like Fastlane, Mise, or many Swift packages fit into that model.

There are companies like 37signals that open source some of the tools they develop internally to support the development of their products. For example, Ruby on Rails, or the many utilities the company open sources. I like this model: that open source is a gift, published with no strings attached. It’s a bit like saying, “we are doing well, so we are contributing some of our work to the commons for others to build businesses upon.” However, the closed-source products miss the opportunity to bring in a diversity of external contributions. I think it’s beautiful, but the more I familiarize myself with sustainability models, the more I realize that it might not be possible for all kinds of products. I’ll expand on this later on.

A different model, which is the one we are starting to embrace, is providing value from the server side. Because self-hosting comes with costs and the complexity of keeping up with software updates, some companies will be inclined to pay, especially if keeping the service running is a significant effort. An example of that is Cal.com. Most people will pay for their hosted version because they are not developers and most likely don’t know how to self-host. The ones that do know can self-host, and that’s ok. You might think that they are not contributing back, at least not financially, but there’s a chance that they do so through code. They sometimes also become evangelists for your software, which is an excellent marketing tool. Some projects extend this model with features that are released under a paid license, usually the ones that large enterprises need. They also offer dual licensing for companies that have strict policies against copyleft licenses like AGPL-3.0. However, do note that hosting AGPL-3.0 software on your own server is entirely compliant with the license. An Amazon could still take a service, host it, and sell it, but without owning the trademark, that’s very unlikely to happen, because they’d need to make a huge effort to understand software that they are not maintainers of and market it under a different name.

The challenge with the above model is that if your market is small, and it’s mostly developers, you’ll find many companies, especially ones outside of the US, willing to take the small risk of hosting AGPL-3.0 software on their servers, and some willing to go even further and self-host the pieces that are under a different license. At the end of the day, it’s impossible for you, as the creator of the software, to know whether something like that is happening, and even if you knew, would you go the legal route against a small random company somewhere in the world? Cal.com can afford this because their market is huge, but ours is still small, so doing something like that would slow down reaching financial viability and be detrimental to the product. There are products like Supabase that can afford it, even without having captured a big chunk of the market yet, because the value of their software also lies in the complexity of scaling a database. In our case, the value at the moment is mostly on the client, so until we have more value on the server, along with complexities or enterprise features that make the product compelling to pay for, I doubt we’ll make the server open source. We believe we can get there, but shifting that value takes time. What we know for sure is that we are not going to introduce complexity for the sake of making paying for Tuist compelling. For instance, once we step into the continuous integration space with Tuist Workflow, that might change the story. We could become the first open-source CI service for app developers, and the complexities associated with that are worth having a company deal with them. Note, though, that there are projects like YunoHost or Cloudron that turn the installation of services like GitLab, which are in theory complex to run and update, into a one-click action, so I believe having a few features for large enterprises under a different license will be key too.

We discarded other models, like Point-Free’s model of selling courses. We’ve learned a lot about Xcode and Xcode projects, and that’s certainly a sellable thing, but we find joy in building a useful platform, so we went down this path instead. Along the journey, we’ve been open-sourcing MIT-licensed gifts, and we’ll continue to do so, to lift innovation at the layer above the layers that are common across similar businesses, so that we don’t all waste time building proprietary technologies in silos.

In 5 years, we see an open-source Vercel or Expo that bets on standards to support teams in the journey of turning an idea into something they can share with people (not “users”). It’s by shifting value to the server and expanding the market that we’ll be able to open source everything. Can’t wait!

]]>
<![CDATA[This post touches on the models we looked at to draw inspiration to make Tuist financially viable.]]>
What is urgent? https://pepicrft.me/blog/2024/07/31/what-is-urgent 2024-07-31T00:00:00+00:00 2024-07-31T00:00:00+00:00 <![CDATA[

Over the years of being embedded in the tech industry, I’ve noticed that the meaning of urgency is morphing and is being used as a tool to capture our attention. And I think I’ve been a victim of this.

Everything feels urgent: that message on my WhatsApp, the email I just received, or those RSS updates I need to go through. If everything competes to be urgent, what is truly urgent?

I sometimes feel as if a concurrent world of urgencies were competing for my parallel mental processing. (For non-tech-savvy people: concurrency and parallelism are not the same thing.) I jump from one thing to another, considering everything urgent, and end up exhausted at the end of the day. I think that’s normal, considering those jumps are sometimes between unrelated domains.

I had not realized my tendency to consider many things urgent, nor the mental exhaustion that follows from it. I’ve been thinking about it for some time, but with the building of the company, I’m noticing I have to get better at identifying where the urgency lies, if any, and how to deal with it.

In case you wonder how it manifests: I can have the mail application closed, but it feels uncomfortable, because I do so with the feeling that something urgent might come up. Or a message shows up from a person seeking support, and I feel like stopping what I’m doing to provide help. Or I have some spare time and fill it with checking social channels, just in case something worth my attention is happening. Is it an addiction? Maybe. It’s as if my brain were constantly on alert, waiting for the next urgent thing. It’s exhausting.

Brains are damn difficult to understand.

And to make matters worse, I’m terrible at establishing systems or processes. My brain is a bit chaotic, and I think some of my creativity is rooted in that chaos, in the connections that emerge from it. However, the mix is dangerous if you combine that with the constant feeling of urgency.

So, despite how comfortable the current state might feel, I’ll try to establish a system, both at work and at the personal level. First, I’ll treat the stream of incoming things as a queue, where the focus is not on the urgency (or importance) of an item but on whether it is well-categorized. Then, I’ll devote the last 30 minutes of the day to reviewing the queue, assigning urgency and importance, and planning my next day based on that. This is the one step I need to build a routine around.

If something comes up during the day, I’ll resist jumping into it and instead add it to the queue. Then, review it at the end of the day and repeat the process. And I’ll establish a boundary between work and personal life:

  • Work: Slack, GitHub, Desktop Mail
  • Personal: Telegram, WhatsApp, iPhone Mail, In-person

I used to have work things on my phone, but that blurred the line between work and personal life in a way that was not healthy for me or the company.

Will this work? It will if I commit to building the habit and stick to it. Whether I’ll be able to commit is a whole different story, but let’s talk about that in a future blog post.

If you are going through some mental struggles or learning about mental health and want to chat, I’m always up for it. Just reach out.

]]>
<![CDATA[With many things competing for our attention, what is truly urgent?]]>
Openness as a tool of trust https://pepicrft.me/blog/2024/07/22/openness-as-a-tool-of-trust 2024-07-22T00:00:00+00:00 2024-07-22T00:00:00+00:00 <![CDATA[

While reading about media in the book “How to Survive the Modern World”, in particular a chapter on the role of newspapers, I couldn’t help but think of the similar role that technology plays in our lives.

As newspapers developed, they took on the role that letters had played centuries before. Letters served as a tool between royal courts and the distant parts of their kingdoms to send information to be acted upon (e.g., a grain shortage). Newspapers took this role and expanded it to the general public. When it reached someone’s doorstep in the morning, much of this information was liable to feel urgent and alarming. This led to a paradoxical state of mind in modern democratic states: one is at once extremely well-informed, deeply exercised, and completely powerless.

The sense of urgency and alarm that newspapers brought to the general public and the need to feel informed and up-to-date with the latest news became the perfect channel between governments and the public to share their views and influence public opinion. The public expected them to be impartial and objective–an accountability tool for the government. However, as history has proven in many countries, like Spain, they can be easily manipulated, leading to societies losing trust in both the media and the government. But what’s the solution?

We see similar patterns when we look at the tech industry, which is the result of technology embedded in capitalism. Through stories, we’ve been told about aspirations where technology will save us from our problems or make our lives easier. It became the modern newspaper that also made us well-connected and informed, yet with the same powerlessness. We were convinced that every problem should be solved with technology and were forced to trust the people behind it to make the right decisions. We put our privacy aside (except for the Germans), and we gave them our data to make our lives easier. We put all our photos and documents in unlimited cloud storage. We traded our free time in our pursuit of feeding the algorithms with reactions or content. Being a content creator was the new cool thing.

And we ended up in the same situation as with newspapers. Sooner or later, they can’t hide the power structures and the real interests behind them. The interest of a few people who want to become wealthier and more powerful at any cost, whether they are politicians or techno optimists (e/acc). They can go as far as postponing sustainability plans, laying off workers to increase market value, using slave labor to fine-tune their AI models, building the largest pyramid scheme in history, or laying out an echo chamber that leads to civil wars.

I don’t know about you, but I have low trust in many technology organizations and leaders, just as I do in newspapers and politicians. The loss of trust grew as I gained perspective on the industry, especially after Shopify’s response when some of my colleagues started to form a union. When I looked around, I noticed the number of proprietary technologies, the hyper-growth companies, the lack of transparency in the industry, and the ongoing re-shaping of the story to keep us engaged, from the metaverse to AI by way of the blockchain. Connected to the Internet, but more exhausted than ever. Something is terribly wrong. I want a technology that’s closer to how Aaron Swartz envisioned it. We need to stop this.

We might feel powerless before these giants. But I strongly believe there’s one tool that can help steer the ship in the right direction: openness. When a government or a business is open about its decisions and actions, and builds solutions that are also open in nature, we can build trust through transparency and not through the words of a PR department or a newspaper. We’d still rely on them telling the truth, but we’d tear down the walls that keep them from being accountable for their actions. We wouldn’t have to rely on institutions realizing that the damage is serious and worth reporting. If Meta had shaped their systems in the open, a public debate would have surfaced the harm that what they were building would cause. They could have connected us without making us more individualist, polarized, and addicted. Who knows, maybe Twitter wouldn’t be the shitshow it is today, where a clown does whatever he wants.

Moreover, openness has the power to attract communities of people to build wonderful technology. Look at Linux, WordPress, Wikipedia, Ghost, Plausible, Codeberg. Suddenly Microsoft and many others are interested in open source, yet they do little to support the most critical pieces of today’s infrastructure. There’s something beautiful in the idea of a community of people building something together, where everyone can contribute and benefit from it. That’s why Marek and I don’t believe Tuist is a traditional company. We want it to be an open and calm company with a community of people who share the same values and want to build something great together.

I made openness and the use of standards requirements when choosing the tools I use these days. I still have a long way to go, but I’m happy with the progress I’ve made. I can right away recognize which tools I’d rather stay away from. For example, you won’t see me using something like Notion. And I’m migrating from Figma to Penpot.

I gave governments and technologies my trust, and they failed me. It’s my turn to take it back and put it in the hands of the people who deserve it. I believe healthier, long-lasting, and human technology is possible if we embrace openness, both as a consumer and as a producer. And I’m going to do my best to make it happen.

]]>
<![CDATA[A healthy relationship with technology is possible if we embrace openness.]]>
How synchronized groups work at the .pbxproj level https://pepicrft.me/blog/2024/07/20/how-synchronized-groups-work-at-the-pbxproj-level 2024-07-20T00:00:00+00:00 2024-07-20T00:00:00+00:00 <![CDATA[

I started working on supporting Xcode 16’s features in XcodeProj. One of those features is “synchronized groups”, which Apple introduced to minimize git conflicts in Xcode projects. In a nutshell, they replace many references to individual files in the file system with a reference to a folder containing a set of files that are part of a target. Xcode dynamically synchronizes the files, hence the name, in the same way packages are synchronized when needed.

I’m excited about it because it addresses an issue that had remained unaddressed for many years and that motivated many people to adopt Tuist or SPM as a project manager. However, I wonder whether adding more dynamically-resolved features, like the resolution of packages, will lead to the best developer experience. That’s something we’ll see down the road, but it’s not the point of this blog post.

If you wonder what the changes mean at the .pbxproj file level, a format I got very familiar with thanks to my work on Tuist, here’s an overview of what I learned today. Groups can now have a child of type PBXFileSystemSynchronizedRootGroup. It references a folder whose files, for example a bunch of Swift source files, will be automatically synced and tied to a target. In the example below, that folder is Framework, relative to the parent containing this group:

/* Begin PBXFileSystemSynchronizedRootGroup section */
        6C14E0CA2C4BC30C00635BF3 /* Framework */ = {
            isa = PBXFileSystemSynchronizedRootGroup;
            exceptions = (
                6C14E0E62C4BC38800635BF3 /* PBXFileSystemSynchronizedBuildFileExceptionSet */,
            );
            explicitFileTypes = {};
            explicitFolders = ();
            path = Framework;
            sourceTree = "<group>";
        };
/* End PBXFileSystemSynchronizedRootGroup section */

The glue between that object and the targets happens in the new fileSystemSynchronizedGroups property of the PBXTarget object.
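To illustrate, here’s a hand-written sketch (abbreviated, with an illustrative object ID, not copied from a real project) of how a native target would reference the synchronized group through that property:

        6C14E0C92C4BC30C00635BF3 /* Framework */ = {
            isa = PBXNativeTarget;
            /* ... build phases, configuration list, etc. ... */
            fileSystemSynchronizedGroups = (
                6C14E0CA2C4BC30C00635BF3 /* Framework */,
            );
            name = Framework;
        };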

Then Apple also introduced another object, PBXFileSystemSynchronizedBuildFileExceptionSet, which is used to provide “exceptions.” Those exceptions are used for three things, and maybe others that I’m yet to uncover:

  • Filtering out certain files and folders.
  • Overriding the default file type.
  • Configuring the type visibility.

Those are the only two models needed to enable this functionality. Once we land this feature in XcodeProj, I’ll explore what the API could look like in Tuist for users who want to opt into it, although the value is not that significant in this case, since git conflicts are not much of an issue in Tuist land.

]]>
<![CDATA[Learn about the new PBXObject types introduced in Xcode 16 to support synchronized groups.]]>
The specified item could not be found in the keychain https://pepicrft.me/blog/2024/07/20/keychain-item-not-found 2024-07-20T00:00:00+00:00 2024-07-20T00:00:00+00:00 <![CDATA[

Trying to codesign a macOS CLI on CI using certificates in a temporary keychain yielded the following error:

The specified item could not be found in the keychain

It turns out that even when you pass the keychain explicitly with flags like --keychain, signing fails unless the keychain is also made the default. Therefore, I ended up making it the default:

security create-keychain -p $KEYCHAIN_PASSWORD $TMP_KEYCHAIN_PATH
security default-keychain -s $TMP_KEYCHAIN_PATH
security import $CERTIFICATE_PATH -P $CERTIFICATE_PASSWORD -A

So, this blog post is a note for my future self because I’m sure I’ll come across this again. Or perhaps for anyone coming across the same issue.

]]>
<![CDATA[Fix the not found keychain item issue by making the keychain the system default]]>
The best products tell stories https://pepicrft.me/blog/2024/07/15/best-products-tell-stories 2024-07-15T00:00:00+00:00 2024-07-15T00:00:00+00:00 <![CDATA[

As part of our work on Tuist, we have to put a lot of thought into how Tuist is presented as a product. As developers, we might think that it’s just a matter of solving a problem or a bunch of problems and putting them under an umbrella name along with documentation and a website. However, when I think of the products that I love and use every day, they all have something in common: they tell stories.

Humans love stories. Stories bring us together. Capitalism, democracy, and religion are all stories that we tell ourselves. We need stories to identify ourselves with, and developer tools are no exception.

But what’s Tuist’s story, you might wonder? At first, Tuist was a solution to a problem: Xcode projects are hard to maintain. We didn’t have a story back then, but organizations felt so much pain working with Xcode that the pain led to the adoption of Tuist. And we kept adding on top of that, so as you can imagine, the story was not clear. What is Tuist? A project generator? A project generator plus a Fastlane? A CLI for automation? A tool to optimize the development experience?

When there’s no story, or the story is not clear, people don’t know how to identify with the product. Note that I prefer to talk about people and not users, because the term users dehumanizes the relationship between the product and the people using it.

So for the past few months, I’ve been thinking about what the story of Tuist should be, and I’m starting to see it. The common denominator of all the features we’ve been adding is that we want to make developers happier and more productive. We believe this leads to better apps. We also believe there’s a possibility to take this to other platforms, but as of today, that feels more like a new chapter in the story.

But “happiness” and “productivity” feel too abstract. I’m already happy with my Xcode project. I don’t need you. It’s very natural to feel that way. That’s why we are presenting it as something developers are familiar with: we aim to become a virtual platform team. Or, as the AI jargon goes, a copilot.

But that’s not enough. By talking about platform teams, developers might have a vague idea of the role that Tuist plays, but how do you present the features? Are you going to say that you are just providing tools X, Y, and Z where X, Y, and Z are just some random marketing names that you came up with but that mean nothing to the developer? That doesn’t sound like a good idea.

It’s already an effort for developers to understand what a virtual platform team means, so we shouldn’t add to that. Therefore, we are going to build on stories and mental models that developers are already familiar with. We are going to use the same terminology that developers use, and place it in the various phases of the app development lifecycle: start, build, share, and measure.

You don’t need to explain what those are. Every developer, even non-Apple developers, should be familiar with them. Tuist just integrates into those mental models to increase the level of happiness and productivity.

So we are currently in the process of iterating on the story, or rather coming up with one, since I wouldn’t say we had a well-defined one before. And going forward, we are going to make sure we treat it as a dynamic organism that evolves with the product and the people using it. I’m very excited about this new chapter and all the possibilities that it opens up.

]]>
<![CDATA[In this blog post I share how stories help people connect with the products they use.]]>
Does it need to be in JavaScript? https://pepicrft.me/blog/2024/07/13/does-it-need-to-be-in-js 2024-07-13T00:00:00+00:00 2024-07-13T00:00:00+00:00 <![CDATA[

I believe the reason React and the many JS-based UI paradigms became so popular is that they brought markup, style, and behavior closer together, making components an atomic and shareable unit of composition. It’s unquestionable that it’s a mental model developers like, so much so that other technologies started introducing the idea of components.

Similarly, Tailwind enabled the portability of styled components. And developers loved that, so much so that it’s rare to see a UI toolkit or design system that’s not built on Tailwind. Developers prefer to learn the semantics of an abstraction instead of the semantics of the underlying standard. It’s bizarre when you think about it, but portability is key, and it’s becoming even more apparent with all those AI tools that generate a website for you.

The problem with it is that if your stack is not JS, you feel left out. Developers find themselves having to decide between having access to that ecosystem of paradigms and reusable component kits at the cost of adding a JavaScript runtime to their stack, or not adding any JavaScript runtime and having limited access to resources and DX. Many end up leaning toward the former, which often also means building an SPA, and you know all the implications that come with that. The cost is high.

For some time I’ve been thinking about whether JavaScript is required to enable that experience, or if it was more of an accidental implementation detail because React started with a focus on the client (browser), where a JavaScript runtime is available by default. The more I thought about it, the clearer it became that it’s very likely an accidental implementation detail. At the end of the day, those technologies are functions from components and data to HTML, CSS, and JS (or Wasm). So why not have the compiler in, let’s say, Rust, with bindings for the many languages and runtimes with popular web frameworks, like Ruby, Python, and Elixir? I believe pushing the solution one level down and making it runtime-agnostic would suddenly make the choice of technology more of a language preference than anything else.

Such a solution could come with a registry to share components. Imagine a design agency bundling a design system and distributing it to you to consume regardless of your technology stack. We’d save those organizations an insane amount of time and money that they currently spend porting their solutions to the multiple technologies they support. It’d become the missing narrow waist in how people build UI for the web. Or imagine something like Storybook too. It’d unlock a whole new world of possibilities and perhaps a new type of discipline: UI developers, who would feel comfortable writing components and iterating on them without the distractions and complexities of a full-stack application.

Where am I going with this? I don’t know, but I want to tinker a bit with the idea in my spare time and see how far it can go. It’s also an excuse to write a bit of Rust :).

Happy Saturday.

]]>
<![CDATA[Is JavaScript necessary to enable atomic and reusable components on the web?]]>
Falling behind https://pepicrft.me/blog/2024/07/13/falling-behind 2024-07-13T00:00:00+00:00 2024-07-13T00:00:00+00:00 <![CDATA[

It’s impossible to live these days without the ongoing feeling of falling behind:

  • News you have to read.
  • Podcasts that you have to listen to.
  • Recommended restaurants to try.
  • X posts to catch up with.
  • Emails and Slack messages to read.
  • GitHub notifications to go through.
  • Travel experiences to experience.
  • Technologies to catch up with.
  • Workouts to do.
  • Kilos to lose to match the stereotypes.
  • WhatsApp audio messages to listen to.

I feel the world around me has gotten so hectic so quickly that my brain hasn’t been able to scale. In fact, I don’t think it should. Trying to scale puts me in the perfect mode that capitalism likes: over-consumerism. But what about our mental exhaustion? That’s something only the person suffering it cares about.

These days I feel like I’m constantly falling behind, jumping from one thing to another until I go to sleep. Everything ends up being irritating as a consequence.

And this is a mode I don’t like. I’m aware I don’t like going through my days that way, nor is it healthy, but somehow I find it hard to escape. When I manage to ignore all of that and put the focus on myself, which happens a lot when I’m flying, for instance, I have the most mental relief and joy. It’s a forced disconnection that I wish I could impose on myself at any time without having to step on a plane. Being able to do that in such a connected world is a gift.

So I’m figuring my way out, but I don’t have the formula yet. I’ll try to go through the pain of reverting some mental patterns that are the consequence of many years of mindlessly embracing a lifestyle that has proven not to be healthy at all. Hopefully, I’ll get through it and restore some mental sanity.

]]>
<![CDATA[In this blog post I reflect on this feeling of falling behind that I've been experiencing lately.]]>
When you are open https://pepicrft.me/blog/2024/07/09/when-you-are-open 2024-07-09T00:00:00+00:00 2024-07-09T00:00:00+00:00 <![CDATA[

Openness is often perceived as a business risk, but I look at it differently.

When you are open, you are more broadly accountable for your work, and your craft is at stake, so people show the best version of themselves.

When you are open, people can build trust through your work and not through third-party auditing companies and certificates. If you had anything to hide, you wouldn’t open it.

When you are open, you can invite external people to contribute, which means a diversity of ideas flowing into the project, which leads to better solutions.

When you are open, anyone can find the information they need to have the agency to make decisions. The organization is more agile.

When you are open, people might use your work without paying for it or worrying about legal compliance, but those who wouldn’t have paid for the software in the first place become evangelists instead.

When you are open, you inspire others to create more open businesses on commodities you might have created.

When you are open, you might make some people uncomfortable, but they can simply decide to go a different path.

We chose this path for Tuist because we want to build a different type of company. Openness for the win.

]]>
<![CDATA[While many companies see openness as a business risk, I see it as an opportunity to build a different type of company.]]>
It's a marathon https://pepicrft.me/blog/2024/07/04/its-a-marathon 2024-07-04T00:00:00+00:00 2024-07-04T00:00:00+00:00 <![CDATA[

As we continue to work on turning Tuist into a long-term sustainable business, I’ve been learning a lot about the importance of patience and perseverance.

This is something that I used to have years ago, but that I lost when I was embedded in a working culture of making great decisions fast with tight deadlines and a very competitive environment. When something spans several days, weeks, or months, I feel like I’m not making progress. Part of me thinks that those are steps in the right direction, but the other part of me feels like I’m not moving fast enough. And because we are limited in resources, I wonder if we are making the right decisions. Suddenly I’m flooded with doubts and fears.

When I look around and see companies in a similar space moving fast, raising money, and hiring people, part of me feels like we are falling behind. However, if I look closely, all I see are attempts to run sprints one after the other, fear of being open and transparent about their business, obsession with growth at all costs, complex pricing models to squeeze as much money as possible from customers…

We are running a marathon, and that takes time. And I need to get comfortable with that, regaining the patience and perseverance that I once had.

And when I calm down, I notice I can show the best version of myself. I’m inspired to help, to bet on unique ideas, to be open and transparent, and to build a company that I’m proud of. It’ll just take a bit of emotional work to get there. But I’m confident that we’ll get there.

]]>
<![CDATA[With Tuist we are running a marathon, not a sprint. It's time to regain patience and perseverance.]]>
The technology does matter https://pepicrft.me/blog/2024/07/04/technology-matters 2024-07-04T00:00:00+00:00 2024-07-04T00:00:00+00:00 <![CDATA[

If you’ve been following me for some time, you might already know how much I advocate for Elixir. We even rewrote the Tuist server, previously written in Ruby, in it.

Developers often say that the technology doesn’t matter; that once you prove a solution solves a problem, you can always rewrite it. But Tuist was already a validated idea, and we had just learned how much Elixir lets you achieve and scale with a simple stack and very few resources. It felt uncomfortable at first because we were not entirely familiar with the syntax, but more and more we are realizing it was such a great decision.

The most recent reaffirmation happened when we tried to add telemetry to the instance for on-premise organizations. Erlang provides a standard telemetry API that many packages use to report telemetry information, including packages like Phoenix (the web framework) and Ecto (an ORM to access databases). Without a single line of code, we had all that information available, and with just a few lines, we had it converted and served to Prometheus via a /metrics endpoint. Mind-blowing.
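As a rough sketch of what that can look like (assuming the telemetry_metrics_prometheus package; the exact metric names depend on your application and repo naming):

```elixir
# Sketch only: declares Telemetry.Metrics definitions for events that Phoenix
# and Ecto already emit, to be exposed at a /metrics endpoint.
defmodule MyApp.Telemetry do
  import Telemetry.Metrics

  def metrics do
    [
      # Emitted by Phoenix for every request
      summary("phoenix.endpoint.stop.duration", unit: {:native, :millisecond}),
      # Emitted by Ecto for every query (repo name is illustrative)
      summary("my_app.repo.query.total_time", unit: {:native, :millisecond})
    ]
  end
end

# In the application's supervision tree:
# {TelemetryMetricsPrometheus, metrics: MyApp.Telemetry.metrics()}
```

The point being: the events are already emitted by the libraries, so all that’s left is declaring which ones to convert and serve.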

Phoenix LiveView was another reaffirmation that took us some time to appreciate as we familiarized ourselves with the technology. It’s common to see many companies bet on React as a way to have a top-notch experience for building UIs, without realizing the pile of complexity and abstractions that becomes their technical debt forever. With LiveView, you get the former without the latter. You barely need to write JavaScript and can declare UI that’s a representation of state entirely in Elixir.

So in our particular case, technology does matter, and it matters a lot. And that’s without even mentioning anything about the concurrent model and the many data races that don’t happen because of it, saving us time we’d otherwise waste on debugging.

If you are building a web service and can afford to learn a new technology, Elixir is a great resource to have in your toolbox.

]]>
<![CDATA[The decision of which technology to use in a project can have a significant impact. In this blog post I talk about the impact that Elixir is having in Tuist.]]>
A different company https://pepicrft.me/blog/2024/06/29/a-different-company 2024-06-29T00:00:00+00:00 2024-06-29T00:00:00+00:00 <![CDATA[

Since we began working on transforming Tuist into a sustainable business, I’ve been deeply reflecting on the type of company we aspire to build. Here are some key observations and ideas that have been shaping our vision:

Open Source as a Foundation, Not Just a Marketing Tool

Unlike many companies in our space that treat open source merely as a marketing tool, we view it as the foundation for building better tools. We believe that openness and diverse contributions are crucial for long-term success and inspiring others to create their own open-source solutions and businesses. We plan to open-source our entire business, including the server implementation, following in the footsteps of successful open-source companies like Ghost, GitLab, Posthog, and Penpot. We’re committed to commoditizing many of our technologies with permissive licenses, as evidenced by our work on extracting Xcode project generation logic into a Tuist-agnostic package. Stay tuned for two new open-source projects we’ll be announcing in the coming months!

Embracing Standards for Long-Term Sustainability

We’re betting on widely-adopted standards when choosing technologies, formats, and designing our product. This approach aligns with our goal of building a long-term solution that can withstand the test of time. We aim to avoid the pitfalls of constantly refactoring due to rapidly changing frameworks or vendor lock-in. For example, our web setup uses raw HTML, CSS, and JS, intentionally sidestepping the complexities of the JavaScript ecosystem.

Openness Beyond Software

Our commitment to openness extends beyond our code. We’ll be transparent about our pricing (with the exception of custom on-premise solutions), drawing inspiration from Posthog’s model. Additionally, we’ll maintain an open handbook documenting our company’s operations. This transparency will help potential clients make informed decisions and support our goal of building an asynchronous remote company by fostering a culture of comprehensive documentation.

Simplifying the Developer Experience

We believe that teams shouldn’t have to rely on complex scripting to glue tools and services together by default. Tuist will continue to help teams streamline their setups, advocating for simpler, more integrated solutions. Our goal is to provide a single, cohesive tool with multiple well-integrated features, rather than a collection of independently negotiated tools that prioritize profit over user experience. We put people first, and unnecessary complexity is not part of our equation.

Challenging Convention and Fostering Innovation

We’re leveraging our culture of innovation to challenge cargo-culted solutions and move the ecosystem forward. In the coming months, I’ll be sharing more ideas on how we plan to approach common problems in novel ways.

Building Tuist is like working with a blank canvas, allowing us to create the company we envision based on our values. We’re not simply copying what’s been done before, but rather reflecting deeply on what we want to achieve. Our goal is to build a unique company that embraces open source without fear.

]]>
<![CDATA[As we shape Tuist’s future, we are committed to true open source, embracing standards, and simplifying developers’ lives. It’s not just business]]>
BREAKME.md https://pepicrft.me/blog/2024/06/25/breakme 2024-06-25T00:00:00+00:00 2024-06-25T00:00:00+00:00 <![CDATA[

If you are responsible for maintaining a semver-versioned piece of software, you might have come across a scenario where you believe a breaking change would lead to a better design, but you can’t because it would break the existing users of your software.

This is a need that we sometimes have when working on any of the Tuist repositories, and I found it a bit annoying to note them down in issues that can get lost over time. The motivation for introducing a breaking change, and the rationale behind it, needs to be part of the Git history of the repository, close to the changes that prompted the developers to consider it. Therefore, I decided to introduce a new convention across Tuist repositories: a BREAKME.md file.

As its name suggests, it contains a list of breaking changes that we would like to introduce in the future. Every change should include what needs to be changed, and the rationale behind it. When working on future releases, developers can refer to this file to understand the breaking changes that we have been considering.
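A hypothetical entry might look like this (the change described below is made up for illustration):

```markdown
# BREAKME.md

## Drop support for legacy manifest helpers

- **What:** Remove the compatibility layer that maps the old helper API
  onto the current one.
- **Why:** It complicates the resolution logic and blocks us from
  adopting newer project primitives.
- **Context:** See the commits that introduced the compatibility layer.
```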

With that file, I feel more comfortable proposing breaking changes, since I know that they won’t get lost in the noise of issues and pull requests.

]]>
<![CDATA[BREAKME.md is a new convention across Tuist repositories to keep track of breaking changes that we would like to introduce in the future.]]>
Parallelism and programming languages https://pepicrft.me/blog/2024/06/24/parallelism-and-programming-languages 2024-06-24T00:00:00+00:00 2024-06-24T00:00:00+00:00 <![CDATA[

Over the weekend I spent some time watching the updates on the Swift complete concurrency model, and I couldn’t help but think: the language’s ergonomics are suffering. It reminded me a bit of the transition from Objective-C to Swift, where framework APIs were not designed with Swift in mind. But look at today; things have changed a lot, so I remain optimistic that Apple can double down on the language’s ergonomics and make it more pleasant to work with concurrency in Swift.

Still, Swift and its approach to concurrency were not meant to be the subject of this post. I wanted to talk about concurrency and parallelism in programming languages, and how to achieve them without compromising the ergonomics of the language or the simplicity of the programs.

There are interpreted programming languages like Ruby or JavaScript that can run IO-bound operations concurrently but require introducing complexity to do things in parallel. That’s why they are very suitable languages to validate business ideas, but costly to scale after validation. Costly not only in terms of complexity but also in engineering time and production infrastructure. Programs also get harder to reason about and maintain, because parallelism is often achieved through persisted state and a pool of processes that subscribe to state changes and run tasks in parallel. If that’s a model you are happy with, then you are good to go. However, it’s not mine.

Then there are programming languages that have built-in primitives to run tasks in parallel. Languages like Swift, Java, or Zig. But they all share one thing in common: they put the burden on developers to prevent data races. As a developer, that’s not something I’d like to think about. Apple is well aware of this and is trying to move to a model where the compiler can help developers prevent it. But sadly, as mentioned earlier, it does so by compromising the ergonomics of the language to hint the developer’s intentions to the compiler.

An interesting programming language in the above group is Rust, which prevents data races with a new approach to memory management. However, there’s a learning curve associated with it, and that’s a cost many want to avoid, especially if they are just getting started and want to validate a business idea. One approach is to start with something like Ruby or JavaScript and rewrite later on if the business proves successful. But who wants to go through that pain? Another question that I ask myself often is: why would you adopt a programming language that, even while providing parallelism, can’t catch data races, potentially causing production servers to blow up?

Functional programming languages can solve the above issue by not sharing mutable state. But like Rust, many functional programming languages, especially the pure ones, have a learning curve that many want to avoid. I’ve seen the DX of Clojure, and it’s mind-blowing to experience, but it’s a whole different approach to building, debugging, and testing software. Is it worth it? What’s clear, though, is that there’s something unique in the functional programming paradigm that we can’t ignore. When state is copied rather than mutated as it’s passed around, data race problems are gone and you can take full advantage of the hardware where the software is running. Your production servers and development environment scale by throwing more CPU cores and memory at them. Does your test suite take a long time to execute? Increase its parallelism. And the best part? You can do that without worrying about flakiness increasing. It’s beautiful.

Best of all, there’s a programming language that blends the best of both worlds: Elixir. From Ruby and JavaScript, it draws high-level abstractions and ease of use. It’s also hot-reloadable, which at the end of the day is a productivity booster (this is achieved through the Erlang VM). From Rust, Java, and Swift, it draws the ability to run tasks in parallel without worrying about data races. They just don’t happen, because there’s no shared mutable state: logic is encapsulated in processes that communicate by passing copies of the state. It’s amazing. It’s the most productive stack I’ve ever worked with. Zero complexity. Full parallelism. So yeah, if you are looking for a programming language that can scale with your business idea, I’d recommend Elixir.
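As a tiny illustration of that model (a standalone script; because processes share nothing, the data they exchange is copied, so there is nothing to race on):

```elixir
# Fan out partial sums to separate processes and collect the results
# as messages. No locks, no shared mutable state.
parent = self()

for chunk <- Enum.chunk_every(1..1_000, 250) do
  spawn(fn -> send(parent, {:partial_sum, Enum.sum(chunk)}) end)
end

total =
  Enum.reduce(1..4, 0, fn _, acc ->
    receive do
      {:partial_sum, sum} -> acc + sum
    end
  end)

IO.puts("total = #{total}")
```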

This is a non-sponsored post. It’s a love letter to Elixir.

]]>
<![CDATA[In this blog post I touch on the subject of parallelism in programming languages and how to achieve it without compromising the ergonomy of the language and the complexity of the programs.]]>
Lazy-learning https://pepicrft.me/blog/2024/06/23/lazy-learning 2024-06-23T00:00:00+00:00 2024-06-23T00:00:00+00:00 <![CDATA[

I’ve been thinking lately about my approach to learning. When I become interested in a technical subject (e.g., Rust), I tend to throw myself into learning it, even if I don’t have a specific problem to solve with it. I’m driven by curiosity and the desire to understand how things work, which is excellent from the standpoint of cross-pollinating ideas. However, since I don’t use what I learn, I end up forgetting it, which leads to a feeling of having wasted my time.

With many developers on Mastodon and X talking about WWDC, I can’t help but think: should I watch some of the talks? Part of me thinks I should, because I might get ideas for problems to solve with Tuist. But the other part of me thinks that the ideas presented in them, which I’ve seen people discussing on various social networks, are enough for me to envision the problems that I could solve with Tuist. When the time comes to work with them, for example when enabling complete strict Swift concurrency in Tuist, I can simply look up the documentation and learn what I need to know. I feel it’s a more effective way of learning because it’s goal-oriented. I decided to call it lazy-learning.

So I’ll try to apply this approach going forward. I’m currently diving into Apple’s Virtualization Framework, Swift Testing, and how to flip a German company to the US, because these are the problems I’m facing right now.

And you? What’s your approach to learning?

]]>
<![CDATA[Learning for the sake of learning might not be the most effective way to learn. I'm trying a new approach: lazy-learning.]]>
Website redesign https://pepicrft.me/blog/2024/06/22/website-redesign 2024-06-22T00:00:00+00:00 2024-06-22T00:00:00+00:00 <![CDATA[

Working on the Tuist dashboard, and learning and writing HTML and CSS, motivated me to redesign my website, which is powered by Phoenix, the Elixir web framework.

The design principle that I followed was to make it as simple as possible. I used the same font and font size throughout the website and played with font weight and spacing to create a visual hierarchy. I’m quite happy with the result.

I took the opportunity to add new pages. I added a feed page to surface my posts from Mastodon, and a photos page to do the same with my Pixelfed posts. Since those pages are rendered on the server, I fetch the data from the respective APIs and populate the pages with it. I also added a now page that I plan to keep up to date with the latest ideas, thoughts, and things I’m interested in.

After having done this work, I’m quite excited about using the web platform in its rawest form: no TailwindCSS, no static site generators, a #nobuild setup.

]]>
<![CDATA[I redesigned my website to make it visually simpler and added new pages.]]>
Emotional breakdowns https://pepicrft.me/blog/2024/06/12/mental-breakdowns 2024-06-12T00:00:00+00:00 2024-06-12T00:00:00+00:00 <![CDATA[

Yesterday, I came back from work and had an emotional breakdown. I sat on the sofa and felt sad. I could not pinpoint the reason. I just felt low. I tried not to overthink it too much, but I couldn’t avoid it. Why is it happening to me more often? Could it be something that comes with age? Or maybe the long-term consequences of the pandemic or the intense work that I did during that time? Perhaps it comes from putting pressure on myself regarding the success of Tuist as a company.

Don’t people say that solving a problem starts with acknowledging it? I feel I’m going through that phase. I’m trying to demand less from myself.

Less perfection. Less pressure. Less being online. Less you need to succeed. Less pleasing everyone but myself. Less work taking all my mental space.

I guess this is part of the journey of life. Learning about ourselves and our mental health. The largest disregarded part of our health.

I’ll prioritize the things that feel great in my brain. Running is one of them. It’s my meditation. It’s the fresh air. At which moment did I deprioritize it?

How do you deal with emotional breakdowns?

]]>
<![CDATA[In this blog post I talk about a recent emotional breakdown that I had and how I'm processing it.]]>
Why you need the -ObjC flag https://pepicrft.me/blog/2024/06/04/why-you-need-objc 2024-06-04T00:00:00+00:00 2024-06-04T00:00:00+00:00 <![CDATA[

Tuist provides a method for integrating Swift packages, previously resolved by SPM, into Xcode projects using XcodeProj primitives such as targets, build settings, and build phases. This feature uncovered a need that some packages in the ecosystem have: the need for upstream targets to pass -force_load or -ObjC through the OTHER_LDFLAGS build setting. Why is that needed? Thanks to David, who put together some troubleshooting guidance and provided references to discussions, I could better understand the problem. This post is my attempt to write down my understanding of it, to help other developers who come across the same issue in the future.

In simple words, the problem is that the linker over-optimizes the binary, stripping symbols that are needed at runtime. The linker’s dead-stripping logic can’t see dynamically referenced symbols, so it doesn’t know to keep them. And this happens not only when referencing Objective-C symbols, but Swift ones too. For example, when integrating Composable Architecture with Tuist via Xcode targets, developers might need to add explicit references to those symbols, or the flags above, to the build settings.

What’s the solution? There are a few options:

  • The package maintainer can add static references to those symbols to prevent the dead-stripping logic from removing them (e.g., Promises, IGListKit)
  • You set the -force_load or -ObjC flag in the OTHER_LDFLAGS build setting of the target that links those packages statically. Note that this has some effects, like potentially increasing the binary size.
  • You turn those dependencies into dynamic targets, which as a caveat might end up increasing the launch time of your app.

This is a bit unfortunate because it requires developers to go a bit deeper in understanding their dependencies, but hopefully this write-up helps you understand the problem and the potential solutions.
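For the second option, a minimal sketch of what the setting might look like in a Tuist manifest (assuming a recent Project.swift API; the target name and bundle ID are illustrative):

```swift
import ProjectDescription

let project = Project(
    name: "App",
    targets: [
        .target(
            name: "App",
            destinations: .iOS,
            product: .app,
            bundleId: "io.example.app",
            sources: ["Sources/**"],
            // Keep dynamically referenced symbols from being dead-stripped.
            // -force_load <path-to-library> is the more targeted alternative.
            settings: .settings(base: [
                "OTHER_LDFLAGS": "$(inherited) -ObjC"
            ])
        )
    ]
)
```

Remember the caveat from the list above: these flags can increase the binary size, since more symbols survive dead-stripping.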

]]>
<![CDATA[In this blog post, I explain why you might need to set the -ObjC flag in the OTHER_LDFLAGS build setting of your Xcode project.]]>
Simplicity, standards, and the platform https://pepicrft.me/blog/2024/05/25/building-software-that-lasts 2024-05-25T00:00:00+00:00 2024-05-25T00:00:00+00:00 <![CDATA[

While listening to an interview with José Valim, the creator of the Elixir programming language, I started thinking about the principles that we’ve been following at Tuist to build a future-proof platform for App productivity.

As you might know, we migrated Tuist‘s server implementation from Ruby to Elixir. Whenever we brought this up, it was common to hear developers saying that it sounded unnecessary. Almost as if we didn’t care about the problem space and we were distracted by the technology. But the reality is quite the opposite.

Many organizations bootstrapping businesses treat their initial work as a disposable prototype. If it works, they’ll rewrite it with a more robust technology. But Tuist is a validated product and foundation that needs to be as future-proof as possible. And it’s bootstrapped, which means we don’t have the luxury of throwing extra money at problems or paying for services that abstract complexity away. And that’s where Elixir shines.

So our approach to building a future-proof platform for app productivity rests on the following principles:

  • Embrace simplicity
  • Embrace the platform
  • Embrace standards

Erlang’s programming model, enhanced by Elixir’s language, is what we’d describe as a simple yet powerful technology that scales well. Period. It’s been three decades since Erlang was created to solve telecommunication challenges that resemble the ones Internet companies face today and that we are trying to solve with endless layers of abstraction. Will we have access to fewer resources and less talent than with other technologies? Very likely, but the few that we’ll have access to will last longer compared to ecosystems like JavaScript, which are in constant flux.

Note that abstracted complexity is still complexity. It’s just deferred complexity. Down the road, you’ll have to deal with it yourself, either by throwing expensive resources at it (e.g., engineers) or by paying for services that abstract it away while creating vendor lock-in. This, in a nutshell, is the JavaScript ecosystem, which adds a disposable and fragmented nature on top of its layers of abstraction. How fun is that?

Then there are the platforms that we build upon: Apple’s platforms and the web. They might not be perfect, but they are the most stable and future-proof platforms that we have access to. We are intentionally avoiding abstracting them. For example, we don’t use build tools on the Tuist web app. Instead, we let the browser load CSS and JS files directly. We love and believe in CSS. Our Swift codebase doesn’t use any convoluted architecture or code patterns. It’s just classes and structs (and soon actors) that pass data around, trying to follow good programming practices. Is it perfect? No. But anyone can jump in and look around without having to learn abstractions first. That’s priceless in open source, which we expect to embrace end-to-end.

And last but not least, we are embracing standards over proprietary formats. Many companies these days have incentives to create proprietary formats that lock users in. We bet on platforms that bet on standards. For example, our cloud provider, Fly, uses Docker containers, a runtime-agnostic solution, as its deployment format, FLAME for elastic auto-scaling of applications, and Grafana for data visualization. Standards are something we are going to embrace too when designing the Tuist product, drawing a lot of inspiration from Fly, a company that we admire.

We are in the early days of building a productivity platform for app developers, but we believe that by embracing simplicity, the platform, and standards, we’ll build a future-proof platform that will last for decades.

]]>
<![CDATA[In this blog post I share the principles that we embrace at Tuist to build a future-proof productivity platform for app developers.]]>
Meeting developers where they are https://pepicrft.me/blog/2024/05/20/meeting-developers-where-they-are 2024-05-20T00:00:00+00:00 2024-05-20T00:00:00+00:00 <![CDATA[

I’ve been thinking a lot lately about how Tuist compares to Bazel, a build system commonly used by large enterprises that face scaling issues with Xcode. Using Bazel reminds me of Nix. The theory behind them is astounding; they are engineering masterpieces. However, I don’t find any joy in using them. I’ll try to unpack why.

Ecosystems traditionally have their default or broadly adopted build systems. Android has Gradle, the JavaScript ecosystem has Vite (among many others), and the Swift ecosystem has Xcode’s build system. They might not be as advanced as Bazel, but over the years, they defined building blocks, mental models, and extensibility APIs upon which ecosystems have been built. Vite is a great example of the latter. Extending the build system is as easy as adding a plugin, usually distributed as an NPM package, and adding one line to the configuration file.

When you use Bazel, you are trading off not only the ecosystem of tools but very likely a lot of ideas, patterns, and resources from the community. This is a big deal that shouldn’t be underestimated. If the community innovates in a particular direction, you might not be able to leverage that innovation because you are using a different build system. Oftentimes, you have to port those ideas over to Bazel yourself or wait for someone to do it for you. In an ideal world, everyone uses Bazel and everyone innovates on the same foundation, but how likely is that to happen? Very unlikely, I’d say. I believe this explains why many organizations remain hesitant to adopt Bazel. Put differently, the few that do adopt it tend to adopt it across the board, and therefore the cost is easier to justify.

My stance, and also Tuist’s stance, is that we embrace the limitations of existing build systems and try to innovate within those constraints. By doing so, we don’t disrupt organizations’ workflows, and they are still able to leverage the community’s innovations. Moreover, we also meet developers where they are by embracing elements that are familiar to them. For example, we purposely chose Swift over something like Starlark to describe projects. It might seem like a subtle thing, but it’s not. Developers love using Swift and Xcode, so why not leverage that? We also reused the same concepts that Xcode uses: schemes, targets, build settings… Why push a new language onto them if the existing one works? When you combine all these elements, organizations end up with a tool that’s a joy to use. Sure, it won’t reach the level of sophistication of a build system like Bazel, which has been battle-tested by Google and many other companies across various tech stacks, but it provides organizations with the right trade-offs.
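To illustrate the familiarity argument, this is roughly what describing a project in Swift looks like with Tuist. The vocabulary maps directly to Xcode concepts, targets, products, and bundle identifiers; the names here are hypothetical and the manifest API may differ between Tuist versions, so read it as a sketch:

```swift
import ProjectDescription

// A hedged sketch: an app target depending on a framework target,
// expressed with the same vocabulary Xcode developers already know.
let project = Project(
    name: "MyApp",
    targets: [
        Target(
            name: "MyApp",
            platform: .iOS,
            product: .app,
            bundleId: "com.example.MyApp", // hypothetical
            sources: ["Targets/MyApp/Sources/**"],
            dependencies: [
                .target(name: "MyAppKit")
            ]
        ),
        Target(
            name: "MyAppKit",
            platform: .iOS,
            product: .framework,
            bundleId: "com.example.MyAppKit", // hypothetical
            sources: ["Targets/MyAppKit/Sources/**"]
        )
    ]
)
```

Because the manifest is plain Swift, developers get autocompletion and documentation inside Xcode itself instead of having to learn a separate configuration language.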

I strongly believe that problems should not be looked at only from a technical perspective. Ecosystems have opinions, trends, and communities that can either be embraced and integrated with, or ignored. Doing the latter often leads to a tool that’s technically superior but that people don’t want to use.

]]>
<![CDATA[In this blog post I reflect on why I believe not more organizations are adopting Bazel and why Tuist is taking a different approach.]]>
The non-technological open source problem https://pepicrft.me/blog/2024/05/16/non-technological-open-source-problem 2024-05-16T00:00:00+00:00 2024-05-16T00:00:00+00:00 <![CDATA[

Lately, I’ve noticed the emergence of tools that aim to tackle the problem of open source sustainability by treating it as a technological platform, such as Polar and the Tea Protocol. While it is exciting to see more companies trying to solve such a critical problem in our industry, the root of the problem lies in social expectations and assumptions that I highly doubt can be changed with more tools. Don’t get me wrong, we need tools that pave the way for people and organizations that want to contribute to open source, but that’s not enough.

If you asked citizens to contribute financially to the construction of a park where they’ll be able to relax and bring their kids to play, most would refuse, saying that they expect those things to be free. Now imagine the same thing but with digital infrastructure that many can’t see and on which many for-profit organizations build. Why would they pay if, by doing so, they’d increase their costs? They’d rather not. Not only that, but they take part in echoing misleading ideas about licenses such as the GPL, like calling them infectious. Why not say “my software is incompatible with this license” instead? Infectiousness places the blame on the open source software. Would you consider someone infectious because their need to receive a salary for the service they provide as an employee is incompatible with the company’s cost structure? Most likely not. So why do we do that to people who have contributed hours to develop something and therefore have the right to publish it under a license of their choice?

As I mentioned in past blog posts, backlash from developers often arises when you have to change the model to protect a project’s sustainability, like we had to do with Tuist. It was so hard for us to imagine how much work Tuist would require seven years later that we couldn’t have agreed on an ideal model when we started. This is not unique to open source: companies also have to go through decision-making that won’t please every user, and that’s alright.

So, going forward, the model that I’m embracing is similar to 37signals’. I’ll layer the paid software that I build, and publish as MIT-licensed those layers that I believe would benefit from commoditization and that the community might be interested in building upon and contributing to. For example, we are going to extract all the generation logic from Tuist, so anyone can build their own project generation tool upon battle-tested logic for understanding graphs and translating them into Xcode primitives.

Open source sustainability is hard, and while it’s necessary to have tools to reward and pay contributors for their work, the problem requires more of us, developers, speaking out loud about the importance of contributing to the software that we depend on, like we do through our taxes to fund the spaces where we live and that we enjoy. Software is no different; it’s just more abstract.

]]>
<![CDATA[Open-source sustainability is hard, and while it’s necessary to have the tools to reward and pay contributors for their work, the problem requires more of us, developers, speaking out loud about the importance of contributing to the software that we depend on.]]>
Layering Tuist https://pepicrft.me/blog/2024/04/27/extracting-tuist-layers 2024-04-27T00:00:00+00:00 2024-04-27T00:00:00+00:00 <![CDATA[

Drawing inspiration from VueJS, we’ll start extracting some layers from Tuist to make them Tuist-agnostic, and take the opportunity to ensure Linux compatibility, embrace Swift’s structured concurrency, and remove the dependency on shared instances that complicate parallelization of tests in the layers above. Those layers will be open source, and some will start as forks of swift-tools-support-core utilities, since that package is no longer maintained.

Those utilities in order of priority are:

  • Path: Utilities to model absolute and relative paths in a type-safe manner.
  • Command: Utilities to run system processes.
  • FileSystem (coming): Utilities to perform file system operations such as copying files or creating directories.
  • SwiftTerminal: A design system for building command-line interfaces in Swift.
  • Dependencies: Utilities to declare the registration of a dependency graph.
  • XcodeProjectGenerator (coming): A foundation for generating Xcode projects and workspaces from the graph description.

All the packages will accept a swift-log‘s Logger instance to give users control over how they’d like the packages to log messages.

Note that they are still in the early stages of development. We want to make sure each of them is well-tested across the supported platforms and that they include documentation.

]]>
<![CDATA[Drawing inspiration from VueJS, we'll start extracting some layers from Tuist to make them Tuist-agnostic, and take the opportunity to ensure Linux compatibility, embrace Swift's structured concurrency, and remove the dependency on shared instances that complicate parallelization of tests in the layers above.]]>
When you become infrastructure https://pepicrft.me/blog/2024/04/26/when-you-become-infastructure 2024-04-26T00:00:00+00:00 2024-04-26T00:00:00+00:00 <![CDATA[

While listening to this interview, where the interviewee, Adriana Goh, talks about open-source sustainability and the role that the Sovereign Tech Fund aims to play there, I started thinking about Tuist and how it has become infrastructure for many companies.

As Adriana points out, unlike digital infrastructure, physical infrastructure has clearer boundaries and a more evident need for maintenance, although it’s common in societies to only realize the importance of infrastructure when it stops working. The thing is, and I feel many developers fail to realize this, that a piece of open-source software you developed in your spare time can become the infrastructure for many companies. It’s not that you set out to build infrastructure for the world; you accidentally end up doing so.

Most companies know about this. From a financial perspective, it’s cost savings, which allows them to gear their resources towards other things, for example innovating at a different layer, which is fantastic. The problem is that, like most societies, which haven’t yet realized the criticality of digital infrastructure, many companies turn a blind eye to the importance of maintaining it. The response often comes in the shape of a donation, which is appreciated, but it doesn’t solve the problem in most cases.

I’ve seen some companies emerging to solve this problem, but I’m highly skeptical about their approach. They treat the problem as a technical one, but I believe it’s a social one. Hence my excitement about the German government’s Sovereign Tech Fund initiative to create awareness and support the maintenance of open-source projects. Sure, you can try to remind companies about the importance of maintaining the infrastructure they rely on, but “reminding them about a cost they should take care of” is not something they want to hear.

In the case of Tuist, we had to start evolving the project into a company to ensure that we could maintain it. Our mental health was at stake, and we were running out of energy trying to maintain the project in our spare time. Did everyone like the move of turning Tuist into a company? Not everyone. Some thought, “I chose Tuist because it was open and free, and now there’s a portion of it that is not.” They felt betrayed. But we couldn’t have known what the project would become and what that would mean for us in terms of time and energy. So we decided to place our mental health and the project’s long-term sustainability above the interest of getting everything for free at the risk of the project dying.

I wish more companies and people would realize the importance of maintaining the infrastructure they rely on. But until that happens, or until governments like the German one step in to create awareness and support the maintenance of open-source projects, open-source developers like us will have to come up with creative ways to ensure that the infrastructure we build is maintained healthily.

]]>
<![CDATA[A reflection on the importance of maintaining the open-source projects that become the infrastructure for many companies.]]>
Connecting AppSignal with Incident.io using Cloudflare Workers https://pepicrft.me/blog/2024/04/25/connecting-app-signal-with-incident 2024-04-25T00:00:00+00:00 2024-04-25T00:00:00+00:00 <![CDATA[

We started using Incident.io at Tuist for incident management. The tool is great, but we were missing a way to connect it with our monitoring system, AppSignal. When you look at the integrations available, AppSignal isn’t there.

Luckily, AppSignal supports sending webhooks when anomalies are detected, but the payload schema and headers didn’t match what Incident.io expects. For example, Incident.io expects requests to come with an Authorization: Bearer xxx header.

To make the integration work, we decided to use Cloudflare Workers. We created a worker that transforms the payload sent by AppSignal into the one that Incident.io expects, and adds the Authorization header. The worker is deployed to Cloudflare, and we configured AppSignal to send the webhooks to the worker’s URL. Below is the code of the worker:

const token = "...."

export default {
  async fetch(request, env, ctx) {
    // Payload schema: https://docs.appsignal.com/application/integrations/webhooks.html#exception-incidents
    const alert = await request.json();
    const { alert_id, metric_name, human_comparison_value, trigger_description } = alert;

    // Transform the AppSignal payload into the shape Incident.io's
    // HTTP alert events endpoint expects, and add the auth header.
    await fetch("https://api.incident.io/v2/alert_events/http/....", {
      method: "POST",
      headers: {
        "Authorization": `Bearer ${token}`,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({
        title: `${metric_name} peaked above ${human_comparison_value}`,
        description: trigger_description,
        deduplication_key: `${alert_id}`,
        status: "firing",
        metadata: { ...alert }
      })
    });

    return new Response(JSON.stringify({ status: "success" }), {
      status: 200,
      headers: { "Content-Type": "application/json" }
    });
  },
};

Hopefully, Incident.io will provide an integration with AppSignal in the future, but until then, this solution works well for us.

]]>
<![CDATA[In this blog post I share how we used Cloudflare Workers to connect AppSignal with Incident.io.]]>
Standards, standards, standards https://pepicrft.me/blog/2024/04/04/standards 2024-04-04T00:00:00+00:00 2024-04-04T00:00:00+00:00 <![CDATA[

In the tech world, where “enshittification” is rampant, the importance of standards becomes clear as they protect us from platform interests that may not align with our own. As you observe your surroundings, you’ll find numerous examples:

  • Why opt for Figma and its proprietary file format when you could use Penpot, which utilizes SVG?
  • Why use serverless proprietary JavaScript runtimes when you can deploy OCI images to platforms like Fly?
  • Why choose Tailwind for styling your website when you can achieve the same with standard CSS, which has improved significantly over the years?
  • Why use Notion when you can write your content in Markdown files and manage them with tools like Obsidian or Logseq?
  • Why bind yourself to the complexity of React when you could embrace the platform and use web components?

I understand… standards can seem dull. They often lack the flashy, modern websites that play on psychology to convince you of their worth, yet they possess a unique value. They will outlast any proprietary format. React? It will eventually lose its appeal, leading people to chase the next trend. You have the chance to decide whether this results in a burdensome migration for you or has no impact at all. Should Notion decide to change its terms of service and raise the price for accessing your content? You can choose to avoid that situation entirely.

With the current overload of noise on social channels, it’s easy to mistake something’s value for how frequently it’s discussed. You won’t find many tech influencers evangelizing Markdown because it’s not deemed “cool.” It’s more fashionable to talk about a React-like Markdown format that blends Markdown with JSX. Unfamiliar with it? Consider introducing it to your project.

The JavaScript ecosystem undoubtedly epitomizes the realm of proprietary solutions. The absence of standards beyond the language itself has transformed the ecosystem into a Wild West. Try to name a single problem layer where solutions are standardized; you’ll find none. Take runtimes as an example: each attempts to introduce a proprietary set of APIs until they realize the inevitability of Node’s conventions, forcing them to ensure compatibility with Node’s APIs so that packages in the NPM ecosystem work seamlessly. Or consider cloud runtimes that mimic Node but aren’t actually Node. Code that works locally fails in production because you’re using Node locally and something different in the cloud. As a result, new layers emerge to shield developers from the myriad proprietary solutions, like Hono, which abstracts the various methods of handling HTTP request-response cycles.

There are developers who enjoy hopping from one proprietary solution to another. I don’t. Perhaps it’s a sign of aging or a decreasing tolerance for trends that distract from creating anything of value. However, the moment I see a company promoting something proprietary, I instantly become wary. That’s why I prefer CSS over Tailwind, Fly over Vercel, Markdown over Notion, and runtimes like Erlang over Deno or Bun. The peace of mind that comes from betting on standards is invaluable, granting me the focus needed to build great tools with technology.

]]>
<![CDATA[Proprietary solutions are a dime a dozen, but standards will outlast them all.]]>
Thoughts on Open-Source https://pepicrft.me/blog/2024/03/19/thoughts-on-open-source 2024-03-19T00:00:00+00:00 2024-03-19T00:00:00+00:00 <![CDATA[

If you’ve been reading me for a while, you might know that I advocate open-source software. I can’t pinpoint exactly what makes me connect with it so strongly, but I believe it has something to do with communities.

I noticed that I like to build with other people and leave a lasting impact on them. I have a similar personality outside of the context of open source, and I noticed that my dad is the same. I also enjoy empowering others through my work, sharing thoughts like these, or collaborating with them on exciting challenges. This might sound odd in a very capitalist world, but the peace of mind this creates is priceless. I wouldn’t change it for anything else.

The problem is that money will get in your way whether you like it or not. The most common case is when you are an open-source developer and realize that you need to do something to make a living from it. Many open source developers get a job and continue to work on open source in their free time, but this can be a huge mental stretch, especially in some life phases. That was the case when I got the job at Shopify. They met me through my work on Tuist, which received no support from the company, neither by using it internally nor by allowing me to spend some time contributing to it. I was younger then and had more emotional bandwidth to navigate it well.

In other cases, the open source developer brings the company a unique value that companies can’t compete for, such as a community and the recognition that comes with it. So the developer becomes an influencer for the company on top of meeting their daily responsibilities. If you look around, you’ll find examples of that: developer X built Y and now works at Z, and therefore people want to work for Z too because X is there. I sometimes wonder if open-source developers should charge for that, or be paid more, because in terms of responsibilities they are bringing tons of value to the company. At Shopify, I hired people who came through my open-source work. I even talked publicly many times about the exciting stuff my teams were working on. I did it naively, not knowing how those organizations operate, and to this day, I still feel guilty and somewhat stupid for having done that. But as I said earlier, whether you like it or not, the intrinsically motivated work that builds communities sooner or later meets the extrinsic motivations of a company that wants to profit from it. I won’t get into Bitrise because I have already talked about it.

The one model, which few try because it requires wearing a different hat, is evolving the open source model to have a commercial component associated with it. This is the approach that we are taking with Tuist Cloud: we build something on top of the open source foundation to create the source of revenue necessary to maintain the open source side. My north star is a product developed in Spain, Penpot, an open-source alternative to Figma that embraces standards over proprietary formats and offers rugged portability. It’d be my dream if we could achieve that with Tuist.

This is a model that we are seeing get popular as COSS (commercial open source software). However, I’ve seen some of those projects taking a different approach. Rather than being intrinsically motivated by open source values and what they enable, they are extrinsically motivated by making money and embrace open source as a marketing strategy. Most of them have VC funding and contributor license agreements (CLAs) that make them owners of your contributions. So even if they pitch it as “it’s open,” which is very sexy on a landing page, the ownership ultimately stays with the company.

The above model doesn’t align with my principles. I’d rather have a permissive component that anyone can take and do whatever they want with, and a closed-source extension to ensure funding for the project. Ideally, everything would be open source, but that might make things unnecessarily challenging when bootstrapping a business. Our goal with Tuist Cloud is to open-source it eventually.

Re-connecting with the things that bring me joy is bringing back the mental health that I lost by being embedded in an environment designed for competition, with all sorts of psychological tricks to make you overcommit and squeeze out every gram of creative energy. Not anymore. I love collaboration, building, and empowering people, and doing that keeps me mentally healthy and happy.

]]>
<![CDATA[In this post, I share my thoughts on open source and how it aligns with my principles. I also talk about the different models that I've seen in the industry and how I'd like to see it evolve.]]>
Transitioning Tuist Cloud from Ruby to Elixir https://pepicrft.me/blog/2024/03/14/transitioning-from-ruby-to-elixir 2024-03-14T00:00:00+00:00 2024-03-14T00:00:00+00:00 <![CDATA[

We’ve recently started moving Tuist Cloud’s current implementation from Ruby to Elixir. The TL;DR version of the motivation is that Ruby is not the most suitable runtime for apps that are IO-heavy—something that might change in the future. As part of the decision-making process, we also evaluated Swift, which powers the Tuist CLI, and some organizations are using it server-side. Ultimately, we decided on Elixir for various reasons that I’d like to share in this blog post in order of importance.

I don’t like the debates around a technology’s ability to scale because, at the end of the day, any technology can scale if you throw enough money at the problem. However, since we are a small bootstrapped company, we can’t afford to throw money at a problem. We need low-cost, easy scaling of not just the production servers but also development, and Elixir and its VM (the BEAM) are uniquely positioned there. Thanks to the virtual machine and the functional nature of the language, you can scale by adding cores and memory instead of horizontally adding more servers, which comes with its own share of complexity.

Someone at this point might wonder: isn’t this easy scaling also achievable in JavaScript with serverless JavaScript runtimes? Yes and no. The provider of the serverless environment supposedly abstracts that away from you, very likely with a proprietary runtime that looks like Node but isn’t Node. But that, at the end of the day, is a loss-leading commercial strategy to get you to eventually pay more than what you’d pay if you ran it as a long-running process on a server. And that’s without mentioning the pain points of having inconsistent runtimes across environments. A JavaScript/TypeScript developer would tell you WinterCG is here to fix all the fragmentation caused by cloud providers, but that’s just an illusion, like many things in the ecosystem: serverless, Lambdas, Functions, Jamstack. They are constantly trying to reinvent everything to sell services with significantly more cost than value. So as you can imagine, JavaScript was a no-go for us.

Swift’s production servers can scale easily and cheaply. However, development might not, because of Apple’s continuous strong focus on their own platforms. The ecosystem lacks many tools and workflows that can make a significant difference in productivity. One workflow that we find extremely valuable these days, for instance, is being able to open a remote console into the runtime and run code in it. Obviously, relying on this long-term is not a good idea, but in this current phase, it’s life-changing.

We had a bit of a moment of thinking: should we bet on Swift and help move the ecosystem forward? However, we don’t have the financial luxury of making this investment. We were afraid the decision would lead us to rabbit holes of Swift issues in Linux environments that would distract us from shipping a great product. We are not entirely opposed to using Swift on the server. We just think that right now is not the best time for it. We remain excited about what the future holds for Swift on Linux and Server.

It was also important for us to have access to an ecosystem of community packages to help us with the various problems and needs we’d face. Swift was way behind here compared to ecosystems like Ruby, NodeJS, Rust, or Go, which have plenty of tools for building server apps that run on Linux servers. From that list, we also discarded Rust and Go because we’d lack hot reloading, which is a small issue at a small scale but can become significant once you reach a certain scale. We deem it important for our productivity to change code and see it automatically hot-reloaded in the runtime. Once again, the combination of functional programming in Elixir and the Erlang VM’s capabilities makes that possible. It’s insanely fast.

And last, and potentially very important in the future, is having access to primitives that will allow us to build real-time features around collaboration without having to introduce additional pieces of infrastructure. Elixir makes that extremely easy. So easy that platforms like Supabase even offer it as a service for other ecosystems. We can see some Tuist Cloud features being real-time, like build and test results flowing into a dashboard that refreshes automatically.

We believe this is the best decision for the project, and we can’t wait to start investing in a unified dashboard for Xcode teams.

]]>
<![CDATA[Ruby proved not to be the most suitable runtime for apps that are IO-heavy. We evaluated Swift, but ultimately decided on Elixir for various reasons.]]>
Elixir scales better https://pepicrft.me/blog/2024/03/05/elixir-scales-better 2024-03-05T00:00:00+00:00 2024-03-05T00:00:00+00:00 <![CDATA[

Tuist is powered by Ruby on Rails. We decided on Rails because of its maturity, productivity, and the ecosystem around it. We could move fast with it and convert some early customers. However, we hit scaling challenges earlier than we anticipated, and they made us question whether Rails was the right choice for us in the long term. Let me give you some context on the service first.

The service has a very basic UI, which might change soon, and a REST API that the CLI interacts with. Most of the interaction with the service happens through the API because teams interact with it through the CLI. By design, we aimed to keep S3-compliant storage, provided by Cloudflare in this case, as the source of truth for whether a given binary exists or not. We didn’t think it made sense to replicate that state in the database and add synchronization logic to keep both in sync. That meant that requests would spend around 90% of their time waiting for IO operations to complete. To our surprise, we started seeing slow requests that timed out, and some that came back with 500s. What was going on? I started searching and came across this issue by @dhh. There was a piece of his comment that grabbed my attention:

It’s not a good default for apps with quick SQL queries and 3rd party calls running via jobs, which is the recommended way to make Rails applications.

3rd-party calls are what we were doing (against Cloudflare’s APIs), so he was clearly referring to our use case, but I couldn’t wrap my head around the idea of having to use jobs for that. I don’t mind adding jobs, but I’m a huge advocate of simple solutions, and that didn’t feel simple. So I kept digging into how Puma works, how it uses Ruby threads and system processes, and how you have to find the sweet-spot configuration that gives your service the intended latency and throughput. I learned that the issue we were facing with Tuist came from having a low number of threads, which caused requests to wait a long time in the Puma queue and eventually time out or be rejected. I added some cores to the production machines, tweaked the number of processes and threads, and got it to an acceptable point. But it still didn’t feel right. What are we going to do with our on-premise customers? Are we going to provide them with guidelines for sizing their production machines? Are we going to point them to lengthy guides on Rails performance? It all felt wrong. Sorry, Rails. You scale, but the cost of it is barely talked about.
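For context, this is roughly the knob-turning Puma asks of you. The values below are illustrative, not a recommendation, and mirror the shape of the puma.rb that Rails generates:

```ruby
# config/puma.rb: illustrative values only. The right numbers depend on
# your core count and how IO-bound your requests are.
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 16 }.to_i # IO-heavy apps tolerate more threads
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }.to_i
threads min_threads_count, max_threads_count

workers ENV.fetch("WEB_CONCURRENCY") { 4 }.to_i # roughly one process per core is a common start
preload_app!
```

The trade-off is exactly what made this feel wrong: every deployment target needs its own tuning pass.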

All we want is a runtime that can use all the resources available on the machine, without much configuration or tweaking to find the ideal values, and that can handle IO-bound operations efficiently. At the same time, we want to keep the productivity that Rails provides. The good news is that such a technology exists, and I’ve talked about it many times in the past: Elixir. It feels like the right tool for the job. I don’t have to reach for jobs to work around a global interpreter lock that doesn’t suit the nature of my app. We can scale the service by throwing more cores at the machine. Look at Discord, Pinterest, and Supabase, which are all powered by Elixir. Development scales too: you can use all the cores available on your machine, and because it’s a functional language, you do so with minimal risk of introducing flakiness.
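To make the contrast concrete, here’s a sketch of how the BEAM lets you fan out IO-bound work: each check runs in a lightweight process, and waiting on IO doesn’t hold an OS thread hostage. `Storage.exists?/1` is a hypothetical client function, not a real API:

```elixir
# Check which binaries exist in S3-compatible storage, concurrently.
# Storage.exists?/1 is a hypothetical HTTP call against the storage provider.
results =
  hashes
  |> Task.async_stream(&Storage.exists?/1, max_concurrency: 100, timeout: 10_000)
  |> Enum.map(fn {:ok, exists?} -> exists? end)
```

No thread-pool sizing, no worker math: concurrency is a function argument, not a deployment concern.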

So as you might have guessed by this point, we’ll transition to Elixir. We won’t do a full rewrite. Rather, we’ll place a proxy in front of the Rails service and run the Elixir service in parallel, routing traffic to the Elixir service as we migrate endpoints. It’s going to be awesome to give our on-premise customers a Docker image that can use all the resources available without any configuration on their end. Or even better, to pour our energy into building products instead of figuring out how to scale our API.

Rails scales, but Elixir scales better.

PS: Despite our excitement for Swift on the Server, we decided to go with Elixir because of the productivity and the ecosystem around it. Erlang’s hot-reloading of modules is an unbeatable productivity tool.

]]>
<![CDATA[In this blog post, I talk about the scaling challenges we faced with Rails, and why we decided to transition to Elixir.]]>
Gain back attention https://pepicrft.me/blog/2024/03/04/gaining-back-attention 2024-03-04T00:00:00+00:00 2024-03-04T00:00:00+00:00 <![CDATA[

I’ve been thinking lately about how hard it is for me nowadays to slow down and enjoy deep focus. It feels as if I’ve become addicted to the dopamine of jumping from one thing to another, feeding my brain the illusion of productivity. I think it’s a consequence of the fast-paced world we live in. I’ve gotten used to the endless scrolling on social media, the synchronous working style that tools like Slack have brought us, and the constant notifications from my phone and watch. My attention has been hijacked by the digital world. And I don’t like it. I want my attention back.

To do so, I’ve been trying to change my habits. The first is to prefer asynchronous communication over synchronous. For instance, in the Tuist Slack group, we encourage users and contributors to move discussions to GitHub issues and pull requests. They could theoretically carry the same communication style from Slack over to GitHub, but the long-form body of issues and pull requests makes a synchronous conversation harder. It makes the initiator think more about the message they want to convey. In Slack, it’s natural to feel entitled to grab someone’s attention and expect an immediate response.

Another thing I started doing is using X less. The algorithmic feed is designed to keep me scrolling and engaging with polarizing content. Instead, I’m using Mastodon, where the feed is chronological and feels much more peaceful. It sometimes feels so boring that I don’t use it at all, which almost makes it an intermediate step toward getting rid of the addiction. I don’t know if it’s because of the people I follow there, but I don’t get the feeling that people are trying to grab my attention. Every conversation I see is meaningful and respectful. X makes me anxious; Mastodon doesn’t.

I’m also working on caring less about what people think of me. What does that have to do with attention? I started to notice that I sometimes did work to get the approval of others. I liked getting that attention. I think it’s an X/Twitter-induced behavior because it keeps me engaged and active on the platform. I still like sharing the things I learn and think about, hence this blog post. But similar to preferring asynchronous communication over Slack, I’m trying to use this website more than X or Mastodon. Not only does writing here relax me, but it also makes me the owner of the content. I’m getting a bit fed up with all the companies using AI to ruin the content humanity has been generating for years. Do you like the content here? Sweet, there’s an RSS feed for you to subscribe to. Do you want to engage in a conversation? Just send me an email. F*ck the algorithms and the enshittification of the Internet.

It’s time to gain my attention back and have a calmer, more meaningful relationship with myself and others on the Internet.

]]>
<![CDATA[In this post, I reflect on how the digital world has hijacked my attention and what I'm doing to gain it back.]]>
If I could just parallelize my tests execution https://pepicrft.me/blog/2024/02/29/parallel-testing 2024-02-29T00:00:00+00:00 2024-02-29T00:00:00+00:00 <![CDATA[

Did you notice that Xcode project schemes have an option to run tests in parallel? It makes sense, considering we have powerful multi-core CPUs and a programming language with concurrency built-in. But let me tell you something: all of that is useless if your code isn’t designed for it.

Swift allows global state, whether or not it’s protected against data races. It all starts with a stateless singleton instance, introduced for memory efficiency and better ergonomics: Client.shared, FileManager.shared… Sooner or later, someone adds state, and you end up with code making assumptions about the value of that state. The breakage of those assumptions is often not covered by tests, but that’s a subject for another post. So part of your codebase is directly or indirectly dependent on mutable state. What could go wrong? Nothing, if you run your tests sequentially. Run them in parallel and wait for the first signs of flakiness to arise. “But it doesn’t happen to me, Pedro, because I mostly write unit tests where I mock all the subject’s dependencies.” And you are right, but in some scenarios integration tests can bring a lot of value, and in those, your stateful components are very likely buried deep beneath the subject under test.

To run tests in parallel, you need to be able to scope that state to the test being executed. How do you do that in Swift? Well… through dependency injection, which leads to a lot of refactoring and some boilerplate in function signatures.
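A minimal sketch of what that refactor looks like; the names (`Clienting`, `ProjectLoader`) are illustrative, not Tuist’s actual types:

```swift
// Instead of reaching for Client.shared, the subject receives its
// dependency, so each parallel test can own an isolated instance.
protocol Clienting {
    func fetch(_ path: String) -> String
}

struct LiveClient: Clienting {
    func fetch(_ path: String) -> String { "live:\(path)" } // real networking in practice
}

struct MockClient: Clienting {
    func fetch(_ path: String) -> String { "mock:\(path)" }
}

struct ProjectLoader {
    let client: Clienting // injected, not global
    func load(_ name: String) -> String { client.fetch(name) }
}

// Production: ProjectLoader(client: LiveClient())
// Each test: ProjectLoader(client: MockClient()); no shared state to collide on.
```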

I started working on the above for Tuist, but I find it very annoying, honestly. Elixir solves this very beautifully through the concept of processes and their unique ids. Because a test is a process, and a process has an id, you can leverage that to swap modules depending on the id of the process the module is accessed from. Mind-blowing. Swift, please, bring that.
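In Elixir, the Mox library builds on exactly this: expectations are owned by the test’s process, so async tests can’t see each other’s stubs. A sketch, where the `Storage` behaviour and the `Cache` module are assumed names:

```elixir
Mox.defmock(StorageMock, for: Storage) # Storage is an assumed behaviour

defmodule CacheTest do
  use ExUnit.Case, async: true # safe to run in parallel
  import Mox

  setup :verify_on_exit!

  test "reports a hit when the binary exists remotely" do
    # This stub is scoped to this test's process id.
    expect(StorageMock, :exists?, fn _hash -> true end)
    assert Cache.hit?("abc123")
  end
end
```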

]]>
<![CDATA[Schemes in Xcode have an option to run tests in parallel, but if your code isn't designed for it, you are in for a world of pain.]]>
On protecting my creative energy https://pepicrft.me/blog/2024/02/23/on-protecting-someones-creative-energy 2024-02-23T00:00:00+00:00 2024-02-23T00:00:00+00:00 <![CDATA[

Has it ever happened to you that your work style drains your creative energy? This is something I’ve noticed with Tuist. Since the number of contributors and users outweighs the number of maintainers, our attention is often divided into different areas. For instance, over the past few days, I’ve been planning a trip to attend some conferences in Asia, ordering swag to bring with us, searching for a professional tax advisor in Germany, reviewing PRs, organizing ticket raffles on social accounts, and shipping code. For most of these tasks, you don’t need to be very creative. You just have to learn about the process and follow it. It’s very routine, and I don’t mind doing it, but if I do it for too long, my brain gets tired and leaves me with no energy to do creative work, which is what I enjoy the most.

I’m now fully aware of it, and I’m taking steps to protect my creative energy. I’ve started allocating some time to work on stuff that was not a priority but that I enjoy doing and that I believe can have a high impact on the community. At first, I felt guilty. My brain was telling me that I should be doing something else that was more urgent. And I felt bad for feeling that way. Sure, there’ll always be something more urgent, but I can’t be working like that all the time or I’ll feel like a hamster on a wheel. I think my creativity is one of my most valuable assets, and I have to protect it.

For instance, I recently started working on new MIT-licensed gifts for the Swift community under the Tuist umbrella. One of them is a design system for building CLIs. I noticed that unlike other ecosystems like Go’s and Rust’s, Swift lacks a proper toolchain for building stunning CLIs. So I started building it. Is it a priority? Hell no. There’s a business to run. But through that work, we can positively impact the experience of Tuist users and the users of many other Swift CLIs that might be built in the future. I’ve also been playing with Apple’s virtualization and containerization technologies. I find it’s such an unexplored area in the Swift community, where we could also have an impact with some open-source work that could serve as a foundation to diversify our Tuist Cloud offering. If no one tries those things, we’ll never know if they are worth it.

Do you feel the same? If so, how do you protect your creative energy? This is one of the topics in my therapy sessions, and we are coming up with strategies to protect it.

]]>
<![CDATA[In this post, I share my thoughts on protecting my creative energy and how I'm doing it.]]>
Global state is future debt https://pepicrft.me/blog/2024/02/15/global-state-is-future.debt 2024-02-15T00:00:00+00:00 2024-02-15T00:00:00+00:00 <![CDATA[

I’ve been thinking a lot about global state lately. When writing software, state is everywhere: in memory, in the file system, and in storage solutions. It’s the object the business logic works with. State has a lifecycle. We often read about local state, whose lifecycle is short and bound to a specific context, and global state, whose lifecycle is long and shared across contexts. We’ve learned from functional programming that local state is easier to reason about, and that global state is a source of complexity and bugs, but the latter is impossible to avoid entirely because it escapes the program’s boundaries. For instance, even in a purely functional program, you can have global state in a database that’s shared across different contexts.

I’m not a fan of pure functional programming languages because their syntax and semantics don’t click with me. However, and having learned that the hard way, it became clear to me that global state is future debt that sooner or later you’ll have to pay. If this sounds too abstract, let me guide you through what I mean by that.

When writing software, it’s easy to add global state for the sake of convenience, for example via the singleton pattern. A singleton is not necessarily a bad thing, because you can use it as a tool to use memory efficiently, but by introducing it you are laying the ground for global state to grow. Developers will plant the seeds, and you won’t notice it in your PR reviews. CI will pass, developers will continue shipping features, and everything will seemingly work fine. It will, until it doesn’t. You notice it when you have concurrent access to the global state and the behavior of your software becomes unpredictable due to race conditions. In some cases, you’ll notice it because users report bugs that are hard to reproduce. In others, you’ll notice it because parallelizing the execution of tests increases flakiness. In any case, it’s painful for your organization because you’ll definitely have to throw resources at preventing and mitigating it. And the larger the software grows, the more unmanageable the problem becomes.
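In Swift, for instance, the progression tends to look like this hypothetical sketch: a convenience singleton quietly accumulates mutable state that parallel code then races on:

```swift
// Starts as a harmless convenience…
final class Environment {
    static let shared = Environment()
    // …until someone adds a "temporary" mutable setting.
    var cachePath: String = "/tmp/cache"
}

// Elsewhere, code silently depends on the global:
func warmCache() -> String {
    Environment.shared.cachePath
}

// Test A mutates Environment.shared.cachePath while test B reads it in
// parallel: whichever runs first wins, and the suite turns flaky.
```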

There are some programming languages and runtimes where global state happens so easily that it’s hard to avoid. For instance, with the JavaScript ES module system, where variables can be defined at the root of a module and modules are singletons, it’s very tempting to use global state. In that ecosystem, the tooling refers to this as “side effects.” Side effects make the software harder to reason about and unpredictable, not just for developers but also for tooling, which can’t understand the software well enough to optimize it. Hence why Webpack introduced a package-level convention to mark a package’s side effects. Other stacks like Ruby and Rails, where global state commonly lives in databases, provide testing tools that scope the database to a particular test so it’s not shared across tests. Did you notice? Still, nothing prevents a Ruby class from using global state. And once again, a developer is not thinking about that while writing the code. It’s a natural inclination toward convenience without thinking deeply about the implications. That’s why when I hear that Ruby or JavaScript scales, I can’t help but think about how many resources that scaling requires, and how much of the organization’s time goes into making it scale. But because that’s hard to measure, the framing is usually about requests per second.

The nature of the program makes the problem more or less common. For example, web applications, which follow a request-response model, are naturally modeled more functionally: a request comes in, a set of functions passes the state along, and eventually a response is generated and returned to the client. CLIs are a bit like that too: a command is executed with a set of flags and arguments, which resembles a request to an HTTP server, and is passed through a set of functions that transform the state until a result is returned to the user. Still, global state can and will happen. It happened in Tuist, and it now limits the test parallelization we can achieve in some areas. It’s still manageable, and we are working on it, but I find it crazy that we reached this point without noticing earlier.

So what can we do about it? In the context of Tuist, a CLI implemented in Swift, we’ll have to resort to dependency injection to escape global state and use it to isolate the execution of tests, similar to what Rails and many other web frameworks do with databases. It’s feasible, but it comes at the cost of making the code more verbose. Suddenly all your functions take similar arguments, and developers wonder why they have to pass the same arguments over and over again. One could suggest using a service locator or a dependency injection framework, but those come at a high cost too: a new piece of technology that developers have to learn and that you need to maintain. For example, Uber’s open-source tool Needle requires an additional code-generation tool installed in the environment. Having required that of Tuist contributors in the past, and having learned that it was a source of friction, we are not going to do that again. Sorry. Is dependency injection, at the cost of boilerplate, the solution? Most likely yes in the context of Tuist, but we’ll try to model it to reduce the boilerplate as much as possible.

Someone familiar with functional programming languages like Clojure might read this and think: “I told you so”. But as I said earlier, the syntax and semantics of those languages don’t click with me, and the purism that naturally comes with them doesn’t either. Isn’t there a solution that’s more pragmatic?

Once again, Erlang and Elixir shine. Elixir feels like Ruby with a more functional touch. It really clicks with me. And it has one of the most powerful concepts I’ve seen since I started my career: processes. First, the language is functional. Not as pure as Haskell, but it embraces the functional paradigm, so global state is less likely to happen. Remember the trick I mentioned earlier to scope databases to a particular test? Imagine being able to do that with any piece of state. In Erlang and Elixir, processes play a role similar to classes in OOP. They are cheap, can form hierarchies, can communicate by sending messages, can hold state, can be supervised, and most importantly, each has a unique identifier: a process ID. Tests are processes, and they have their own unique ID (and state). Elixir leverages that to allow swapping a module’s implementation at runtime for a particular test. So you can mock a module’s implementation for one test without passing the mock all the way down the call stack. This is truly powerful. I wish many other programming languages had it.
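Swift doesn’t have process-scoped modules, but @TaskLocal is arguably the closest analogue: a value scoped to the current task tree that deep code can read without it being threaded through every signature. A sketch with illustrative names:

```swift
protocol Clienting: Sendable {
    func fetch() -> String
}
struct LiveClient: Clienting { func fetch() -> String { "live" } }
struct MockClient: Clienting { func fetch() -> String { "mock" } }

enum Current {
    // Task-local: each task tree can carry its own value.
    @TaskLocal static var client: any Clienting = LiveClient()
}

// Deep in the call stack, no parameter threading required:
func loadManifest() -> String {
    Current.client.fetch()
}

// A test can override the dependency for its own task only:
// Current.$client.withValue(MockClient()) {
//     assert(loadManifest() == "mock")
// }
```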

Anyway, because I can’t write everything in Elixir, I’ve changed my approach to writing software to avoid creating global state myself or laying the ground for it to grow. In Tuist we’ll have to refactor the code to escape global state, and in future projects I’ll embrace the everything-is-a-function paradigm and figure out the right balance with the readability and maintainability of the code.

Avoid global state.

]]>
<![CDATA[In this post, I share my thoughts on global state and how it's future debt.]]>
It takes a lot of determination https://pepicrft.me/blog/2024/02/13/it-takes-determination 2024-02-13T00:00:00+00:00 2024-02-13T00:00:00+00:00 <![CDATA[

One of the best skills that I learned from my parents, and that I’ll forever be grateful for, is determination. My family worked, and still works, at a cafe selling “churros,” a common breakfast in Spain. Fun fact: I used to work there on weekends as a teenager, and I used the money I earned to buy technology. My parents had no financial or entrepreneurial education, yet they threw their little savings into paying for my higher education and trusted me to go out on my own. At some point, I crossed the limits of what they could comprehend and support, but they never stopped me from pursuing my dreams. They never stopped me from being determined.

I was on my own. Often with no idea of what I was doing. But filled with curiosity and determination. I remember every single moment of my life when I learned what other people have the privilege to learn earlier, due to the education system in their countries or the higher education of their parents. For example, it took me more than a year working at Shopify to understand that what I was given as part of my compensation alongside my salary were real shares of the company, and they had value. To put things in context, my parents have always been firm believers in not putting the money in the bank but under the mattress. Every one of those moments brings me a feeling of unfairness, but also a bit of: I’ll throw myself into it and figure it out.

Being determined has been key to many of the successes and also happiness that I’ve had in my life. It’s what has allowed me to push through the challenges of building something new, like Tuist, and to keep going when things get tough. It’s what has allowed me to keep pushing for new ideas, like the idea of a web compiler, even when people tell me that it’s not possible or that it’s not worth it. If you listen to the naysayers, you’ll never get anywhere. You have to be determined to keep going, even when things get tough (I just learned about the concept of naysayers thanks to the wonderful Copilot helping structure my thoughts).

I feel so lucky to have learned this, and it makes me feel compelled to empower others to do the same, regardless of their starting point. That’s why I like open source so much. OSS connects me with people who have the potential to create a lasting impact in the world but might not feel the confidence to do so. They remain quiet instead and don’t share their thoughts, ideas, and code with the world, because the world is noisy with people who have the privilege to be loud.

Going through the Tuist journey and up the corporate ladder at Shopify gave me a window into a world of privileges where it’s about oneself and not about the collective; where people aren’t valued for their background even as resources are thrown at training people on unconscious bias. A world of “I’m more than you.” A world viewed purely through the lens of numbers and financial productivity. That world is not for me. It makes me extremely uncomfortable. I don’t enjoy it.

And it’s tremendously unfair because the best leaders who might be able to stop the most pressing problems the world will be presented with are quietly waiting for someone to empower them. I enjoy working on Tuist because I see others being empowered to build things that they wouldn’t have been able to build otherwise. I’m pouring some of my spare time into Glossia, because María José is one of those people with huge potential that could have a long-lasting impact in the world, but she’s faced with one rejection and question after another. I like to tinker with ideas in the open because they can be a foundation for other ideas to emerge and empower others to build things that they wouldn’t have been able to build otherwise. And that’s why the layoff at Shopify, although feeling like a punch in the stomach, was one of the most liberating things that could have happened to me.

]]>
<![CDATA[The world needs more people who are determined to make a difference. In this post, I share my thoughts on the importance of determination and how it has helped me in my life.]]>
It's not about what, but how https://pepicrft.me/blog/2024/02/10/not-what-but-how 2024-02-10T00:00:00+00:00 2024-02-10T00:00:00+00:00 <![CDATA[

You might not believe it, but I suffer from a lot of imposter syndrome when building Tuist. When I started it, I was very determined and convinced that the ideas I was playing with in my head could work. It was just me, Xcode, and a GitHub repository where I could push my code for anyone to check and use. Part of me thought, “We are just doing project generation, and there are a handful of other tools out there that do the same thing,” but the other part of me thought, “But there’s something unique in the way we are going to approach it, so why not?”

It takes a lot of determination to build something these days. The moment you share “what” you are building, you’ll face loads of comparisons with other tools that seemingly do the same. And they might be right in that they solve the same problem that we are aiming to solve, but the important question is really “how.” Tuist’s strength lies in how it helps organizations overcome challenges. What makes it unique is the right balance between the value that organizations can get from it compared to the investment that they have to make to introduce it. It’s so well-balanced that even small organizations can afford to have it very early in the lifecycle of their projects.

People ask me what I think about Bazel. I don’t know what they expect, but my answer is always: it’s amazing. And what about XcodeGen? It’s an amazing tool. But I respond with another question: “Do you think it’s the right tool for you?” And that’s a question only they can answer, not me. I can share the pros and cons of Tuist’s approach to solving problems, but ultimately it’s up to them, who know their environment better than I do, to decide which tool fits them best.

And why am I talking about it? I recently started thinking about the state of the web and the lack of a standard language for building and sharing components that are weakly coupled to the JavaScript ecosystem, and the reaction of people reminded me of the reaction of people when I started building Tuist: “But isn’t there Svelte, React, Preact, Vue, and the many others that are already solving that problem?” And once again, the problem is the same, but the important point is really the “how.” The lack of a narrow waist in the ecosystem has led us to a fragmentation of solutions and tools in the JavaScript layer and a degradation in DX in other ecosystems that are not JavaScript. This is far from ideal, and the whole ecosystem would benefit from a different approach to how to solve the problem.

It’s terribly challenging to push through it. My personality helps a lot. I pushed hard with Tuist. I pushed hard at Shopify about the idea of having frameworks for apps, which indeed caused a lot of frustration in the end because my idea was only accepted when the CEO mandated it. I’m going to push hard with the idea of a web compiler. Will it work? I don’t know. But I won’t let anyone stop me from finding new solutions to problem spaces that have accumulated an absurd amount of complexity.

]]>
<![CDATA[In this post, I share my thoughts on the challenges of building something new and how it's important to focus on the 'how' rather than the 'what.']]>
JavaScript DX without JavaScript https://pepicrft.me/blog/2024/02/07/the-js-experience-without-js 2024-02-07T00:00:00+00:00 2024-02-07T00:00:00+00:00 <![CDATA[

I’ve been thinking a lot lately about web UIs after seeing that the best UI component kits and design systems are so strongly coupled to the JavaScript ecosystem. From Tailwind to React through CSS-in-JS, with a necessary stop at Storybook, it’s inconceivable to have an excellent experience building web UIs without JavaScript. HTMX, Rails Hotwire, and Phoenix LiveView, which claim to stay true to the platform, feel like a downgrade when coming from something like React. They are religious about their approach, but they fail to recognize the joy of using React, Svelte, or Vue to enhance the developer experience. I sympathize with their unwillingness to bring in the complexity of a JavaScript toolchain, though. If you want to stay true to the Rails framework while not giving up on the experience that JavaScript tools can bring, you’ll most likely find yourself moving everything to the client as an SPA with a build tool like Vite. Fun fact: Shopify suffered a bit from this, with teams building internal services following the SPA model because developers wanted access to the React-based design system Polaris. Voilà, you’ve got a new problem: maintaining the JavaScript stack, the REST API (or GraphQL if you want to live on the edge), the client-side state, and the hundreds of issues that might arise from packages in the node_modules directory.

Erlang taught me that with the right modeling of the problem space, you have the potential to eliminate a whole set of problems that emerge at higher layers. Erlang does so with its concept of processes, and Elixir added the layer of programming language that we are used to. What if we need to peel layers, escape JavaScript, and move down levels to model the problem at a different layer?

If we look at the common denominator across all of them, what we find is a function that goes from a domain of components like .svelte, .vue, and .jsx to the .html, .css, and .js that our browsers can read. They all support being hydrated on the client, and some do it through a virtual DOM. We have a compiler. And this is the foundation upon which many of the tools emerged. But so many emerged that using the compiler directly became unthinkable, to the point that React doesn’t recommend it anymore. That made the layer above the compilers yet another foundation for frameworks to emerge. Their aim? To abstract away the absurd amount of complexity underneath. Some went even further, like Vercel, which built a business on it. Having so many layers has a cost that, depending on who you ask, people say they’re willing to pay. JavaScript developers do; they’ve internalized the brittleness and the frenzies of the ecosystem. They are fine spending days understanding how a deep dependency update crashed their apps. Organizations, on the other hand, don’t want this. Still, they don’t want to miss out on the productivity that JavaScript concepts and tools can bring.

Among the ideas that emerged at the JavaScript layer, there’s one that I believe played an important role in the state of things: sharing components as NPM packages. Components can be encapsulated in an NPM package, distributed through the NPM registry, and integrated by build tools like Vite, which can be instructed via plugins on how to do so for every available UI technology. This addresses the natural need developers have for sharing the code of their projects, UI code in this case. Do you know why Tailwind is so successful? I believe one important factor is that it enabled a way of sharing components (copying and pasting, in this case) without requiring you to develop your UI at a JavaScript layer. It’s no surprise that DHH supported it. It solved something that neither Rails nor other technologies not based on JavaScript had solved.

And all of this leads me to the most important question: does all of this have to be solved at the JavaScript layer? Or have we cargo-culted from UI frameworks like React, normalizing layers of complexity along the way? I believe it’s more the second.

I think the industry is missing a platform- and technology-agnostic compiler that can be plugged in as an in-code backend for rendering purposes in any other stack (Rails, Phoenix, Express). The compiler would support the features developers have grown accustomed to, like writing styles in the component and getting them extracted into CSS, or having state that changes over time and that the system maps to HTML when it changes. We don’t need to reinvent the wheel at a low level when web components exist. They are not there yet in terms of capabilities, but we can augment the missing bits and bet on their future. Moreover, the compiler could support sharing components via packages, making them easier to integrate. For example:

compiler install shadcn

And I can just write the following in my project:

<shadcn.button>
  Click me
</shadcn.button>

No more having to install the Tailwind CLI, set it up just so, add React on top, and keep a config file at the root. What an absurd amount of unnecessary tooling. I just want to install a design system and use it.

Being able to write and share UI this way opens a lot of exciting opportunities. For example, giving design agencies a standard format to share their work, and even charge for it. Instead of having to export it in many formats, or just the one the customer uses, you could export it in a universal format that’s easy to integrate. Or imagine training AI models with design examples and having a tool like Vercel’s V0 that’s not coupled to React or Next and doesn’t try to funnel you into becoming a customer of anything. Imagine, too, a Git platform that’s able to render diffs visually.

I believe this would positively impact the tools built upon it because instead of wasting time supporting every new UI framework that comes out, they could focus on a single format and polish that experience.

It sounds like a giant effort, as Tuist did when I started working on it, but incrementally and with a lot of time, we arrived where we are today. I feel I need to give this a shot in my spare time. I dream of enabling an ecosystem for sharing UI like the one we developers have the luxury of access to for code. The impact a tool of such nature would have on the tech ecosystem is immeasurable. It would also align the experience across web frameworks, regardless of the programming language, making the decision of using one or another more nuanced.

I created a repo and started dumping ideas into it at https://github.com/glossia/noora. I named it Noora, an Arabic name that means light: the light in this ocean of complexity in the JavaScript world. Send me an email at [email protected] if the above idea sounds cool and you’d like to contribute.

]]>
<![CDATA[In this post, I explore the idea of having a platform-agnostic and technology-agnostic compiler that can be plugged in as an in-code backend for rendering purposes in any other stack (Rails, Phoenix, Express).]]>
Incremental Xcode workflows across environments https://pepicrft.me/blog/2024/01/26/incremental-xcode-workflows 2024-01-26T00:00:00+00:00 2024-01-26T00:00:00+00:00 <![CDATA[

Incremental compilation is the build systems’ answer to speeding up the development cycle. Some, like Bazel, are able to span that incrementality across environments. But what about Xcode’s build system? It struggles to achieve it even within the same environment. The reason? Xcode is too magic. There’s some work happening to improve the situation at the swift-driver level, but it’s far from what Bazel can achieve.

Luckily, we have an excellent foundation in Tuist to tackle that. The first thing we built was binary caching, which skips some compilation steps. Tuist Cloud is a service that spans that incrementality across environments. The second feature we landed to take that incrementality to the next level is selective testing. Leveraging the same hashing solution we use for binary caching, we can run only the test targets impacted by the changes. The combination of binary caching and selective testing can cut CI times quite significantly. But we are not done yet: we are going to bring binary caching to the tuist build workflow too. And, last but not least, we are going to skip dependency resolution via the Swift Package Manager, which can easily add a couple of minutes to every CI build.

The best of all? It’s built into Tuist. If your project is described using Tuist’s DSL, you get all of the above right away.

tuist cache # Warm the cache with binaries
tuist fetch # Fetch dependencies from the cache (seconds)
tuist test # Use selective testing and binary caching
tuist build # Use cached binaries

We are working hard on providing organizations with the best balance between convenience and performance.

]]>
<![CDATA[Xcode struggles to achieve incremental builds within the same environment. What about across environments? Not even close. Tuist is working on bringing incremental builds and test execution to Xcode projects across environments.]]>
AirPods Max died after 2 years https://pepicrft.me/blog/2024/01/24/airpods-max-broke 2024-01-24T00:00:00+00:00 2024-01-24T00:00:00+00:00 <![CDATA[

I bought the first model of AirPods Max headphones in September 2021 at the official Apple Store in Murcia, Spain. From the beginning, I experienced intermittent issues. One issue was the poor microphone quality during meetings with colleagues. The store’s solution was to wait for a software update that would fix the problem, but that update never arrived. However, I primarily bought them for listening to music, so it wasn’t a major concern.

The music quality was astounding. However, occasionally, they would disconnect automatically, requiring a hard reset and reconnection. This was somewhat annoying, but not enough to warrant a trip to the Apple Store. Despite these issues, I enjoyed listening to music with them and considered it a great purchase.

That feeling lasted until they suddenly stopped working. I attempted to reset them, but instead of the sequence of blinks being three ambers and one white, it was just three ambers. The headphones had become useless. Consequently, I visited the Apple Store in Berlin, where I currently reside.

I explained the issue, and after examining them in a private area, the staff returned, ready to discuss the problem, starting with an unexpected question: “I’m sorry, Mr. Pedro, but do you have insurance for the headphones?” I was surprised. Insurance for headphones? They informed me that the repair would cost up to 250 Euros. I was shocked. “What happened to them? Why are they broken? It’s only been two years,” I questioned. I asked if I could send them for an estimate of the repair costs, suspecting a software issue. However, they informed me that if sent for repair, I would have to pay the full amount and would receive a 90-day warranty. I was puzzled and felt like it was a case of planned obsolescence. The Apple employee added that if I had reported the intermittent connection issues earlier and had a record of them, the repair would be free. I left the store feeling a mix of disappointment and anger.

At home, I researched the three amber blinks issue and discovered I wasn’t alone (Reddit, Reddit, Apple Discussions). Others had similar problems; one person fixed theirs with a software downgrade, which failed again after an automatic update. Some had their headphones replaced for free. Others got it working by putting them in the freezer for a while. It was unbelievable that Apple could potentially cause hardware issues through a software update without any liability, shifting the responsibility onto the user.

I secured another appointment at the same store in Rosenthaler Straße, Berlin. This time, the employee, after consulting with his superiors, offered to cover 50% of the repair cost. I inquired about the specific issue and the warranty duration, but they were unable to provide detailed information about the fault and only offered a 90-day warranty. It seemed they could temporarily fix the issue and charge me 250 Euros without a guarantee of a long-term solution.

Apple seems to have found a legal loophole to charge users, but this is unfair, and I am determined to seek a resolution. If you’re reading this, I would appreciate your help in spreading the word. I am also considering approaching consumer agencies in Europe to file a complaint.

]]>
<![CDATA[Apple might have found a legal loophole to charge users for hardware issues caused by software updates.]]>
Xcode is too magic https://pepicrft.me/blog/2024/01/24/xcode-is-too-magic 2024-01-24T00:00:00+00:00 2024-01-24T00:00:00+00:00 <![CDATA[

While preparing my talk for Swiftable, and thinking about the Xcode challenges that developers face and how Tuist helps overcome them, I realized that a lot of the challenges are rooted in Apple’s product approach with Xcode. It leans on the convenience side of the spectrum, which is great for beginners, but it makes it hard for developers to understand what’s going on under the hood and optimize their workflows.

There are a handful of examples in Xcode. Take the Find implicit dependencies option in Xcode schemes. It opens an interesting debate about whether a graph with dependencies that are not explicitly declared in the project should be a valid graph. What we learned with Tuist is that the more explicit and side-effect-free the build graph is, the easier it is to reason about and optimize. Yet developers lean on the side of implicitness, which is alright at the beginning because you can type import MyFramework and declare a dependency on MyFramework in code, but a few months later, when the project grows, features like Xcode Previews stop working and you don’t understand why. We’ve seen organizations adopt Tuist and report that their build times became faster after they migrated, or that Xcode Previews started working. So part of the work that we do with Tuist consists of designing APIs that prevent implicitness in projects.

Another example is Xcode exporting the built products into a directory that’s linkable from any target of the project. So if you have a dependency scenario like A > B > C, where > represents “depends on”, A might be able to import C because, by the time Xcode starts building A, the built products of C are already available in the directory Xcode uses for linking. You might think this behavior is unique to Xcode projects, but the Swift Package Manager has inherited it too. Package libraries can mark their linking as automatic, letting the build system decide the best linking strategy. I want to make that decision myself, especially when the linking might impact the size of the output bundle, for example when the same static library gets linked from multiple dynamic frameworks. Once again, Xcode leans on the side of convenience, at the cost of presenting developers with other issues.

As you might have noticed, all this convenience is achieved by making Xcode’s build process more magical. You hit compile, and there are a bunch of things in your project that Xcode might be able to resolve magically, or maybe not, and you end up with one of those cryptic errors that developers try to resolve by cleaning the derived data folder.

The more I think about this, the more I think Apple should go back to first principles, flag those as anti-patterns, and provide a migration path for existing projects. The reason that implicitness was introduced in the first place is that declaring a dependency graph in Xcode exposes a lot of complexity to the user. For example, whether a dynamic framework should be embedded into a final product is something they can infer from the graph, so why require users to decide? We do that for users, and they love it because it’s one less thing to worry about. What about a potential bundle-size increase due to static libraries linked in different parts of the dependency graph? They could detect that too and present the user with a warning before they hit compile. There’s a lot they could do to make Xcode a tool and a build system that works with large-scale projects, but I guess they are not incentivized to do so. The organizations that have reached that scale can resort to other build systems like Bazel, or tools like Tuist.
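
That kind of duplication warning is cheap to compute once the graph is explicit. The following is a hypothetical sketch (Python, with made-up target names, not any real tool’s API) that walks a dependency graph and flags static libraries reachable from more than one dynamic framework, the scenario that bloats the final bundle:

```python
def duplicated_static_libs(graph, products):
    """graph: target -> direct dependencies; products: target -> 'static'/'dynamic'.
    Returns static libraries reachable from more than one dynamic framework."""
    def reachable(target):
        # Iterative depth-first traversal of the dependency graph.
        seen, stack = set(), [target]
        while stack:
            for dep in graph.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    counts = {}
    for target, kind in products.items():
        if kind == "dynamic":
            for dep in reachable(target):
                if products.get(dep) == "static":
                    counts[dep] = counts.get(dep, 0) + 1
    return sorted(lib for lib, n in counts.items() if n > 1)
```

With two dynamic frameworks both depending on the same static library, the function would flag that library as a candidate for duplicated symbols in the bundle.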

Imagine if they took the opportunity to build a foundation that’s extensible, like Gradle’s, so that developers don’t have to replace their build systems or resort to additional tools. I think I’m dreaming too much.

]]>
<![CDATA[Xcode is a great tool for beginners, but it makes it hard for developers to understand what's going on under the hood and optimize their workflows.]]>
XCBundles https://pepicrft.me/blog/2024/01/19/xcbundle 2024-01-19T00:00:00+00:00 2024-01-19T00:00:00+00:00 <![CDATA[

While working on adding support for binary-caching Swift macros and multi-platform targets, I realized that there isn’t an equivalent to XCFrameworks for bundle targets. Why would you need that?

Tuist uses XCFrameworks to binary-cache framework targets (soon libraries too). Xcode supports linking against XCFrameworks and takes care of using the right framework or library based on the destination. It’s very convenient. An XCFramework is a directory with a convention for how to structure libraries and frameworks for different platforms and architectures.

But Tuist needs to be able to cache resource bundles too, and they are not supported by XCFrameworks. It’d be a huge stretch for the XCFrameworks name :). So I started pondering creating our own internal container, XCBundle. The structure of an XCBundle would be similar to that of an XCFramework, but it’d bundle resources instead of libraries and frameworks:

MyBundle.xcbundle/
  Info.plist
  ios-arm64/
    MyBundle.bundle
  ios-arm64_x86_64-simulator/
    MyBundle.bundle

This is going to be an internal container, not exposed to the public. When generating Xcode projects, we’d introduce the necessary script build phases to copy the right bundle based on the destination. This is something Xcode does automatically for XCFrameworks, but we’d have to do it manually for our own container.

This will most likely be worked on after we release Tuist 4, which is going to be a big milestone for us.

]]>
<![CDATA[Xcode doesn't have a native container for bundling resources for different platforms and architectures. But nothing stops us from creating our own for Tuist.]]>
Gifting OSS https://pepicrft.me/blog/2024/01/16/gifting-oss 2024-01-16T00:00:00+00:00 2024-01-16T00:00:00+00:00 <![CDATA[

I’ve been thinking a lot about Open Source Software (OSS) lately.

If you have read my work for a while, you might have noticed that I advocate for it. Many of us are introduced to the craft by building outside of work. It’s fun. If it goes well, and people use it, you can meet very talented builders. You also build public recognition through the work. Your OSS becomes your CV. This is net positive for you. I know some maintainers who, once they reached a point of recognition, moved on. OSS was the lever for their professional career.

You’ve probably built a community, and a sense of responsibility has grown within you. At this point, it’s common to hear among contributors: “I devote all this free time to the community.” But how much of that is pure altruism, and how much is self-interested altruism? I ask myself this question every day, because we humans have a natural inclination towards recognition and appreciation from others. So, it might just be an exchange of your time for that recognition, which feels great. But you are putting something limited on the table: your time and attention. So, you might end up stretching it too much, to the point of burning out, as is common in our industry.

Having reached the inflection point of being unable to satisfy community demand with limited resources, some projects decide to maintain their status and juggle all the demands by setting low expectations for responses. Being emotionally stable and good at prioritizing is key here. I’m quite impressed by projects like Homebrew and Fastlane that have managed to do this continuously. Still, I feel bad for the maintainers who need to take part of their free time for their project duties. I can’t help but think: it’s unfair.

Having reached that point, you can alternatively start thinking about ways of funding the project beyond donations, which, in my experience, is tricky. And depending on the nature of the project, you might have more or less trouble achieving this. For example, if your plan is to provide a cloud service, and the code is OSS, you are competing with cloud providers like Google and Amazon, who can implement a much better and cheaper solution faster (I learned recently that some developers call this being “Jeff’ed”). This happened to Redis, and a handful of other projects that had to leverage licenses to avoid exploitation by large tech companies.

I didn’t grasp the seriousness of this, especially seeing large tech companies so devoted to OSS these days, until I experienced it myself. There’s a lot of hypocrisy. Companies support OSS because it gives them a great reputation in the developer community, and they might end up getting free (gratis) layers of software to build upon. They design a supply chain where most of it is developers’ unpaid labor. Which business doesn’t like OSS when presented that way? Companies like Microsoft and Apple like being the ones creating and maintaining OSS layers, for example TypeScript, VSCode, and Swift, because it gives them a great reputation in the developer community, and they might also get some contributions for free. It’s a sort of OSS-washing. However, when it comes to contributing to other OSS projects that they heavily depend on, the story is completely different. They don’t feel a sense of responsibility towards the external OSS tools that they use. The model is perfect for companies.

So despite the importance of OSS in the world, with great examples like Linux, we haven’t figured out sustainability, and I doubt we will. We can approximate sustainability, but the closer we get, the further companies will move it. Because while you see it as a lifestyle, as a hobby, as a creative space, as a community, for them it’s just free building blocks. If a company like GitHub, which is in the position to deploy a frictionless solution to fund OSS, doesn’t, who else will?

There’s one last model, which I’ve been seeing a lot lately, and it’s the idea of doing OSS because “you are gifting something to the world.” DHH uses it a lot with every piece of OSS the Basecamp folks release. Many developers in Ruby do it too. It’s a model I’m starting to align more closely with. So rather than starting with an OSS project, you start with a business. You focus on making something that delivers value to society and that can financially sustain itself. And then, you extract the pieces that could be valuable for others. You extract layers of value to support others in their entrepreneurial journey. Rails is a good example of that. And you don’t necessarily need to set high expectations. Quite the opposite, indeed. You can put something out there for people to learn, fork, and extend, but they should not expect you to treat that as your first priority.

I wish a developer could make OSS sustainable without having to delve into the business side of things. But OSS is embedded in the tech industry, which is embedded in capitalism. So the sooner you learn about these dynamics, the better you’ll be able to navigate them. It took me years to realize and internalize this. But I’m glad I did.

Going forward, I’ll lean on the side of creating businesses that gift OSS bits to the community. And those businesses will inherit goodies from OSS like transparency, open communication, or extensibility. Those traits are not exclusive to OSS. And we have plenty of examples of closed-source businesses doing it beautifully.

]]>
<![CDATA[Navigating the dynamics of Open Source Software (OSS) and sustainability in the tech industry.]]>
Global state, CLIs, and test scalability https://pepicrft.me/blog/2024/01/15/global-state-clis-and-test-scalability 2024-01-15T00:00:00+00:00 2024-01-15T00:00:00+00:00 <![CDATA[

When you hear people talking about the goodness of functional programming, one of the things they mention is the lack of global state. State is introduced into the system and passed around, transformed as it flows through the different functions. Side effects are also avoided, and when they are necessary, they are isolated and explicit. The most obvious benefit of this approach is that it makes things more predictable and easier to reason about. Another, not-so-obvious benefit is that it eases scaling your test suite, and that’s the topic I want to talk about in this post.

When your program has global mutable state that’s shared across different parts of the system, the result of your tests might depend on the order in which they are executed. This might go unnoticed if the test runner has a deterministic order, but if it doesn’t, you might end up with flakiness that’s hard to debug and fix. The matter gets worse when you try to run the tests concurrently or in parallel (Difference between Concurrency and Parallelism). And this is something that happens sooner or later in the lifecycle of a project. You first run the tests sequentially on every commit. Then, as the test suite grows, you introduce concurrency or parallelism if the runtime allows it. But that comes with flakiness that’s not fun to deal with. You couldn’t anticipate it because you chose a programming language that allows global mutable state (e.g. Ruby). Eventually, you might consider selective test execution, but for that you need compiler-level knowledge that either the compiler doesn’t provide, or your programming language doesn’t have a compiler at all.

At Tuist we are at the point where we’d like to enable more parallelization, but we have some global mutable state that’s preventing us from doing so. At some point in the past, we decided to lean on the side of developer ergonomics over test scalability, and now we are paying that debt. For example, every module has a logger instance that it uses to output information to the console. For most projects, having a global instance might be fine. But for CLI tools like Tuist, it isn’t, because what we output and how we output it is part of the experience of Tuist, and therefore we test it. And what happens if multiple tests interact in parallel with a global logger that’s storing the logs to run assertions on them? We’ll run assertions against interleaved logs coming from multiple tests. Voilà, flakiness. As I talked about in the past, it’d be great if each test run had a unique ID that we could tie global state to, like it’s possible in Elixir, but unfortunately, we don’t have that luxury in Swift. What’s the solution then? Passing a logger instance down from the command to the deepest function that needs it. The challenge is to do so without adding too much noise to the codebase.

Another example of global mutable state is any system cache the CLI might need. For example, Tuist uses a global cache to serialize the compilation of manifests and speed up future command executions. Back when we implemented this feature, we added an API to customize the cache directory via an environment variable. We could use that API from our Cucumber-powered Ruby acceptance test suite to have a cache directory per test. However, since we moved the tests to Swift, using environment variables is no longer viable because all the tests run in the same process with the same environment variables. Once again, we need to pass that information down to the deepest function that needs it.

Since this is something we’ll have to do with multiple pieces of global state, I’m starting to think it might make sense to have a Context struct that we can pass around and that would act as an interface to what used to be global state. I’ll play with it and see how it goes. Ideally, this wouldn’t be necessary, and Apple’s new testing framework, in combination with actors, would solve it by assigning unique IDs to each test run, but I’m being too optimistic here. That uniqueness is what makes Erlang and Elixir special, and everything builds upon it. I doubt they’d introduce it just for the sake of solving this one isolated problem.
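
The Context idea can be sketched like this. This is a hypothetical illustration in Python (Tuist itself is written in Swift, and the names here are made up): each command or test run builds its own context, so parallel tests never observe each other’s logs or caches.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Per-invocation state that replaces globals: a log buffer and a cache path."""
    cache_directory: str = "/tmp/cache"
    logs: list = field(default_factory=list)

    def log(self, message: str) -> None:
        self.logs.append(message)

def generate_project(context: Context) -> None:
    # The context travels down to the deepest function that needs it,
    # instead of those functions reaching for a global logger or cache.
    context.log("Generating project...")
    context.log(f"Using cache at {context.cache_directory}")

# Two runs (e.g. two parallel tests) get fully isolated state:
a = Context(cache_directory="/tmp/a")
b = Context(cache_directory="/tmp/b")
generate_project(a)
generate_project(b)
```

Each run can now assert on its own logs without interleaving, which is exactly what the global logger made impossible.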

]]>
<![CDATA[In this post, I talk about how global state in CLIs can make your test suite flaky and how to solve it to scale your test suite.]]>
Deterministic tool versions across environments with Mise https://pepicrft.me/blog/2024/01/11/deterministic-tool-versions-across-envs 2024-01-11T00:00:00+00:00 2024-01-11T00:00:00+00:00 <![CDATA[

A common source of headaches in development teams is issues arising from different versions of tools being used across environments. The “it works for me” is often followed by a “Which version of X are you using?”. It happens because it’s the developers’ responsibility to manage their shell and the tools active in it through the $PATH environment variable, while CI environments, like GitHub Actions, offer their own building blocks to manage tools. For example, setup-ruby takes care of installing Ruby in the GitHub Actions environment. It follows the convention of using the .ruby-version file to indicate the Ruby version to use, but what if, locally, the developers use a version-management tool that doesn’t follow that convention? Or what if they decide to manage the installation globally through a tool like Homebrew?

This developer experience is far from ideal. Some organizations decide to ignore the problem, perhaps because they are not aware of it. Others like Shopify have an entire team dedicated to solving it, which comes at a huge cost for the business. What if you could solve the problem with little to no effort and cost? That’s what Mise is for. We are making it the default installation method for Tuist, and also evangelizing it to the Swift community, where project setups have multiple tooling requirements. For example, Ruby to run Fastlane, SwiftLint to lint the Swift code, or Sourcery to do meta-programming. Imagine managing these tools without a unified method that’s consistent throughout the local and CI environments…

How does it work? Easy. Once you’ve installed Mise and added the hook to your shell, you can create a .mise.toml file at the root of your project:

[tools]
tuist = "3.39.3"
swiftlint = "0.54.0"
sourcery = "2.1.3"
ruby = "3.3.0"

Then you run mise install and Mise will install and activate the right version of the tools. Note the “activate” part. Mise ensures the right version is activated when you are in that directory or any sub-directory. That prevents globally-installed versions from taking precedence over the ones specified in the .mise.toml file.

And what about CI? You can use Mise there too. If you are using a CI provider like GitHub Actions, the process is greatly simplified through a custom action, mise-action:

on:
  pull_request:
    branches:
      - main
  push:
    branches:
      - main
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: jdx/mise-action@v2

Mise is by far one of the best tools I’ve used for configuring and activating my environment for a project. It’s one of those tools that you install and forget about. @jdx, the creator of the tool and a developer very passionate about building terminal tools, has done a fantastic job of craftsmanship with it. I’d recommend giving it a try. You won’t regret it.

]]>
<![CDATA[Non-deterministic tool versions across environments is a common source of headaches in development teams. In this post, I share how Mise can help you solve it.]]>
I'm allergic to complexities https://pepicrft.me/blog/2024/01/08/allergic-to-complexities 2024-01-08T00:00:00+00:00 2024-01-08T00:00:00+00:00 <![CDATA[

Call me weird, but I’m allergic to complexities. There are software crafters who enjoy understanding and working with complexities in software. I only enjoy understanding them in order to conceptually compress them and build simpler experiences that are fun to work with. At the end of the day, it’s not fun when you want to deliver value to people and your tools, frameworks, and languages slap you in the face with complexity. Here are some examples of unnecessary complexities that I’m intentionally avoiding because they are not fun:

  • It’s not fun to replace Xcode projects and build system with new (more advanced) building blocks that break with every Xcode release. I look at you, Bazel.
  • It’s not fun when your shiny JavaScript-powered project fails to build after you do a minor version dependency update due to a cryptic error message that comes from ten layers of dependencies deep.
  • It’s not fun when you can’t debug your JavaScript code because it’s gone through a series of transformations and optimizations that make it impossible to understand. And it’s even less fun when you have to set up a system for your error-tracking solution to be able to help you with source maps.
  • It’s not fun when you can’t debug your issues because of the layers of abstractions and proprietary tech your project is building upon. I look at NextJS, Vercel, proprietary serverless runtimes, and Xcode.
  • It’s not fun when you have to read books and take courses to understand the architecture someone introduced in a codebase because they saw everyone in the community talking about it.
  • It’s not fun when the programming language that you use gets unnecessarily complex because it’s trying to be everything for everyone. I look at you, Swift.
  • It’s not fun when you have to build your own standard library because the language that you use, JavaScript, lacks one. And it’s even less fun when the people who built the packages required for that often jump from one project to another, leaving you with a broken project.
  • It’s not fun when there are thousands of ways to do the same thing, and you have to spend time learning them to understand the code that you are reading.
  • It’s not fun when the tooling that you use tries to be smart about how you want to use it and ends up being a black box that you can’t understand.
  • It’s not fun when your toolchain and code editor don’t integrate well and you have to spend time configuring it to make it work.

I believe the ecosystems with the healthiest communities are the ones that can spot the complexities and work together to simplify them. Rails is an excellent example of that. Erlang and Elixir have also done a great job at that. That’s the reason why I connect with them and enjoy working with them. Because I can stay focused on the problem that I’m trying to solve and not on the tools that I’m using to solve it. And more importantly, I do it with the relief of knowing that they haven’t abstracted a huge pile of complexity like JavaScript is obsessed with doing. Because knowing that makes me uncomfortable. That pile of complexity is going to bite me at some point, and I’m going to have to deal with it. Not for me.

]]>
<![CDATA[I like understanding complexities to simplify them. In this post, I share some examples of complexities that I'm intentionally avoiding because they are complex and therefore not fun.]]>
Starting therapy next week https://pepicrft.me/blog/2024/01/05/therapy 2024-01-05T00:00:00+00:00 2024-01-05T00:00:00+00:00 <![CDATA[

Next week I’m going to start mental therapy with a psychologist. It’s been 4 years since I last went to therapy, and I think it’s time to go back and do a regular checkup. 2023 was a year of many challenges in my life:

  • Shopify fired me in May along with many other colleagues, many of whom happened to live in a country, Germany, where people were organizing to unionize. This was a moment of disappointment for me towards a company that I admired and that I thought was doing things right. I felt lied to and betrayed. After this experience, I looked at other companies with a different lens and a lot of skepticism. Luckily I came across the Hacking Capitalism book, which taught me a lot about what no one teaches you about capitalism and how to navigate it.
  • My wife was fired from Shopify too. She’s a localization program manager, and she’s not as lucky as those of us still in the software industry. Everyone is questioning the value of localization due to AI advancements. She loves languages and would like to continue working in that space, but seeing her suffering and feeling lost is hard for me. It’s unfair, but once again, capitalism is not fair.
  • I decided to focus on Tuist full-time and turn it into a business. Some organizations received this transition well, while others didn’t. In particular, Bitrise, who was trying to profit from the work we’d done for years without contributing anything back, decided to publish a web page to rant about the project and me as one of the maintainers of the project. Once again, capitalism is in its purest form. If you do things altruistically, you are cool, but if you try to make a living out of it, you are a bad person. I’m not going to lie; it hurt me a lot. I felt like I was being bullied, and I didn’t know how to react. I felt powerless. Luckily, I was surrounded by many people, including my wife and Marek, who understood the situation and helped me to get through it.
  • I won’t lie, trying to turn Tuist into a business puts a question in my head every day: will it work? I have enough savings not to worry about money for a while, but I still can’t stop thinking about it. Impostor syndrome is growing inside me, and I don’t know how to stop it. I’m trying to be rational and think about the facts, but it’s hard. I used to have more confidence in myself, but since everything that happened at Shopify, I feel like I’m not good enough. Crazy, isn’t it? Even if it doesn’t work, I’m sure I’ll learn a lot, and I can always go back to a full-time job and help organizations that are struggling with their Xcode projects.
  • My mother and sisters are going through a rough time mentally, caused by a toxic relationship with the family business, which is dominated by patriarchy and slavery-like practices. I knew it would happen sooner or later, and I tried my best to prevent it, but there are many big elephants in my family’s room that no one talks about to avoid conflict. They prefer conflict avoidance, even knowing it will result in a much larger conflict later. My mother was the first one to break down, after more than 20 years of working without any salary adjustment, 7 days a week, non-stop. I’d like to build a source of income that lets me retire her and my father and put a hard stop to this situation. The part of the family that benefits from it, which includes my uncle and my grandmother, doesn’t want to see me around because I destabilize the whole setup. They don’t want me to speak up. This was a rough way to end the year.

Will 2024 look better? I don’t know. I hope. For now I’m going to talk to a psychologist about it and get some help to cope with these situations.

]]>
<![CDATA[2023 was a year of many challenges in my life. Next week I'm going to start mental therapy with a psychologist.]]>
Open-source and the imposter syndrome https://pepicrft.me/blog/2024/01/03/open-source-and-imposter-syndrome 2024-01-03T00:00:00+00:00 2024-01-03T00:00:00+00:00 <![CDATA[

These days, working on Tuist, I realize that I’m unable to stay on top of everything that’s happening in the project. The breadth of the project has grown so much that I lack the context to provide meaningful feedback on many pull requests. On one hand, it’s good, because it means there are areas of the project where people are more knowledgeable than me, so I can trust them to make the right decisions and focus elsewhere. However, I feel bad that I’m not able to provide the same level of feedback that I used to.

I’m starting to accept that this is part of the process of growing a project. At some point, you need to trust other people to make decisions, identify what your strengths are, and figure out how you can impact the project the most. In my case, that’s helping shape the vision for the project and fostering a community of contributors who feel empowered to make decisions. Trying to keep coding feels like being a manager who insists on writing code: you certainly can do it, but you’ll be more effective if you focus on the former. Otherwise, you risk being on the critical path of efforts that could be done by others.

I admire technologists like Tom Preston-Werner, who was able to build a successful company like GitHub, create and grow successful open-source projects like RedwoodJS, and balance all of that with his family life. It must take a lot of discipline and prioritization, which I’m still learning.

]]>
<![CDATA[In this post, I talk about how I'm feeling about my contributions to the open-source community.]]>
I'm sick https://pepicrft.me/blog/2023/12/28/i-m-sick 2023-12-28T00:00:00+00:00 2023-12-28T00:00:00+00:00 <![CDATA[

I’m sick. This has become a recurring event whenever I visit my hometown, Cieza, in the south of Spain. This time, the illness hit me particularly hard. I’m on the fourth day of a flu, still feverish and unable to leave the sofa. It’s a peculiar way to end the year, isn’t it? However, being confined to bed provides ample time for reflection on my connection with this city where I lived until I was 18, and where my parents still reside.

These reflections have led me to realize that many people in this town are also metaphorically ‘sick,’ due to an inherent toxicity that pervades every aspect of it. This toxicity makes me extremely uncomfortable during my visits, a sentiment that saddens me as my parents, despite their best efforts, cannot mitigate it.

To understand this toxicity’s origin, we must delve into the local culture. While my experience is specific to Cieza, I believe many aspects are reflective of the broader Spanish culture, which is complex and often surprising. It’s important to note that not everyone conforms to this pattern, but it’s common enough to be noticeable.

On the surface, everyone appears social and supportive. But upon closer inspection, it’s often superficial. There’s a lack of genuine empathy and trust. People constantly compare themselves to others and gossip incessantly, almost like a national sport. If you’re successful, envy ensues because they attribute your success to luck. If you’re struggling, they offer hollow condolences. Non-conformity to societal norms leads to judgment and subtle ostracism. For instance, my sister-in-law, who is vegan and lesbian, faces misunderstanding and indifference from many who are unwilling to challenge their preconceptions. If you don’t adhere to societal expectations and seek mental well-being, you must don a façade, a process both exhausting and mentally draining.

Whenever I encounter locals, I feel compelled to conform to their preconceptions about Germany and Spain. Challenging these views often leads to social rejection, and amusingly, they report back to my parents, as if to confirm my ‘Germanization.’ It’s infuriating.

Walking around, you often see people drinking large quantities of beer, possibly using alcohol as an escape. I think of the many who suffer because they can’t express their true selves. My mother, for instance, struggles with anxiety and stress, and fears the town’s judgment for seeking mental health treatment. Ironically, when I moved to Germany, she felt compelled to frame it as a deliberate choice, rather than a lack of local opportunities.

As my parent-in-law often says, Spain lies at the world’s edge, Murcia at Spain’s, and our town at Murcia’s. Being here feels like time-traveling to a bygone era, where societal expectations dictate major life decisions like marriage and parenthood, often at the expense of personal desires.

I’ve pondered whether this cultural aspect is recent, but Spanish literature reflects similar themes, from Lazarillo de Tormes (1554) to Miguel de Cervantes’ Don Quijote de La Mancha (1605) and Federico García Lorca’s La Casa de Bernarda Alba (1945). A line from the latter beautifully encapsulates my feelings about my town:

“This is how one must speak in this cursed town without a river, a town of wells, where one always drinks water fearing it’s poisoned.”

It symbolizes the pervasive mistrust and fear of public opinion.

So, being in Cieza means not being myself, which in itself makes me ‘sick.’ When locals advise us to return to Cieza, boasting of its unparalleled lifestyle, I’m reminded that explaining my connection with Berlin is futile.

And regarding health practices like wearing masks during flu season, there’s resistance since it’s not mandated. Unlike in Germany, where people understand and respect health guidelines, here it’s about following rules, not rationale.

Will this change? I’m uncertain. It requires a significant investment in education, something I don’t see happening. Friends starting their teaching careers are already witnessing discriminatory practices, a disheartening sign that impedes Spain’s progress. While there are aspects to celebrate, like our cuisine, they don’t overshadow the need for societal evolution.

If I ever return to Spain or Cieza, seeking therapy will be a priority to prepare for confronting a reality I now encounter only a few times a year.

]]>
<![CDATA[Cieza, my hometown, is a beautiful place, but it's also a toxic environment that makes me sick. In this post, I reflect on my connection with this city and the Spanish culture.]]>
Type safety, but at what cost? https://pepicrft.me/blog/2023/12/28/type-safety-at-which-cost 2023-12-28T00:00:00+00:00 2023-12-28T00:00:00+00:00 <![CDATA[

I’ve come across developers recently who are obsessed with type safety and leveraging the compiler to its fullest extent to catch errors. So much so that they are willing to sacrifice readability, compilation time, and maintainability for the sake of type safety. There’s no right or wrong here, but I’ve found myself on the opposite side of the spectrum.

When we decide to invest in type safety, we do so because we trust the compiler more than we trust ourselves. For example, if there’s a function named getUser(id: ID), one might be concerned about someone calling it with an ID that represents a post rather than a user. Sure, we could model our types so that user IDs and post IDs are distinct types (this is often called the “newtype” pattern), but aren’t the semantics of the function enough to convey its purpose and to trust that developers will use it correctly? I think so. Sure, there’s a chance that someone will call the function with the wrong type, but there’s also a chance that, even with the types right, your software won’t do what it’s supposed to do.
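To make the distinct-ID idea concrete, here’s a minimal sketch using branded types in TypeScript (the names and the string-based IDs are hypothetical, purely for illustration):

```typescript
// Hypothetical sketch: "branding" makes UserID and PostID incompatible
// at compile time, even though both are plain strings at runtime.
type UserID = string & { readonly __brand: "UserID" };
type PostID = string & { readonly __brand: "PostID" };

const userID = (raw: string): UserID => raw as UserID;
const postID = (raw: string): PostID => raw as PostID;

function getUser(id: UserID): string {
  // A real implementation would hit a database; here we just echo the ID.
  return "user:" + id;
}

const user = getUser(userID("42")); // compiles
// getUser(postID("7"));            // compile-time error: PostID is not a UserID
```

The safety is real, but so is the ceremony: every call site now has to wrap raw values, which is exactly the trade-off questioned above.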

What this obsession with type safety often leads to is over-engineering: teams spending endless hours discussing the best typing solution for a particular problem according to theory X or Y. It also worsens the onboarding experience for new developers, who have to learn a bespoke type system and its quirks before they can start contributing. Imagine being an engineering manager watching your team not deliver features because they are stuck in a discussion about types. Sure, one could argue that this is a long-term investment that will pay off in the future, but in software, where requirements change all the time, the time spent coming up with the perfect type system is often wasted because you’ll have to change it soon anyway.

Note I’m not saying you should not care about type safety. I’m saying we might sometimes get too obsessed with it without realizing the cost of it. This obsession will lead to a tax on your team’s productivity and happiness. I’ve seen it happen.

]]>
<![CDATA[Type safety, while important, can be overrated and lead to over-engineering. In this post, I share my thoughts on the topic.]]>
JavaScript-owned state and accessibility https://pepicrft.me/blog/2023/12/23/js-owned-state-and-accessibility 2023-12-23T00:00:00+00:00 2023-12-23T00:00:00+00:00 <![CDATA[

I’ve been following the Enduring CSS methodology for my projects for a while, including this blog, and a principle that stuck with me is the idea of using WAI-ARIA attributes to persist state. I never thought about it until I read the methodology: if we have HTML markup that represents semantics, and a set of attributes that represent state while making your website accessible, why would I store the state elsewhere?

This realization made me think about JavaScript and JavaScript web frameworks and their APIs to store state:

// React
import { useState } from "react";
const [state, setState] = useState(false);

// Vue
import { ref } from "vue";
const state = ref(0);

// Svelte
import { writable } from "svelte/store";
const state = writable(0);

// Solid
import { createSignal } from "solid-js";
const [state, setState] = createSignal(0);

They all share a common trait: they are convenient. But the convenience has a cost: it’s so convenient that developers naturally hoist state that’s representable by ARIA attributes into JavaScript. Instead of embracing the platform, they get distracted by the convenience of the layer in which they are working and end up with a solution that’s not as accessible as it could be. It’s not the framework’s fault. Not all state is representable by ARIA attributes or should be represented by them. However, I think it’d be great to include a reminder in the documentation of these frameworks to consider ARIA attributes before reaching for the framework’s APIs. Like Enduring CSS does:

“While the specification is aimed at helping communicate state and properties to users with disability (via assistive technology) it serves the need of a web application project architecture beautifully. Adopting this approach results in what is (perhaps cringingly) referred to as a ‘Win Win’ situation. We get to improve the accessibility of our web application, while also gaining a clearly defined, well considered lexicon for communicating the states we need in our application logic.” Enduring CSS - Chapter 6

I love how they put it. It’s a win-win situation. We should not forget that the web is a powerful platform and that despite how natural it is to get distracted by the convenience of the layer in which we are working, we are ultimately building for users, some of whom might appreciate that we are building directly on top of the platform.
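To sketch what that looks like in practice, here’s a minimal, framework-free example (the names are hypothetical, and a tiny structural type stands in for a DOM element) where aria-expanded itself is the single source of truth for a disclosure widget:

```typescript
// The ARIA attribute is the state store; JavaScript only reads and writes it.
interface AttrHost {
  getAttribute(name: string): string | null;
  setAttribute(name: string, value: string): void;
}

function isExpanded(el: AttrHost): boolean {
  return el.getAttribute("aria-expanded") === "true";
}

function toggleDisclosure(trigger: AttrHost): boolean {
  const next = !isExpanded(trigger);
  trigger.setAttribute("aria-expanded", String(next));
  return next;
}
```

In a real page, trigger would be the button controlling the panel, and a stylesheet rule like [aria-expanded="false"] + .panel { display: none } would render the state, so assistive technology and the visuals read from the same place.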

]]>
<![CDATA[WAI-ARIA attributes are a great layer to persist application state and make the website accessible. However, the convenience of JavaScript APIs to store state makes them store all the state in JavaScript, making the websites less accessible.]]>
What if XCTest had a concept akin to Elixir's processes? https://pepicrft.me/blog/2023/12/20/elixir-processes-testing 2023-12-20T00:00:00+00:00 2023-12-20T00:00:00+00:00 <![CDATA[

If you’ve been reading this blog for a while, you might know that I’ve been diving into Elixir lately. I like learning about other languages and technologies because I can cross-pollinate ideas and apply them to my day-to-day work on Tuist.

What I find fascinating about Elixir, and the Erlang runtime it’s built on, is that their model of the world makes a whole class of problems disappear: problems for which ecosystems like JavaScript, Swift, or Ruby have had to build entire sets of tools. There’s no quote that summarizes this better than the one from Robert Virding, the co-creator of Erlang:

“Any sufficiently complicated concurrent program in another language contains an ad hoc informally-specified bug-ridden slow implementation of half of Erlang.”

But how does it achieve that? I think the reason is rooted in their concept of processes. Everything is either a process or builds on the concept of processes.

Processes affect how you model your programs in many ways, but the one that got my attention is how easily contracts can be mocked without worsening the design of the code. In Swift, mocking very likely means introducing a protocol and using dependency injection to inject the mock. This is fine, and the ergonomics improved recently with the introduction of Swift Macros. Still, if you are writing integration tests, which in the case of Tuist deliver more value than unit tests, you’ll have to pass the mock down to the deepest layer of your code. And that makes all the interfaces unnecessarily verbose.

Let’s look at a concrete example. We use swift-log for logging in Tuist. Seeing this piece of code makes me skeptical:

LoggingSystem.bootstrap(MyLogHandler.init)

The library uses global internal state to configure the logging system. This is fine as long as it’s thread-safe (which I assume it is) and you don’t want to run test assertions against the logs from tests that run in parallel (which we do). There’s an alternative: you can create an instance and pass it down to the layers that need it. But again, that hurts the ergonomics of the code.

let logger = Logger(label: "me.pepicrft.Logger")
doSomething(logger: logger)

Can’t we have the best of both worlds? And that’s something that Elixir solves beautifully with processes and that I wish XCTest would eventually adopt.

Every test in Elixir is a process. And processes have a unique ID. That process is known by the test logic and also by the code that’s being tested, regardless of how deep it is in the call stack. What that allows is associating a mock with a particular test process. Let’s look at an example using the Mimic mocking library:

# test_helper.exs
Mimic.copy(Calculator)

# calculator_test.exs
use ExUnit.Case, async: true
use Mimic

test "invokes mult once and add twice" do
  Calculator
  |> stub(:add, fn _x, _y -> :stub end)
  |> expect(:add, fn x, y -> x + y end)
  |> expect(:mult, fn x, y -> x * y end)

  assert Calculator.add(2, 3) == 5
  assert Calculator.mult(2, 3) == 6

  assert Calculator.add(2, 3) == :stub
end

Note how no dependency injection is needed. The stub and expect calls only affect the logic in that particular test process, and you can safely run all the tests in parallel without worrying about the state of the mocks leaking between them and causing flakiness.

Will this ever happen in XCTest? I don’t think so. XCTest would need to assign a unique ID to every test and expose it to the code that’s being tested. Perhaps through some compile-time magic that’s only available when running tests.

Note that the ergonomics might be improvable through dependency injection solutions, but I’m not a big fan of improving ergonomics at the expense of making things obscure and introducing a dependency on a third-party library. The trade-off is not worth it.

Erlang, you are so cool.

]]>
<![CDATA[Erlang processes are a powerful concept that allows you to mock dependencies without introducing dependency injection. In this post, I share my thoughts on how XCTest could adopt a similar concept.]]>
Learning to love the problem and not the solution https://pepicrft.me/blog/2023/12/19/love-problem-not-solution 2023-12-19T00:00:00+00:00 2023-12-19T00:00:00+00:00 <![CDATA[

Have you ever gotten too attached to a solution instead of the problem itself? This didn’t happen to me earlier in my career, but as I advanced and got excited about other technologies and approaches, it started to happen more often. It’s not the case with Tuist, where the technology required to solve the problem is set: Swift and Xcode. But as I poured some spare thinking into Glossia, I became more indecisive about the right tech to use.

To illustrate this, let me share my most recent experience. When I browse the web, I notice that many websites built with JavaScript frameworks are well designed. Today, in particular, I was amazed by how well-designed the websites made by the folks behind NuxtJS and VueJS are: Elk, NuxtUI, Volta, VitePress. Can you guess my thinking after seeing them? Is there a cause-and-effect relationship between the technology and the design? Or did developers with good taste happen to meet in those communities? Would a technology like NuxtJS give Glossia a strong connection with gorgeous design and a first-class experience? Should I reconsider the decision to use Elixir while it’s still early in the lifetime of the project?

These are the questions that I mentally navigate when I’m exposed to new technologies. And it’s draining. I have to make an effort to look at things with enough perspective to realize that:

  • Technology is an implementation detail. You can achieve a great design with any.
  • Obsessing over the technology is a way to procrastinate on the problem.
  • The problem is the most important thing. The solution is just a means to an end.
  • Other factors are equally important when choosing a technology, not just the design.

I want to change my mindset to stop doing this because it’s fatiguing. When I look around and see people that I admire, I see that they are focused on the problem and not the solution. They pick a technology that clicks with them and they stick with it. One of them is @dhh. He loves Ruby, and he is taking Ruby everywhere. @levelsio does the same with PHP. There are many examples in the Swift community too, like John Sundell.

Note that I don’t want to isolate myself from other technologies. I learn a lot from them, and that cross-pollinates my thinking. However, I need to engage with them just enough to learn without getting distracted. That’s where the challenge lies.

Have you experienced something similar?

]]>
<![CDATA[We engineers are vulnerable to getting attached to solutions instead of problems. In this post, I share my experience with this and how I'm trying to change it.]]>
Implicitness in Xcode and SPM. Why Apple? https://pepicrft.me/blog/2023/12/19/xcode-implicit-dependencies 2023-12-19T00:00:00+00:00 2023-12-19T00:00:00+00:00 <![CDATA[

Since the beginning of Tuist, one of the principles we’ve embraced is that the dependency graph should be explicit. We encouraged this through the Swift DSL, but we couldn’t enforce it because Xcode and its build system allow implicit dependencies.

Why explicitness, you might wonder? When the graph is explicit and known upfront, you can validate it, reason about it, and optimize it. When it’s not, you can’t do those things. Or you rely on a closed-source build system that you don’t control to do them for you, and if things go wrong, you need to file a radar and wait for Apple to fix it.

The worst part is that Apple is not reversing this trend. Instead, they are building on it, and the Swift Package Manager is inheriting the same problems. For example, they realized that having package products define the static or dynamic nature of a library was a mistake that could lead to duplicated symbols. How did they solve it? With a new linking option, automatic, that’s resolved at build time. Why would you want a build system to make these important decisions for you? Your graph is yours. Tuist gives you agency over the graph while hiding the complexities that are not relevant to you, like the fact that some dynamic frameworks might need to be copied into the app bundle.

One particularly annoying issue, which we come across often in Tuist’s codebase, is being able to import dependencies that are not explicitly declared. The issue is more apparent now because we optimize our workflows with binary caching. It happens because the directory into which Xcode outputs all the project target products, DerivedData/{project}-{hash}/Build/Products/{config}, is visible to the other targets in the project. In other words, they can import products without declaring them as dependencies.

For example, say you have a simple dependency graph A -> B -> C, where A (an app) depends on B (a framework), and B depends on C (a framework). Doing import C from A will work if we build the project through a scheme with “Find implicit dependencies” enabled, or if C has been previously compiled, for example via another scheme. What’s particularly annoying is that the issue might only surface on CI, or weeks later, after CI workflows have been accidentally working all along.

This is an itch I’d like to scratch for Tuist and every Tuist user. I doubt Apple will do anything, but I might be wrong. They are in a tricky spot because any step to remove implicitness would very likely break existing projects. It’s the same for Tuist, but we can work closely with our users to define a migration path toward an explicit dependency graph. I believe that once we do, things will get more deterministic and stable for everyone. Most teams won’t notice it, because they don’t have a metric for how often implicitness caused headaches, but I bet it’s more often than we think.

We hope to ship it as part of Tuist 4.0. Stay tuned!

]]>
<![CDATA[Apple embraced implicitness in some areas of the build system, and it's causing headaches to developers. In this post, I share my thoughts on the topic and how we are planning to address it in Tuist.]]>
What I expect from a knowledge management app https://pepicrft.me/blog/2023/12/18/perfect-knowledge-management-app 2023-12-18T00:00:00+00:00 2023-12-18T00:00:00+00:00 <![CDATA[

I still haven’t found the perfect knowledge management app. Logseq’s outline format doesn’t quite click with me: I end up creating a log of useless blocks that I never revisit or link from other blocks. Obsidian’s longer text format is better. It makes me think thoroughly about what I’m writing and give it a structure. However, it misses some things that I consider essential:

  • An app that’s native to Apple platforms. Not native in the sense that it compiles to native. But native in the sense that it embraces the platform patterns and capabilities and doesn’t try to fit web patterns into the platform.
  • Auto-linking of notes. We’ve seen a spread of technologies like embeddings that can be used to calculate semantic similarity between texts. Imagine using that for suggesting links between notes.
  • Inbox for ideas to process. Sometimes I’m running and I’ve got an idea that I’d like to jot down and process later into a longer note. I’d like to just press a button, record the idea, and have it transcribed into text. Or share content that I find on the web with the app for later processing.
  • A standard structure that’s documented to allow users to port their notes and foster an ecosystem of apps.
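The auto-linking idea above is, at its core, simple. Here’s a minimal sketch (toy two-dimensional vectors and hypothetical names; a real app would use a proper embedding model) of suggesting links between notes whose embeddings are similar:

```typescript
// Sketch: rank note pairs by cosine similarity of their embeddings and
// suggest a link for every pair above a threshold.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function suggestLinks(
  notes: { [title: string]: number[] },
  threshold = 0.8
): Array<[string, string]> {
  const titles = Object.keys(notes);
  const pairs: Array<[string, string]> = [];
  for (let i = 0; i < titles.length; i++) {
    for (let j = i + 1; j < titles.length; j++) {
      if (cosine(notes[titles[i]], notes[titles[j]]) >= threshold) {
        pairs.push([titles[i], titles[j]]);
      }
    }
  }
  return pairs;
}
```

With real embeddings the vectors would have hundreds of dimensions, and you’d likely use an approximate nearest-neighbor index instead of the quadratic loop, but the principle is the same.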

The itch is becoming too itchy, so I don’t know if I’ll be able to resist the temptation of building it myself as a hobby. Once we ship Tuist 4 and Tuist Cloud and have the business running, of course.

]]>
<![CDATA[In this post, I share what I expect from a knowledge management app.]]>
Swift Packages default to supporting all platforms https://pepicrft.me/blog/2023/12/13/packages-default-platforms 2023-12-13T00:00:00+00:00 2023-12-13T00:00:00+00:00 <![CDATA[

Did you know that Swift Packages default to supporting all platforms when they don’t specify any? We, at Tuist, didn’t know either until we added support recently for multi-platform targets. Tuist integrates Swift Packages as Xcode project targets that give users more control over them and allow optimizations. By the way, kudos to Mike Simons for the amazing work on this one.

Because Swift Package targets now have multiple platforms, we included the platforms that the Swift Package Manager indicates are supported. And that works, until it turns out that a given package doesn’t actually support a platform it seemingly supports (at least according to the defaults). In that case, Xcode fails to compile the target. To overcome the issue and unblock users, we started adding code like if 'Firebase'… but that would not scale, so we needed to do something about it.

Because Tuist knows the dependency graph, we cascade the platform of the project targets down to the dependencies. For example, if the upstream targets compile for iOS, we can narrow down the list of supported platforms for dependencies to iOS. I believe this is something that Xcode must be doing under the hood. The difference with Xcode is that they do it at build time and we do it at generation time.
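That cascade can be sketched as a simple graph walk (hypothetical names, not Tuist’s actual code): each dependency’s declared platforms are narrowed to the intersection with its dependents’ platforms, unioned across all the paths that reach it:

```typescript
// Sketch: narrow each target's supported platforms to those its dependents
// actually build for, walking the dependency graph from the roots down.
type Graph = { [name: string]: { platforms: string[]; deps: string[] } };

function narrowPlatforms(graph: Graph, roots: string[]): { [name: string]: string[] } {
  const narrowed: { [name: string]: string[] } = {};

  function visit(name: string, inherited: string[]): void {
    const node = graph[name];
    // Keep only the platforms that are both declared and required upstream.
    const effective = node.platforms.filter((p) => inherited.indexOf(p) !== -1);
    const current = narrowed[name] || [];
    effective.forEach((p) => {
      if (current.indexOf(p) === -1) current.push(p); // union across dependents
    });
    narrowed[name] = current;
    node.deps.forEach((dep) => visit(dep, effective));
  }

  roots.forEach((root) => visit(root, graph[root].platforms));
  return narrowed;
}
```

With a graph like App (iOS) -> Firebase (iOS, macOS, tvOS), the walk narrows Firebase down to iOS only, which is the behavior described above.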

This shows that assuming a package supports all platforms by default was not a good idea, especially since supporting multiple platforms is not trivial due to inconsistent APIs across Apple platforms and others like Linux. This will hopefully change with the new era of Foundation, but until then, Xcode and tooling like Tuist will have to deal with this decision forever.

Please, if you are a package maintainer, be very explicit about the platforms you support and validate that through CI. You’ll be doing a great favor to the community and to the tooling that depends on your package.

]]>
<![CDATA[Swift Package Manager defaults to supporting all platforms when they don't specify any. This is a problem for tooling like Tuist that integrates Swift Packages as Xcode project targets.]]>
Swinging back to positivity https://pepicrft.me/blog/2023/12/13/swinging-back-to-positivity 2023-12-13T00:00:00+00:00 2023-12-13T00:00:00+00:00 <![CDATA[

I’m a person who tends to look at things through a positive lens. Or at least, I used to be. 2023 was a bit of a traumatic year for me professionally, and it led me to a negative mindset full of disillusionment.

The first of those events was the Shopify layoffs. For someone who had experienced layoffs before and had faced the realities of how businesses operate, it shouldn’t have been a big deal. But it was for me. I over-committed to the company and the people I worked with, believing I was on a long-term, successful journey there. That’s what they told me, and that’s what I believed. But then one day, you are faced with reality. In my case, I was part of the German workforce that supported the unionization efforts. Workers understanding their rights: what could be wrong with that? Everything. Techno-optimists and builders, as they like to call themselves, have no patience for anything that hinders their path to wealth and power. If something gets in their way, they’ll get rid of it, throwing money at it if needed.

That reality shock led me to trace back everything that happened during my time there, and I could see a different angle to the whole story: from the removal of the word ‘dropshipping’ from everywhere after having benefited from it for years, through the support of crypto and NFTs, to the green-washing of the company’s activities. It was all a big lie. It felt like a toxic relationship that’s hard to escape and whose big picture you can only see once you are out of it. There’s professional growth, but at what cost? I was let go with a mental exhaustion that I’m still recovering from. I couldn’t look at other companies without wondering: will they be the same? I started to see patterns everywhere, like having a name for the family (e.g. “shopifolks”), or people continuously talking about how the company changed their lives for the better. I had huge respect for Tobi and his ideas, but I’m having a hard time buying into them now.

I felt so relieved when the layoffs happened, but you can’t get rid of the trauma that easily. It comes back to you here and there. You try to look forward and stay positive, but you have this feeling that you can’t trust anyone anymore. Has it happened to you? My escape was to focus on my projects, and in particular Tuist. We had built a healthy community and project in which we could continue to invest, something we could be proud of and that people would be inspired to use and contribute to. I threw myself into the project, working full-time every day. I built a new website, put a plan in place for the rest of the year and started working on it, and continued to help more companies get onboarded. People loved the project, and that love fueled us to keep going toward a point of sustainability where we could work on it full-time.

But in that process, we made the project so financially attractive that a company like Bitrise thought it was a great idea to wrap it into their product without contributing anything back. And here comes the second trauma. We had to make a quick decision to prevent them from hurting what had taken us six years to build. The result? A website telling everyone how terrible choosing Tuist was, explicitly naming me as a bad maintainer of the project. As if Shopify’s micro-trauma weren’t enough, there was another one to add to the list. Luckily, I’m surrounded by great humans, among them my lovely wife, María José, my partner in Tuist and Tuist Cloud, Marek, and the Tuist community itself. They know me well, they know my values, and they supported me through this emotionally difficult time. Still, like Shopify, it’s the kind of thing you can’t get rid of easily.

I started feeling disillusioned with the often cruel reality of the tech industry. This disillusionment bred a lot of negativity in me, which often came out as public criticism, as if I could change anything by doing so.

But I’m working on changing that. Criticism does more harm than good to me. If there’s one thing that has always been there, motivating me to keep going and inspiring me to build things, it’s having a community of people I can work with on solving exciting problems. And that’s something that open source provides me with. I’m a very community-oriented person. Building in isolation with profit as the only goal is not something that I enjoy. However, doing it sustainably is something to keep an eye on (thanks, Bitrise, for the lesson), and that’s something that I’m working on. Open source changed me. I met many wonderful and talented people, some of whom became friends. I see on Mastodon what open-source communities are capable of, and I come away inspired. When I’m in these communities, I feel I’m in a safe place and very positive. It’s when I get closer to the tech industry in its purest form, which is often on Twitter (X), that I start to feel negative. Perhaps the algorithms are contributing to that.

The micro-traumas of 2023 bred negativity in me, but I’m finding my way out of it thanks to open source and its communities. I’ll stop criticizing and start building things that inspire positivity. 2024 is going to be an exciting year.

]]>
<![CDATA[2023 was a year full of micro-traumas that led me to a negative mindset. In this blog post, I talk about how I'm working on swinging back to positivity.]]>
Open-sourcing the lightning_css Elixir package https://pepicrft.me/blog/2023/12/11/open-sourcing-lightning-css-hex-package 2023-12-11T00:00:00+00:00 2023-12-11T00:00:00+00:00 <![CDATA[

I’ve been reading a lot and connecting with the idea of embracing the web platform and building right on it, without layers and layers of abstractions that make it harder to understand what’s going on under the hood, and that make the software less future-proof. For instance, I got rid of Tailwind from this website.

When it comes to styling, I’ve been following the Enduring CSS methodology. It provides answers for many challenges that CSS presents and embraces accessibility attributes to hold state, leading to more accessible websites. Nevertheless, one challenge that I found, in particular in the context of Elixir and Phoenix, is that it’s hard to follow the directory convention that they propose to keep styles close to the components. Doing so requires a tool that can resolve glob patterns to locate CSS files and bundle them into an output CSS bundle, and the asset pipeline tool that Phoenix projects use by default, powered by ESBuild, doesn’t provide that functionality for CSS.

Luckily, there’s a solution for that in the community: Lightning CSS. Yet integrating it into Elixir projects requires some tedious plumbing. Because I wanted to overcome that, and also scratch the itch of building an Elixir package, I built and open-sourced lightning_css. The interface is similar to the one that Elixir’s esbuild package proposes. You first configure profiles in your Mix project’s configuration file:

config :lightning_css,
  version: "1.22.1",
  default: [
    args: ~w(assets/css/app.css --bundle --output-file=priv/static/styles/bundle.css),
    watch_files: "assets/css/**/*.css",
    cd: Path.expand("..", __DIR__),
    env: %{"NODE_PATH" => Path.expand("../deps", __DIR__)}
  ]

And then you can invoke it right from the terminal using the Mix CLI, passing the profile: mix lightning_css default. In the case of Phoenix projects, you need some additional steps to integrate it into Phoenix’s tasks and watchers.
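As a sketch, the Phoenix watcher integration could mirror how the esbuild package hooks into the endpoint configuration. The module and function names below are assumptions based on that convention, not a confirmed lightning_css API:

```elixir
# config/dev.exs — hypothetical watcher wiring, mirroring the esbuild
# package's convention. Module/function names are assumptions.
config :my_app, MyAppWeb.Endpoint,
  watchers: [
    # Re-run the :default profile whenever the watched CSS files change
    css: {LightningCSS, :install_and_run, [:default, ~w(--watch)]}
  ]
```

With something like this in place, starting the Phoenix dev server would rebuild the CSS bundle on every change to the files matched by the profile’s watch_files glob.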

I still need to add some tests, which is another itch I want to scratch: getting familiar with Elixir’s testing framework. However, I’ll leave that for the near future. I’ll adjust this website to make use of glob patterns, and start using it in Glossia, in which I plan to continue investing early next year.

]]>
<![CDATA[In this blog post I talk about the motivations that led me to build and open-source lightning_css, an Elixir package to bring a more advanced CSS bundler to Elixir and Phoenix projects.]]>
On mental health https://pepicrft.me/blog/2023/12/07/mental-health-journal 2023-12-07T00:00:00+00:00 2023-12-07T00:00:00+00:00 <![CDATA[

Today I had one of those days of feeling down without being able to pinpoint the exact reason. We are back from attending a conference in Buenos Aires, Swiftable, and I found my mother suffering from anxiety attacks due to recent traumatic events. They’re manifesting as back pain, and she is trying to convince herself that it’s a physical problem, so I’m trying to convince her of the importance of seeing a psychologist. This is affecting me a lot, and I’m trying to stay strong for her.

Tuist is also draining me a lot. I’m excited about everything that’s ahead, but the closer we get to the release, the further I feel from the finish line. And because I continue to work on it full-time without a salary, it’s weighing on my mind. Not that I don’t have savings, but not having a source of income makes me mentally uncomfortable. And the recent Bitrise events didn’t help. Someone who has been in business for longer might think this is normal, but I’m not yet emotionally prepared to deal with these kinds of competitive moves. I might go to therapy when I’m back in Berlin to learn how to deal with this.

Besides all of this, I think my relationship with social networks is not helping. I think it’s important to continue to be present on X and Mastodon because that’s where the users of Tuist are. But at the same time, it’s a huge addition to my list of responsibilities in the project: development, support, community engagement… Also, being on X and Mastodon triggers many streams of thought. Is that positive? It is; I get many ideas from it. But if I don’t limit the bandwidth I dedicate to it, it can be overwhelming. I get ideas that I’d like to play with, but I don’t have the time for them. I can go from excitement to exhaustion from one day to the next. When I’m exhausted, my natural reaction is to disconnect from everything.

I’m also emotionally processing some surprising realities that I learned about the tech industry by accident. One of them is everything that happened at Shopify. It was in May, but it continues to be a recurring thought in my mind. I feel lied to, like a chip in a game of billionaires. I worked hard towards some professional growth that was all a story to get me to overcommit to the company. I know it won’t happen again, but I can’t avoid feeling bad about having fallen into that trap. The other is the exploitable nature of open source, which I suffered through Tuist. Because open source exists within the tech industry, and the tech industry exists within capitalism, where companies have no obligation to give back to society, they treat open source as a free resource that they can exploit. Companies can go as far as telling everyone that you are a bad person because you are taking steps to protect yourself from that exploitation. This is tough to process emotionally.

There might be other factors contributing to these emotional swings, but those are the ones I’m the most aware of. I suffer from mental health breakdowns from time to time, and I think it’s important to share them. When I feel down, I take a break from everything and use the space to reflect on what’s happening. I also make sure exercising is part of my routine. Even though I don’t run as often as I used to, casual runs help me clear my mind.

]]>
<![CDATA[In this blog post I open myself about recent mental breakdowns and how I'm dealing with them.]]>
3 package managers + 2 build tools = One big mess https://pepicrft.me/blog/2023/12/04/xcode-mess 2023-12-04T00:00:00+00:00 2023-12-04T00:00:00+00:00 <![CDATA[

As you know, I’ve dedicated the past 6 years to overcoming the challenges of using Xcode at scale. That was the theme of my recent talk at Swiftable, From challenge to joy: My journey developing Tuist for scalable Xcode Projects. The more I dive into this topic, work on it, and talk with developers about it, the more I realize what a difficult spot Apple finds itself in.

Xcode is built on the Xcode build system, which works with Xcode project files. As the environment changed and things became more complex, Xcode project files were stretched beyond their limits and presented developers with a lot of challenges. Some examples are frequent Git conflicts, implicitly resolved invalid dependency imports, and frequent clean builds that waste developers’ time. This is the day-to-day of many developers, or at least it was and is mine when I use Xcode with a modular project.

Apple failed to evolve the project files and the build system that builds on them.

They left the community with only one API to overcome these challenges: project generation. This is not great and says a lot about the foundation. CocoaPods pioneered this approach, and then project generators like XcodeGen and Tuist followed. Most recently, developers started to use the Swift Package Manager, which Apple integrated tightly with Xcode, to overcome the most pressing challenge: frequent Git conflicts.

But the matter has gotten worse. We now have a fragmentation of dependency management solutions, and many organizations are struggling to move away from CocoaPods because it provides a level of extensibility and configurability that the Swift Package Manager doesn’t. The same implicitness that Xcode had, and that caused many problems for developers, is now making its way into the Swift Package Manager, like the “automatic” linking mode that Swift packages now have. Since when is it the build system’s responsibility to decide how a dependency is linked? Optimizations? Don’t even think about them; those are Apple’s responsibility. If the integration with the Swift Package Manager is suboptimal, you have to wait for Apple to fix it. I’m sorry, but this is just too bad.
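For context, the linking ambiguity shows up right in the package manifest. A minimal example (the package and target names are illustrative):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyLibrary",
    products: [
        // No `type:` specified, so the build system decides between static
        // and dynamic linking "automatically", the implicit behavior in question.
        .library(name: "MyLibrary", targets: ["MyLibrary"]),
        // Being explicit is possible, but then you own that decision:
        // .library(name: "MyLibrary", type: .dynamic, targets: ["MyLibrary"]),
    ],
    targets: [
        .target(name: "MyLibrary")
    ]
)
```

Leaving `type:` out delegates a consequential decision (binary size, launch time, deduplication of dependencies) to the build system, which is exactly the kind of implicitness that makes large projects hard to reason about.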

Apple built a package manager and faced a community using it as a project manager because they were tired of Xcode project issues.

Apple is now in a difficult spot, but one they could move out of with support from the community and a clear vision. I think they’ve reached a point where they need to go back to first principles and evaluate the foundations of the platform. Is it time to evolve everything around the Xcode project files? I think so. It’s time for the build system to become something closer to what Gradle or Bazel are: a build system that’s deterministic, configurable, optimizable, extensible, and, overall, easy to reason about. If convenience for people getting started is a concern, they can always build a layer of convenience on top of low-level primitives. Android has done that with Gradle: you drop in a plugin, and it takes care of everything for you. And if you need to, you can peel away layers of complexity.

But there are no signs of that happening. Chatting with developers at the Swiftable conference, I noticed that developers are more confused than ever. They want to use the official tools, but while doing so they realize that those tools add more complexity and challenges to their problems. It’s positive for Tuist, because it’s a huge opportunity for us to help them, but I feel really bad for those development environments that are not fun to work in. I was in one of them, and you know what leadership decided to do? Move away from native development to React Native, simply because leadership couldn’t accept having to wait an entire year hoping Apple would fix the productivity problems. It’s bizarre when looked at from the productivity angle.

I now put a Project.swift next to every package and in every new project that I create. Why? Because the productivity levels that Tuist provides are unmatched. If you haven’t tried it, I encourage you to do so. You can clone the Tuist repository and play with it. Hopefully, one day we won’t need project generation anymore and can move our optimizations into a more sophisticated build system. But until then, project generation is our answer, and it works damn well.

]]>
<![CDATA[I shared a bit of a reflection on what are the issues with current Apple's tooling touching on some of the points that I presented in my Swiftable 2023 talk.]]>
Peeling layers https://pepicrft.me/blog/2023/11/21/peeling-layers 2023-11-21T00:00:00+00:00 2023-11-21T00:00:00+00:00 <![CDATA[

If you’ve read the content in this blog, you might have noticed that I have little experience building for the web. The few interactions that I had with it were geared towards putting my personal blog or Tuist‘s website on the Internet. The process often goes like this:

  • Find a template that I like on the Internet.
  • Learn about the framework the template is powered with (often JavaScript-based).
  • Adjust content and structure accordingly.
  • Publish.

Did you know that this website has been implemented with technologies like Gatsby, NextJS, Jekyll, Rust, and Phoenix?

Sometimes the technology decision was driven by a technology I was getting excited about, as is the case with the current iteration of this website, which is implemented with Phoenix and Elixir. Other times, it was driven by the template I’d found; Tuist’s template, for instance, drove the decision to use Astro. Playing with all these different technologies is a lot of fun. Nonetheless, it seems to come at the cost of making projects less future-proof, due in part to the high stack of layers (abstractions and tools) they build upon. We make the web, a platform that’s designed to be backwards-compatible, feel quite the opposite: a platform that breaks more often than not.

I get that abstractions and tooling are necessary, for example to generate HTML pages at build or run time from templates and content. However, I can’t help but wonder whether we might sometimes be going too high up the stack. It’s a question I find particularly interesting, and in fact one of the reasons why I like following the Phoenix and Ruby on Rails ecosystems closely: their love for the web as a platform makes them question every abstraction that arises. I find it thought-provoking how the pendulum has swung from the server to the client with SPAs, and recently back to the server, but with the additional legacy accumulated in the swinging. React Server Components are a good example of that: they are back on the server, but with a vast list of NPM packages of components that make assumptions about where they’re rendered. It feels wrong. Another example is Tailwind, which has gone from being a tool to being a layer on which entire design systems and templates are built. All you want is to add a design system to your project, and then you find yourself having to integrate the Tailwind toolchain into your stack. Why?

Some organizations and developers might like that working setup. I completely understand it. But I decided that I’m doing the opposite. I’m diving deep into understanding the web platform to peel layers of abstractions and build projects that are more future-proof and easier to maintain. Recent conversations with Marek, about the approach we’d take for Tuist Cloud, inspired me to embrace this philosophy. Here’s a list of principles and ideas that I embraced in this website and that I’ll embrace going forward:

  • Only build tools that are strictly necessary: For example, no ESBuild and no Tailwind. Recent standards and browser capabilities reduce the list of scenarios for which you needed build tools.
  • Plain CSS: There isn’t really a need to learn a new layer of semantics that fill HTML elements with endless lists of utility classes like the ones that Tailwind proposes. CSS has evolved a lot and provides solutions for the problems that led to the emergence of utility-class frameworks like Tailwind. CSS is the abstraction itself, and I can achieve consistency via CSS variables, some of which I can take from the wonderful Open Props.
  • Web components: Web components are supported by evergreen browsers. If you create a web component, you can rest assured that it’ll work forever. The same is not true with bloated JS rendering technologies, where you might need to allocate time in a year or two to adopt a new framework, because the technology says it’s the recommended way to develop with it, or to update your components to adapt to breaking changes.
  • Embrace HTML semantics: I used to be that developer who <div>‘ed everything, making my websites very inaccessible. Now, the first thing I do is try to get the semantics right while keeping the design aside. Then I bring the design into the equation, using ARIA attributes to store state, as suggested in the Enduring CSS methodology.
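A minimal sketch of the ARIA-as-state idea (the class names and the disclosure example are illustrative, not taken from this website):

```html
<!-- A disclosure button whose open/closed state lives in aria-expanded,
     so assistive technology and CSS read the same source of truth. -->
<button class="menu-Trigger" aria-expanded="false">Menu</button>
<nav class="menu-Panel" hidden>…</nav>

<style>
  /* Style off the ARIA attribute instead of an .is-open utility class */
  .menu-Trigger[aria-expanded="true"] {
    background: var(--accent, #224466);
    color: white;
  }
</style>
```

Because the state is stored where screen readers already look for it, the styling can never drift out of sync with the accessibility tree.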

The new iteration of this website no longer depends on ESBuild or Tailwind, and uses web standards. Note that the design is simple by design, not as a byproduct of simplifying the underlying stack. The only dependency is Elixir and Phoenix, which are responsible for running the HTTP server that serves the website. Could I make it more future-proof by scripting something myself in JavaScript? Definitely. It could be a build-time-generated static website. However, I might add some server-side features down the road, so I’d rather keep the server piece, making sure I use technologies like Phoenix that embrace the platform instead of abstracting it away.

It’s been an enlightening process that has shown me how powerful the web platform is. I’ll keep following the new ideas that abstractions bring to the table, while I slowly build on the lowest layer available.

]]>
<![CDATA[This blog post contains a recent reflection over the often over-abstracted web platform, and how powerful it's become, making many of the normalized abstractions feel unnecessary.]]>
Ensuring a smooth workshop experience https://pepicrft.me/blog/2023/11/17/workshop-assert-script 2023-11-17T00:00:00+00:00 2023-11-17T00:00:00+00:00 <![CDATA[

While preparing a workshop for Swiftable, I wondered how attendees could verify that they were ready to continue with the next topic. Jumping to the next topic with their setup in an invalid state can make the difference between enjoying the workshop from beginning to end and feeling completely stuck and lost. To tackle it, I came up with the idea of providing developers with a script that they can run at the end of every section:

bash <(curl -sSL https://url-to-server.com/assert.sh) 1

1 is the number of the section they just completed. Note how convenient it is to run: you only need bash on your system, which is a safe assumption to make. I believe the fewer system requirements for the workshop and everything that surrounds it, the better. As a fallback, I provide a Git repository and a commit SHA at the end of every section that developers can check out to continue with the workshop.
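A sketch of what such an assert.sh could look like. The post doesn’t show the real script, so the per-section checks below are hypothetical:

```shell
#!/usr/bin/env bash
# Hypothetical assert.sh: verifies an attendee's setup after each section.
# The concrete checks are illustrative, not from the actual workshop.
set -uo pipefail

section="${1:-1}"   # section number passed by the attendee; defaults to 1
status="unknown"

fail() { status="failed"; echo "✗ $1" >&2; }
pass() { status="ok"; echo "✓ Section $section looks good. Continue!"; }

case "$section" in
  1)
    # e.g. section 1 only requires a working shell and Git
    if command -v git >/dev/null 2>&1; then pass; else fail "Git is missing."; fi
    ;;
  2)
    # e.g. section 2 expects the generated project on disk
    if [ -f "MyApp/Project.swift" ]; then pass; else fail "Missing MyApp/Project.swift."; fi
    ;;
  *)
    fail "Unknown section: $section"
    ;;
esac
```

Serving one script that branches on the section number keeps the attendee-facing command identical throughout the workshop; only the trailing number changes.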

I’ll test the method in a couple of weeks and report back.

]]>
<![CDATA[In preparation for a workshop that I'm conducting in Swiftable (Buenos Aires), I came up with an idea to ensure a smooth experience following the workshop]]>
Balancing mastery and sustainability https://pepicrft.me/blog/2023/11/14/mastery-and-sustainability 2023-11-14T00:00:00+00:00 2023-11-14T00:00:00+00:00 <![CDATA[

My work with Tuist has revealed the intricate evolution of open-source projects into complex systems that demand effective governance. I embarked on this journey with a singular focus on craft and community, yet as the project expanded, I found myself juggling numerous roles. These included coding, updating social media, reviewing code, delivering talks, writing documentation, providing support, and even redesigning and implementing websites. Such diverse tasks are typically handled by specialized roles or entire teams in corporations, but in the realm of open-source, these responsibilities often fall on a few individuals, sometimes just one. This can lead to burnout, a struggle often borne in silence due to a fear of appearing vulnerable to the community. It feels like an unspoken obligation to live up to a vision, showcasing the complex nature of human psychology.

This experience has taught us to approach our projects with a renewed perspective, particularly considering the significant impact they have on mental health and motivation. These human aspects are crucial to the wellbeing and innovation of the project. My deep-seated passion for Tuist’s problem domain has driven me towards mastery, a trait I share with other maintainers. However, an exclusive focus on mastery can lead to an imbalance, potentially risking the project’s long-term sustainability. It is encouraging to see individuals and organizations prioritize mastery over profit and growth, but the challenge arises when the system fails to support individuals financially. Therefore, designing and implementing a supportive framework within the project is imperative.

This task is inherently challenging. Any evolution from a well-established model is bound to encounter resistance. Looking back at the inception of Tuist, I ponder whether I could have anticipated the system’s current needs. At that time, the project’s future was uncertain, with no user base for over a year. In hindsight, there was certainly room for improvement, but my intense focus on the craft limited my foresight regarding the system’s evolution. Observing projects like Sourcery and Sourcery Pro, it’s intriguing to see the acceptance of open-core models contrasted with criticism of open-source projects evolving toward that model.

We acknowledge our imperfections and embrace the learning curve ahead. Our conviction in our decisions remains strong, and we are prepared to learn and pivot as needed, as we have over the past five years. Currently, we are focused on a major update for Tuist, signaling a new and significant phase in the project’s journey.

]]>
<![CDATA[Juggling roles in Tuist, from coding to community support, taught the delicate balance between mastery and sustainability in open-source projects.]]>
Dear Bitrise https://pepicrft.me/blog/2023/11/10/dear-bitrise 2023-11-10T00:00:00+00:00 2023-11-10T00:00:00+00:00 <![CDATA[

Dear Bitrise Team,

Today, I find it necessary to speak about the recent attack directed at our beloved Tuist project. Our journey with Tuist has been one of unwavering dedication. Over the past five years, we have poured our hearts into it, working tirelessly during our free hours. Our commitment led us to develop tools like XcodeProj, integral to many in the ecosystem and crucial for systems like Bazel, which you are now steering users toward. We’ve not only facilitated the integration of Tuist with your CI services but also recommended speakers for your events.

In our community, we’ve always extended a helping hand to those facing challenges, focusing on support over financial gain. Our approach seems to differ starkly from yours.

Embarking on the Tuist journey, we underestimated the incoming demands for support and the sheer volume of requests. This is a common scenario for open-source maintainers, often leading to burnout, a situation we have strived to avoid. Your approach, seeking a PR merge without having contributed a single line of code to Tuist, appeared more self-serving than community-focused. It added to our workload, contrary to your public image as an advocate for open-source.

The reality of Tuist’s demands became clear – it needed dedicated, full-time attention. Here, we faced a financial dilemma. Donations, which are always welcome and channeled towards bounties, weren’t enough. Investment wasn’t a viable route either, as it often brings profit-driven motives that could jeopardize the essence of our project. Our independence from such influences allows us to focus solely on our user community’s needs and challenges.

Introducing Tuist Cloud was our strategic response to this need for sustainability. We’ve designed Tuist to minimize vendor lock-in, a detail you may have overlooked given your limited engagement with the project. Users can easily migrate away by committing their Xcode projects and dropping Tuist, a feature common to most tools, Bazel and Bitrise included. However, we choose not to retaliate by suggesting alternatives like GitHub Actions.

Tuist Cloud began as an open-source endeavor until we realized that this path wouldn’t lead to sustainability. Comparisons with well-funded projects like Bazel are misplaced; unlike these giants, and even the not-so-giant ones, we lack substantial financial backing. Your actions, increasing our burden and publicly undermining us, were a blow to our efforts.

We’re now developing Tuist Cloud as a closed-source extension for Tuist, a decision forced by circumstances, not choice. Our aim is not to perpetuate closed-source development but to find a sustainable path forward. The distinction between Tuist and Tuist Cloud lies in the realm of project generation versus optimized project generation. We urge our users not to be swayed by fear-mongering about Tuist’s future.

Our commitment remains steadfast to a project and community we deeply care about, facing the challenges ahead, and gearing up for major releases. For those in doubt, we encourage conversations with our users to see the value and dedication we bring to the table.

Warm regards, Pedro

P.S. Bitrise, we’ve noticed that you’ve set up redirects on your pages, funneling readers to a blog post casting Tuist in a negative light. This includes the now-missing comparison page, which previously claimed your service’s superiority over ours. Removing this page appears to be an attempt to hide the full story.

]]>
<![CDATA[Read my personal take on Bitrise's actions against Tuist, and how we're rallying as a community to uphold our values and vision.]]>
Integrating Swift Macros with Xcodeproj native blocks https://pepicrft.me/blog/2023/11/08/swift-macros-with-xcodeproj-native-blocks 2023-11-08T00:00:00+00:00 2023-11-08T00:00:00+00:00 <![CDATA[

Swift Macros were introduced by Apple as a feature bundled within Swift Packages. This approach enhances shareability—a notable limitation of XcodeProj elements like targets. However, it also tightens the reliance on seamless integration between Xcode and the Swift Package Manager (SPM), which, from my experience and that of others, can be less than ideal in large projects with numerous dependencies. In fact, some developers are shifting towards Tuist’s methodology, reminiscent of CocoaPods, where projects are immediately ready for compilation upon opening.

Given the suboptimal experience offered by Apple’s ecosystem, which precludes optimization opportunities, Tuist employs SPM to resolve packages before mapping them onto Xcodeproj elements. While generally effective, this approach has encountered occasional setbacks, which developers can rectify by tweaking the build settings of the generated targets. Yet, it has not supported Swift Macros since their announcement.

Interestingly, developers managing Xcode rules for Bazel quickly devised a method to accommodate Swift Macros using compiler flags. Inspired by this, could Tuist adopt a similar strategy by utilizing targets, dependencies, and build settings? After some investigation, the answer is affirmative. Here’s the blueprint:

The macro’s representative target must be a macOS command-line target, encompassing the macro’s source code. A secondary, dependent target is required, hosting the public macro definition for import by other targets.

Targets wishing to leverage the macro should:

  • Establish a dependency on the secondary target for prior compilation.
  • Include the setting OTHER_SWIFT_FLAGS with the value -load-plugin-executable $BUILT_PRODUCTS_DIR/ExecutableName\#ExecutableName.

This setup is contingent upon the secondary target and the dependent targets producing their outputs in the same directory. If that’s not the case, SWIFT_INCLUDE_PATHS will be necessary to make the module available to the dependent targets.
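As a sketch, the layout described above might look like this in Tuist’s Swift DSL. The target names, bundle identifiers, and the exact manifest API are assumptions for illustration; check Tuist’s documentation for the current syntax:

```swift
import ProjectDescription

// Hypothetical manifest sketching the macro setup described above.
let project = Project(
    name: "MacroExample",
    targets: [
        // 1. The macro executable: a macOS command-line tool with the macro's code.
        Target(
            name: "MyMacros",
            platform: .macOS,
            product: .commandLineTool,
            bundleId: "com.example.MyMacros",
            sources: ["Macros/Sources/**"]
        ),
        // 2. A secondary target hosting the public macro declarations.
        Target(
            name: "MyMacrosInterface",
            platform: .macOS,
            product: .framework,
            bundleId: "com.example.MyMacrosInterface",
            sources: ["Interface/Sources/**"],
            dependencies: [.target(name: "MyMacros")]
        ),
        // 3. A consumer target loading the macro plugin via compiler flags.
        Target(
            name: "App",
            platform: .iOS,
            product: .app,
            bundleId: "com.example.App",
            sources: ["App/Sources/**"],
            dependencies: [.target(name: "MyMacrosInterface")],
            settings: .settings(base: [
                "OTHER_SWIFT_FLAGS": "-load-plugin-executable $BUILT_PRODUCTS_DIR/MyMacros#MyMacros"
            ])
        )
    ]
)
```

The dependency edges guarantee the macro executable is built before any target that loads it, and the -load-plugin-executable flag points the compiler at the built plugin binary.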

With this mechanism uncovered, the next step is to integrate it into Tuist’s Swift-based DSL and combine it with our binary caching feature. This integration will enable developers to concentrate on targets dependent on macros without the overhead of compiling the macros themselves.

]]>
<![CDATA[Exploring native Swift macro support in Tuist to simplify and accelerate Xcode project builds.]]>
Making Tuist easier to work with by saying goodbye to Ruby https://pepicrft.me/blog/2023/11/07/ruby-and-tuist 2023-11-07T00:00:00+00:00 2023-11-07T00:00:00+00:00 <![CDATA[

At some point in the life of Tuist, we decided to introduce Ruby into its codebase. The CI pipelines were beginning to contain a considerable amount of business logic, and it was somewhat inconvenient for developers to run the same workflows locally. Therefore, I developed a small Ruby-based CLI, which would be included in the repository as Fourier.

We also chose to implement the acceptance tests in Ruby. We believed that the BDD approach, as proposed by Cucumber, would be most suitable due to its ability to produce very readable scenarios and prevent the tests from being aware of the Swift implementation details.

In hindsight, it was a mistake, and we are already taking steps to rectify that.

The main issue is that it introduces friction to the experience of contributing to the repository for the first time. In addition to Xcode, which we can safely assume developers have on their systems, they need to install the same version of Ruby that everyone else is using (to avoid inconsistencies), remember to run bundle install to pull dependencies such as Cucumber, and, not least, feel comfortable diving into Ruby code when things don’t work as expected. Many developers may not have used Ruby before, and this task can seem very daunting, resulting in them relying on us—the maintainers—to resolve issues.

We also noticed considerable hesitance towards writing the tests. Beyond the fact that some steps’ implementations needed to be done in Ruby—a task they could manage by referring to other steps—the Regex-based step definitions were also a bit intimidating. Once again, it was we—the maintainers—who would end up writing most of the tests.

So, what are we doing about it? We’re removing Ruby from the codebase and integrating everything tightly into Xcode. We are eliminating the Fourier CLI and replacing it with universal bash scripts that require no additional tooling, some of which can be invoked through make. ChatGPT has been quite helpful here, as I’m not at all comfortable writing business logic in bash. We are also rewriting our acceptance tests in Swift so that they can run in parallel using Xcode and SPM. Developers will be able to execute these tests directly from Xcode and even add breakpoints to debug execution. This change will not only enhance the working experience with these tests but also create a faster feedback loop. This transition has been challenging because PRs took a long time to be ready for merging.

I am eagerly awaiting the implementation of these improvements. It’s easy to make incorrect decisions like these, but it’s crucial that we periodically reflect on them and adjust our course if necessary—and that’s exactly what we are doing.

]]>
<![CDATA[We're removing Ruby from Tuist, integrating everything into Xcode, replacing Fourier with bash scripts, and rewriting tests in Swift for ease.]]>
From side project to sustainable tool https://pepicrft.me/blog/2023/10/28/open-source-sustainability 2023-10-28T00:00:00+00:00 2023-10-28T00:00:00+00:00 <![CDATA[

It’s hard to believe, but Tuist is now 6 years old and has become an indispensable tool for medium and large organizations. I began building it due to my profound understanding of the challenges associated with using Xcode at scale and the inaccessibility of alternative build systems for smaller organizations. Tuist was always a side project — something I built in my spare time.

While I was at SoundCloud, the organization didn’t adopt Tuist until after I had left. Later, at Shopify, I attempted to introduce it to address challenges they faced with Xcode. However, there was strong advocacy for using only Apple’s official tools, so adoption didn’t happen. Interestingly, the company eventually transitioned to React Native. While at Shopify, I hired developers I met through Tuist. This led many to assume that Shopify used Tuist and sponsored me to work on it full-time. In reality, I could only dedicate my spare time to it, and as time progressed, it became increasingly challenging. Eventually, Marek and the core team took over some responsibilities because I needed a break.

Throughout this period, word of mouth was effective. Companies initially approached Tuist for project generation but stayed for the additional features. Today, numerous organizations prefer Tuist over XcodeGen as they recognize that Git conflicts aren’t the sole challenge of scaling. Defining Tuist is challenging; while many see it simply as a generator, it’s much more. We eventually realized that we had laid the foundation for significant optimizations and an integrated development experience.

The project’s popularity surpassed our expectations. With the increase in users came a proportional increase in time and effort. Continuing on our personal time risked burnout, which I wanted to avoid both for myself and other dedicated contributors.

So, what next? I researched open source sustainability models to determine what would fit Tuist best.

One idea was to limit Tuist’s scope to easily maintainable features given our team size. But this would disappoint many users by reducing the tool to just project generation. Although this feature is reliable, extendable, and only requires major updates with new Xcode releases, we decided against this route. Users love the unique extensions that enhance their productivity with Tuist.

While we were eager to have full-time contributors, donations didn’t suffice. They covered some costs but weren’t substantial enough for salaries. We’re immensely thankful to our sponsors for their support.

One potential path was to join a large tech firm, similar to Fastlane. However, such partnerships often prioritize corporate interests over solving real problems. In Fastlane’s case, data became the focus under Google’s ownership. I also contemplated offering consulting services, leveraging our expertise. But the reception was lukewarm as most companies had either already transitioned or found their answers online.

We then considered models from similar tools, like Gradle and Nx. These tools offer server-side paid features for advanced requirements, which seemed fitting for Tuist. Thus, we began working towards this model, even initiating the process of setting up a legal entity in Germany, Tuist GmbH.

Simply put, if you need project generation, Tuist aims to be the best at it. For optimizations, we’ll introduce paid features, priced reasonably in comparison to the benefits they offer.

However, challenges emerged. A prominent Mobile DevOps company, Bitrise, launched a service for Tuist users, requiring a Tuist fork. This experience taught us the complexities of business dynamics. We had previously communicated our intentions to them, so we decided to develop certain client features privately. Our goal remains to return to open source, but first, we need to ensure the project’s sustainability.

Another issue was that some organizations bypassed payments for Tuist Cloud. While we tried to stress the importance of financial support, some viewed it as optional. Consequently, we’re implementing measures to counteract these workarounds.

There are more discussions ahead, but this new phase for Tuist is crucial. We want to ensure that Tuist remains faithful to its core values and principles, delivering consistent value sustainably. Sustainability is central to Tuist’s success, and I’m committed to realizing that vision.

]]>
<![CDATA[Tuist, now 6 years old, has become essential for organizations using Xcode. While initially a side project, its popularity surged. To ensure sustainability, we're introducing paid features alongside free ones, navigating challenges like unauthorized forks and finding the right business model.]]>
Empowering Development: The Journey and Vision of Tuist https://pepicrft.me/blog/2023/10/16/the-future-we-envision-for-xcode-devs 2023-10-16T00:00:00+00:00 2023-10-16T00:00:00+00:00 <![CDATA[

It’s been about six years since organizations began to understand the lack of focus Apple has given to the challenges of large-scale development. Within this timeframe, it became evident how invaluable Tuist is in addressing these challenges. Numerous major projects and white-label apps have come to recognize the prowess of Tuist, valuing its ability to maintain simplicity even while scaling. Some have even transitioned from intricate build systems like Bazel, which, despite their strengths, impose an unsustainable level of complexity and support demands on many organizations.

Our journey in developing Tuist wasn’t an easy one. We juggled multiple roles, from coding to marketing, writing to community support. Each hat we wore was worn out of faith in Tuist and its potential impact on the development community. And let’s be clear: this is only the beginning.

The journey took six years, primarily because it was a side project for us. But imagine the strides we could make if we dedicated our full attention to it! Tuist is more than just a tool; it’s a foundation that equips developers with essential tools for sustainable growth. I’m often perplexed as to why Apple doesn’t surface metrics like these:

  • How frequently do developers need to perform a clean build after encountering a compilation error?
  • How long do targets take to build or tests take to run?
  • How frequently do tests fail, and can they be automatically disabled when they do?

Apple seems preoccupied, perhaps grappling with legacy decisions in Xcode, while continually rolling out new language features and APIs. While these features are exciting for small projects, they’re less so when projects scale up, revealing Xcode’s limitations. At this juncture, organizations often either waste time battling these issues or divert to alternative solutions like React Native. The good news? Other ecosystems have already tackled these challenges, and there’s much we can learn.

For us, this represents an opportunity. An opportunity to offer a seamlessly integrated solution. When observing the tools aiming to integrate into developers’ workflows, it’s evident that many setups are convoluted. The ideal experience for developers is simplicity. They should execute a command and have everything work seamlessly. Sadly, Apple doesn’t provide the necessary APIs for a smooth integration into Xcode. Many tools lean on Fastlane, bringing along the complexities of Ruby, Bundler, and more. With Tuist, all you need is Tuist and a project defined through it.

The future for Tuist Cloud looks promising. We’ve laid a solid foundation and cultivated a community that genuinely believes in our mission. As we move forward, we’ll introduce exciting features. From boosting developer productivity to providing actionable insights and even automating tasks for developers, the possibilities are boundless.

In summary, while Tuist Cloud is still evolving, the hardest part is behind us. We have a firm foundation and a dedicated community. Now, it’s our turn to supercharge Tuist and empower development teams to foster thriving environments.

]]>
<![CDATA[Tuist provides solutions to challenges in large-scale app development overlooked by Apple. It's a foundation for developers, promising simplicity and a future filled with actionable insights.]]>
Recalibrating Mental Models in Elixir Programming https://pepicrft.me/blog/2023/10/14/reprogramming-my-programming-mental-models 2023-10-14T00:00:00+00:00 2023-10-14T00:00:00+00:00 <![CDATA[

As I delve deeper into programming with Elixir, I am prompted to reconsider the mental models formulated over years of experience. Initially, when contemplating a problem space and a potential solution, my mind spontaneously navigates towards an Object-Oriented Programming (OOP) world, constructing a picture involving repositories, services, and presenters to facilitate various layers of the application, especially testing. However, this model doesn’t quite fit seamlessly into Elixir, a functional language where everything condenses down to functions and modules simply act as namespaces to encapsulate them, occasionally embodying semantics similar to interfaces in OOP.

While my mental models are instinctively oriented towards objects and classes, envisioning an ideal—or even a proximate—solution in Elixir poses a challenge. A strategy I’ve adopted to navigate this is to think in terms of domains instead of the traditional OOP mental model. Consider the following Elixir functions that interface with a %Plug.Conn{} to manage session information:

Auth.set_authenticated_user(conn, user)
Auth.get_authenticated_user(conn)
Auth.user_authenticated?(conn)
Auth.load_authenticated_user_from_session(conn)

This code, expressive and reflective of real-world modeling, underscores one of Erlang creator Joe Armstrong’s points from his renowned paper: the proximate modeling of problems and solutions in Erlang, analogous to real-world scenarios, enhances the maintainability and reasoning of the software. Adopting this mindset is pivotal and necessitates the relinquishment of numerous concepts ingrained from OOP.

Moreover, abandoning these concepts also means recalibrating practices built upon them, such as testing. Traditional architectural solutions enable isolated testing. For example, testing business logic without the concern of data origin, since we might mock a repository. However, in the functional realm of Elixir, where every entity is a function or a function combination, the narrative is distinctly varied.

Consider a hypothetical scenario: testing a function combination that symbolizes a slice of business logic, which is more user-centric as opposed to stating, “when I call this function with these arguments, I expect this query to be executed to the database”. My journey through various codebases has occasionally led me to allow inertia to dictate, perpetuating historical practices. Yet, the pertinent question that perpetually surfaces as I script in Elixir is: Do these tests make sense? Is it logical to test whether storing the authenticated user appears in a specific key inside the connection assigns? Or, would it be more valuable to test, for instance, whether a specific request, requiring user authentication, fails with a designated error if the user is not authenticated?

In conclusion, as I savor my Saturday morning coffee ☕️, Elixir continues to captivate me. The challenge of mentally rewiring to adapt to its distinctive paradigms is not only intriguing but also refreshingly fun.

]]>
<![CDATA[Navigating through Elixir requires a rethinking of traditional OOP mental models, inviting a shift towards domain-centric thinking. Embracing Elixir's functional paradigm offers intriguing challenges and a rewarding, fresh perspective on problem-solving in programming.]]>
We do it for the community https://pepicrft.me/blog/2023/10/06/for-the-community 2023-10-06T00:00:00+00:00 2023-10-06T00:00:00+00:00 <![CDATA[

It’s a beautiful sentiment to reflect upon – “We do this for the community.” This phrase, delicately expressed to us recently, cast light on the vibrant pulse and potential inherent in our project, Tuist. An insightful suggestion about integrating a feature to streamline Tuist remote caching presented not just a technological prospect, but a moment to ponder about collaborative innovation and ethical praxis in the open-source landscape.

Navigating through this journey of developing Tuist, whilst seeking a viable path to support my family and me, has been a profound exploration. It’s not just about discovering solutions for myriad challenges users encounter with Xcode, but also about weaving a tapestry of sustainable development that resonates with community ethos and altruistic collaborations.

In a world where every digital step is entwined with ethical considerations, especially in our endeavour to sustainably develop Tuist, we tread cautiously. While our project is shielded with a permissive license, the ethical dialogue it intertwines with is both vital and delicate.

At times, miscommunications happen. Perhaps in their zeal to contribute, some may overlook the underlying struggles and aspirations of our team, attempting to harness Tuist’s capabilities for their gain, somewhat overshadowing our journey towards sustainability. It’s conceivable: maybe they were unaware of our aspirations to immerse ourselves full-time in Tuist’s development? Leaning, as I always do, into a belief in others’ good intentions, that was a possibility I considered. But understanding evolved through conversations, revealing an awareness that was deeper and more informed.

Despite these revelations, we’ve witnessed actions that teetered on the edge of ethical bounds. Navigating through instances where our sustainability initiatives, like Tuist Cloud, were contrasted unfavorably against services of investor-backed entities was indeed a challenge. But herein, we also found a spark – an ignition of resolve to protect and nurture the sanctity and future of Tuist.

In a realm where open-source projects, like Fastlane and Rails, have found sanctuary and sustained development within large organizations and dedicated teams, Tuist weaves a slightly different narrative. The likelihood of a dedicated Tuist infrastructure team materializing within an organization may appear slim, yet what pulses brightly within us is an unbridled passion for problem-solving and innovation within this domain.

So, here we stand, at a pivotal crossroads, choosing to develop the client-side functionality that fuels Tuist Cloud in private, at least until we solidify our path to project sustainability. This decision, forged in the crucible of challenges, is imbued with hope - hope that this enclosure is merely a temporary phase, and that we’ll soon fling open the gates to a reservoir of source value once more.

Even as we navigated through the echoes of bold business moves that nudged us unexpectedly, our commitment to positivity and innovation has remained unwavering. Now, we’re not merely players, but steadfast contributors to a different league, an evolved narrative of inspiring sustainable open-source development.

To you, our cherished Tuist users, let your hearts be light and assured. In the embracing and indomitable spirit of us - the people that breathe life into Tuist - you have a trustworthy companion. As we untangle the threads of this challenge and reweave them into a tapestry of innovative solutions and sustainable practices, the horizon looks even more radiant and promising.

There’s an entire universe of solutions we’re eager to explore and implement, and with an evolving framework that hopefully allows us more dedicated time for Tuist, this will change, blossoming into a reality where our collective dreams and innovations converge, uplifting and empowering each one in our community.

Together, in unity, innovation, and ethical collaboration, let’s continue to co-create, inspire, and elevate Tuist into realms where technology and ethical sustainability dance in harmonious synchrony.

]]>
<![CDATA[Embarking on a nuanced journey with Tuist, facing ethical dilemmas & aiming for sustainability, I invite you to be part of our personal tech story.]]>
Between Simplicity and Limitations: A Developer's Take on Apple's Tooling Strategy https://pepicrft.me/blog/2023/09/29/apple-tooling 2023-09-29T00:00:00+00:00 2023-09-29T00:00:00+00:00 <![CDATA[

Yesterday, I wrote about the evolution of iOS development and the role of Tuist in it. I kept reflecting on some of the ideas that I touched on in it. One of them, in particular, is the tradeoff that Apple found itself having to make between developers’ convenience, which is a key tool to make it easy for developers to get started, and flexibility, which is what medium-to-large apps need to be able to scale.

You might think you won’t ever have the need of thinking about scale, but let me tell you that you will. Scalability is not only about the size of a project but also about the breadth of it. The moment you decide to support multiple platforms, for example, iOS and watchOS, you’ll most likely want to reuse code across targets. This is achievable through shared targets, which introduce you to the world of dependencies that I discussed yesterday.

Configuring a modular Xcode project, including its external dependencies, is a tedious and error-prone task. If Apple’s platform were dynamic like NodeJS, we wouldn’t even have to think about the problems I’m going to talk about, because modules would be resolved and loaded dynamically, unless you made some node_modules-like design decision and found yourself in a spot that’s difficult to move away from. Because Apple’s platforms are compiled instead, the build process is more involved, and the build system and the mental models it works with are more intricate.

If you have built for the Apple ecosystem for a while, you’ve most likely faced “duplicated symbols” errors or apps crashing at runtime because a dynamic module wasn’t copied into the final product (i.e., “framework not found”). Those are hard to debug, aren’t they? I became so weirdly obsessed with understanding them that I decided to build Tuist so that no one would have to deal with them manually. The problem is that when Apple recognized this isn’t trivial (the number of use cases we need to handle in Tuist’s codebase is good proof of that), they went down the path of convenience and enabled some implicit behaviors. What does that mean in practice? There are several places where you can experience it, but the most obvious one is Xcode detecting target dependencies by looking at who outputs a .framework into a directory that’s exposed to your target through the framework search paths. Isn’t that cool? It is, until it isn’t. What works for the developer who is getting started, whose dependency graph is small, doesn’t work in slightly larger projects where static and dynamic frameworks and libraries are mixed. And the matter keeps getting worse, because Apple doesn’t cease to add new target types; for example, there are now Swift Macros and Build Tools.

So, in order to solve the problem, we had to start by making the implicit explicit. And the core-most element that required that explicitness was the dependency graph. One might think that the dependency graph refers only to external dependencies, but with it, I’m also referring to the targets that are part of your project - some dependencies are local, and others are remote. They are all dependencies. So when you look at Tuist’s DSL, you’ll notice that it made dependencies front and center. The build settings and phases that are required are an implementation detail. If a dynamic framework needs to be copied, we know and configure things properly. The same is true for when the right binary of an .xcframework needs to be selected for the target that’s linking against it. As mentioned earlier, the scenarios are endless, and you really don’t want to be doing that yourself.
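To illustrate that explicitness, here is a minimal sketch of what such a manifest could look like. It is an assumption-laden example, not Tuist documentation: the names, bundle identifiers, and glob paths are invented, and the exact initializer signatures vary across Tuist versions.

```swift
import ProjectDescription

// A hypothetical project where the dependency graph is declared explicitly:
// the app depends on a local framework target and on an external package
// product. The build settings and phases (linking, embedding, search paths)
// are derived from this graph rather than configured by hand.
let project = Project(
    name: "App",
    targets: [
        Target(
            name: "App",
            platform: .iOS,
            product: .app,
            bundleId: "io.example.app",
            sources: ["Targets/App/Sources/**"],
            dependencies: [
                .target(name: "Core"),        // local target dependency
                .external(name: "Alamofire"), // external package dependency
            ]
        ),
        Target(
            name: "Core",
            platform: .iOS,
            product: .framework,
            bundleId: "io.example.core",
            sources: ["Targets/Core/Sources/**"]
        ),
    ]
)
```

Whether Core ends up linked statically or dynamically, and whether it needs to be embedded into the app bundle, falls out of the graph; that is the implementation detail the paragraph above refers to.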

The problem is that Apple doesn’t reconsider the implicitness path, which in my opinion is a terrible design direction, at least if we imagine a future where apps are multi-platform and very modular, which I think is realistic. A good example is that the integration between Xcode and the Swift Package Manager is also very implicit. Xcode’s build system and the Swift Package Manager communicate and make decisions at build time to keep things convenient. Just flag Swift Packages to be automatically linked, and Xcode will do it for you. Until it doesn’t, or the experience is laggy and slow, and you can do little about it.

I met with a developer over a month ago, and we chatted a bit about their transition from Tuist to the Swift Package Manager; they found themselves very limited by the optimization opportunities the Swift Package Manager offers. I’d be surprised if they could change that without going back to first principles, starting with questioning whether a compiled language, Swift, is the right tool for the job. When we look at more advanced build systems, we see dynamic, fast, and functional DSLs as the common denominator. Xcode’s build system should probably be closer to what Gradle is for Android. Of all the things compiling Swift could offer, like sharing code across manifest files or coming up with abstractions, they only use type-related capabilities.

Apple appears to be hindering their own progress. While they’ve streamlined basic tasks, they’ve concurrently made more intricate ones seem unattainable. This puts developers in a quandary: how can they discern these limitations when Apple doesn’t offer comprehensive insights from its tools? Often, developers only recognize these constraints when faced with them directly. Consequently, many organizations either pivot towards alternatives like React Native or undertake the daunting task of overhauling the entire build system. I encountered this firsthand at Shopify. Despite my persistent efforts to highlight the unsustainability of their unwavering reliance on Xcode, I was met with staunch opposition, culminating in a top-down directive to transition to React Native.

It’s not an easy problem to solve, but I believe it’s a problem worth solving. Apple should consider layering their tooling such that there’s a low-level, extensible layer, free of convenience or implicitness, that developers can build upon and extend, and then another layer on top of it that provides the convenience and says, “we are trying to be smart to help you stay focused.” Right now, there’s a single layer that stretches too broadly and is more of a hindrance than a help. Xcode would know how to interface with those layers, and alternative build systems like Bazel or Gradle would be able to swap pieces as needed.

As someone trying to help solve this problem, I find the whole situation very frustrating. It’s frustrating because everyone gets eclipsed by what Apple proposes, unable to objectively assess whether it’s a good idea or not. That’s why I’m writing blog posts like this one. Hopefully, one day Apple will go back to the roots and rethink the build system. I personally believe the Swift Package Manager path is not the way.

]]>
<![CDATA[Apple's focus on simplifying basic tasks may overshadow the challenges of complex operations. Developers, often uninformed due to lack of data, face hard choices: pivot to alternatives or revamp systems. ]]>
Reclaiming Mental Peace: My Personal Odyssey https://pepicrft.me/blog/2023/09/27/rebalancing-mental-peace 2023-09-27T00:00:00+00:00 2023-09-27T00:00:00+00:00 <![CDATA[

Every morning, I wake up with a vigor that promises a fruitful day. But come evening, my mind clouds over, leaving me mentally drained. This daily battle with mental fatigue has shadowed me for years. I’d gaze at my reflection, asking the weary eyes staring back, “What am I missing?”

During a recent bout of unemployment, I found a quiet moment of clarity to understand the roots of my exhaustion. Today, I’m opening my heart to share what I’ve unearthed and the measures I’m taking to reclaim my mental peace.

When I share my daily tasks with friends, their eyebrows raise in surprise at my seemingly endless activities. But here’s my confession: building software isn’t just a job for me, it’s a sanctuary. Just as an artist finds solace in their brushstrokes, I find joy in coding. It’s a family trait, I believe. My parents and sister are ceaseless whirlwinds of energy, and pausing what they love would be like caging a free spirit. I cherish the moments of creativity my craft brings me, ensuring I balance my passion with rest.

However, another culprit silently contributed to my mental turmoil: my dwindling exercise routine. Once, the rhythmic pounding of my feet on the pavement was my daily ritual. Yet, I let life’s distractions coax me away from it. But every time I return, like my recent run through the familiar paths of Cieza in Spain, I’m reminded of the mental clarity it grants me. So, I’ve made a pact with myself - daily exercise, with occasional breaks.

Diving deeper into my daily life, I noticed a pattern. My attention flitted like a restless butterfly: from Slack to coding, coding to email, email to countless other tasks. These ceaseless shifts became second nature, feeding my anxiety. To counteract this, I now set specific times for checking emails, social media, and other apps, training myself to relish moments of stillness without the lure of aimless scrolling.

I’ve also introduced a faithful companion to my life: Todoist, a TODO app. Before, I tackled tasks as they appeared, like a ship swaying with the tide. Now, I pen down every thought and prioritize tasks each evening, setting clear intentions for the next day. Adopting this methodical approach has been a challenge, given my instinctive chaotic nature, but its promise of a calmer mind drives me forward.

Coming to terms with the fact that life’s to-do list is ever-growing has been a revelation. My labor of love, Tuist, always has a fresh challenge. But I’ve learned the art of setting boundaries, prioritizing mental well-being above all. I’ve also carved out pockets of time to indulge in tasks that might not be ‘productive’ but nourish my soul.

These changes, though recent, have begun to mend my mental fabric. I now greet evenings with a clearer mind and a more present heart. My journey is still unfolding, and I remain a student of life, always eager to refine my ways. Sharing my story here is my way of reaching out, hoping my experiences might resonate with someone else. If you’ve carved your path to mental well-being, I’d love to walk a mile in your shoes and learn from it.

]]>
<![CDATA[Battling daily mental fatigue, I embarked on a personal journey to rediscover clarity. Through exercise, mindful task management, and self-reflection, I'm finding my way back.]]>
Exploring Mocking Solutions in Elixir: Introducing Modulex https://pepicrft.me/blog/2023/09/21/application-module 2023-09-21T00:00:00+00:00 2023-09-21T00:00:00+00:00 <![CDATA[

I’ve recently delved into the world of mocking in Elixir and have been particularly intrigued by the Mox package, endorsed by José Valim. While studying this approach, I noticed that it could introduce a considerable amount of boilerplate code into a codebase, along with potential inconsistencies in how module references are managed in the application environment. I couldn’t help but think there had to be a more streamlined solution.

Take, for example, the typical module structure in such a setup:

defmodule MyModule do
  @behaviour __MODULE__.Behaviour

  def hello(name) do
    application_env_module().hello(name)
  end

  def application_env_module() do
    get_in(Application.get_env(:my_app, :modules), [:my_module]) || __MODULE__.Implementation
  end

  defmodule Implementation do
    @behaviour MyModule.Behaviour

    def hello(name) do
      "Hello #{name}"
    end
  end

  defmodule Behaviour do
    @callback hello(name :: String.t()) :: any()
  end
end

In this example, MyModule serves as a facade that selects an appropriate module based on the application environment configuration. If a module atom is specified, it’s utilized; otherwise, the code defaults to the built-in implementation. However, this structure has some downsides:

  • Boilerplate code that acts as a proxy to the underlying implementation could be automatically generated.
  • The function application_env_module/0 and related naming conventions can become inconsistent across the codebase.

To tackle these challenges and experiment with Elixir macros, I created a new package for the Elixir ecosystem named modulex. With this package, the previous example can be refactored as follows:

defmodule MyModule do
  use Application.Module

  defimplementation do
    def hello(name) do
      "Hello #{name}"
    end
  end

  defbehaviour do
    @callback hello(name :: String.t()) :: any()
  end
end

Notice how much more concise and ergonomic the code has become. I chose to prioritize convention over configuration, thereby standardizing the naming of child modules and the keys within the application environment.

For those who use Mox or Hammox for mock definitions, you can easily set a mock like so:

# test_helper.exs

Mox.defmock(MyModule.mock_module(), for: MyModule.behaviour_module())
MyModule.put_application_env_module(MyModule.mock_module())

I’d love to hear any feedback on the implementation or the API design. This is my inaugural venture into Elixir macros, and the journey has been both rewarding and a process of trial and error.

]]>
<![CDATA[Exploring Elixir's Mox for mocking reveals boilerplate code issues. A new package, modulex, aims to streamline this process.]]>
Passion vs. Profit: My Quest for Meaningful Craftsmanship in Tech https://pepicrft.me/blog/2023/09/13/passion-profit 2023-09-13T00:00:00+00:00 2023-09-13T00:00:00+00:00 <![CDATA[

As I delve deeper into my work on Glossia and Tuist Cloud, I frequently find myself drawn to tasks that, while they influence the final product, aren’t necessarily seen as financial priorities by many. This leaves me in a conundrum: I can pursue what I genuinely enjoy but might not be financially rewarding, or I can focus on monetary gains and potentially lose out on the joy of the process. Striking the right balance is an ongoing challenge. It begs the question - what kind of world are we molding if human needs take a backseat?

Strangely enough, pursuing money doesn’t fulfill me. I recently attended an AI Meetup in Berlin, and left feeling hollow. The majority of conversations revolved around financial pursuits - raising capital, monetizing ideas, or selling startups. Shockingly, there was little to no discussion about the human-centric problems the showcased technology aimed to solve.

Ideally, I’d like a steady stream of income that lets me forget about financial woes altogether. But attaining this requires adopting a business-centric mindset which doesn’t resonate with me. Wouldn’t it be wonderful if financial stability were the byproduct of quality craftsmanship? This was evident with Tuist. A simple change on the website, showcasing logos of companies using our platform, suddenly piqued the interest of many. Ironically, this value was constructed without a monetary focus. Isn’t that paradoxical?

A recent endeavor of mine is Noora, a design system for Glossia, powered by Elixir and inspired by prominent JavaScript design systems. I recognized the superior design systems in JavaScript and felt it unjust for users to integrate additional complexity into their stack merely to access these systems. With Glossia in mind, I opted to embed this into its core and make it universally accessible. This also gave me a chance to delve deeper into Elixir macros. While such endeavors may not directly boost Glossia’s profitability, they refine the craftsmanship, enhance the product, and nurture a community - potentially drawing Elixir enthusiasts towards Glossia. A parallel can be drawn with Shopify and its significant contributions to the Ruby community, now a haven for Ruby developers.

Reflecting on this, I believe such an approach is what sets Tuist apart. It’s a project that oozes humanity and is primarily fueled by passion. Introducing a financial element is a necessity, a means to an end. My earnest hope is that once I reach that financial comfort zone, I can continue to revel in the sheer joy of craftsmanship. I wonder if others resonate with these sentiments? I’d be keen to hear your perspectives.

]]>
<![CDATA[In a world where money often takes the front seat, how do we balance the joy of craftsmanship with the need for financial stability? Here's a personal reflection on the intersection of passion, craft, and monetary pursuits.]]>
Tuist: From Passion to Craftsmanship, Charting New Horizons https://pepicrft.me/blog/2023/08/19/tuist-journey 2023-08-19T00:00:00+00:00 2023-08-19T00:00:00+00:00 <![CDATA[

If you’ve journeyed with me from the onset, you’ll recall the birth of Tuist and how I’ve nurtured it through thick and thin. Since its inception in 2018, we’ve navigated tumultuous waters, including those frenzied days at my previous job. Today, allow me to pull back the curtain, recounting our adventures, triumphs, and the promising horizon ahead.

For newcomers, here’s a quick sketch: Tuist began as a beacon for developers crafting apps for Apple’s platforms using Xcode. It sprang from a desire to simplify and enhance Xcode project scalability. Today, it’s not just a tool; it’s a foundation. While I had dabbled in open source before, Tuist eclipsed all my earlier projects in popularity. So, what made Tuist stand out?

At its heart, Tuist was conceived out of profound passion. The challenges we wrestled with at SoundCloud often led to solutions that felt complex and cumbersome. I was determined to simplify this. My time at SoundCloud gifted me insights into scalability, which I infused into Tuist. It’s a legacy I wear proudly, one where contributors can engage meaningfully without drowning in a vast sea of code.

However, Tuist’s true magic unfolded unhurriedly. Balancing it with my full-time job meant limited but quality hours poured into it. This allowed for rich contemplation and design, even if I was just “coding in my head.” My style might lean towards the classic in Swift, but its architecture is solid and purposeful.

Then came the beautiful moment when like-minded souls began rallying around Tuist. Kas’s comprehensive issue, considering its use at Bloomberg, was a pivotal turning point. Soon, Marek, Danielle, and many others joined our ranks. The ebb and flow of contributors, like the heartbeat of any thriving organization, propelled Tuist forward.

Our strength lay in our foundations, direction, and vibrant community. With our energy levels skyrocketing 🚀, more organizations began embracing Tuist. While Apple endeavored to shift everyone towards the Swift Package Manager, we held our ground, earning our users’ trust and appreciation.

Today, Tuist stands at a crossroads. To forge ahead and ensure its longevity, we’re introducing Tuist Cloud - a premium, integrated suite for organizations. This initiative goes beyond mere donations, transforming Tuist into a sustainable enterprise. Yes, it’s a path strewn with challenges - from liaising with legal teams to mastering the art of sales. Yet, it’s these very challenges that ignite my passion. They’re opportunities in disguise, ones that promise to elevate Tuist to greater heights.

I may not have a crystal ball to glimpse into Tuist’s distant future, but our commitment is unwavering. We’ll continue innovating, always pushing the boundaries, and granting Tuist the superpowers it rightly deserves.

Here’s to the boundless possibilities that await.

]]>
<![CDATA[From its inception in 2018, Tuist has grown through passion and dedication, now embarking on new horizons.]]>
Fixing request/2 is undefined or private with Ueberauth https://pepicrft.me/blog/2023/08/06/ueberauth-request-error 2023-08-06T00:00:00+00:00 2023-08-06T00:00:00+00:00 <![CDATA[

In the process of setting up Ueberauth for my latest venture, Digestfully, I stumbled upon an interesting hurdle. A baffling request/2 is undefined or private error surfaced when I attempted to kick-start the authentication flow with one of the providers. This unexpected glitch left me pondering for a while until I found the underlying cause.

After a thorough read-through of the related documentation and a deep dive into community discussions, I stumbled upon an enlightening issue. The root of the problem was a mismatch in the library’s default path assumptions. While the Ueberauth library naturally presumes /auth as the default path, my project setup didn’t adhere to this.

Hence, the necessary step was to clearly specify the correct authentication path to the library during its configuration, like so:

config :ueberauth, Ueberauth,
  base_path: "/other/path",
  providers: [
    #...
  ]
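For completeness, the configured base_path also has to line up with where the Ueberauth routes are mounted in the Phoenix router. Here is a hedged sketch of what that looks like in a typical Phoenix app; MyAppWeb and AuthController are placeholder names, not taken from my project:

```elixir
# Hypothetical router fragment: the scope must match the base_path
# configured above ("/other/path") so Ueberauth's plug intercepts the
# request before your controller's request/2 action is ever needed.
scope "/other/path", MyAppWeb do
  pipe_through :browser

  get "/:provider", AuthController, :request
  get "/:provider/callback", AuthController, :callback
end
```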

The journey to uncover this solution took a fair bit of time, and a handful of head-scratching moments. But, in the spirit of shared learning and community growth, I’m documenting this experience here. To those who may run into the same issue in the future, consider this a digital breadcrumb trail.

And to the mighty Google indexing bots, I hope you pick this up and help those in need find this solution with ease.

]]>
<![CDATA[Tackled 'request/2 undefined' error in Ueberauth setup for Digestfully, sharing solution here.]]>
Abstracted complexity remains complexity https://pepicrft.me/blog/2023/08/05/abstracted-complexity 2023-08-05T00:00:00+00:00 2023-08-05T00:00:00+00:00 <![CDATA[

Recently, I had to decide on a technology stack for Glossia, a localization tool I’m bootstrapping with my wife. Choosing a technology stack is an exciting decision for us, as software crafters. Still, it carries a profound impact on both the software and the business that builds on it, and should not be taken lightly.

Many factors can influence this decision, but simplicity weighed heavily in our choice. By controlling system complexity, we limit the resources needed to bring the tool into existence. This approach is key to bootstrapping the project without taking external investment, an option we’re not wholly against but prefer to defer.

While assessing the complexity of various stacks, I noticed something revealing — many technologies that seem simple at first glance may either have complexity abstracted away, which can sneak through upper layers, or may present complexity relatively early, such as when you introduce a background jobs system.

Can we maintain a simple stack for as long as possible? Let’s look at some examples to understand better what I mean.

Ruby on Rails is a fantastic framework, and Ruby is a very idiomatic language that’s enjoyable to work with. That large organizations like GitHub and Shopify run their businesses on it speaks volumes about the technology. We could go a long distance with it, building a product and attracting the first customers. However, you would eventually need a platform or developer productivity team to ensure that complexity remains under control and doesn’t impact developer productivity. These teams might find themselves grappling with state mutation causing test flakiness, scaling the system horizontally using technologies like Kubernetes, or building sophisticated pipelines to debug production issues and heal the system swiftly. One might think this is normal in any software, but let me tell you, a technology that doesn’t require that does exist.

What if we choose JavaScript or TypeScript? They’re very popular and offer extensive ecosystems, allowing your app to leverage cloud infrastructure providers like Cloudflare. Cool, right? Cool until it’s not. Pretty much every layer in the ecosystem is fragmented, which might foster creativity but isn’t practical for running a business. React continues to be trendy, but with the introduction of React Server Components and Vercel monopolizing its development, some are flagging it as evil and shifting towards web components. If your company opts for it? Well, good luck. Frameworks abstract an absurd amount of complexity and introduce a lot of indirection, creating the sensation that complexity doesn’t exist. But it does. It’s just hidden. When things don’t work as planned, you must wade through complex layers that weren’t designed for navigation during issues, and spend days unraveling complexity rather than developing the actual product. If we could gauge the time spent dealing with these matters, my bet is that it’s substantial.

So what should we choose instead? I recently discovered Elixir and the Erlang VM, powerful technologies that can scale with low complexity without compromising developer experience. They provide everything you need to build anything from a very basic to a highly sophisticated system, and that’s why I chose them for Glossia.

The world is concurrent, with things happening simultaneously, communicating via messages. Each of these occurrences, or processes, is an independent element with its memory and interface. Traditional programming languages propose a sequential mental model, which evolved with hardware enabling multiple simultaneous processes. However, issues like shared memory led to memory races and complicated concurrent software development.

The designers of Erlang, which preceded Elixir and provides the Erlang VM, looked at the world and asked: why not model problems and solutions to align with real-world mental models? What if processes were easy to create, each with its own portion of memory and CPU, communicating through messages? This fundamental idea prevents complexity that other languages naturally inherit, making software more logical.

Elixir runs on the same VM, the BEAM, but offers a more modern language and toolset. It can handle millions of processes running tasks in parallel across cores and concurrently on each core. It does this fairly, meaning that if one process is busy indefinitely due to a bug, others can continue working. Thanks to this, we can scale the system vertically by adding more cores, usually a simple task, and avoid Kubernetes unless you resemble a tech giant more than a small startup. Since the language is functional, with processes to store state, code is more predictable and easier to test, and tests can even run in parallel with less likelihood of flakiness. How crazy is it that you don’t need all the additional efforts associated with other languages?
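To make that concrete, here’s a tiny sketch of mine (not taken from Glossia) showing just how cheap processes and message passing are in Elixir:

```elixir
# Spawn one lightweight process per number; each works independently
# and reports its result back to the parent via a message.
parent = self()

pids =
  for n <- 1..1_000 do
    spawn(fn -> send(parent, {:square, n * n}) end)
  end

# Drain the mailbox: one reply per spawned process, order not guaranteed.
results =
  for _ <- pids do
    receive do
      {:square, value} -> value
    end
  end

IO.inspect(Enum.sum(results)) # 333_833_500, the sum of squares up to 1000
```

A thousand processes here is nothing; the BEAM routinely juggles millions of them.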

That’s not all. Elixir’s support for macros and compilation error checking, access to powerful tools like the observer, and its error tolerance make it an impressive choice. Phoenix leverages processes to support component-based UIs without the complexity of frameworks like React, easing real-time experience creation, which we may utilize in Glossia for a collaborative review functionality.

Elixir and the Erlang VM are fantastic, and I strongly believe they’ll help us stay focused on bootstrapping Glossia. I’m thrilled these days building with it, and everything is fitting together beautifully. The ecosystem is mature, and the community is welcoming.

]]>
<![CDATA[Recently, I had to decide on a technology stack for Glossia, a localization tool I’m bootstrapping with my wife. Choosing a technology stack is an exciting decision for us, as software crafters. Still, it carries a profound impact on both the software and the business that builds on it, and should not be taken lightly.]]>
The Power of Concurrency: My Journey Learning Elixir https://pepicrft.me/blog/2023/07/26/magic-in-elixir 2023-07-26T00:00:00+00:00 2023-07-26T00:00:00+00:00 <![CDATA[

For the past few months, I’ve been immersed in learning Elixir. I can still recall the precise moment when curiosity about Elixir ignited within me. There I was, strolling around New York, while tuning into a podcast that featured José Valim. He discussed how many programming languages, like Ruby, were not designed to scale vertically (increasing computing power with more CPU processors). Moreover, those that did scale through threading primitives often posed challenges, such as data races. This made the task of writing parallel code less than ideal. Elixir, however, combined the Ruby-like syntax I favored with the Erlang mental models and virtual machines. This fusion allowed for the easy construction of scalable, distributed, and fault-tolerant apps. As soon as I returned home, I delved into the Elixir documentation.

The syntax of the language is reminiscent of Ruby, yet entirely functional. It didn’t take long for me to get acquainted with the syntax and primitives. Everything is organized into modules placed in various directories in the system, each exposing pure functions that can be piped using the |> operator. Things started getting truly fascinating when I began working with processes. This was somewhat of a novel concept for me in a programming language, yet I quickly drew parallels with an OS and its processes. They are lightweight encapsulations of tasks that share no memory and are scheduled independently. Numerous processes can be created instantaneously, communicating with each other through message exchanges. These processes can also be structured into a tree, enabling the codification of supervisory rules, such as automatically restarting any child process that crashes.
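As a minimal illustration of that supervisory idea (my own sketch, using a stock Agent as the child rather than anything from a real project), a supervisor can restart a child the moment it dies:

```elixir
# A supervisor with one Agent child; :one_for_one restarts only the
# crashed child rather than the whole tree.
children = [
  %{
    id: :counter,
    start: {Agent, :start_link, [fn -> 0 end, [name: :counter]]}
  }
]

{:ok, _sup} = Supervisor.start_link(children, strategy: :one_for_one)

pid_before = Process.whereis(:counter)
Process.exit(pid_before, :kill)

# Give the supervisor a moment to restart the child.
Process.sleep(100)
pid_after = Process.whereis(:counter)

# A fresh process has taken over under the same registered name.
IO.inspect(is_pid(pid_after) and pid_after != pid_before)
```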

This introduces a profound mental shift in how you perceive your programs. When this shift resonated within me, I instantly fell in love with Elixir’s approach to programming. The ‘click’ occurred when I started reading Joe Armstrong’s thesis on Erlang, which fuels the BEAM virtual machine that Elixir operates on, as well as the programming language itself. The thesis opens with an exploration of the system requirements that Armstrong was designing for Ericsson, the need for a programming language, and his vision for solving global issues with software. He then introduces the concept of concurrency-oriented programming. Let me share some fragments from the thesis:

The word concurrency refers to sets of events which happen simultaneously. The real world is concurrent, and consists of a large number of events many of which happen simultaneously. At an atomic level our bodies are made up of atoms, and molecules, in simultaneous motion. At a macroscopic level the universe is populated with galaxies of stars in simultaneous motion.

When we perform a simple action, like driving a car along a freeway, we are aware of the fact that there may be several hundreds of cars within our immediate environment, yet we are able to perform the complex task of driving a car, and avoiding all these potential hazards without even thinking about it. In the real world sequential activities are a rarity. As we walk down the street we would be very surprised to find only one thing happening, we expect to encounter many simultaneous events.

If we did not have the ability to analyze and predict the outcome of many simultaneous events we would live in great danger, and tasks like driving a car would be impossible. The fact that we can do things which require processing massive amounts of parallel information suggests that we are equipped with perceptual mechanisms which allow us to intuitively understand concurrency without consciously thinking about it.

When it comes to computer programming things suddenly become inverted. Programming a sequential chain of activities is viewed as the norm, and in some sense is thought of as being easy, whereas programming collections of concurrent activities is avoided as much as possible, and is generally perceived as being difficult.

I believe that this is due to the poor support which is provided for concurrency in virtually all conventional programming languages. The vast majority of programming languages are essentially sequential; any concurrency in the language is provided by the underlying operating system, and not by the programming language.

This perspective is eye-opening, isn’t it? The world is concurrent, but programming languages are forcing us to model the world differently, using primitives that don’t map 1-to-1 to the real world, like controllers, repositories, factories, and builders. Armstrong goes further in explaining how we should observe the world to shape our software. He advocates for a 1:1 mapping of real-world concurrent activities to concurrent processes in our programming language, asserting that this is critical to minimize the conceptual gap between the problem and its solution, therefore enhancing maintainability:

Now we write the program. The structure of the program should exactly follow the structure of the problem. Each real world concurrent activity should be mapped onto exactly one concurrent process in our programming language. If there is a 1:1 mapping of the problem onto the program we say that the program is isomorphic to the problem.

It is extremely important that the mapping is exactly 1:1. The reason for this is that it minimizes the conceptual gap between the problem and the solution. If this mapping is not 1:1 the program will quickly degenerate, and become difficult to understand. This degeneration is often observed when non-CO languages are used to solve concurrent problems. Often the only way to get the program to work is to force several independent activities to be controlled by the same language thread or process. This leads to an inevitable loss of clarity, and makes the programs subject to complex and irreproducible interference errors.

This is one of the most potent concepts I’ve encountered recently, and it’s the primary reason I’m smitten with the Erlang VM and Elixir as a modern language on top of it. Just yesterday, someone questioned why I didn’t opt for Swift, which is also embracing a similar model. While I appreciate Swift’s syntax, the problem lies in its inherited technical debt from supporting the transition from Objective-C. Furthermore, the incorporation of concurrency-oriented programming adds another layer of technical debt. Many libraries and standard library APIs are unprepared for it, which could potentially expose you to data race conditions.
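Here’s a toy sketch of mine of that 1:1 mapping, borrowing Armstrong’s own freeway example: each car becomes exactly one process, holding its own state and changing it only through messages:

```elixir
# Each real-world car maps onto exactly one process; its position lives
# in that process's private memory and is updated only via messages.
defmodule Car do
  def start(position), do: spawn(fn -> loop(position) end)

  defp loop(position) do
    receive do
      {:drive, distance} ->
        loop(position + distance)

      {:position, caller} ->
        send(caller, {:position, self(), position})
        loop(position)
    end
  end
end

# Hundreds of cars, all driving concurrently.
cars = for _ <- 1..300, do: Car.start(0)
Enum.each(cars, fn car -> send(car, {:drive, 10}) end)

# Ask the first car where it is; messages to a process arrive in order,
# so the drive message is processed before the query.
send(hd(cars), {:position, self()})

position =
  receive do
    {:position, _pid, pos} -> pos
  end

IO.inspect(position) # 10
```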

Erlang’s processes are the foundation for several other brilliant ideas that I’ll explore in future posts. On a side note, did you know that Glossia is powered by Elixir?

]]>
<![CDATA[Exploring Elixir, I discovered the power of concurrency-oriented programming]]>
Fixing psych compilation error trying to install Ruby on an Apple M2 laptop https://pepicrft.me/blog/2023/07/19/psych-issues-installing-ruby 2023-07-19T00:00:00+00:00 2023-07-19T00:00:00+00:00 <![CDATA[

Let me tell you about the hilarious roller coaster ride I had today while attempting to install Ruby 3.2.2 using rtx, which is basically a turbocharged version of asdf. Man, oh man, did I encounter some wild errors during this wild adventure! But fear not, dear readers, because I’ve got the solution to this mind-boggling puzzle, and I’m here to share it with you. Consider it my gift to humanity, saving you precious time and countless headaches.

So, buckle up and let’s dive into the wacky world of Ruby installation! The first step on this zany journey is to install a few packages via Homebrew. Here’s the list of packages you’ll need:

brew install zlib readline libyaml libffi openssl@3 gdbm

Once you’ve got them installed, it’s time to do some secret-agent-style stuff with your environment variables. These variables are like the secret codes that the Ruby compiler uses to link itself to the libraries you just installed through Homebrew. Sneaky, right?

Open up your ~/.zshrc file (or your shell profile of choice) and add the following lines:

# ~/.zshrc
export RUBY_YJIT_ENABLE=1
export RUBY_CONFIGURE_OPTS="--with-zlib-dir=$(brew --prefix zlib) --with-openssl-dir=$(brew --prefix openssl@3) --with-readline-dir=$(brew --prefix readline) --with-libyaml-dir=$(brew --prefix libyaml) --with-gdbm-dir=$(brew --prefix gdbm)"
export CFLAGS="-Wno-error=implicit-function-declaration"
export LDFLAGS="-L$(brew --prefix libyaml)/lib"

Now, here’s the kicker: to make sure these environment variables actually kick in, you need to either open a fresh terminal session or give your profile a good old source. It’s like summoning the magical powers of Ruby by performing a secret ritual. Trust me, it’s worth it!

Alright, folks, that’s all there is to it! You’re now armed with the knowledge to conquer the Ruby installation beast and emerge victorious. May your future terminal sessions be error-free, and may your code flow like a river of laughter.

]]>
<![CDATA[Buckle up for a hilarious adventure in Ruby installation! Unravel the mysteries of rtx, Homebrew, and secret environment variables.]]>
An update on the entrepreneurship path https://pepicrft.me/blog/2023/07/14/entrepreneurship-update 2023-07-14T00:00:00+00:00 2023-07-14T00:00:00+00:00 <![CDATA[

Just over a month has elapsed since I embarked on my entrepreneurial journey. I thought it would be an enriching experience to share my current standing, the emotional journey, and my forward-looking plans.

Firstly, I revel in the ability to bring my ideas to life without the need for consensus from multiple stakeholders or trying to convince senior leadership about its merits. It feels like a breath of fresh air to not have to deal with those challenges anymore, though it doesn’t come without its own set of trade-offs. The challenge is not to solely chase joy and fulfillment but also to create a profitable, sustainable business.

The primary engine for this is Tuist and Tuist Cloud. Tuist Cloud, a feature set necessitating an HTTP server, forms the ideal foundation to create a sustainable business model. This is vital to ensure the longevity of Tuist, currently buckling under the weight of an influx of support and feature requests. As for whether it will be open source, the jury’s still out. Initially, I aim to establish the most suitable form for the business, after which it will be prudent to navigate the legalities associated with making it open source. In terms of the programming language and frameworks, I have chosen Elixir and Phoenix for their ability to streamline my processes and accelerate iterations. While Swift on the Server shows promise, it’s currently more time-efficient to leverage existing solutions than to build missing components.

Alongside this, I have conceived a tool designed for React Native developers, using Tuist as a foundation. However, that idea is on hold until the feasibility of Tuist Cloud is confirmed. If Tuist Cloud doesn’t meet expectations, this alternate path awaits exploration.

A secondary focus of mine is Glossia, an AI-based localization tool I’m co-creating with my wife, a former member of the localization team at Shopify. We recognized several issues in the localization process that need addressing:

  • Localization often ends up as an after-thought despite its potential to make products more accessible.
  • Large localization entities struggle with scaling their efforts and sometimes resort to using Google Translate-based products.
  • The inherent complexity and obfuscation within localization can be daunting.

The solution to these issues appeared elusive until we delved deeper into the possibilities of AI. Our subsequent experiments hint at a potential breakthrough. Our combined expertise allows us to envision a solution that’s easy to use and integrate into existing workflows. We’re vigorously working towards an MVP before summer ends. Provided things go smoothly, we’ll cultivate a community of translators and developers and strive for integration with the most popular frameworks, including Shopify Apps. We firmly believe in the necessity of translating these apps to ensure accessibility.

In conclusion, I am thoroughly enjoying this journey despite the financial uncertainty that looms. I remain hopeful that we will soon see clarity on that front. If by year-end the financial landscape remains murky, we might seek external support, equipped with a solid foundation of work. I look forward to the challenges and opportunities that lie ahead, and I am eager to share more about our journey in future posts.

]]>
<![CDATA[Embarking on a tech-venture adventure; creating value, chasing joy, and conquering challenges.]]>
Integrating Tailwind into your Swift projects https://pepicrft.me/blog/2023/06/18/introducing-swiftytailwind 2023-06-18T00:00:00+00:00 2023-06-18T00:00:00+00:00 <![CDATA[

In recent years, Tailwind has gained popularity as a web styling tool. Frameworks such as Ruby on Rails and Phoenix default to using Tailwind for new projects. If you’re unfamiliar with Tailwind, it provides a set of well-defined and configurable utility classes that ensure consistent styling and allow for atomicity of markup and style. While JavaScript-based UI solutions like React or Vue already offer this functionality, they rely on the JavaScript ecosystem of tools and packages. Tailwind, on the other hand, brings these benefits to other ecosystems without introducing additional dependencies. It accomplishes this through a simple CLI that runs during build time and outputs CSS, removing any unused classes from the project. Essentially, using Tailwind doesn’t require developers to install new system dependencies like NodeJS, making it easier to contribute to a project.

When I returned to working with Swift, I noticed that the Swift ecosystem lacked an easy way to integrate Tailwind into Swift on the Server projects, such as those based on Vapor or Publish. To address this, I took it upon myself to create a new Swift package called SwiftyTailwind. It allows for the lazy downloading and execution of the Tailwind CLI using system processes. Below, you’ll find an example of how to use SwiftyTailwind:

let tailwind = SwiftyTailwind()
try await tailwind.run(input: .init(validating: "/app/app.css"), output: .init(validating: "/app/build/app.css"))

Integrating it with Publish

Integrating SwiftyTailwind is a breeze with the use of plugins. Simply instantiate and pass a plugin, and Publish will execute the code within the closure, generating the CSS file at ./Output/output.css. Remember to include the necessary code in the <head> tag of your website to load these styles.

try Site().publish(withTheme: .tailwind, plugins: [
    .init(name: "Tailwind", installer: { context in
        let rootDirectory = try! AbsolutePath(validating: try context.folder(at: "/").path)
        try await tailwind.run(input: rootDirectory.appending(components: ["Style", "input.css"]),
                               output: rootDirectory.appending(components: ["Output", "output.css"]))
    })
])

Integrating it with Vapor

Integrating SwiftyTailwind with a Vapor project is a seamless experience. To begin, you need to configure the app to serve static assets located in Vapor’s default Public directory. Here’s an example of the code:

app.middleware.use(FileMiddleware(publicDirectory: app.directory.publicDirectory))

Next, create an instance of SwiftyTailwind and invoke it by providing the file Resources/Styles/app.css as input. Additionally, set the options watch and content to point to Resources/Views/**/*.leaf. Take a look at the code example below:

// Tailwind
let publicDirectory = try AbsolutePath(validating: app.directory.publicDirectory)
let inputCSSPath = try AbsolutePath(validating: app.directory.resourcesDirectory).appending(.init("Styles/app.css"))
let outputCSSPath = publicDirectory.appending(component: "app.css")
async let runTailwind: () = try tailwind.run(input: inputCSSPath, output: outputCSSPath, options: .watch, .content("Resources/Views/**/*"))
async let runApp: () = try await app.runFromAsyncMainEntrypoint()

_ = await [try runTailwind, try runApp]

Finally, update the <head> section of your views to load the generated app.css file from the Public directory. Here’s an example:

<link rel="stylesheet" href="/app.css">

And that’s it! SwiftyTailwind will monitor file changes in your Resources/Views directory and automatically regenerate the CSS output.

You can check out the Examples directory in the project repository.

Contributing to Swift on the Server

When compared to programming languages that have a more established presence in web development, Swift’s ecosystem is still in its early stages. However, I am determined to contribute to changing that by building and sharing utilities with the community. Tailwind was my initial contribution, but I also have plans to introduce ESBuild and Orogene to Swift. These additions will enable a more advanced build pipeline in Vapor and facilitate fetching Node dependencies without relying on the NodeJS runtime. Exciting developments are on the horizon, so stay tuned for more updates!

]]>
<![CDATA[Tailwind: A game-changer for web styling, with seamless integration into Swift server projects using SwiftyTailwind.]]>
Issues Dockerizing a Vapor project in M2 https://pepicrft.me/blog/2023/06/09/vapor-follow-up 2023-06-09T00:00:00+00:00 2023-06-09T00:00:00+00:00 <![CDATA[

It turns out that telling the Fly CLI to build with a local Docker is insufficient. When run on an M1 or M2 architecture, Docker uses QEMU to cross-compile the binary, and that causes the issues I was seeing. To fix it, I could have configured Docker to use Rosetta, but instead, I decided to run the deployment from a GitHub Action. Due to the GitHub Actions environment’s architecture, the issues don’t arise, and I was able to deploy the app successfully.

]]>
<![CDATA[Fly CLI + Docker on M1/M2 architecture caused issues, so I switched to GitHub Actions for deployment. No more problem!]]>
Focusing on Swift https://pepicrft.me/blog/2023/06/08/focusing-on-swift 2023-06-08T00:00:00+00:00 2023-06-08T00:00:00+00:00 <![CDATA[

In the past years of my career, I went from being a Swift developer to working with various technologies: Ruby, JavaScript, TypeScript, and Elixir. I also got familiar with the web ecosystem of tools and frameworks. The shift gave me a unique perspective on solving problems that span technologies, but at the cost of mental clutter that makes focusing on building harder. I spend more time catching up on tweets, newsletters, and podcasts than actually building with any of them. If Linguee hasn’t lied to me, people say in English, “Do not bite off more than you can chew”.

For my mental health, I think it’s time to bite less and limit the time I spend reading and learning about new technology. I’ve decided to focus on the Swift programming language and Apple’s app ecosystem. First and foremost, I like it and owe a lot to it. Watching the WWDC Keynote this year felt as exciting as it used to before I put Swift and the Apple platforms aside. Second, as someone interested in building products, there isn’t an ecosystem better than Apple’s to stay focused on a problem. Apple provides a well-integrated set of tools to be highly productive, and plenty of community resources, like Swift packages, to help with common problems.

It’ll be an effort for me, but I’ll park further learning of Elixir, TypeScript, JavaScript, and anything related to the web ecosystem. I’m grateful for everything I’ve learned from it, but it’s time to go back to my roots and have some fun building apps.

]]>
<![CDATA[Transitioning from Swift to diverse technologies broadened my problem-solving skills, but scattered focus hindered productivity. To prioritize mental well-being, I'll limit tech consumption. Swift and Apple's ecosystem captivate me, offering integrated tools and a supportive community for app development. I'll step back from web-related learning and return to my roots, enjoying the process of building apps once again.]]>
Hitting memory limits deploying Vapor apps to Fly https://pepicrft.me/blog/2023/06/08/vapor-memory 2023-06-08T00:00:00+00:00 2023-06-08T00:00:00+00:00 <![CDATA[

I’ve been trying to deploy a Vapor app to Fly, and the deploy command kept aborting unexpectedly. It turns out that Swift’s static linker needs more memory than the 2048 MB available in the builders that Fly provides. I tried to increase the memory using fly scale as suggested in the community forum, but it didn’t work, perhaps because the scale command cannot be applied to the builder. Instead, I used the --local-only flag to build locally with my running instance of Docker. Once the image is built, the Fly CLI pushes it to its image registry and continues the deployment.

]]>
<![CDATA[Deploying a Vapor app to Fly encountered unexpected issues. Swift's static linker requires more memory than the 2048 MB available in Fly's builders. Scaling via fly scale didn't solve the problem. Workaround: built locally with Docker, then pushed to Fly's image registry for deployment.]]>
Embracing the Journey of an Indie Developer https://pepicrft.me/blog/2023/05/28/indie-developer 2023-05-28T00:00:00+00:00 2023-05-28T00:00:00+00:00 <![CDATA[

I’ve been doing some serious thinking about where I want my career to go, and guess what? I’ve had this amazing realization—I want to dive into the world of indie development!

Ever since I was a kid, I’ve had this crazy passion for creating stuff and sharing it with the world. Drawing used to be my thing, and it still brings me so much peace. But then, technology came along, and I realized that software development is like a superpower for bringing ideas to life. I’ve built open source tools, given awesome talks, organized conferences, and built communities.

For a while, I followed the traditional path that everyone around me was taking—getting a stable job, working eight hours a day, and enjoying vacation time. And you know what? It was great! But deep down, I’ve been craving something more—a closer connection to software development, like a true craftsman. The idea scared me at first, but you know what? Life’s all about taking risks and chasing your dreams.

What’s fueling my fire to embark on this journey are the positive feedback and the adoption of the tools I’ve created. Just look at Tuist, for example. Big names like Adidas, Bloomberg, American Express, Monday, Stripe, and even Ford Motors are using it to handle their Xcode projects. And let’s not forget about the revamp of Shopify’s CLI that I came up with to support their future.

So here’s the plan: I’ll start by identifying the best ideas and diving right into them. I’m even excited to explore new domains beyond developer experience. I’ll focus on building my personal brand, getting super active on my blog and social networks, so I can build a source of revenue as a contractor if I’m not able to monetize the ideas.

I’m all about doing less but doing it way better — putting all my passion into a handful of projects.

I’m bursting with excitement as I embark on this new chapter in my career. The last time life threw a similar opportunity my way was when I made one of the best decisions of my life—moving to Berlin. And now, as I enter my thirties, it’s time for me to jump on this train of endless possibilities.

If any of you have walked a similar path and want to share your experiences, I’d be thrilled to hear from you. I have so much to learn, and your insights would be priceless. Shoot me an email at [email protected].

Let’s make magic happen together!

]]>
<![CDATA[After 5 years at Shopify, I’ve decided to pursue solopreneurship. Drawing on my craftsmanship and software development skills, I’ll build and sell tools based on user needs. Excited for the intimate relationship with development and greater agency over my life. Open to insights from others on this journey.]]>
How to Configure VSCode to Use Alternative Shells https://pepicrft.me/blog/2023/04/02/change-vscode-default-shell 2023-04-02T00:00:00+00:00 2023-04-02T00:00:00+00:00 <![CDATA[

If you manage the installation of alternative shells like ZSH or Fish yourself, as I do via Nix, you might consider configuring VSCode to use that installation instead of VSCode’s default profiles. If so, you can do it easily by opening the VSCode settings, defining a new profile, and setting it as the default:

"terminal.integrated.profiles.osx": {
    "Nix-managed ZSH": {
      "path": "/Users/pepicrft/.nix-profile/bin/zsh"
    }
},
"terminal.integrated.defaultProfile.windows": "Nix-managed ZSH"
]]>
<![CDATA[Learn how to easily configure VSCode to use your preferred alternative shell installation like ZSH or Fish instead of the default profiles.]]>
Generating client secret from Apple's P8 key in Elixir https://pepicrft.me/blog/2023/03/27/apple-sign-in-client-secret 2023-03-27T00:00:00+00:00 2023-03-27T00:00:00+00:00 <![CDATA[

I had to implement Sign in with Apple as part of a macOS app I’m building with a friend. The login initiates on the client, and a session is created server-side in a Phoenix application. The work took me down a rabbit hole of understanding JWTs and how Apple uses them. In the end, the solution turned out to be simpler than expected, thanks to this blog post that shed a lot of light on the topic. Along the way, I came up with a piece of code to generate a client secret, which is needed when doing web authentication and validating or refreshing tokens with Apple’s servers. I thought it’d be helpful to share it here for anyone running into the same need, or for ChatGPT to index. By the way, all I needed to do to implement authentication on macOS was verify server-side that the identity token was generated by Apple using their public key.

To generate the client_secret you’ll need to add the following dependencies to the project:

{:joken, "~> 2.6.0"},
{:jose, "~> 1.11"}

Once you have them, the implementation is very concise. The certificate variable in the example below is the content of the .p8 file generated by Apple; you can read it using Elixir’s File.read! API. All the code does is use the jose dependency to load the key into memory and, with the help of joken, generate a JWT following Apple’s conventions:

def client_secret(%{ team_id: team_id, client_id: client_id, certificate: certificate}) do
    {_, key_map} =
      certificate
      |> JOSE.JWK.from_pem()
      |> JOSE.JWK.to_map()

    signer = Joken.Signer.create("ES256", key_map)

    claims = %{
      "aud" => "https://appleid.apple.com",
      "iss" => team_id,
      "sub" => client_id,
      "iat" => :os.system_time(:second),
      "exp" => :os.system_time(:second) + 86400 # 1 day
    }

    {:ok, secret} = Joken.Signer.sign(claims, signer)
    secret
end
]]>
<![CDATA[Implementing Sign in with Apple on a macOS app using JWT and Phoenix. Learn how to generate a client secret for web authentication. ]]>
Typescript not loading in Visual Studio Code https://pepicrft.me/blog/2023/03/09/typescript-not-working-in-vscode 2023-03-09T00:00:00+00:00 2023-03-09T00:00:00+00:00 <![CDATA[

I spent a lot of time today investigating why Visual Studio Code was not loading Typescript as usual. I initially thought it was because the Typescript compiler, tsc, was not present in the environment. However, it turned out that Typescript support is a built-in extension of VSCode, and it had been disabled for some reason. If you ever come across the same issue, all you have to do is search for the following extension in VSCode:

@builtin typescript

The @builtin prefix is essential because the search filters out built-in extensions by default. Once you enable it, it’ll start working again. What caused it? I don’t know, and I think it’ll remain a mystery forever.

]]>
<![CDATA[I share the investigation into Visual Studio Code not loading Typescript and the solution I found - enabling the @builtin typescript extension.]]>
Iterating on my learning system https://pepicrft.me/blog/2023/02/14/learning-systems 2023-02-14T00:00:00+00:00 2023-02-14T00:00:00+00:00 <![CDATA[

I find it challenging to retain the things that I learn. In most cases, it’s because I don’t need that learning daily. I have many examples of this: I take German lessons, but then I don’t have the opportunity to use them; I’ve learned the basics of Rust several times, and I keep forgetting it because I don’t need it. Part of me thinks it’s okay to learn new things, even if I’m aware I’ll forget them, because I’ll be able to create connections with other pieces of knowledge, and ideas will emerge. It’s indeed one of the reasons why I like to learn new technologies and programming languages. I learn things that I can apply to other domains or that help me make better tradeoffs when making decisions. The connection of multiple pieces of knowledge gives me unique perspectives.

However, my ability to retain and connect things will degrade over time. Hence, I think it’s essential to build a system to dump knowledge off my head, making connections with existing knowledge. I’m building that system on the Logseq app. I decided on it for a few reasons:

  • It’s open-source
  • It uses standard formats (e.g., Markdown)
  • It doesn’t create vendor lock-in (I own my knowledge)
  • It’s extensible via plugins
  • It has a great community around it

Dumping knowledge into an app is unnatural, especially when knowledge happens anywhere and anytime. But I’m starting to make it feel natural. Whenever I have an idea or learn something, I open Logseq and dump it there, ensuring I tag it properly. I’m also refraining from using integrations with platforms like Readwise to prevent dumping stuff without manual processing. Another thing I’m doing is reading about the Zettelkasten method, which Logseq takes inspiration from. Once I’ve internalized knowledge capturing, I’ll move on to working on that knowledge and making connections using the tools that Logseq provides — for example, throwing myself into a part of the graph and navigating it from there, filling knowledge gaps and learning new things along the way.

If knowledge systems are a topic that interests you, I’d love to hear about your system and processes. Drop me a DM.

]]>
<![CDATA[Some notes on what changes I'm introducing to my learning system.]]>
Static imports with ESM and startup time https://pepicrft.me/blog/2022/12/23/startup-time-in-node-clis 2022-12-23T00:00:00+00:00 2022-12-23T00:00:00+00:00 <![CDATA[

When building a command-line interface (CLI) with Javascript and ESM to run on NodeJS, one can end up with a CLI that’s slow to launch (hundreds of milliseconds or more). It’s common for developers to use static imports at the top of the source files:

import { groupBy } from "lodash-es"

Those imports form a module graph that needs to be loaded before any code gets executed. And because loading a graph entails IO and in-memory parsing operations, some of which can be parallelized, there’s a strong correlation between the size of the graph and the time it takes to load. It’s one of the reasons why developers choose Rust or Go to implement their CLIs: compilers statically link all the code, and the startup time is insignificant.

Note that the problem goes away when working on a client-side rendered app because bundling tools smash all the modules into a single or handful of modules. Vite embraces ESM in development but does some bundling-based optimizations with third-party dependencies. In the case of web servers (e.g., Express-based HTTP server) or SPAs, the startup time also gets impacted, but additional seconds during deployment don’t impact the developer experience significantly. Orchestrators like Kubernetes wait until the server runs to send traffic to it.

What can we do about this? You could stay with CommonJS, which loads modules synchronously and doesn’t have to wait for the whole graph before executing code, but I’d advise against it: ESM is the standard, and more NPM packages are making it their default. First, I recommend minimizing the number of dependencies in your project, which also helps with security and with graph determinism at installation time. If you need to add a dependency, check how it exports modules. If it has a single export from which you import everything, use dynamic imports:

// Static import
import { bar } from "bar"

async function foo() {
  // Dynamic import
  const { bar } = await import("bar")
}

The best scenario is when dependencies use subpath exports, meaning that you only import what you need. However, few dependencies in the ecosystem are designed this way, so it’s rare to come across one. As a last resort, you can introduce a compiler that can tree-shake external dependencies and delete unused code. However, code transformation might output Javascript that blows up at runtime, so you’ll have to invest in integration tests.
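
As an illustration of the dynamic-import advice, here’s a sketch of a CLI entry point that defers loading a module until the command that needs it actually runs. The "report" subcommand is hypothetical, and node:zlib stands in for a heavy third-party dependency:

```typescript
async function runCommand(name: string): Promise<string> {
  if (name === "report") {
    // node:zlib's module graph is only loaded when this branch executes,
    // so every other command keeps a fast startup.
    const { gzipSync } = await import("node:zlib")
    return `report: ${gzipSync("hello").length} bytes`
  }
  return "fast path: no heavy imports loaded"
}
```

The same pattern applies to any subcommand-heavy CLI: keep the top of the entry file free of static imports of large packages, and pull them in per command.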

]]>
<![CDATA[When building a Command-line interface (CLI) with Javascript and ]]>
An explicit build graph with Nx for a better DX https://pepicrft.me/blog/2022/12/13/an-explicit-build-graph-for-a-better-dx 2022-12-13T00:00:00+00:00 2022-12-13T00:00:00+00:00 <![CDATA[

When I started doing Javascript more actively, something that got my attention was the absence of a build system that allowed declaring a build graph explicitly. Most of the time, developers combine package.json scripts with shell operators:

{
    "scripts": {
        "clean": "rimraf dist/",
        "build": "pnpm clean && tsc"
    }
}

The above setup is standard across Javascript projects: a task for cleaning the output directory, clean, and another, build, that outputs Javascript from the Typescript source code and depends on clean. Because package.json’s scripts section doesn’t have a notion of dependencies, developers have to resort to workarounds like using &&: do clean, and if it succeeds, build.

Defining dependencies that way works if the project is small. However, if a project gets larger and automation more complex, the dependency graph will inevitably be hard to reason about, hindering contribution to the project.

Luckily, when I started working on GestaltJS, I came across the Nx build system, which solves the above problem beautifully. Their approach was not new to me, since the problem they are solving, and how they are solving it, resembles what I had to do for Xcode projects in Tuist: you have a bunch of interdependent modules that a developer can interact with (e.g., test, build, lint), and you want that information codified in a graph that can be leveraged to introduce optimizations.
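
As a sketch of what the explicit graph looks like, the clean/build relationship from the package.json above could be declared in an Nx project.json along these lines (the project name is made up, and the ^build convention means "build my dependencies first"; check the Nx docs for your version’s exact schema):

```json
{
  "name": "core",
  "targets": {
    "clean": {
      "command": "rimraf dist/"
    },
    "build": {
      "command": "tsc",
      "dependsOn": ["clean", "^build"]
    }
  }
}
```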

Some of the minds behind Nx are ex-Googlers who had the chance to work on Google’s build system, Bazel. Unlike Bazel, which has a steep learning curve and requires learning a Python-like syntax, Nx is extremely easy to get started with and add to an existing project. The developer experience (DX) is top-notch.

Once the tasks and the dependencies are declared, you have access to a whole set of tools that can save you a lot of time. One of them is incremental builds and being able to run tasks only for the affected packages. How cool is that? They can also cache task outputs across local and remote builds. It reminds me so much of the work we had to do to bring caching to Tuist. You also get a graph command to visualize the automation dependency graph. So no more navigating across package.json files to understand how tasks are interconnected.

I’d recommend giving the tool a shot if you have a workspace. Shopify’s CLI has also had Nx set up since the project was created, and it’s been a wonderful experience so far. We haven’t had any issues with the tool, and it keeps improving with every release.

Nx is one of my references when devising iterations for the Shopify CLI.

]]>
<![CDATA[When I started doing Javascript more actively, something that got my attention was the absence of a build system that allowed declaring a build graph explicitly. Most of the time, developers combin... ]]>
Hot-reloading and ESM https://pepicrft.me/blog/2022/12/01/hot-reloading-esm 2022-12-01T00:00:00+00:00 2022-12-01T00:00:00+00:00 <![CDATA[

While building Gestalt, I realized that many web frameworks don’t move away from CommonJS because their usage of modules in ESM would lead to a slower hot-reloading experience. This is primarily due to how module graphs are loaded with ESM: the entire graph needs to be fully loaded before the code starts executing. Imagine how slow hot-reloading would be if that had to happen every time we changed a file. A solution is to use dynamic imports in development and static ones in production, which can be achieved through an environment-aware build process.

// Static ESM imports
import { loadRoutes } from "gestaltjs/routes"

// Dynamic ESM imports
async function doSomething() {
  const { loadRoutes } = await import("gestaltjs/routes")
}
]]>
<![CDATA[While building Gestalt, I realized that many web frameworks don’t move away from CommonJS because the... ]]>
Growing as a Staff Developer https://pepicrft.me/blog/2022/11/01/growing-as-a-staff 2022-11-01T00:00:00+00:00 2022-11-01T00:00:00+00:00 <![CDATA[

A couple of months ago, I reached Shopify’s Senior Staff Developer level. It was exciting news and excellent proof that Shopify continues to be a place for growth. Yet it pushed me out of my comfort zone, throwing me into a new realm of responsibilities.

Settling into the new role is taking me some time. One reason is that I’m going through a process of accepting the change. My new role is less about being hands-on with coding solutions and more about the business. It’s about reading, writing, talking, and socializing change in the organization. It’s familiar, but I’m not as fluent and comfortable as I wish. Being in front of people and presenting an idea still feels terrifying.

Excellent communication skills are essential for this role, and mine have a lot of room for improvement. Therefore, I started working on them. I’m writing more regularly now and reading more slowly. While I do, I pay attention to how ideas are structured and connected. As someone with ADHD, focusing on a lengthy piece of text feels painful. I also scheduled regular chats with people from other teams and with leaders. The aim of these conversations is to have a more holistic view of the organization and its direction, which is useful for spotting opportunities to best support the organization.

There are so many unknowns ahead of me, but that’s part of playing the infinite game of the software craft. This is an excellent opportunity to grow professionally and personally, and I’m fortunate to do it surrounded by many talented people I can learn from.

]]>
<![CDATA[A couple of months ago, I reached Shopify‘s Senior Staff Developer level. They were exciting news and excellent proof that Shopify continues to be a place for grow... ]]>
Typing file-system paths in Typescript https://pepicrft.me/blog/2022/09/16/typing-paths-in-typescript 2022-09-16T00:00:00+00:00 2022-09-16T00:00:00+00:00 <![CDATA[

Have you ever noticed how common it is for standard libraries to treat file-system paths as strings? In fact, Node’s path module exports a handful of convenient functions, all of which expect string arguments. There are a few caveats to that approach. The first and most prominent one is that developers naturally operate on paths as if they were strings, which often leads to bugs. For example, concatenating a string that represents a relative path (e.g., index.ts) to an absolute one (e.g., /project/src) yields /project/srcindex.ts, which is wrong. These issues don’t happen if we use the functions provided by the node:path module, but the values are still plain strings, so what prevents developers from treating them as such?
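
To make the concatenation pitfall concrete, here’s a runnable contrast, using path.posix so the output is stable across platforms:

```typescript
import * as path from "node:path"

// Naive string concatenation silently produces a broken path...
const broken = "/project/src" + "index.ts" // "/project/srcindex.ts"

// ...while node:path inserts the separator. Both results are still
// plain strings, though, which is the core problem described here.
const joined = path.posix.join("/project/src", "index.ts") // "/project/src/index.ts"
```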

The second issue is that APIs that take paths as arguments or return them are not explicit enough about the kind of path they expect. This is somewhat solvable with documentation, but wouldn’t it be better if the compiler, Typescript, were the one guiding you to use the APIs correctly? For example, by making compilation fail if you pass a relative path when the API expects an absolute one.

And last but not least, maintaining a piece of business logic that handles paths as strings puts the developer in the position of having to make assumptions, and that’s always a terrible idea. Is this path here absolute? Or maybe it’s relative? If it’s relative, is it relative to the working directory? Or maybe the project’s directory? You don’t want project contributors to be asking themselves those questions. Instead, you want paths to be constructed early in your system and passed around, making it clear that code is operating with an absolute path and not a string prone to misuse.

To solve the above issues, I open-sourced a tiny NPM package, typed-file-system-path, which provides primitives for modeling absolute and relative paths and operating on them. The API is simple: you have utilities to initialize a relative or an absolute path, and they’ll throw if you initialize them with an invalid path. The primitives provide convenient functions that prevent having to import utilities from the node:path module:

import { relativePath, absolutePath } from "typed-file-system-path"

// Initialize an absolute path
// @throws InvalidAbsolutePathError if the path is not absolute.
const dirAbsolutePath = absolutePath("/path/to/dir")

// Initialize a relative path
// @throws InvalidRelativePathError if the path is not relative.
const fileRelativePath = relativePath("./tsconfig.json")
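
For the curious, this kind of compile-time guarantee can be sketched with Typescript branded types. This is a hypothetical mini-version of the idea, not the package’s actual implementation, and the leading-slash check is a POSIX-only simplification:

```typescript
// A branded string type: structurally a string at runtime, but a plain
// string won't type-check where an AbsolutePath is expected.
type AbsolutePath = string & { readonly __brand: "AbsolutePath" }

function absolutePath(value: string): AbsolutePath {
  // POSIX-only simplification of "is this path absolute?"
  if (!value.startsWith("/")) {
    throw new Error(`${value} is not an absolute path`)
  }
  return value as AbsolutePath
}

// An API that only accepts absolute paths: passing a raw string is a
// compile-time error, so the assumption is encoded in the type.
function projectRoot(root: AbsolutePath): string {
  return `project at ${root}`
}
```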

The inspiration for this project comes from Path.swift, a primitive that Apple built as part of their swift-tools-support-core and that I used extensively in Tuist. Up next is adding more convenient functions and updating Gestalt to use the AbsolutePath and RelativePath types.

]]>
<![CDATA[Learn about an NPM package that we published recently, typed-file-system-path, that adds primitives to work with file-system paths more safely using types.]]>
On learning Elixir https://pepicrft.me/blog/2022/08/25/on-learning-elixir 2022-08-25T00:00:00+00:00 2022-08-25T00:00:00+00:00 <![CDATA[

As you might have noticed, I’ve been learning Elixir for the past few weeks. Why? You might wonder. I’m a programming languages nerd. I like learning about how different languages solve the same challenges, which gives me new perspectives and ideas for solving upcoming problems.

I came across a talk about the Erlang virtual machine, BEAM, and it blew my mind. I then read about Elixir, and its similarity with Ruby instantly clicked with me. The more I read about it, the more I liked it. At the same time, I also started to feel that I’d better not spend more time with Javascript than what’s strictly necessary. The constant need to re-invent every layer of the stack, the VCs’ urge to make their way into those layers, and how uncommon it is to embrace good software practices made me highly uncomfortable and stressed. I’ve discussed this in past blog posts, so I won’t repeat myself.

One thing that fascinated me about Erlang is that it defers the need to introduce complexity into your Elixir projects. Needs for which you’d otherwise bring elements like Redis, Kubernetes, load balancers, and background-job solutions into your system are well addressed by the VM from the start. Its pattern matching is a different way of thinking, and I find the language’s approach to functional programming easy to reason about and work with. I believe in building pure functions and dealing with immutable state, but the syntax of strongly functional programming languages like Clojure hasn’t clicked with me yet. Speaking of shared mutable state: this is something that bugs me when working with Ruby or Javascript. It’s common to see state being stored and mutated at the module level. Everything works until it doesn’t, or tests become flaky because of it.

I’m considering moving this blog to Elixir using Phoenix to consolidate my learnings. I came across the nimble_publisher package that allows embedding static content, for example, blog posts in a Git repo, into the BEAM binaries that end up being deployed. I know this might sound too much for a blog, but what’s better than your blog for a bit of over-engineering? Turning my blog into a long-running process will allow me to add some interactive bits to it.

After it, I’m also considering building a tool to scratch an itch that I’ve had for quite some time. Collecting and processing personal financial information from bank accounts and investment platforms to make better and more informed decisions. As always, it’ll be open source, and if it turns out to be helpful, I might host an instance and allow other people to use it too. But first, let’s see if I can build it for myself.

Are you also learning Elixir, or are you already familiar with it? What do you like about it?

]]>
<![CDATA[As you might have noticed, I’ve been learning Elixir for the past few weeks. Why? You might wonder. I’m a programming languages nerd. I like learning about ho... ]]>
On finding passion in devising developer experiences https://pepicrft.me/blog/2022/06/27/on-finding-passion-in-devising-developer-experiences 2022-06-27T00:00:00+00:00 2022-06-27T00:00:00+00:00 <![CDATA[

What am I professionally? I don’t have a clear answer. I used to say I was an iOS developer with a passion for Swift, but that’s no longer true. Shopify turned me into a more generalist developer and, more importantly, helped me see technology as an implementation detail. I’m no longer as excited about a particular technology as I am about finding the best solution to a problem. But not any problem domain; I love the developer tooling space. It feels fantastic building developer experiences because I can scratch my own itches.

Moreover, I think I’ve developed a good sense for building great experiences through projects like Tuist, Gestalt, and a lot of inspiration from Ruby and Rails. Devising developer experiences is more of a product role, but I’m not a product designer. Should I dive into what it entails to be a product designer and apply it to the developer tooling domain?

It feels odd that product and development is a binary distinction in companies. It causes me a lot of impostor syndrome and a lack of identity. It’d be great if the gap between product and development was a spectrum, and you could grow within it too. Imagine one day wearing a technical hat because there’s a problem you want to go really deep into solving because you think it’ll positively impact DX. But the next day, you build prototypes around new workflows that you think users will love.

The reason I love open source so much is that it’s not about who I am but about what experiences I want to create. I feel highly empowered when I’m free of labels and can navigate across domains.

]]>
<![CDATA[What am I professionally? I don’t have a clear answer. I used to say I was an iOS developer with a passion for Swift, but that’s n... ]]>
Modular projects, Typescript, and developer experience https://pepicrft.me/blog/2022/06/23/modular-typescript 2022-06-23T00:00:00+00:00 2022-06-23T00:00:00+00:00 <![CDATA[

Have you tried to set up a modular Typescript project with multiple NPM packages? It’s painful. Typescript tried to solve that with project references, an API to declare the project graph so Typescript can build the packages in the correct order. Still, it makes for a terrible developer experience (DX) when editing code. It’s common, especially when the project is in a clean state (i.e., dist/ not populated with Javascript and definition files), for the language server to be unable to find interfaces. One can mitigate the issue by calling tsc to emit declaration files, but they get outdated as soon as you start changing the public interfaces across packages.

The solution we adopted in the new Shopify CLI and Gestalt is using the paths option in the tsconfig.json file. The API is intended for re-mapping imports within a particular module, but it also works for defining aliases across packages. Here is an example of how we use them in the Shopify CLI. Thanks to it, Typescript can resolve cross-package typed contracts instantly. Note that when transpiling or bundling the code of each package, you’ll need to tell the build tool to treat those imports as external dependencies and leave them as they are. Here is an example of how we do it for Rollup.
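
For reference, the paths setup looks roughly like this in a package’s tsconfig.json (the @myorg/* package name and the relative locations are made up for illustration):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@myorg/cli-kit": ["../cli-kit/src/index.ts"],
      "@myorg/cli-kit/*": ["../cli-kit/src/*"]
    }
  }
}
```

With an alias like this, the editor’s language server resolves the package straight to its Typescript sources instead of waiting for emitted declaration files.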

One caveat of the above approach is that, since the Typescript configuration is static, you can’t extract the aliasing configuration and share it across Typescript and your build tools. This means you end up with multiple sources of truth for the same information. I hope Typescript eventually allows the configuration to be more dynamic through a Javascript-based interface.

If you run into a similar use case, I hope you find the setup described in this blog post helpful. Also, if you know another strategy that works better than this one, I’d love to know about it.

Happy Typescripting!

]]>
<![CDATA[Have you tried to set up a modular Typescript project with multiple NPM packages? It’s painful. Typescript tried to solve that with ]]>
On embracing my chaos https://pepicrft.me/blog/2022/05/27/on-embracing-my-chaos 2022-05-27T00:00:00+00:00 2022-05-27T00:00:00+00:00 <![CDATA[

Over the past few years, I’ve tried and failed many times at giving my chaotic self some order — something that inevitably made me feel anxious.

I tried to organize myself using todo apps. I always used any random piece of paper that I found near me. I also tried to file, categorize, and prioritize issues on a GitHub repository. Still, I ended up resorting to a .gitignored TODO.md document. My note-taking apps are a mess. Thinking about how to label and organize my notes is an unnecessary mental burden for me. When something pops in my head, I want to jot it down and move on.

A caveat to my chaos is that it’s not very compatible with collaborating with other people, for example, at work or in open source. Some level of structure is necessary for coordination to happen. Because of that, I sometimes pause, reflect, and give my chaos some structure so I can work with others toward a common goal. It doesn’t come naturally to me, but I don’t know of a better way. For example, I write up project roadmaps and visions, capture ideas or bugs in GitHub issues, or do brain dumps in the shape of blog posts. I did a lot of that when I maintained Tuist, and it had a positive effect on the community we were able to build around the tool.

I embraced chaos as one of my traits to mitigate the bits of anxiety that structuring it brought me. Moreover, I adopted a tool, Logseq, that allows me to capture the chaos in a raw state and defer giving it shape until later. I usually do the latter if it’s essential to be able to get back to it in the future or if it’s something I plan to share with others.

]]>
<![CDATA[Over the past few years, I’ve tried and failed many times at giving my chaotic self some order — something that inevitably made me feel anxious. I tried to organize myself ... ]]>
Mitigating 'delete node_modules' https://pepicrft.me/blog/2022/05/06/mitigating-delete-node-modules 2022-05-06T00:00:00+00:00 2022-05-06T00:00:00+00:00 <![CDATA[

If you’ve worked in the Javascript ecosystem, you might already be familiar with the “delete node_modules” solution commonly suggested on StackOverflow and GitHub Issues. People make fun of it, but it’s a frustrating scenario that ruins the developer’s experience using a tool or a framework.

After immersing myself in the Javascript ecosystem as part of my work at Shopify and on Gestalt, I understood better what leads to this scenario. It’s the combination of an ecosystem that favors many small packages over fewer but larger ones and a reliance on humans to follow semantic versioning in their packages. When one out of a thousand packages introduces breaking changes that aren’t reflected in its version, the contract with other nodes in the graph breaks, and the broken contract often surfaces as a broken workflow for the user.

A solution to this problem would come down to having smaller dependency graphs and more acceptance tests in packages, which can surface breaking changes on CI, but that’s unfortunately not an overnight change considering the size of the ecosystem. Because of that, in Gestalt, we decided to defend ourselves against sources of non-determinism like that one. We did so by leveraging the bundling process through Rollup. I know this sounds weird if you’re used to using bundlers to optimize the artifact that’s served to the user, but believe me, it plays a crucial role in improving the experience for Gestalt users.

Our packages declare their external dependencies as devDependencies, with their versions pinned through Gestalt’s lock file. They are tree-shaken and bundled into each package’s bundle, and that’s the bundle we use for running our e2e tests and the one users install from NPM. If the bundle passes our e2e tests, it’ll work as expected on the user side. We make exceptions for mature dependencies with solid test suites because we have higher trust in their use of semantic versioning.

We’ve been using Rollup for bundling, and we couldn’t be happier with it. It also helps transform CJS dependencies that we can’t interoperate with because they don’t follow the Node conventions. Here’s an example of the configuration used for bundling the @gestaltjs/core package.
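
A minimal sketch of such a configuration is shown below. The plugin names are the standard Rollup ones; the exact Gestalt setup may differ:

```javascript
// rollup.config.js — a minimal sketch, not the exact Gestalt configuration.
// External dependencies are resolved from node_modules, converted from CJS
// when needed, tree-shaken, and inlined so the published package is
// self-contained.
import resolve from "@rollup/plugin-node-resolve";
import commonjs from "@rollup/plugin-commonjs";
import typescript from "@rollup/plugin-typescript";

export default {
  input: "src/index.ts",
  output: [{ file: "dist/index.js", format: "esm" }],
  plugins: [
    resolve({ preferBuiltins: true }), // resolve deps from node_modules
    commonjs(), // interoperate with CJS-only packages
    typescript(), // strip Typescript type annotations
  ],
  // Only Node built-ins stay external; everything else gets bundled.
  external: ["fs", "path", "url"],
};
```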

Every tiny detail can have a significant impact on the developer experience. Therefore, we can’t shrug off “delete node_modules” as “it is what it is” when there are strategies we can adopt to minimize it.

]]>
<![CDATA[If you’ve worked in the Javascript ecosystem, you might already be familiar with the “delete node_modules” solution commonly suggested on  StackOverflow... ]]>
But they are developers too https://pepicrft.me/blog/2022/05/05/they-are-developers-too 2022-05-05T00:00:00+00:00 2022-05-05T00:00:00+00:00 <![CDATA[

I often hear a statement used to justify decisions when building developer tools: but they are developers too. It bugs me a ton because it throws all developers into the same bag and assumes that they know what you know.

If we want to build great developer tools, we need to start by acknowledging that the developer community is diverse in terms of backgrounds, skills, and levels. Our lazy side would prefer a single persona because we could design the tools using ourselves as a reference, and without researching much. But fortunately, the world is heterogeneous.

Acknowledging diversity is necessary but not sufficient. We need to have the empathy to connect with the various personas that will use the tools. Here are some examples:

  • A developer who doesn’t want to make many decisions.
  • A developer who has strong opinions and wants to customize their setup.
  • A developer who is new to programming and is getting familiar with some concepts.

Once connected with those profiles we are building for, we can design a tool that either infers the experience based on the identified persona or provides an interface to indicate it. Alternatively, you can guide everyone through the same initial experience and give them opportunities to diverge and design their own journey. We can also embrace DHH’s conceptual compression idea to make the experience feel like you are peeling layers of concepts as needed.

If you are building developer tools, remember: not everyone is like you. Embrace diversity and build for it.

]]>
<![CDATA[I often hear a statement when justifying decisions in building developer tools: but they are developers too. It bugs me a ton because it throws all the developers into the same bag and assumes that... ]]>
CLIs are products too https://pepicrft.me/blog/2022/05/04/clis-product 2022-05-04T00:00:00+00:00 2022-05-04T00:00:00+00:00 <![CDATA[

Over the years of working on command-line interface tools, I’ve observed that they are often not perceived as products. Consequently, organizations don’t embrace the same principles as in UI-oriented products, which leads to complex tools designed by developers for developers. The few projects that adopt a product mindset make a difference.

A manifestation of the above is seeing ideas that are not ported over to CLIs. For example, design systems have been popularized across UI-oriented products. They play a crucial role in ensuring a consistent experience across features and products. The need for design systems grew organically when large organizations realized that it was becoming impossible to collaborate without a common foundation of blocks and principles. That impossibility manifested as inconsistently styled experiences, which are tightly connected with the user experience (UX). Design systems can play a similar role in CLIs, but few organizations spend the time devising and laying out a foundation to build upon. The ideas are the same, but they map to a different set of building blocks in the domain of terminal interfaces.

Adopting a product mindset requires looking at CLI commands as UI. Limited compared to browser-based products, but UI nonetheless, one that developers experience. A command is a dialog between a person and a different domain through a terminal. Unless we give it the attention to detail it deserves, we might end up designing conversations that feel like the person on the other side of the screen is a robot. They might serve their purpose, but they won’t be as enjoyable as if it felt like talking to another human.

Pay attention to how you name commands, arguments, and flags. A terminal is limiting, but remember, constraints foster creativity. An intent well captured by the command’s name, with flags acting as the complements of a sentence, can yield a very expressive interface.

When you send output through the standard streams, it resembles receiving a response in a conversation. Be clear and direct, and when doing something that takes time, make sure the person knows about it. If you couldn’t do what the user asked the CLI to do, tell them why, and provide the next steps that they can take to overcome the issue. Errors are also often disregarded in CLIs, primarily due to, in my experience, developers’ laziness. It’s quicker to just throw than to get to the root of the error to fix it or provide a better error experience.
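
As an illustration of that last point, here’s a minimal sketch (the class and its shape are hypothetical, not any particular CLI’s API) of an error type that carries the “why” and the “what next” alongside the message:

```javascript
// A hypothetical error type for a CLI: besides the message, it carries
// the cause and the next steps the user can take to recover.
class AbortError extends Error {
  constructor(message, { cause, nextSteps = [] } = {}) {
    super(message);
    this.name = "AbortError";
    this.cause = cause; // why it happened
    this.nextSteps = nextSteps; // what the user can do about it
  }

  // Render a friendly block for stderr instead of a raw stack trace.
  format() {
    const lines = [`✖ ${this.message}`];
    if (this.cause) lines.push(`  Why: ${this.cause}`);
    for (const step of this.nextSteps) lines.push(`  Try: ${step}`);
    return lines.join("\n");
  }
}
```

At the top level of the CLI, a single catch block can then print the formatted error to stderr and exit with a non-zero code, so every failure surfaces with the same structure.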

If you build a CLI, remember: the how is as important as the what. The what makes a command satisfy the developer’s intent. The how is what can make the experience enjoyable. Smartphones had been designed before Apple introduced the iPhone. They all served a similar purpose, but one paid attention to the how and created an experience that made the product a successful piece of technology. Resist the urge to be lazy and to not consider scenarios other than the happy path. You’ll move slower, but remember, this is a long-term investment that pays off, and your users will be thankful for it.

]]>
<![CDATA[Over the years of working on command-line interface tools I observed that they are not often perceived as products. Consequently, ... ]]>
Javascript, ESM, and tools https://pepicrft.me/blog/2022/05/04/javascript-esm-and-tools 2022-05-04T00:00:00+00:00 2022-05-04T00:00:00+00:00 <![CDATA[

I’m using Javascript, Typescript, and Node a lot these days as part of my work at Shopify and on Gestalt, and I’m really loving it. In particular, its module system, because it allows extensibility in ways that would be more challenging with compiled languages or with interpreted languages like Ruby that have a shared namespace the code is loaded into. Vite is an excellent example.

It hints that there might be a future where additional tooling, and the indirection that comes with it, is not necessary, but if you dig into that idea a bit, you realize it won’t be possible, at least in the years to come. Tooling will remain necessary to polyfill code to adapt it to various runtimes (e.g. Deno). It’ll also be needed for UI frameworks that have built their template solutions upon Javascript (e.g. JSX), to remove type annotations from Typescript code, and to accommodate NPM packages that still follow CommonJS conventions so they interoperate with ESM.

This is not unique to the Javascript ecosystem. Compilers also transform and optimize code into binaries to be able to run it on the target platform. When building for Apple platforms, there’s an artifact akin to sourcemaps, dSYMs, to be able to link stack traces with the source code. The difference is that other ecosystems make tooling a core element of the programming language, and that allows a more integrated experience. Achieving a similar level of cohesiveness in the Javascript world feels like juggling. You can integrate various tools under a framework to achieve a well-integrated experience, but you end up with a brittle setup that falls apart easily. This explains the well-known “delete node_modules” and install dependencies again in the hope that the package manager will restore the state.

Despite how much I’d love to see broader adoption of ES modules in the NPM ecosystem, and more conventions and standards pushed down to the Javascript foundation, I doubt that’ll happen in the near future, and this influences how we build Gestalt. We’ll do what’s in our hands to minimize tooling indirection manifesting as bugs or broken project setups. Rails is better positioned there thanks to Ruby, but we can leverage some Javascript capabilities to approximate the Rails experience. For example, we can provide a foundational set of utilities that are automatically polyfilled by the framework depending on the deployment targets. This is something that can’t be done easily if frameworks encourage projects to import Node APIs directly.

Instead of dreaming of a future where tools are not necessary, we are embracing tooling and leveraging it to provide Gestalt users with a reliable and integrated developer experience. I think Gestalt will blow your mind like not many frameworks have been able to do.

]]>
<![CDATA[I’m using Javascript, Typescript, and Node a lot these days as part of my work at Shopify and Gestalt and I’m really loving i... ]]>
Users don't care about your web app's portable binary https://pepicrft.me/blog/2022/03/22/users-dont-give-a-shit 2022-03-22T00:00:00+00:00 2022-03-22T00:00:00+00:00 <![CDATA[

We, software crafters, naturally tend to distance ourselves from users, led by excitement for technological cycles and innovation. Our industry is full of examples. For instance, the crypto trend is an excellent example. No one can deny that there’s innovation in blockchain. Yet it’s not solving people’s problems. It’s doing more harm than good, but this is a topic for another blog post. The distancing that got my attention recently is the idea of building a web app as a portable binary. Go started the trend with its ability to inline resources in the output binary. Deno has jumped on it with its deno compile command, which many web developers are getting excited about.

Distributing software as a portable binary is an excellent idea for CLIs because you don’t want to require users to install additional dependencies in their environment. But in the context of web apps, users don’t give a shit about your web app being a binary. They care about the app’s usability, reliability, and performance more than anything else. How an app runs on a server is an implementation detail. In pre-Kubernetes and pre-Docker times, there’d have been a strong argument in favor of binaries to ease deployments. But times have changed, and that’s not a problem anymore. Platforms like Heroku or Fly can infer how to deploy your app from the project itself.

I’m writing this down and sharing it to remind myself, and others who might come across the same argument, that we ultimately build for users. If you get excited about a technological cycle or innovation, ask yourself whether you are playing with a new technology or building for users. And in the case of the latter, wonder whether users care about the technological decision you are pondering. Shopify was criticized back when they chose React Native for their mobile apps, but it was a decision that has had a positive impact on users’ experience, which is what really matters.

]]>
<![CDATA[We, software crafters, naturally tend to distance ourselves from users led by excitement for technological cycles and innovation. Our industry is full of examples. For instance, th... ]]>
OSS and extrinsic motivators https://pepicrft.me/blog/2022/03/12/oss-and-extrinsic-motivators 2022-03-12T00:00:00+00:00 2022-03-12T00:00:00+00:00 <![CDATA[

More and more, we see open-source projects being backed by investment rounds. It’s positive for the projects because they can innovate faster and sustain themselves by paying people to work on them full-time, but it makes money one of the main drivers and investors’ interests the wheel that steers the boat. Is that something good or bad? It depends.

When the motivations of the people contributing to a project are extrinsic, the chances are that when the money is gone, so are the motivations to contribute to the project. When the project is more community-driven, intrinsic motivations take precedence, which helps the project sustain itself long-term. Note that community involvement and governance are not the same, even though they are often used interchangeably.

There’s a high correlation between projects that use money as an extrinsic motivator and the amount of marketing effort poured into them. They usually have marketing copy along the lines of “we are making the web faster” that resembles the missions of Silicon Valley companies. I refrain from building anything upon those tools and frameworks, no matter how good their marketing is. When I build software, I want it to sustain itself in time, and to achieve that, it’s important that the blocks I build it upon can sustain themselves too.

Note that there’ll always be economic interests when developing open-source projects. Companies contribute to open-source projects because they benefit their business. However, money here is a secondary player. The community has a more substantial role in steering the project than the companies that support it through contributions. An excellent example of this is Rails. A company like Shopify has teams dedicated to contributing to Ruby and Rails. Shopify has interests, but it can’t drive the framework in a direction that only benefits Shopify.

I think this is what’s beautiful about the Ruby community, and more recently about Javascript through projects like Vue, Vite, Rollup, and Svelte. You can sense a community behind them that connects all the different pieces in harmony. This is the type of OSS that aligns with my principles and upon which I build the software that I craft.

]]>
<![CDATA[More and more, we see open-source projects being backed by investment rounds. It’s positive for the projects because they can innovate faster and sustain themselves by paying peopl... ]]>
Platform-dependent CLIs https://pepicrft.me/blog/2021/12/14/platform-dependent-clis 2021-12-14T00:00:00+00:00 2021-12-14T00:00:00+00:00 <![CDATA[

I’m a firm believer that shaping products as developer platforms is an amazing idea to let developers from all over the world make your product diverse. Otherwise, you have products like Facebook’s and Apple’s that work great in California but conflict with the rest of the world, and what’s worse, end up imposing a model that often becomes the source of serious problems. For example, the idea that people need to be connected and feel part of a community (the what) is accurate, but doing it through addictive technology (the how) has led to terrible consequences for people’s mental health. Instead, they could have provided building blocks to model social interactions and communities and let developers use them to model how communities and social interactions work in their countries. Wouldn’t that have been awesome?

Shopify is a platform. They acknowledged that the definition of e-commerce changes across countries. They can’t build as many versions of e-commerce as there are countries in the world. But they can build the LEGO pieces and the core business logic and provide APIs for developers to build upon. The more I contribute to this platform from the inside, the more I realize how brilliant the idea is. It seems a simple idea, but I’m sure Tobi has put a lot of thought into it.

I have the opportunity to work on the CLI that first and third-party developers use to build and deploy to the platform. As part of this work, one of the challenges we came across is breaking or abstracting away the dependency that projects have with the platform. If you’ve used CLIs to build for other platforms, you might have noticed that most of them don’t require the platform in the development phase. If you create an iOS app, you only need the platform when you upload the app. Until then, you can use Xcode and the simulators to develop and test the app locally. If you create a web app with a framework like NextJS, you can run the app locally and only interact with Vercel when you need to deploy the app. There’s only a moment in time when the user has to navigate from the CLI and the local environment to the server-side platform.

But that’s not the case at Shopify. Theme development depends on the production storefront renderer to preview the themes during development. The same is true for Extensions. An extension only makes sense when it’s loaded within the context of the Shopify platform. For example, if the extension represents a checkout extension, you want to see it loaded in an actual checkout flow.

During development, the dependency on the platform makes it extremely challenging to provide a great DX. It’s not impossible, but it’s something I’ve been thinking about ever since I came to that realization. I don’t have solutions yet, only ideas for improving the DX. For example, imagine Shopify providing a server-side Storybook-like functionality that acts as a disposable sandbox environment. The dependency on the platform remains, but there’s a platform-side dedicated tool to ensure the preview experience is the best. Storybook’s stories concept would map so nicely. For example, we could provide different checkout scenarios that you can easily switch between without going through the checkout process.

Another approach could be bringing platform functionality down to the client to simulate it locally. However, that requires first designing those pieces to be modular enough that the preview piece can be pulled locally, and second, writing them in a compiled language such that the implementation details of the business domain stay hidden. If you are familiar with iOS development, it’d be the same as providing a simulator for the Shopify platform.

Improving the current state would shorten the development cycles, and developers would have more focus and motivation when building for the platform. I remember my days of iOS development when I could not preview my changes because of signing issues. It felt so frustrating that I don’t want Shopify developers to feel the same way.

]]>
<![CDATA[I’m a firm believer that shaping products as developer platforms is an amazing idea to let developers from all over the world make your product diverse. Otherwise, you have product... ]]>
On evolving opinions https://pepicrft.me/blog/2021/12/08/on-evolving-opinions 2021-12-08T00:00:00+00:00 2021-12-08T00:00:00+00:00 <![CDATA[

I recently came across a tweet suggesting that I take down my blog post on Web3 after I shared some bad things I’d uncovered about the technology. I couldn’t understand why a take-down and not a follow-up. I see opinions as living entities that evolve and change. The initial opinion I had when I started reading about Web3 evolved as I dove into communities and solutions built upon it. I went from having a high level of excitement with a vaguely-defined mental model to losing a bit of excitement as I refined the model. I find the journey of evolving thrilling.

However, that doesn’t seem to align with what some people expect: polarized and somewhat religious opinions. They expect you to either be a React Native lover or hater, a Web2 believer or a Web3 geek, a monolith or a microservices advocate. Liking Web3 but disliking parts of it is inconceivable for them. You either love it or hate it.

Because I don’t want to be biased by my own static opinions, I avoid engaging in these discussions and keep an open mindset. Moreover, since I value openness and enjoy sharing my learnings and opinions, that sometimes means sharing seemingly contradictory opinions at different points in time. And that’s completely fine.

My opinions about Javascript development have been a roller coaster. There are things I like about it, for example the beautiful abstractions developers are building with it, and things that I hate, like the convoluted setups and dependency graphs you find in some projects. Similarly, I gave up a while ago on being religious about Swift. That gave me a unique perspective on where the programming language shines, what its limitations are, and how Apple’s interests drive the direction of the project.

I’m currently forming opinions around the nature of technologies and the tools they use. Do I want to continue using and supporting open-source technologies with business interests baked into them? Is my relationship with open-source sustainable long-term? Do I see myself doing the same thing for the next 10 years? 20 years?

My days are filled with questions, and the answers to those lead to opinions and more questions. Expect my opinions to change. Expect me to share those opinions, and also expect them to move along a spectrum. “The only constant in life is change,” as they say.

]]>
<![CDATA[I recently came across a tweet that suggested me to undo my blog post on Web3 after sharing some bad things I’d uncovered in the technology. I couldn’t unde... ]]>
Migrated to SvelteKit https://pepicrft.me/blog/2021/12/07/migrated-to-sveltekit 2021-12-07T00:00:00+00:00 2021-12-07T00:00:00+00:00 <![CDATA[

I migrated this blog to SvelteKit. I did it to consolidate everything I had learned about the framework, and be able to SSR static pages with dynamic content. For example, I’ll be able to collect data from external sources like GitHub and include it in the about page.

I ported over the same boring design I had on the previous Jekyll-based website. I love it because it reminds us of the origins of the web as a tool to share documents. The focus is on the content and not so much on the container.

Expect some follow-up blog posts from me talking about the things that I like and don’t like about Svelte and SvelteKit, and how it compares to other frameworks like Vue and React.

]]>
<![CDATA[I migrated this blog to SvelteKit. I did it to consolidate everything I had learned about the framework, and be able to SSR static pages with dynamic content. ... ]]>
Adapting to a platform https://pepicrft.me/blog/2021/11/08/adapting-to-the-platform 2021-11-08T00:00:00+00:00 2021-11-08T00:00:00+00:00 <![CDATA[

In a simplistic way, we can see web frameworks as convenient functions that take your app as input and return deployable artifacts. GatsbyJS generates static HTML, CSS, and Javascript that platforms like Netlify know how to deploy. Rails generates static assets and provides an entry point to run a process on a platform like Heroku. Note that the artifacts need to be adapted to the deployment target. Heroku does it through buildpacks and Procfiles that instruct the platform on building and running a server. Netlify achieves the same through a configuration file where developers describe how to build and deploy their websites. Traditionally, the adaptation process has fallen either on the platform’s side or on the developers’ side (e.g. through a CI deployment pipeline).

What if frameworks had adaptation as a built-in primitive? That’s what SvelteKit provides through Adapters: an API for third-party developers to define how to adapt a SvelteKit app to different hosting providers. For example, the netlify-adapter adapts the output to Netlify and does things like turning endpoints and SSR pages into functions that run on-demand. Because the framework allows SSR, CSR, and static rendering, a single SvelteKit project can contain the web app, the documentation website, and the marketing and landing pages. Cool, isn’t it? Adapters decouple the deployment platform from the framework to prevent vendor lock-in.
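
For reference, wiring an adapter is roughly a one-liner in the project configuration. This is a minimal sketch assuming the @sveltejs/adapter-netlify package; check the SvelteKit documentation for the exact options:

```javascript
// svelte.config.js — a minimal sketch of plugging in an adapter.
import adapter from "@sveltejs/adapter-netlify";

export default {
  kit: {
    // The adapter runs at build time and turns endpoints and SSR pages
    // into Netlify functions; switching providers means swapping this line.
    adapter: adapter(),
  },
};
```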

I’m still new to the framework, but I think its concepts are powerful, and adapters are an excellent example of that.

]]>
<![CDATA[In a simplistic way, we can see web frameworks as convenient functions that take your app as input and return deployable artifacts. GatsbyJS... ]]>
I want to be rich https://pepicrft.me/blog/2021/10/25/i-want-to-be-rich 2021-10-25T00:00:00+00:00 2021-10-25T00:00:00+00:00 <![CDATA[

It’s been a while since I started reading more about personal finances and investments. My primary motivation was to escape the tempting treadmill of scaling up costs as the income increases. It’s never too late to learn about it, but the earlier, the better, so I’m glad it clicked in my head when I was 28.

As a software engineer, a fortunately well-paid job, it’s easy to fall into the treadmill trap. You earn above average (the prompt), immersed in a culture of planned obsolescence (the temptation). You want to have the latest iPhone, a new MacBook, a large TV, an Apple Watch… We become friends with Amazon and spend without thinking about our financial future.

The first thing that I learned is that it’s important to have insights into your cash flow for two reasons. First, you can see whether you are spending too much money on things that bring no value. Second, you can see your family’s net worth and know how much money you can invest.

This leads to the following question: what’s my investment strategy? We first invested in a property in Berlin. That’s what people do in Spain, so we had a strong bias towards that move. Was it the smartest move to start with? I don’t think so. Was it a good idea in hindsight? I think it was, looking at the market and considering the interest rate of the mortgage. Then I read further and learned that it’s wiser to defer that type of investment and use the liquid money to increase the net worth through other investments.

The next move (and a serious mistake) was seeking financial advice. We came across DVAG, a German corporation whose goal is to sell insurance. We naively fell into their trap of thinking that we needed all the insurance products they offered us. The person who sold us those products was a Spaniard living in Berlin, so her selling strategy was to “teach” expats how Germans do it. We got the following for my wife and me:

  • Riester and Rürup pension plans
  • Income protection insurance
  • Legal insurance
  • A Bausparvertrag for our mortgage
  • An investment product (that invests in funds)
  • Private health insurance for me

In hindsight, we were too naive. Planning to move to Spain allowed us to see everything from a different angle:

  • It is a company structured like a pyramid scheme: each advisor earns commissions per product sold and from the products sold by the people under them.
  • We were sold products (the pensions) that were not designed for our profiles. They asked how much we earned and filled the costs as much as they could, ensuring there was a tiny bit of space for breathing.
  • They never shared the costs with us, and as you can imagine, they were pretty high (over 2% annually).

We canceled everything except the Rürup, which cannot be canceled, only paused. We lost some money, but it was nothing compared to what we’d have missed out on if we had stayed with them. This was one positive thing about considering moving back to Spain. When I told our advisor I was canceling everything, her answer was: don’t expect the same service from me anymore. Of course, I later discovered that Generali would make her return all the commissions she got for bringing me in as a customer. Funny thing: she insisted a few times on getting my German deregistration confirmation (i.e. Abmeldung) to prove that I did it because I was leaving the country. If you ever come across this company, DVAG, watch out.

I learned a few lessons out of that experience, but the most important lesson is that no one will manage your finances better than you. Or in other words, investing money comes with the responsibility of learning how to invest it.

After that, I continued reading further. I started investing in ETFs (a mix of accumulating and distributing funds to take advantage of some tax benefits in Germany), individual stocks from companies that pay dividends, and alternative investments such as microloans, whisky, and cryptocurrencies. As part of this effort, I created an additional spreadsheet to keep track of them. In particular, I’m interested in the diversification scheme and how the investments are performing over time. If you are interested, the following resources have been handy for me: Index Fund (European) Investor, El Club de Inversión (Spanish), and Banker on Wheels.

Alright, Pedro, you save money, which you decide to invest, but what’s your goal? My goal is to be rich. But not the definition of rich we are all used to. The definition by Robert Kiyosaki:

If you stopped working, how long could you survive?

I want to work less or, even better, stop working and not have to worry about money anymore. Right now, I have two dependencies: my employer, who is my primary source of income, and the future public pension from the government, which we all know is getting harder to sustain. Unless I do something, the default is to continue working as much as I do until I retire. But there’s a better alternative, which I learned from the book “The Cashflow Quadrant”: I can be a mix of a business owner and an investor and leverage people and money to make money. Achieving that independence is also possible while being employed, but it often leads to working harder, which is not healthy. It’s not a matter of working harder, but of being financially more intelligent.

I’ve always been in the employed quadrant and worked hard, which has allowed me to grow a lot professionally. But I feel I’m approaching an inflection point, and I’ll have to take a leap. I might start using my spare time more wisely and gear it towards being the owner of my own business in the future. I think I have some necessary traits to get there, but there are some other areas I still need to work on.

If you’ve followed me for a while, you might have noticed a shift in my relationship with open-source work and the content on this blog. If this intrigues you and you would like to chat about it further, don’t be a stranger; send me an email. These topics are often taboo, but I’m open to talking about them.

]]>
<![CDATA[]]>
The React chain https://pepicrft.me/blog/2021/10/13/the-react-chain 2021-10-13T00:00:00+00:00 2021-10-13T00:00:00+00:00 <![CDATA[

I’ve been thinking a lot lately about the role React plays when building a web app. Companies like GitHub and Shopify, both very successful software companies, introduced React recently in areas where it makes sense. This led me to the question: Is React and everything that comes with it (e.g., abstractions, tools, libraries) an influential piece in generating value for users?

There are great things about the React stack. You can more easily unit-test the business logic of your frontend, and share and use components that atomically encapsulate structure (HTML), behavior (JS), and style (CSS). Moreover, you have access to beautiful abstractions to do theme-based styling and can even leverage the Typescript compiler to validate your styling object. React turns building a web app into a LEGO game where many blocks are already provided by the community.

However, with React, projects pull in a chain of drawbacks that wouldn’t exist if we hadn’t added React in the first place. The first of them is needing an API. Sure, if you plan to have more clients in the future, an API is a must. But what if that’s not the plan, or it’s far ahead? You end up optimizing for a future that might never come.

In many cases, we end up going down the path of GraphQL because libraries make it so convenient that we think we need it, not realizing that GraphQL was designed for a problem we are far from having. And as you probably know, with an API we introduce a new set of problems, because we have two sources of truth for the data. Many Javascript libraries try to abstract that away through caching strategies. Some projects decide to go down the path of modeling their state with yet another dependency, Redux, which ends up spreading like a virus and bringing more complexity to the frontend domain.

At this point, one might argue that it’s possible to solve that by doing server-side rendering (SSR). True, but the moment you hydrate the app on the client, you want the routing experience to be on the client, leading to components having to fetch data through the API. We can’t move away from it. React and, more generally, SPAs force you to have two sources of truth for your data. And I forgot to mention that SSR requires your React libraries to be compatible with it, limiting the options in the exciting pool of component libraries one has access to.

Furthermore, React means JSX, and JSX means additional tooling and process changes. Babel or any other transpiler needs plugins to transform the JSX into valid Javascript syntax, and some CSS-in-JS libraries might couple themselves to the underlying tooling through macros. Because the Javascript that you write is not the one that gets executed, and it’s just a declaration that is then loaded into a virtual DOM and persisted to the document, debugging requires additional browser extensions. Don’t get me wrong: having to install tools is not a bad thing, but having to mess around with them when starting a project takes the focus away from the important thing, generating value through technology.

We should not forget that we can make our styling themeable by simply leveraging CSS building blocks, generate HTML server-side without a framework like React and the chain of tools that come with it, and that small touches of Javascript are sufficient to add some interactivity where it makes sense.

When does it make sense to follow the React approach then? I think it makes sense when the app will be very interactive, for example, if you are building a tool like Figma. It also makes sense if the value the abstraction brings outweighs the cost of maintaining and evolving a most likely convoluted set of Javascript libraries and NodeJS tools. You can also take the path of letting a framework do that for you, which is what RedwoodJS and NextJS are betting on. And it makes sense if supporting multiple clients is core to the product, where developing an API and reusing mental models align closely with the product direction.

It does not make sense if your sole focus is to generate value through a web app. Start a project with Rails or a similar framework and focus on the product, not the tools and the technology around it.

My sentiment with React and Javascript development is that it’s a bit like capitalism; it creates more problems that can only be solved with more Javascript. Solutions rarely evolve an existing foundation; instead, they are better versions of existing tools, or brand-new tools built from scratch, like Rome, because everything built before is deemed fundamentally wrong.

]]>
<![CDATA[I’ve been thinking a lot lately about the role React plays when building a web app. Companies like GitHub and ]]>
On cutting off some dopamine dependency https://pepicrft.me/blog/2021/10/11/on-cutting-off-dopamine 2021-10-11T00:00:00+00:00 2021-10-11T00:00:00+00:00 <![CDATA[

Over time, my relationship with the Internet has made me dopamine-dependent. I’ve reached a point where my body often has a physical presence in the offline world throughout the day, but my brain wanders in the online space.

Should I tweet about this? What if I write a blog post about that? This is boring; let me check what people are talking about on Twitter. Look at that beautiful scene; I’ll take and share a photo on Instagram. This is hilarious; I’ll tweet what happened to me. I built this open-source project, I’ll share it broadly to gauge people’s reactions.

I was once gifted the Internet’s awesomeness, and now I gift my time and energy back to the Internet for free. In the meantime, life goes by, and I miss opportunities to do fulfilling offline and social activities. Is this sustainable long-term? I don’t think it is.

The more I run on the dopamine treadmill, the faster it goes, and the more I need to have a sense of fulfillment. I’ve got online friends that I feel I need to feed with photos and stories, some followers on Twitter I feel I need to share updates with, and a handful of open-source projects I feel I need to maintain. There are also newsletters I feel I have to keep up with, Reddit discussions I feel might be relevant to me, and podcasts I feel it’d be great to listen to. I become blinkered, and consequently, I miss out on the beauty of slowness, the present, and the beauty of the offline world. It’s all me, my ego, the dopamine, and what I feel I need to do. It’s all feelings.

What’s worse is that it has a cascading effect. People see the facade, and if they feel inspired by a professional trajectory, they think they need to imitate what you do. The world then becomes a dopamine festival. Rockstars take the main stage, while the audience dreams of becoming one in the future. I don’t need to learn how to play an instrument, nor find a band or a manager; I just need time and some tools that people are already addicted to (e.g. Twitch, Twitter, TikTok). If I build my audience, I can play on the main stages too.

The Internet has paved the way to a dopamine-dependent life whose meaning comes from other people worshipping you and your work.

Seeking that life has some similarities with dreaming of becoming a millionaire. We pour in a lot of energy, time, and sometimes health, only to finally realize we’ve made our lives more meaningless. We are on the treadmill, so we need recognition or money to sense a meaningful life.

The truth that many of us fail to realize is that we are social animals; thus, long-lasting happiness and fulfillment often come from social interactions in the offline world. If there’s something positive to come out of the COVID-19 pandemic, it’s that it proved we can’t replace offline relationships with Zoom calls or Clubhouse discussions.

Alright, Pedro, I get you, but how are you going to remove your dependency? First, I’ll limit content consumption time to Friday mornings. If I find something interesting, I’ll save it to a read-later app and read it on Fridays. Regarding social networks, I’ll limit their usage to once per day. The list of social networks includes Twitter, Instagram, and Facebook. I like capturing my ideas and reflections in the open, and therefore, I’ll continue sharing them. However, I’ll use my personal website as the medium and not Twitter. The reason is that I don’t want to contribute to the stream of tweets that might increase people’s anxiety. I’ll increase the amount of social and offline activities. For example, learning how to design and assemble furniture, gardening, and learning German have been on my list for quite a long time, and I keep postponing them to leave space for my online endeavors. And lastly, I’ll remove any sense of obligation toward anyone on the Internet. I’ll do things that bring me joy.

Let’s see how it goes.

]]>
<![CDATA[Over time, my relationship with the Internet has turned me into a dopamine-dependent. I’ve reached a point when my body often has a physical presence in the offline world throughout the day, but my... ]]>
Great solutions for the wrong problems https://pepicrft.me/blog/2021/09/28/great-solutions 2021-09-28T00:00:00+00:00 2021-09-28T00:00:00+00:00 <![CDATA[

As you might know, I’m a curious person. That leads me to reading about challenges tech companies run into and the solutions that they come up with, and connecting them with similar problems with the aim of forming mental models.

Why React? What’s the role of GraphQL? Why was Rust created? Where does it make sense to use it? How does it compare to Go? What are the drawbacks of building a CLI with interpreted languages? How do SvelteJS and SolidJS compare to VueJS and ReactJS? What are React Server Components trying to solve? Why do ES modules remove the need for intricate Javascript tooling? Why is “deleting node_modules” a thing in the Javascript ecosystem? Why is the trend in the Javascript community to build tooling in compiled languages?

Problems that are constantly arising push current solutions beyond their limits, and in less-opinionated environments like Javascript’s, new creative solutions emerge like flowers in a field. The result is a rich pool of solutions to choose from. The caveat is that solutions get so much attention that the problems they originally set out to solve fade into the background or get disregarded in the decision-making process. On top of that, they present even more problems that get solved with more layers. In practice, this means simple projects with convoluted tech stacks that aim to solve problems they don’t really have.

Take React. It solved Facebook’s problems, which are now becoming other companies’ problems too. GraphQL is a similar story, and it’s now becoming the standard for the request-response model in web applications, even if there’s a single client consuming the API. Just today I came across a product that combines GraphQL with a CDN to provide caching for GraphQL APIs. What will follow? Another product that solves synchronization issues between the source API and the caching layer.

We developers are usually solution-oriented, and that makes the matter worse. We get tasked with solving something and put too much focus on the solution. As a consequence, we end up proposing technologies and languages we are familiar with or that most people are talking about these days. This is very challenging for me. I need to make an effort to understand a problem well and consider multiple solutions before deciding on one. I have to tell my biases to shut up.

As the name of the post says, we often end up with great solutions for the wrong problems.

What should we do then? I think we have to embrace innovation and diversity of solutions. It’s something positive for the industry. However, I think we should mentor software developers to be more problem-oriented. They should be able to gain an understanding of the problem they have at hand, evaluate several solutions and understand the trade-offs of each, make the best unbiased decision, and document the rationale behind the decision for future context and re-evaluation.

Moreover, I think it’d be great if open-source projects included in their READMEs the problem(s) they are optimizing for, and some examples of projects for which the solution would and wouldn’t be suitable.

Mastering this is, in my opinion, what makes the difference between a junior and a senior developer. Toward the staff role, developers should be able to find problems to solve in domains with some uncertainty.

It’s not easy. As I mentioned earlier, I have to make an effort every time. However, once the decision is made and the solution executed, it feels extremely rewarding and puts you in the mood of hunting new problems.

]]>
<![CDATA[As you might know, I’m a curious person. That leads me to reading about challenges tech companies run into and the solutions that they come up with, and connecting them with similar problems with t... ]]>
Developer platforms and diversity https://pepicrft.me/blog/2021/09/16/platforms-and-diversity 2021-09-16T00:00:00+00:00 2021-09-16T00:00:00+00:00 <![CDATA[

If we think about how tech companies build products these days, we’ll realize many present a single model that they push onto the world. Companies like Facebook and Twitter define a model for how social interactions happen on the Internet. Others, like Spotify, model how people produce and consume music and podcasts.

If we think about it further, we’ll realize that the approach is incompatible with the diverse world we live in. Diverse problems require diverse solutions. But what we get instead are solutions designed for a western-centric simplified version of the world, and we expect everyone to embrace them.

Having a simplified version of the world is convenient for the business. Still, it might lead to severe problems like the Myanmar genocide incited on Facebook or the e-scooter chaos in Berlin.

As someone who works in tech, this annoys me greatly, especially when a company prides itself on its diversity efforts while building a product that rejects the world’s diversity. How crazy is that?

I understand that building as many product versions as there are nuances in the world would be cumbersome. Still, companies could focus on the domain’s core and provide a platform for developers to codify the diversity of the world. This is what Shopify does with its apps ecosystem. Shopify focuses on the business logic and primitives of e-commerce and provides developers with extension points on the platform to translate e-commerce to what it means in their countries. I believe this is key to Shopify’s success, and it’s one of the reasons I like working here.

Imagine Facebook doing something similar. They already have a platform with primitives, logic, and a complex graph of social interactions. Developers could build upon that to create social networks in their countries following their countries’ social norms and cultural nuances. I believe Jack aimed to do that with Twitter, but I can’t find the link to it.

So the takeaway I’d like to leave you with is that shaping products as platforms and providing APIs for developers is an exciting model to embrace the world’s diversity.

]]>
<![CDATA[If we think about how tech companies build products these days, we’ll realize many present a single model that they push onto the world. Companies like ]]>
Spain, it's not time to be reunited (yet) https://pepicrft.me/blog/2021/09/14/spain-we-dont-meet-yet 2021-09-14T00:00:00+00:00 2021-09-14T00:00:00+00:00 <![CDATA[

As some of you might know, we’ve been in Barcelona for the past few weeks looking for a flat to relocate from Berlin. A few things happened during the COVID-19 pandemic that prompted us to think about whether we wanted to stay in Berlin longer. In particular, Shopify became a distributed company and opened a legal entity in Spain. Furthermore, my wife got a job at Shopify. What were we doing in Berlin, then? The idea of moving to Spain sounded very appealing: no language barrier, better weather, and healthier food. It felt like the natural next step for us, and therefore we decided to spend weeks in Barcelona to see how it felt being back.

It took weeks of many questions and answers to finally admit it was not the right time for us to be back.

What follows is a brain dump of what led us to make that decision. Note that this is based on my experience, and therefore it’s subjective.

First, it’s very comfortable living here. So comfortable that it makes us feel uncomfortable. We like challenges and the learnings that come from them. We feel moving back would rush us into a life more appropriate for an older version of ourselves. Being abroad means being exposed to other cultures, and we love that a lot. When we left the country, we became more aware of what we didn’t know and learned how nuanced and complex the world is.

Many people here say that “in Spain, we have a good life”, but I realized that statement is subjective. What’s good? If you reduce it to weather and food, sure, you’ll have a good life here. But in other areas, such as innovation and education, Spain is far behind other countries in Europe, and those also mean a good life to us.

What are we doing then? For now, we’ll go back to Berlin. We need to rest in what is our home. As weird as it sounds, we miss Berlin’s weather. It’s been too humid and hot these days in Barcelona (we might have become Germans). The plan for later this year or early next is to visit Amsterdam to assess it as our potential next home.

]]>
<![CDATA[As some of you might know, we’ve been in Barcelona looking for a flat to relocate from Berlin for the past few weeks. A few things happened during the COVID19 pandemic that prompted us to think abo... ]]>
A future note to self about Omniauth https://pepicrft.me/blog/2021/08/20/a-future-note-to-self-about-omniauth 2021-08-20T00:00:00+00:00 2021-08-20T00:00:00+00:00 <![CDATA[

Every time I try to set up Omniauth on a Rails codebase I run into the same issue:

Not found. Authentication passthru

And every time I have to spend a fair amount of time understanding what’s causing the error.

Luckily, it won’t happen anymore after writing this note for my future self.

Besides the standard steps to set up Omniauth and Omniauth GitHub, we need to add omniauth-rails_csrf_protection to bring CSRF Protection to the requests that are sent from the authentication pages. After adding the dependency to the Gemfile, we need to add a new initializer, omniauth.rb, to allow sending POST requests from the Omniauth links:

OmniAuth.config.allowed_request_methods = [:get, :post]

In that same initializer, we need to set the host to ensure Omniauth passes the right redirection URL when initiating the authentication flow. Otherwise the Omniauth provider might fail due to mismatching URLs:

OmniAuth.config.full_host = "https://myapp.com"

If we generated the Devise views under our project’s app/views directory, we can see that the Omniauth links are already configured to use POST in the _links.html.erb file:

 <%= link_to("Sign in with #{OmniAuth::Utils.camelize(provider)}",
       omniauth_authorize_path(resource_name, provider),
       { method: :post }) %>

And last but not least, we need to instruct Omniauth on what to do once the authentication flow has finished. We do that by configuring a controller in the routes.rb:

Rails.application.routes.draw do
  # Devise
  devise_for :users, controllers: { omniauth_callbacks: "users/omniauth_callbacks" }
end

In the controller, each provider is represented by a method with the same name as the provider. Note that the request’s env attribute provides all the user metadata we need to find or create the user in our database and authenticate them using Devise’s sign_in_and_redirect method:

class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def github
    @user = ... # Create the user

    if @user.persisted?
      sign_in_and_redirect(@user, event: :authentication)
    else
      data = auth_data.except("extra")
      session["devise.oauth.data"] = data
      redirect_to(new_user_registration_url)
    end
  end

  def failure
    redirect_to(root_path)
  end

  def auth_data
    request.env["omniauth.auth"]
  end
end

And that should be it. I hope this is also useful for other developers running into similar issues with the gem.

]]>
<![CDATA[Every time I try to set up Omniauth on a Rails codebase I run into the same issue: Not found. Authentication passthru ... ]]>
./dev https://pepicrft.me/blog/2021/08/16/dev 2021-08-16T00:00:00+00:00 2021-08-16T00:00:00+00:00 <![CDATA[

One of the things that I appreciate as a developer is having a consistent experience across projects. As you probably know, this is often not the case when running a project locally. Some ask you to run yarn run ios. Others prefer an executable like bin/rails server instead. This adds friction when jumping between projects. Can we mitigate it?

This is something I’m changing in my projects with an executable called dev. Going forward, all my projects will have it. That’s the only thing I have to remember.

Since I have Ruby in most of my projects, I leverage Foreman and a Procfile to run concurrent processes. This is an example of the Procfile.dev of one of my Rails projects:

rails: bin/rails server
vite: bin/vite dev

Then all I need in the dev executable is:

#!/usr/bin/env bash

bundle exec foreman start -f Procfile.dev

It’d be great if I could have an up command too to configure the environment. However, as you might know, configuring environments deterministically and reliably is hard. Many companies let that be a developer’s responsibility. Others, like Shopify, have needed years to build a tool that does an incredibly good job at that. And companies like GitHub prefer to take development to the cloud.

]]>
<![CDATA[One of the things that I appreciate as a developer is having a consistent experience across projects. As you probably know, this is often not the case when running a project locall... ]]>
Seabolt support for M1 https://pepicrft.me/blog/2021/08/10/seabolt-support-for-m1 2021-08-10T00:00:00+00:00 2021-08-10T00:00:00+00:00 <![CDATA[

As part of building Chimera, an AppleOS tool for capturing networked knowledge, thoughts, and ideas, I encountered an issue trying to set up Neo4j on an M1 laptop (i.e. arm64 architecture). It turns out that Seabolt, the connector that neo4j-ruby-driver uses to communicate with a running instance of Neo4j, doesn’t have support for M1s. It was a bit of a bummer, but luckily I found this fork that someone created to add support.

If you run into this same issue in the future, you can either run the steps on that repository or download the compiled version that I built myself. Here is the sha256 checksum to validate you downloaded the correct binary.

shasum -a 256 seabolt-1.7.4-dev-Darwin.tar.gz

Once you verify the binary is correct, you can untar the content, and copy the dynamic library into the directory where the driver expects it:

cd build/dist/lib
cp libseabolt17.dylib /usr/local/lib/libseabolt17.dylib

And that would be it. The Neo4j Ruby driver should be able to initialize successfully.

A note on Chimera

It’s the first time I’ve mentioned Chimera, so you might be wondering what that tool is. You are probably familiar with tools for capturing networked notes like Roam Research and Obsidian. They are great because they remove the friction of giving your ideas, knowledge, and thoughts a structure other than the one they have in your brain. However, they are designed and optimized for the web. If you try to use them from your phone, the user experience is terrible. And because ideas can arise at any time, and you usually have your phone with you, I think an app optimized for native will take the experience of capturing them to a whole new level. So that’s what I set out to build: a tool for networked thoughts, ideas, and knowledge. I’ll focus on Apple platforms first, following their human interface guidelines. I’m very excited to use Tuist myself and learn about SwiftUI to make this happen.

]]>
<![CDATA[As part of building Chimera, an AppleOS tool for capturing networked knowledge, thoughts, and ideas, I encountered an issue trying to set up ]]>
Some Rust thoughts https://pepicrft.me/blog/2021/08/04/rust-thoughts 2021-08-04T00:00:00+00:00 2021-08-04T00:00:00+00:00 <![CDATA[

A while ago, I started reading about the Rust programming language out of curiosity. Many things fascinated me about the language. It has a powerful dependency manager similar to the Swift Package Manager but more thoroughly designed. Unlike Swift, the compiled programming language I’m the most comfortable with, you can code Rust with your editor of choice and easily target several platforms because projects can be cross-compiled, and the standard library is available on all those platforms. This is not the case for Swift where leaving Xcode ruins your developer experience, and many utilities and primitives still live in macOS’s Foundation framework.

Because I did not end up using it, I forgot most of the things that I learned. Therefore, I decided to get back to it, getting my hands dirty and applying the things I read about. This blog post is a braindump of some thoughts that I’ve got so far. Expect more of these to come in the future.

I like its module system. It reminds me of Ruby’s but is more opinionated about the file structure. It encourages organizing the code in a modular fashion and ensures the file structure represents that organization. In Swift, for instance, namespaces can be created leveraging the language’s constructs, but the build system doesn’t have any opinion on the file structure. As a result, it’s common to see file structures that don’t match the code’s.
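To make that concrete, here’s a minimal sketch (the geometry/shapes names are hypothetical) of how Rust addresses items through a module path. The modules are declared inline here, but Rust would equally accept them as src/geometry.rs or src/geometry/mod.rs, which is how the file structure ends up mirroring the module tree:

```rust
// Hypothetical module tree; could also live in src/geometry/mod.rs.
mod geometry {
    pub mod shapes {
        // `pub` exposes the function outside its module.
        pub fn area_of_square(side: f64) -> f64 {
            side * side
        }
    }
}

fn main() {
    // Items are reached through the full module path.
    let area = geometry::shapes::area_of_square(3.0);
    println!("{}", area); // prints 9
}
```

If the `mod geometry` block were moved to its own file, only a `mod geometry;` declaration would remain here, and the compiler would enforce that the file exists where the module tree says it should.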

The one thing that I haven’t had the chance to work with yet is the ownership-based approach to memory management. I think I’ve got the idea, but it’ll take some coding for it to click in my head and for me to think in terms of who owns what and for what. It’s exciting to read that it’s one of the features that makes Rust so unique, because it leverages the build system to catch what would otherwise be runtime issues.
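As far as I understand it, the "who owns what" idea boils down to moves and borrows, which the compiler checks at build time. A minimal sketch (the variable and function names are made up for illustration):

```rust
// Borrowing: `length` reads the string without taking ownership of it.
fn length(text: &String) -> usize {
    text.len()
}

fn main() {
    let s = String::from("hello"); // `s` owns the heap allocation
    let t = s;                     // ownership moves to `t`

    // Using `s` here would be a compile-time error, not a runtime crash:
    // println!("{}", s); // error[E0382]: borrow of moved value: `s`

    let len = length(&t); // `t` is only borrowed, so it stays usable
    println!("{} has length {}", t, len);
}
```

The interesting part is that the commented-out line fails at compile time; this is what I meant by the build system catching what would otherwise be runtime issues in other languages.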

The standard library looks pretty complete. Besides the basic types one expects from a standard library, there are also utilities like Path that are handy when building apps that interact a lot with the file system. For instance, in Tuist we had to resort to a Swift package to have such a model because Foundation does not provide it.
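For example, here’s a small sketch of std::path::Path (the path itself is made up, but the methods are part of the standard library):

```rust
use std::path::Path;

fn main() {
    // A hypothetical path to a Tuist manifest.
    let path = Path::new("/tmp/projects/tuist/Project.swift");

    // Path gives structured access to the components of a file path.
    println!("{:?}", path.file_name()); // Some("Project.swift")
    println!("{:?}", path.extension()); // Some("swift")
    println!("{:?}", path.parent());    // Some("/tmp/projects/tuist")
}
```

This is roughly the model that Foundation lacks and that we had to pull in via a Swift package in Tuist.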

The collection of community Crates is enormous. It’s not at the level of NPM or RubyGems, but I’ve found Crates for everything I needed so far. I find this crucial when choosing a programming language. You don’t want to end up building something that someone else might have built before.


And that’s it for this post. I’ll continue playing with it and dumping my thoughts on this blog as I form more mental models around the programming language and its tooling.

]]>
<![CDATA[A while ago, I started reading about the Rust programming language out of curiosity. Many things fascinated me about the language. It has a powerful dependency manager similar to the Swift Package ... ]]>
Planning open-source work https://pepicrft.me/blog/2021/07/30/planning-open-source-work 2021-07-30T00:00:00+00:00 2021-07-30T00:00:00+00:00 <![CDATA[

One of the things that I find the hardest when building open-source software is planning the work. On one side, there are all these new features and improvements that you’d like to build. On the other side, there are PRs to review, tickets to triage and prioritize, and questions that come up. Leaning mostly on the former is not great because users and contributors feel ignored, but focusing solely on the latter isn’t good either because the project doesn’t evolve.

What to do then? I don’t know.

On Tuist, when I think about what to work on, many things come to my mind: there’s a new website design to implement, some caching issues that we need to tackle, documentation improvements that I’d like to add, functionality that I’d like to start building onto TuistLab… It’s too much. I’d like to do everything, and I end up doing nothing. The mental overhead causes paralysis. Working on one thing leaves you wondering if you should be working on something else instead. You feel bad because you are building something new instead of supporting existing users with the issues that they face. It’s tricky, isn’t it?

It’s something I’m trying to get better at, but it’s not easy. There are a few things that are clear to me, though. There needs to be work on adding new features and improvements. I have a product mindset, and I enjoy talking to users and turning that into solutions for them. I’ll try to time-box the time I invest in supporting users. For example, I can tackle one issue per week and no more than that. I’d make exceptions if there’s a critical bug or regression in a new release. I have to say that in the area of support, we are lucky to have users, contributors, and maintainers helping out. Without their hands, things would have become unsustainable.

So here’s what I’m thinking going forward. I’ll focus on one project at a time and limit the support work per week, with exceptions. I’ll decide on the project based on my motivations and needs, and on what’s important for users too. For example, I know caching of binaries is becoming an important feature for users. I’ll use GitHub projects to organize the work and have a sense of progress. It’ll be helpful to involve others in the project I’m working on. And last but not least, I’ll continue engaging with users on Slack, GitHub, and Twitter. We need to continue building trust with Xcode developers because every new person that joins brings new ideas and challenges for us to solve.

]]>
<![CDATA[One of the things that I find the hardest when building open-source software is planning the work. On one side, there are all these new features and improvements that you’d like to... ]]>
Contributors' first experience https://pepicrft.me/blog/2021/07/21/contributors-experience 2021-07-21T00:00:00+00:00 2021-07-21T00:00:00+00:00 <![CDATA[

When building open-source software, getting external contributions is usually one of the most difficult things. Most of the time, developers are busy working on their own projects and are hesitant to devote time to another one. That’s understandable. Why would you? They usually do it when they have feedback or find an issue that they’d like to share with the maintainers of the project.

In rare scenarios, developers go further and contribute code themselves. It’s a huge effort because they need to get familiar with a new codebase (architecture, guidelines, patterns, business logic, testing strategies…). This is where many contributors drop off in the contribution funnel. They had an idea for something to contribute, but they felt overwhelmed by the project. Have you ever been in that situation? It’s even worse when the project doesn’t have documentation for contributors. The only thing they can do is clone the repo and figure things out themselves. Unfortunately, this works when the project is small, but as you can imagine, it becomes indigestible in large projects.

Because external contributions bring diversity of ideas and new energy to the project, providing the best contribution experience is an important aspect of Tuist. There’s still room for improvement, but I’m glad about what we’ve achieved so far. First of all, we have documentation for contributors. They can learn how to get started, test their changes, and report bugs. Moreover, the project’s modular architecture minimizes the surface of the things they need to learn before being able to contribute. For instance, if you want to optimize the generation of Xcode projects, you can focus on the TuistGenerator target and ignore the others. When everything is under MyProjectKit and the different components are strongly coupled, contributors have a hard time reasoning about the project’s logic and forming mental models. Tuist’s architecture is documented too.

Furthermore, we include a Ruby CLI tool, Fourier, to automate all common tasks. It also ensures that everyone’s interaction with the project is consistent. That makes it easy to debug reproducibility issues when they arise.

Contributors also have access to a Slack group and a community forum where they can seek advice and talk with other contributors and maintainers. We are lucky to have a very supportive group of contributors who are always open to giving anyone a hand.

One of the things I’d like to improve next is the configuration of the environment, to ensure everyone’s setup is consistent. For example, ensuring everyone is using the same Ruby, NodeJS, and Swift versions. We’ll balance building for contributors and for users. Both make Tuist the best tool of its class, and therefore both need the same level of attention and support.

]]>
<![CDATA[When building open-source software, getting external contributions is usually one of the most difficult things. Most of the times developers are busy working on their projects, and they are hesitan... ]]>
What I learned as a manager https://pepicrft.me/blog/2021/07/20/what-I-learned-as-a-manager 2021-07-20T00:00:00+00:00 2021-07-20T00:00:00+00:00 <![CDATA[

As you might know, Shopify allowed me to try the people management track and become an engineering manager. I’ve been doing that for the past two years. Along the way, I learned a lot and made many mistakes, and I don’t regret having given it a shot. In hindsight, I think the experience will help me be a better engineer.

This post is a list in no particular order of the things that I learned and experienced:

  • People are unpredictable.
  • Management work is hard to measure. You can’t count it and it’s hard to reflect on it.
  • Achieving a balanced team state is an impossible task because of external factors that you don’t control: reorganizations, people leaving, priority shifts.
  • A team larger than 5 people is not a good idea. Your management starts to suffer and your team notices it.
  • Exiting someone from the company is tough.
  • Being recognized for your work is unusual, but when it happens and it comes from your reports, it’s very rewarding.
  • Sometimes you don’t have answers for all the questions, and that’s fine. You need to be comfortable being in that situation.
  • Seeing people leave is sad. It’s hard not to wonder if you could have done things better.
  • It’s great seeing people progress in their careers and grow the impact of their contributions.
  • A road map you come up with today won’t be valid a few weeks from now. The world is dynamic, and so are the company and its priorities.
  • Reorganizations are sometimes hard to digest. When they happen, people move around, others leave, and you have new objectives to adjust to.
  • Context switching is an important skill to have. You mustn’t let it make you a zombie at the end of the day.
  • The role of business partner is crucial to find answers to your questions and provide guidance when necessary.
  • Getting feedback from other managers when doing impact reviews is very useful.

And at this point you might wonder what led me to go back to being an individual contributor. I enjoy building. I enjoy opening my laptop, putting my headphones on, and creating things with code. I do that with Tuist, and I used to do it before taking the manager role. I also like mentoring people by working together on problems. That’s how I met Marek, who recently joined the organization.

Shopify is a great place to grow as a manager. You have great tools and excellent managers you can learn from. There’s even a framework to ensure management is consistent across the organization. However, as I mentioned earlier, my path is on the technical track, and I’d like to continue solving problems with code.

]]>
<![CDATA[As you might know, Shopify allowed me to try the people management track and become an engineering manager. I’ve been doing that for the past two years. Along the ... ]]>
Propose, prototype, and build https://pepicrft.me/blog/2021/07/15/propose-build-release 2021-07-15T00:00:00+00:00 2021-07-15T00:00:00+00:00 <![CDATA[

One of the things I like about Shopify is an internal tool called Vault. It’s the backbone of the organization. You can find people, navigate the report structure, find answers for your questions, follow projects and share updates with the rest of the organization, and praise someone. It’s crucial for the organization’s effectiveness.

When it comes to planning work, many organizations build their processes on top of existing tools. As a result, they end up accommodating the organization to the tool and not the other way around. For example, they introduce a tool like Jira and all of a sudden everything is ticket-oriented. Teams have backlogs full of tickets. Some people get obsessed with cleaning backlogs and prioritizing tickets. Ticket gurus emerge and trainings are organized to use Jira the right way. That always annoyed me, and I was glad to see that Shopify invested in doing the opposite: designing a framework, GSD (Get Shit Done), to plan work, and building tools around it.

The tool that you use to manage the project is an implementation detail of the developer or the team. You wanna use GitHub issues? Go ahead. Do you prefer Trello instead? That’s fine too. The focus shifts from the tool to the actual project and what it’s trying to solve. I think this is the trap many organizations fall into: they get too distracted by the tools.

A project is first proposed. Stakeholders ensure the problem or need is well defined, and that it’s worth solving right now. This ensures people are working on the right things. If you can’t relate your work to the organization’s needs, then you need to take a step back and try to understand the thing you are tackling. Proposals can include a video, and go upwards in the hierarchy for approval.

Once approved, the project is prototyped to explore different potential solutions. Once stakeholders agree on one, the project moves to build and the actual execution happens. Simple, yet powerful.

Along the process, Vault can be used to keep track of the project. The project champion is responsible for sharing updates with the stakeholders. A project has a feed where you can see a timeline of the project’s evolution. It also has deadlines and checks that you can run with the project contributors to assess the health of the project: Is it moving steadily? Is the direction clear? Are there any unexpected roadblocks?

And because the organization is large and many projects are being executed at once, you can follow the projects that interest you. You can filter the signal from the noise and only get updates from the projects you care about.

Another thing I find cool is that there’s a champion role: a person who ensures the project moves forward and that decisions are made. And yes, if it’s a development project, that’s the developer. Developers feel empowered when driving projects.

As I’ve said a few times in the past, Shopify has done amazing work building the best tools for its needs, and this is yet another example of that. The product is important, but so are the tools the people in the org use to build the product. There’s so much inspiration to gain from Shopify that I can apply to the way the Tuist organization is managed.

If Shopify sounds like an exciting place to work, let me know and we can talk about opportunities over here.

]]>
<![CDATA[One of the things I like about Shopify is an internal tool called Vault. It’s the backbone of the organization. You can find people, navigate the ... ]]>
Back to Jekyll https://pepicrft.me/blog/2021/07/10/back-to-jekyll 2021-07-10T00:00:00+00:00 2021-07-10T00:00:00+00:00 <![CDATA[

I recently changed my stance regarding the technologies I use when building software. In particular, I decided to minimize the amount of Javascript in my projects. In my experience, Javascript is usually synonymous with indirection, complexity, and instability: dependency updates that blow up your whole stack, cryptic build errors that are hard to debug, and setups that are hard to reason about and unnecessarily complex.

On the other side, Ruby and its ecosystem are more harmonious and peaceful. If something works, it’s very unlikely to break. If there’s already a Ruby gem that does something, people are more inclined to contribute to it instead of reinventing the wheel. No hype fatigue. The language that you ship is the language that you write. You don’t need layers of transformations to accommodate your code to the environment in which it’ll run.

Because of all of the above, I’ve migrated this blog back to Jekyll from GatsbyJS and I’ve taken the opportunity to overhaul the design with something more boring and developerish. I want the focus to be on the content and not the aesthetics.

I’ve also decided to stay away from unnecessary abstractions on top of standards like HTML and CSS. I got hyped about CSS-in-JS and TailwindCSS without realizing that the value they provide is not really necessary in my projects. Moreover, the closer to the standards and the fewer the abstractions, the better for the long-term sustainability of a project. If I write HTML and CSS today, I can open it years from now and it’ll very likely work. If I try to do the same with a React-based website that is processed by Gatsby through Babel, chances are it won’t work in a few years.

It’s been great to learn about those technologies and to see companies pushing the web forward through them, but HTML, CSS, and a bit of Javascript are enough to create value and share my ideas with everyone.

Stay safe!

]]>
<![CDATA[I recently changed my stand in regards to the technologies that I use when building software. In particular, I decided to minimize the amount of Javascript that I use in my project... ]]>
Swinging the pendulum back to engineering https://pepicrft.me/blog/2021/06/01/swinging-the-pendulum-back-to-ic 2021-06-01T00:00:00+00:00 2021-06-01T00:00:00+00:00 <![CDATA[

Over the past two years, I’ve been an engineering manager at Shopify, managing the Mobile Tooling and React Native Foundations teams. I’m grateful that Shopify allowed me to experience what being a manager is like.

I learned that people are unpredictable and that sometimes there are behaviors you can’t explain. I changed my mindset from creating impact myself to creating impact through people. This required figuring out whom to put together on which problems to create the most creative solutions. I teamed senior and junior people up to level everyone up through mentoring. I defined a vision for the team and learned that in large corporations priorities change so much that you can’t stop iterating on it. I set up secondments for people on my team to get first-hand experience with the products we build tools for, and to keep our trust battery with those teams high. I put up promotion cases for people on my team, and I had to exit a person from the team because he was not meeting the expectations of the role; it’s not a pleasant experience, but going through it stretched my emotional intelligence. I leveraged my community and open-source connections to bring talent to Shopify that I’d like to work with. I made tiny code contributions, trying not to step in the way of the people on the team, though most of the time I did. I evangelized my passion for building great developer experiences that are easy to use. I fought the complexity and indirection introduced by configurability. I recognized my team’s work internally and externally, and shared with the community how awesome it is that Shopify invests in its own tooling. I had weekly 1:1s with the people on my team, provided feedback, and valued their impact through company-wide calibration sessions. I set up an on-call policy to provide support to the rest of the organization. And throughout this journey I was extremely supported by the organization and its tools and resources.

I learned a lot by being a manager, but I miss coding so much. I miss getting my hands dirty building new tools and improving existing ones. I tried to squeeze coding time into my manager’s responsibilities but ended up frustrated because deep coding was impossible with the frequent context-switching my responsibilities required. For that reason, I’ll soon swing the pendulum of my career back to being an individual contributor. In hindsight, I think it was a great idea to go through the management experience because I’ve learned so many useful things that will make me a better engineer.

The change will happen in a few months when the new manager is up to speed and ready to take the team. I’ll remain in the React Native Foundations team and work with the folks on the team to shape the experience of building React Native apps at the company. Exciting times ahead!

]]>
<![CDATA[Over the past two years, I’ve been engineering manager at Shopify. I managed the Mobile Tooling and React Native Foundations teams over here. I’m grateful that Sho... ]]>
Focused Xcode projects https://pepicrft.me/blog/2021/05/24/focused-xcode-projects 2021-05-24T00:00:00+00:00 2021-05-24T00:00:00+00:00 <![CDATA[

A while ago, and inspired by Facebook’s internal tooling, we added a new command to Tuist, tuist focus. As its name says, it’s intended to be used when you want to work on a given target. If you have a project of, let’s say, 300 targets, you don’t want all of them loaded when you open the project with Xcode. The reason? It makes Xcode slower. The more you open, the more needs to be indexed. You change a file, and Xcode needs to figure out how the change impacts the whole project.

Focus takes your project’s dependency graph and prunes the elements that are not necessary for working on target X, including references from schemes. Since there are times when you might want to focus on more than one target at once, you can pass a list of targets as arguments: tuist focus MyApp Search. But that’s not all. We integrated this concept with the caching of target binaries: in a nutshell, the direct and transitive dependencies of your focused targets are replaced with their binary representations. The binaries can come from a local cache and, soon, from a server-side counterpart, TuistLab. Amazing, isn’t it? This is what I’ve been hoping to see land in Xcode, but instead we’ve been told that the only solution is to replace Xcode’s build system with Bazel. I don’t have anything against Bazel. I think it’s damn amazing. But it’s not compatible with Apple’s way of building tools, and because the integration leads to a bad developer experience, or a good one only with a huge ongoing investment, I avoid it.
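To make the pruning idea concrete, here’s a toy sketch in Ruby: keep the focused targets plus everything reachable from them in the dependency graph, and drop the rest. The graph and target names are made up, and this is an illustration, not Tuist’s actual implementation.

```ruby
# Toy sketch of focus-style pruning (illustrative, not Tuist's real code):
# given a dependency graph, keep only the focused targets and their
# direct and transitive dependencies.
GRAPH = {
  "MyApp"    => ["Search", "Core"],
  "Search"   => ["Core"],
  "Settings" => ["Core"],
  "Core"     => [],
}.freeze

def focus(graph, focused)
  kept = []
  queue = focused.dup
  until queue.empty?
    target = queue.shift
    next if kept.include?(target)
    kept << target
    queue.concat(graph.fetch(target, []))
  end
  graph.select { |name, _| kept.include?(name) }
end

# Focusing on "Search" keeps "Core" (a transitive dependency) and prunes
# "MyApp" and "Settings", which aren't needed for that work.
focus(GRAPH, ["Search"]).keys # => ["Search", "Core"]
```

The same traversal also explains why focusing on the root app target keeps everything: every other target is reachable from it.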

Surprisingly, as the stats show, tuist focus is not used as much as tuist generate. It’s hard to know why, but my guess is that this is a workflow developers are not that used to, and it requires some ongoing education and evangelization from our side. Or maybe there’s something not working as expected and we don’t know? If that’s the case we’ll soon find out, because we are dog-fooding Tuist by building TuistLab.

As I mentioned in past blog posts, the opportunity to explore ideas with Tuist is what makes the project so exciting, and tuist focus is a good example of that. If you are using Tuist, I recommend giving the command a shot. You won’t be disappointed.

]]>
<![CDATA[A while ago, and inspired by Facebook’s internal tooling, we added a new command to Tuist, tuist focus. As it names says, it’s intended to be used when you want to work ... ]]>
On not having focus https://pepicrft.me/blog/2021/05/19/on-not-having-focus 2021-05-19T00:00:00+00:00 2021-05-19T00:00:00+00:00 <![CDATA[

One of the things I struggle a lot with these days is focus. Because of that, I realized I can no longer do deep and focused work. I spend my days context-switching all the time, and although I’ve gotten used to it, I don’t like it.

There are several reasons why that happens to me. The first, and a well-known one these days, is social networks. You’ve probably seen me talking about them in the past, so here I go again. I tend to spend my free time doing endless scrolling to keep up. I think that’s one of the main contributors to feeling exhausted, yet I can’t stop doing it. Moreover, at work there’s a lot going on. Because of the size of the company, and all the context I’ve gained over the past years, I spend my days helping here and there: sharing context with other people, unblocking the users of our tools, digesting information that is floating around the organization. Again, I think I’ve gotten better at this, but I miss having focus.

On top of all the above, I’m so curious that I can’t stop exploring new ideas and problems to solve. As a result, my mental energy scatters across many different places, and I end up neither learning nor doing anything. I think the English saying for what happens to me is “biting off more than you can chew” (thanks, Google).

So here’s what I’m going to try. First, I’ll accept my time and energy boundaries and be mindful of them. I’ll continue learning how to prioritize and say no to things, both at work and in my open-source duties. Moreover, I’ll center my open-source efforts around only two projects, Tuist and Buildify. And as much as I can, I’ll try to spend less time on social networks (attempt number 125125).

Hope you are having a great week and stay safe!

]]>
<![CDATA[One of the things that I struggle a lot with these days is focus. Because of that, I realized I cannot longer do deep and focused work. I spend my days context-switching all the ti... ]]>
Tuist 2.0 and next https://pepicrft.me/blog/2021/05/18/tuist-2-and-next 2021-05-18T00:00:00+00:00 2021-05-18T00:00:00+00:00 <![CDATA[

As we approach the release of Tuist 2.0, I started thinking about what’s next for the project. The focus until 1.0 was on project generation. We provided developers with graph-based project generation that abstracts away Xcode’s intricacies. As we passed that milestone, we started thinking about workflows and optimizations built upon project generation. The graph was a powerful element that other tools lacked, so we felt we needed to leverage it further. We added tuist focus to generate projects optimized for developers’ intents. tuist signing made it easy to configure the environment and the generated projects for signing, and tuist cache warm allowed caching project targets as binaries for later use when generating projects. We also started exploring the standardization of third-party dependency integration through a new manifest file, Dependencies.swift, and we just released support for tasks defined in .swift files that get compiled and executed. Quite a ride, isn’t it? Along the way we met many talented developers who joined us on this ride and became the fuel that makes the project move forward.

After releasing tasks and overhauling our website to reflect the new brand, we’ll start working towards 3.0. What does that mean for the project? Besides improving project generation, for example by making it faster and handling more project scenarios, I think we should focus on the following elements:

  • Thinning Tuist: We built many commands into Tuist that should instead be opt-in. For example, the tuist doc command doesn’t necessarily have to be implemented by Tuist itself. Our work on plugins and tasks will allow extracting those commands so they can be distributed as plugins and live in different repositories.
  • Dependencies.swift: We should continue investing in this project and have great support for Pods, Packages, and Carthage frameworks. How to integrate third-party dependencies into Tuist projects is a recurrent theme, and therefore we should provide first-class support for it.
  • Cross-repository dependencies: At the moment, a Project.swift can’t declare a dependency on a Project.swift that lives in another repository. Because of that, teams end up creating a Package.swift alongside the Project.swift to be able to consume that dependency as a package. Although this works, it prevents developers from leveraging graph optimizations such as tuist focus. I think we could build decentralized dependency-resolution logic into Tuist, inspired by SPM, and step into SPM’s domain. Just a tiny bit 😁.
  • Lab: If the graph is a cornerstone component of Tuist, a server would take Tuist to a whole new level, and Lab would be that server. Developers would have the option to self-host it or use our paid hosted instance, and thereby financially support the project. Lab would allow things like:
    • Reporting build insights on PRs.
    • Defining tripwires and alerting teams on Slack or PRs when they are hit.
    • Caching your project’s binaries remotely and sharing them across teams.
    • Hosting an internal registry of frameworks and libraries that you can reference from your projects.

I’m sure more ideas will pop up down the road, but the ones I shared above will most likely be our focus as we enter this new chapter. We’ll continue to listen to developers and their challenges and needs, and collaborate to figure out how Tuist can help them in the best way possible.

Thanks for reading.

]]>
<![CDATA[As we are approaching the release of Tuist 2.0, I started thinking what’s next for the project. The focus until 1.0 was around project generation. We provided developers with a graph-based project ... ]]>
Building mental models https://pepicrft.me/blog/2021/05/10/building-mental-models 2021-05-10T00:00:00+00:00 2021-05-10T00:00:00+00:00 <![CDATA[

As you probably know, I started building Buildify, an open-source, AGPL-3-licensed tool for deployments. Like I did with Tuist, I’m in the process of building mental models around the business domain. It’s one of the hardest steps, and the most important one for the viability of the project. If those models are not solid enough, the application might become unusable, and if they are not flexible enough, they might limit the development of future features.

As I keep thinking about the problem, some ideas are starting to emerge. For example, providers like Google Cloud and AWS offer serverless solutions for containerized apps. In other words, you can provide them with your Rails application in a Docker image and they’ll take care of scaling the resources as needed. This simplifies things a lot on our side because we are mostly responsible for building a deployable artifact that can then be handed over to the cloud provider. For long-running services, those deployable artifacts will be Docker images. For lambda functions and static websites, it’ll be a tar file containing the HTML, JS, and CSS following a conventional structure. And for mobile apps, it’ll be the Android App Bundle or the app archive without signing; signing will be done server-side.
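The project-type-to-artifact mapping described above could be modeled along these lines. This is a hypothetical sketch to make the mental model concrete; the type and artifact names are my own, not Buildify’s.

```ruby
# Hypothetical sketch of the artifact-per-project-type model: each kind
# of project maps to the deployable artifact handed to the cloud provider.
ARTIFACTS = {
  service:  :docker_image,     # long-running apps, e.g. a Rails app
  function: :tarball,          # lambda functions
  static:   :tarball,          # HTML/JS/CSS in a conventional structure
  android:  :app_bundle,       # Android App Bundle
  ios:      :unsigned_archive, # signing happens server-side
}.freeze

def artifact_for(project_type)
  ARTIFACTS.fetch(project_type) do
    raise ArgumentError, "unknown project type: #{project_type}"
  end
end
```

Keeping the mapping in one place means adding a new project type is a one-line change, which matters for the flexibility concern raised above.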

I’m also planning to abstract away the provider through an interface. That way, contributors can add support for more cloud providers. Looking at Google Cloud and AWS, they both have similar offerings, but with different names. The initial version will have the implementation for AWS. Since AWS offers Mac minis, we can use those to run tasks that can only run in macOS environments, like signing an iOS app.

A tool, buildify-runner, will take care of running deployment tasks. I’ll write it in Rust so that it can run on any host without requiring anything else to be installed on the system. It’ll pull the build DAG and execute it, parallelizing as much as possible. For example, when deploying a RedwoodJS app, the runner will install the NPM dependencies and build the static files and the functions separately. For reproducibility and stability, the runner will use Nix. Thanks to that, we get caching of dependencies out of the box. I believe using a tool like Nix for setting up the environment will be crucial for providing a great developer experience.
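The “pull the DAG and parallelize as much as possible” part boils down to scheduling tasks in waves: every task whose dependencies have finished can run concurrently with the others in its wave. A minimal sketch (in Ruby for brevity, even though the real runner would be Rust; the task names are made up):

```ruby
# Minimal DAG scheduling sketch: group tasks into waves where each wave
# only contains tasks whose dependencies have already completed.
# Tasks within a wave are independent, so they could run in parallel.
TASKS = {
  "install_deps"    => [],
  "build_static"    => ["install_deps"],
  "build_functions" => ["install_deps"],
  "deploy"          => ["build_static", "build_functions"],
}.freeze

def waves(tasks)
  done = []
  result = []
  until done.size == tasks.size
    ready = tasks.keys.reject { |t| done.include?(t) }
                      .select { |t| (tasks[t] - done).empty? }
    raise "dependency cycle detected" if ready.empty?
    result << ready.sort
    done.concat(ready)
  end
  result
end

# For the RedwoodJS example above, the static files and the functions
# build in the same wave, right after dependencies are installed.
waves(TASKS)
# => [["install_deps"], ["build_functions", "build_static"], ["deploy"]]
```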

The concept of previews, popularized by platforms like Netlify and Vercel, will be present in Buildify too, but with some enhancements: it’ll work with databases and mobile apps as well. In a nutshell, we’ll create disposable databases whose lifecycle is tied to the lifecycle of a repository branch. When the branch gets merged or becomes stale, the database will be dropped automatically.
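The branch-tied database lifecycle could be modeled like this. The naming scheme and staleness threshold below are my own assumptions, just to make the idea concrete:

```ruby
require "time"

# Hypothetical sketch of disposable preview databases tied to a branch.
STALE_AFTER = 14 * 24 * 3600 # assumed threshold: 14 days without commits

# Derive a database name that is a valid identifier from the branch name.
def preview_db_name(app, branch)
  "#{app}_preview_#{branch.gsub(/[^a-z0-9]+/i, '_').downcase}"
end

# A preview database gets dropped when its branch is merged or stale.
def drop_preview_db?(merged:, last_commit_at:, now: Time.now)
  merged || (now - last_commit_at) > STALE_AFTER
end
```

A background job could periodically evaluate drop_preview_db? for every live preview database and clean up accordingly.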

The process of onboarding new apps must be as seamless as possible. People who have never deployed a project before should get a running app without having to familiarize themselves with infrastructure and deployment concepts. To achieve that, the runner will have a command for cloning a repo, parsing its content, and reporting to the backend all the projects found in the repository. For example, if it’s a Rails app, Buildify will detect it and create the database and a Redis instance so it can run background jobs.
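That detection step could be sketched as a set of file-based heuristics. The rules below are assumptions of mine for illustration, not Buildify’s actual detectors:

```ruby
# Hypothetical project detection: infer the project type from well-known
# files in the repository, so the backend can provision what it needs
# (e.g. a database and a Redis instance for a Rails app).
def detect_project(files)
  if files.include?("Gemfile") && files.include?("config/application.rb")
    :rails
  elsif files.include?("package.json")
    :node
  elsif files.include?("Project.swift")
    :tuist
  else
    :unknown
  end
end
```

Order matters here: a Rails app often ships a package.json too, so the more specific check runs first.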

And last but not least, because my background is mainly in mobile development and tooling, I think this platform should work for mobile apps too. The release process will be a bit different, though, because continuous deployment can’t be extrapolated to mobile apps. In this case we’ll use release branches, where each commit represents a release candidate that can be uploaded to the App Store and Google Play Store.

I’m very excited to kick off this project. I’ve been reading a lot lately about projects like Ghost and Plausible, whose revenue is re-invested in the project to make it better and continue to help their users. I feel we need to take back from VCs the problem of easing deploys, and build a community-driven and open-source project that goes hand in hand with those great open-source web frameworks that we’ve seen over the years.

As you can imagine, I’ll be less active on Tuist, although I’ll continue to provide advice and direction on the project.

]]>
<![CDATA[As you probably know, I started building Buildify, an open-source and AGPL-3-based tool for deployments. Like I did with Tuis... ]]>
ViteJS and Rails https://pepicrft.me/blog/2021/04/22/vite-and-rails 2021-04-22T00:00:00+00:00 2021-04-22T00:00:00+00:00 <![CDATA[

I recently had to set up a React frontend for a Rails app, and I decided to use ViteJS instead of Webpack. What’s interesting about ViteJS is that in development it serves ES modules instead of smashing all your Javascript into a single file, because bundling is expensive and unnecessary during development. When building for production, it uses esbuild to generate a bundle. Unlike the traditional Webpack setup that relies on Babel, esbuild is significantly faster because it’s implemented in Go.

I have to say the process of setting it up was pretty straightforward thanks to vite_ruby, a Ruby gem that eases integrating the tool into Rails’s asset pipeline. Moreover, it provides view helpers to load the generated Javascript and CSS files. The resulting configuration is leaner than its Webpacker counterpart and easier to reason about. Vite is not as mature as Webpack, but it’s already got a good community of plugins around it. For example, the legacy plugin takes the role of @babel/preset-env to polyfill your Javascript for old browsers, and the React plugin reloads your component changes instantly to make the development experience smooth.

I really like the number of utilities one gets when building for the web. You can choose the one that works best for your project and adapt it thanks to the numerous APIs they expose.

]]>
<![CDATA[I recently had to set up a React frontend for a Rails app, and I decided to use ViteJS instead of Webpack. What’s ... ]]>
Learning Rust https://pepicrft.me/blog/2021/04/18/learning-rust 2021-04-18T00:00:00+00:00 2021-04-18T00:00:00+00:00 <![CDATA[

I’ve become weirdly excited about Rust lately. It’s a programming language I’ve been planning to learn for some time, and I finally set out to do it.

It’s just the beginning of my learning process, but I have to say I like the openness of its community and tooling, and the interesting concepts, such as ownership, that come with it. After getting immersed in Rust, Swift feels authoritarian and constrained by Apple. I also like that you can easily cross-compile and use the editor of your choice. Let’s see how it goes, but I think it’ll become my go-to compiled programming language over Swift. Using it is liberating, and you can feel it’s purely community-driven.

]]>
<![CDATA[I became weirdly excited for Rust lately. It’s a programming language that I’ve been planning to learn for some time and I finally set out to learn it. It’s just the beginning of my learnin... ]]>
Migrating documentation to Docusaurus https://pepicrft.me/blog/2021/04/12/projects-documentation 2021-04-12T00:00:00+00:00 2021-04-12T00:00:00+00:00 <![CDATA[

Writing a project’s documentation is not as exciting as coding, but over the years I’ve come to understand the key role documentation plays in the developer experience. Shopify, for instance, has a team dedicated to maintaining our internal documentation, ensuring it’s well structured and navigable.

Tuist’s documentation was built as part of Tuist’s website, which is developed using Gatsby, and it has grown a lot since then. Because we haven’t had a person dedicated to overseeing the evolution of the documentation, we ended up with documentation that lacks cohesion and is hard to navigate. Developers have a hard time finding what they need and getting started on the project.

For that reason, I set out to improve the documentation website. As a firm believer in choosing the right tool for the job, I took the opportunity to move the documentation to Docusaurus, a React-based utility implemented by Facebook to create static documentation websites. It provides all the features we need: MDX, code snippets, and search. For the past weeks, I’ve been moving the documentation over to the new website, which is available at https://docs.tuist.io. As part of the process I set up redirects from https://tuist.io/... URLs to the new ones and fixed a handful of broken links. It’s looking great so far. I’m waiting for Algolia DocSearch’s response to enable search on the website. Once everything is moved over, I’ll work on adding a tutorial along the lines of what RedwoodJS does. It’ll be useful for developers who come across the project and decide to give it a shot; the tutorial will guide them through the most important Tuist features they should be aware of.

It’s amazing everything React enables. Trying to achieve this with other static site generators would have taken much more work, and the result wouldn’t have been the same. If you have a project that needs documentation, you should definitely consider Docusaurus, even if you have no prior React experience.

]]>
<![CDATA[Writing a project’s documentation is not as exciting as coding, but over the years I got to understand the key role of documentation in the developer experience. Shopify for instance has a team ded... ]]>
TailwindCSS or Theme-UI https://pepicrft.me/blog/2021/04/12/theme-ui-and-tailwind 2021-04-12T00:00:00+00:00 2021-04-12T00:00:00+00:00 <![CDATA[

I’ve been using TailwindCSS a lot lately. I like the fact that styles are contained within the HTML elements through classes. You can copy and paste an element styled with Tailwind and be certain it’ll look the same. Unlike other styling solutions, Tailwind doesn’t require the UI to be JS-based so that you can leverage CSS-in-JS and Babel transformations. But like any solution, it comes with caveats. In Tailwind’s case, it’s the steep learning curve of its class semantics. I find myself doing frequent back-and-forths between VSCode and the documentation.

Theme-UI solves the steep-learning-curve issue well. It introduces the notion of theming and responsive values without having to abandon the CSS language. Moreover, it’s built upon a theming specification defined by the same author. The downside, though, is that you need a JS-based UI, and you can’t copy and paste styled HTML elements as you’d do with Tailwind.

My preferred option these days? My preference has been swinging between the two, but lately I’m more inclined towards Theme-UI. The reason is that I’d like to familiarize myself further with CSS semantics and learn how to build great web UI experiences using building blocks that are as raw as possible. Copying and pasting styled components feels pretty much like solving code problems by pasting snippets from StackOverflow: you can create and solve things without understanding the fundamentals of the solution.

]]>
<![CDATA[I’ve been using TailwindCSS a lot lately. I like the fact that styles are contained within the HTML elements through classes. You can copy and paste an element styled with Tailwind and you can be c... ]]>
Community-driven and organization-driven open source https://pepicrft.me/blog/2021/04/06/organization-and-community-driven-oss-projects 2021-04-06T00:00:00+00:00 2021-04-06T00:00:00+00:00 <![CDATA[

Yesterday, while reading about Rust and its package manager, Cargo, I realized how diverse the list of crates (packages) for building CLIs is compared to Swift’s, which made me think about the connection between that and how Rust and Swift are driven.

On one side there’s Apple, a large business whose ultimate goal is to sell hardware. They announced Swift and everyone got excited, not only because it was a new and more modern programming language, but because it was open source. Since then, we’ve seen more open-source work coming from Apple: swift-log, swift-metrics, swift-nio, and swift-collections. It’s great to see work being done in the open, and the community gets excited about it, but the one who steers the boat is Apple, and the ultimate decision on what goes into Swift is made by Apple and not the community. There’s nothing wrong with that. It’s just another approach to open source, where a business needs strong ownership to ensure the open-source work supports the business. The caveat, though, is that the community around it doesn’t flourish as much as it would in community-driven projects like Rust. Everyone is hoping for that next thing that will solve their problems and help them with their needs. Reactive programming is not new in Swift, yet Apple made everyone think Combine is the way. Nowadays, no one thinks about exploring new reactive programming approaches. Similarly, no one thinks about exploring new solutions to package management. In fact, we are neglecting community work done in the past. This inevitably leads to a slightly authoritarian open-source environment where diversity can’t find its space.

On the other side, Rust is entirely community-driven. Even though it was born within Mozilla, it became a community project. Because of that, there are plenty of utilities, packages, and resources for building great software upon it. One might see such a large list of options as something negative, but I’m getting to appreciate it after years of Swift and iOS development. It’s easy to find the community utility that suits your needs best, or to explore new alternatives, because the project welcomes those.

I think both are valid approaches with understandable motivations behind them. Personally, I’m very much enjoying the freedom that more community-driven projects like Rust, Javascript, and Ruby provide.

]]>
<![CDATA[Yesterday, while reading about Rust and its package manager, Cargo, I realized how diverse the list of Crates (packages... ]]>
The role of flexibility in scaling up Xcode projects https://pepicrft.me/blog/2021/03/21/flexibility-to-scale-up-xcode-projects 2021-03-21T00:00:00+00:00 2021-03-21T00:00:00+00:00 <![CDATA[

I often wonder why Apple continues to build features that are closed up inside Xcode, for example the Swift Package Manager integration. While some developers might see that as something positive, because it means the feature can be seamlessly integrated into Xcode’s UI and workflows, I see it as a complication for scaling up Xcode projects. Let me unfold that thought in this post.

You can’t build a tool that satisfies every team’s needs. You can’t design Xcode to work for the tiny startup that is building a simple app as well as for a company like Facebook that has a complex project with many inter-dependent targets. Apple optimizes for one type of app, the one that is the most common across all projects: a mono-target app that might have extensions and support multiple platforms. In that setup, Xcode works fine. You create your project using Xcode’s menus, add a new target when needed to extend your app, and add third-party dependencies through the Swift Package Manager integration. Xcode, the project format, its build system, Swift, and all the surrounding tools are designed for that. There’s nothing wrong with that, except that the developers who need more are left out of the equation. Their projects take a long time to compile, tiny changes cause the build to break, and Xcode is not as responsive as it used to be. Companies like Uber have suffered a lot from this. Others have adopted Bazel as a build system to escape the problem, and they need dedicated resources to keep it working with every Xcode/Swift update.

There must be a better way to scale. Finding an answer to that question is what fuels me to build Tuist. I think the answer comes down to flexibility. The flexibility to extend, optimize, and even replace the build process without having to leave Xcode. The flexibility to declare my project’s graph, optimize it, and validate it early to prevent errors down the road that cause developers frustration. That flexibility could be achieved by Xcode opening more APIs instead of building everything inside itself and treating it as a black box. However, that’s not the direction Apple is taking, and we are seeing that with Swift Package Manager’s integration into Xcode. One of the reasons why other programming languages are used over Swift when building software at scale is the flexibility of their tooling. In Ruby, you can customize your test-running logic and add types to the language. In Javascript, you can implement plugins that extend your build process and add new linting rules that integrate seamlessly with the editor. Having a Language Server Protocol (LSP) implementation is a good step forward, but it’s not enough.

Because Apple doesn’t open those APIs, Tuist is taking on the role of opening them for developers. We are doing that by leveraging Xcode project generation. We abstract the project.pbxproj format behind a more declarative format that can be extended easily. Developers love that, and the fact that Apple doesn’t change its mindset is positive for Tuist because it creates room for the project to thrive.

]]>
<![CDATA[I often wonder why Apple continues to build features that are closed into Xcode, for example Swift Package Manager’s integration. While some developers might see that as something positive, because... ]]>
Open source, people, and happiness https://pepicrft.me/blog/2021/03/20/open-source-people-happiness 2021-03-20T00:00:00+00:00 2021-03-20T00:00:00+00:00 <![CDATA[

Looked at from the consumer standpoint, open source often reads as software that is publicly available for me to check out, use, and improve. However, there’s more to it than that. In a world where everyone seems obsessed with building the next TikTok and making the world a better place, open source takes you, a software crafter, to what this pandemic has proved is the key ingredient for happiness: human connections.

I started building Tuist motivated by some challenges that I wanted to overcome, and over time, we turned it into a group of aligned people collaborating towards the same goal: extremely talented software crafters from different locations building upon a foundation that I helped build. How did it happen?

First, Tuist is a reflection of the education that I received from my family. They taught me that happiness means spending time with people. That, contrary to what capitalism tries to prove to us, happiness is not about climbing ladders, getting a higher salary, or working for your dream company. For that reason, I made people a cornerstone of Tuist.

From the moment people show up in Tuist, we spend time connecting with them and empathizing with the motivations that led them to adopt the tool. They feel heard by other human beings, and that’s a great feeling in an industry that is trending towards dehumanizing technology with solutions like bots. Moreover, we invite them to contribute to the codebase. We give them the necessary pointers and even pair with them on nailing the first contribution. They go from “I don’t know how to take the first steps in this project” to landing their first contribution and getting inspired to ship more.

We trust the people who join and let them prove us wrong. This is something that I learned at Shopify: assume good intentions from the people around you, and work on keeping a charged trust battery with them. That has a tremendous effect on people. They feel inspired to contribute further, build their own tools, and do the same with other people joining the community. It cascades rapidly, and because doing great things for people goes a long way, it has the side effect of bringing more diversity of ideas to the table, from people who otherwise wouldn’t have contributed.

When I tell people that I do this without getting money in return, they think I’m crazy. They think I’m wasting time that I could otherwise spend becoming richer. What they don’t realize is how happy I am helping other developers overcome challenges that I had to go through, and seeing community members spread goodness among other developers. It’s not about money; it’s about people.

When I was young, I used to work over the weekends in my family’s cafe. What I remember from those days, and I can still see it in my parents, is how happy they are earning an average salary but having the opportunity to interact with people all the time. Open source is my modern cafe where “cafe con leches” became code.

I have to say, though, that I’m privileged to have a paid job that allows me to spend spare time on Tuist. Money is a component that we can’t remove, because we need it to live, but it can be a secondary one.

]]>
<![CDATA[When looked from the consumer standpoint, open source often reads as software publicly available that I can check out, use, and improve. However, there’s more than that. In a world where everyone s... ]]>
Data-driven open source https://pepicrft.me/blog/2021/03/16/data-driven-open-source 2021-03-16T00:00:00+00:00 2021-03-16T00:00:00+00:00 <![CDATA[

Yesterday, we announced that Tuist now has stats that allow us to understand how users use the tool and, therefore, invest our time working on Tuist more wisely.

As expected, there were some negative reactions to this:

  • Oh! I’m glad that I can opt out.
  • And yet another tool that succumbs to the data-driven method…

If I put myself in their shoes, after seeing what large corporations have done with data, I’d react the same way.

However, there’s a subtle difference. We are an open-source project, and as such, the time we can devote to the project is limited. It’s people working outside of their usual working hours to make the tool better. If we don’t know how the tool is being used, we don’t know whether the limited time that we have is well invested. It’s not about us using the data to sell users something; it’s about us using the data to make the tool better.

Interestingly, there are more tools out there that developers might be using without realizing that they also collect data for the same purposes: Homebrew and CocoaPods. Every pod install they’ve run has sent data to CocoaPods to better understand how frequently Pods are used. Every brew install X has sent data to Google Analytics to know which formulae are used the most. And yes, Fastlane does it too.

Because we know this is concerning, we have taken the approach of not only making all the code for collecting and sending events public, but also building our own Rails app, with its own database on Heroku, to store the data. Tuist’s data is our data, not Google’s. It stays within our domain. And because we own it, we’ll apply to it the same values that we’ve already applied to the code that users are already using.

]]>
<![CDATA[Yesterday, we announced that Tuist has now stats that allows us to understand how users use the tool and therefore, invest our time working on Tuist more wisely... ]]>
Building Tuist as a platform https://pepicrft.me/blog/2021/03/12/building-tuist-as-a-platform 2021-03-12T00:00:00+00:00 2021-03-12T00:00:00+00:00 <![CDATA[

Seeing Shopify act as an e-commerce platform that developers can extend made me wonder whether the same idea would be applicable to Tuist. What if, instead of us trying to codify all the different workflows and configurations, we gave developers an API to do it? Damn, it’s so good. I started thinking about it thoroughly, and I think I have a good idea of what that could look like.

First, I think Tuist’s focus should be on the project graph, the generation of projects, and the build and test commands. We’ve invested four years into building a solid graph that we can optimize and turn into Xcode projects. Developers love that! They no longer have to think about linking or embedding build phases. Tuist makes them an implementation detail and knows what changes can be applied to the resulting Xcode project to be fast to index and compile.

What APIs can we expose? The two that I think would be very valuable are Setup.swift and Tasks.swift. The first already exists. Developers can define how to configure an environment before interacting with the project. However, I’d change the approach to be imperative instead of declarative. Basically, instead of Tuist parsing the file and then translating the steps into system commands, we’d run the Setup.swift file directly as if it were a command line tool. Tasks.swift is not yet implemented, but it’d allow developers to describe their workflows the same way they currently do with Fastlane. The difference is that Tuist would provide them with information about their projects, like the dependency graph. How cool is that? Let me dump some pseudo-code below:

import TuistAutomation

let tasks = [
  .task(name: "swiftlint") { tuist in
    let graph = tuist.graph()
    let sources = graph.sources()
    tuist.system("swiftlint", ....)
  }
]

And this would align with the work that we are doing with plugins. Developers would be able to extract their tasks and up commands and wrap them in a plugin that they share across projects and with the community. Same as if they were sharing Fastlane lanes ❤️.

It’s exciting to see everything that project generation enables. As I’ve said a few times, project generation is a means for Tuist to help developers with the challenges of scale. Using project generation only to get rid of git conflicts will not help you with the many problems that you’ll see down the road.

]]>
<![CDATA[Having seen Shopify acting as an e-commerce platform that developers can extend made me think whether the same idea would be applicable to Tuist. What if instead of us trying to codify all differen... ]]>
Tuist and JS bundlers https://pepicrft.me/blog/2021/03/10/tuist-and-js-bundlers 2021-03-10T00:00:00+00:00 2021-03-10T00:00:00+00:00 <![CDATA[

I think there are a lot of similarities between Tuist and JS bundlers. First, they are both functions that take an input and return an output. In the case of a JS bundler, it takes Javascript, or any variation of it, and converts it into Javascript that is compatible with the target platform (e.g. the browser). In the case of Tuist, it takes your project definitions and generates an Xcode project that you can use to build your apps. What’s beautiful about putting a function in between is that it opens the door to optimizations and transformations that otherwise wouldn’t be possible. Javascript bundlers use it to transform code, for example turning JSX syntax into plain Javascript, or minifying the output Javascript so that it can be downloaded faster when users open a website. What’s great about those Javascript bundlers is that they all have the concept of plugins. They allow developers to participate in that transformation process with their own functions. Tuist is no different. We take your definition of projects and figure out what optimizations we can apply to ensure that the resulting Xcode project is fast in Xcode and compiles fast. One of the transformations that we apply is caching: we transform some of the targets into their binary representation.
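To make the analogy concrete, here’s a minimal Swift sketch of the “generation as a function” idea. All the types and names below are hypothetical, not Tuist’s real API:

```swift
// Types modeling the two ends of the function: what you write and what you get.
// All names are hypothetical; this is not Tuist's real API.
struct ProjectDescription {
    var targets: [String]
}

struct XcodeProject {
    var targets: [String]
}

// A transformation takes a description and returns a modified one,
// just like a bundler plugin transforms code before it is emitted.
typealias Transformation = (ProjectDescription) -> ProjectDescription

// Hypothetical caching transformation: swap a source target for its binary.
let cacheTransformation: Transformation = { description in
    var transformed = description
    transformed.targets = transformed.targets.map {
        $0 == "Core" ? "Core (binary)" : $0
    }
    return transformed
}

// Generation pipes the description through every transformation
// before producing the final Xcode project.
func generate(_ description: ProjectDescription,
              transformations: [Transformation]) -> XcodeProject {
    let transformed = transformations.reduce(description) { $1($0) }
    return XcodeProject(targets: transformed.targets)
}
```

In this analogy, a plugin would simply be another Transformation value contributed by a developer and appended to the pipeline.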

The difference between the Javascript bundlers and us is that we don’t allow developers to define their custom transformations. Fortunately, that’s going to change soon thanks to plugins. Developers will be able to encapsulate their transformations into a plugin that they’ll be able to distribute to Tuist users. How cool is that? We took a very closed and monolithic project format, the .pbxproj file, and we are turning it into a more open format that developers can extend and optimize.

This is the beauty of Tuist.

]]>
<![CDATA[I think there are a lot of similarities between Tuist and JS bundlers. First, they both are functions that take input and return an output. In the case of a JS bundler, it takes Javascript or any v... ]]>
Owning your workflows https://pepicrft.me/blog/2021/02/16/owning-your-workflows 2021-02-16T00:00:00+00:00 2021-02-16T00:00:00+00:00 <![CDATA[

The more I work with Javascript and Ruby, the more I realize how empowering it is to design your own workflows. Having worked with Xcode and Swift for many years, I was used to Apple dictating my working style. You need to debug an issue? This is how you do it. Is your app not performing? Here’s Instruments to trace your app’s performance. Need to do something custom at build time? Here’s a script build phase to extend the build process.

Sure, by doing that, Apple has stronger control over the ecosystem and, therefore, over developers’ apps. However, there are situations when the proposed method doesn’t work for your scale. Then you need to come up with creative ways to accommodate Apple’s processes to your needs. An excellent example of that is replacing Xcode’s build system with Bazel and figuring out how to get Xcode to compile with Bazel instead of its own build system. The result of that setup looks more like a hack. It’s brittle. One day it works; the day after, it doesn’t. And because Apple is so distant from many of the real challenges developers face, they continue building around their utopian vision of app development: the one where apps are small, have a few targets, and dependencies are seamlessly distributed via the Swift Package Manager.

The reality is far from that. The day-to-day is way more convoluted than what Apple thinks. And because Apple doesn’t embrace that reality or provide more flexibility to customize processes, teams have no choice other than to wait for the next WWDC to see if Apple decides to tackle the issues they are facing.

In Javascript, you can choose your build tool. You feel like using Webpack? Go for it. Is it too slow? You can use alternatives like esbuild. You need to lint some code? There’s eslint. Don’t you have the rule that you need? You can implement your custom rules. You need to generate code at build time? You can use Babel and implement your own macros. Would you love to extend VSCode’s interface to show useful debugging information? You can use the Extension API.

And because of all of that, Ruby and Javascript are excellent as general-purpose programming languages, while Swift remains that language for building apps for Apple’s ecosystem that dreams of becoming something more than that.

]]>
<![CDATA[The more I work with Javascript and Ruby, the more I realize how empowering it is to design your workflows. Having worked with Xcode and Swift for many years, I was used to Apple dictating your wor... ]]>
Focusing on the problems https://pepicrft.me/blog/2021/02/09/focusing-on-the-problems 2021-02-09T00:00:00+00:00 2021-02-09T00:00:00+00:00 <![CDATA[

One of the things that I’ve noticed when building tools for developers, either through Tuist or my work at Shopify, is that we developers tend to get incredibly excited about what our new idea would enable, and put the need or problem aside. I believe that’s the source of the complexity and the configuration-over-convention that we see in many tools.

While working on Tuist, it’s common to see users creating issues saying they need something without giving context on why they need it. And it’s also common to see other contributors and maintainers moving the discussion along without figuring out the reason that prompted them to create the issue in the first place. We can’t design great solutions if we don’t understand the needs very well.

My role as lead at Shopify and core maintainer at Tuist often comes down to reminding people about the importance of understanding the need or problem. In the case of Tuist, this is done through discussions on GitHub issues, and at Shopify we often do it through user interviews. It sometimes requires a few whys until the developer surfaces their motivation. In some odd cases, it leads to the realization that they don’t know the need or that there’s already a solution for it.

Once the need is identified, I nudge people to find the simplest solution that solves the problem. Still, during this phase, developers think far ahead and imagine how other developers would use the feature and why that usage justifies the level of configuration they want to introduce into it. But do they need it? Most of the time, the answer is “not now”.

In the space of Xcode project generators, this mindset is a key and highly appreciated differentiator compared to other alternatives. Although they turn the .pbxproj into a more readable and shorter version of it in a YAML file, it still exposes the same complex concepts and configuration that make evolving Xcode projects a difficult task. With Tuist, every request to port an Xcode feature into Tuist’s APIs is looked at from the angle of what you are trying to achieve with it. I like what DHH calls it in his keynote talk from RailsConf 2018: conceptual compression. If we, crafters of tools, own complexity to provide simplicity, the resulting tools will spark noticeably more joy when using them.

And last but not least, the process of turning a problem into a solution requires thorough thinking. We should let the idea sit in our minds for days, explore different solutions, and, very importantly, understand how they fit into the project and align with its direction. That sometimes means saying no to the idea. I think treating this process as a marathon and not a sprint can make a huge difference in developer experience. If you treat your GitHub issues as an inbox where your goal is to get down to zero and implement everything you’ve been asked for, there’ll be plenty of interesting ideas, but they won’t know how to talk to each other.

]]>
<![CDATA[One of the things that I noticed when building tools for developers, either through Tuist or my work at Shopify, is that we developers tend to get incredibly excited about what our new idea would e... ]]>
Tuist and the Swift Package Manager https://pepicrft.me/blog/2021/02/05/tuist-and-spm 2021-02-05T00:00:00+00:00 2021-02-05T00:00:00+00:00 <![CDATA[

It’s common to see developers wondering why they should use Tuist instead of the Swift Package Manager (SPM) for modeling their projects. I think it’s normal. It happened to me a few times too. Some of those questions even made me wonder if I should continue investing time into Tuist. There are some ideas and principles that are common to both tools. One can use Tuist to define a CLI tool like you’d do with SPM, in the same way SPM could be used to define the targets of your project. However, there’s a fundamental difference that is worth bringing up in this tiny blog post.

The Swift Package Manager is dependencies-oriented, while Tuist is projects-oriented. SPM does a good job of resolving and pulling dependencies and providing a standard CLI to build and test your packages. The developer experience of integrating it with Xcode is questionably good, but it works in most cases. I say it’s questionably good because when used at scale, as is the case in Tuist’s codebase, it’s a bit frustrating to see Xcode so often invalidating the dependency graph, or failing to resolve it and leaving you with a project that can’t compile. Because SPM is dependencies-oriented, the workflows are designed around that. This might change depending on the direction that Apple takes with the tool, but if it doesn’t, I doubt we’ll see the improvements that projects need at scale.

On the other side, Tuist is designed as a tool to make the experience of maintaining, interacting with, and scaling up your projects the best it can be. Because the people behind Tuist maintain large-scale projects, we are building features that can make a huge difference in developers’ productivity: project description helpers, focused projects, caching, and a standard interface for third-party dependencies (not only packages). If Apple is that giant corporation that comes up with utopian visions of developer experience towards which many teams are biased (i.e. authority bias), we are the tiny startup that stays close to developers and their pains and solves the problems that they bring up. And because we don’t have to align our ideas with any business’s direction, it’s easier to explore and execute ideas.

That being said, it’s possible that Apple changes the direction of the Swift Package Manager and turns it into a project manager. However, and as we’ve seen in the past, I think they’ll remain distant from the real problems that developers face. The future of developer tooling for managing your Xcode projects is exciting.

]]>
<![CDATA[It’s common to see developers wondering why they should use Tuist instead of the Swift Package Manager (SPM) for modeling their projects. I think it’s normal. It happ... ]]>
Tackling technical debt in Tuist https://pepicrft.me/blog/2021/02/04/tackling-technical-debt 2021-02-04T00:00:00+00:00 2021-02-04T00:00:00+00:00 <![CDATA[

I’ve recently spent a lot of time in Tuist tackling technical debt. It’d been a while since the last time I had to pause other work for weeks to do something that would be beneficial for the long term of the project.

This time the work was replacing models that are very core to Tuist’s domain: the graph and all the models associated with it. When I built the first graph structure, I didn’t put too much thought into how it should be. I was led by intuition. I added a reference here, a subclass there, and everything seemed to work. It worked so well that we have built the majority of the features on top of it.

We could have continued building upon that graph, but the further we moved, the clearer it became that we needed more flexibility, safety, and a graph that is easier to reason about. Unlike the old graph, which used in-memory references to represent dependencies, we implemented the new one as a struct.

The edges of the graph are represented by dictionary key-value relationships, and the nodes by enums. Everything is defined in a model, so at a glance you can see its shape. Moreover, we built a traverser that wraps it and provides efficient methods to traverse it. Those are useful when generating the projects because the logic needs to traverse the graph a few times to obtain information like which linking build phases should be added.
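As an illustration, a value-type graph along those lines could be sketched like this in Swift (hypothetical types, not Tuist’s actual models):

```swift
// Hypothetical sketch, not Tuist's actual models: nodes as enums,
// edges as dictionary key-value relationships, everything a value type.
enum GraphNode: Hashable {
    case target(name: String)
    case framework(path: String)
}

struct Graph {
    // Each node maps to the set of nodes it depends on.
    var dependencies: [GraphNode: Set<GraphNode>] = [:]
}

// The traverser wraps the graph and keeps traversal logic
// out of the data structure itself.
struct GraphTraverser {
    let graph: Graph

    func directDependencies(of node: GraphNode) -> Set<GraphNode> {
        graph.dependencies[node] ?? []
    }
}
```

Because the graph is a struct, copies are cheap and mutations can’t leak through shared references, which is part of what makes it safer and easier to reason about than a reference-based design.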

The new graph is already used by many components, but there are still some left. Doing this work made me realize how core this model is and why it makes Tuist’s project generation so unique. It’s certainly not as interesting work as building new features, but I’m motivated by the fact that this refactor will enable so many future improvements and new features.

If you see me not talking much about new Tuist features, it’s because I’m spending most of my time making this possible.

]]>
<![CDATA[I’ve recently spent a lot of time in Tuist tackling technical debt. It’d been a while since the last time I have to pause some other work for weeks to do something th... ]]>
Decision records https://pepicrft.me/blog/2021/02/03/decision-records 2021-02-03T00:00:00+00:00 2021-02-03T00:00:00+00:00 <![CDATA[

One of the things I’ve been terrible at is keeping decision records in projects. It often happens while working on Tuist that I come across something and need to know why it was done in a particular way, and I can’t remember. It also happens that users ask about something and I have to repeat the same thing over and over. Because of that, I’m considering adding a decision record to the Tuist repository where we can keep track of these things, as well as to the repos at Shopify that my team is responsible for maintaining.

As developers, we have to make many decisions throughout the day, and the result of those decisions is, most of the time, code. However, if the code doesn’t speak for itself, it’s extremely useful to add a companion narrative that provides the context necessary to understand the story of the code. It takes practice to build this habit, but I think it’s an important one to build as you become a more senior engineer. Piling up undocumented decisions in a project is the perfect recipe for misunderstandings and frustration.
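For illustration, a lightweight decision record can be as simple as one markdown file per decision, loosely following Michael Nygard’s well-known ADR format (the content below is a hypothetical example):

```markdown
# 2. Represent the dependency graph as a value type

Date: 2021-02-03

## Status

Accepted

## Context

The reference-based graph was hard to reason about and unsafe to mutate
from multiple places.

## Decision

Model the graph as a struct whose edges are dictionary key-value
relationships and whose nodes are enums.

## Consequences

Traversals move into a dedicated traverser; mutations happen on copies,
which prevents accidental shared-state bugs.
```

The value is less in the exact template and more in the habit: every non-obvious decision leaves a short, dated trail that future contributors (and future you) can find.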

I’m working on building that habit and making decision records a core piece of every repository that I’m responsible for maintaining.

]]>
<![CDATA[One of the things I’ve been terrible at is at keeping decisions records in projects. It happens often working on Tuist that I come across something that I need to know why it was d... ]]>
Scaling up an open-source project https://pepicrft.me/blog/2021/01/28/scaling-up-an-open-source-project 2021-01-28T00:00:00+00:00 2021-01-28T00:00:00+00:00 <![CDATA[

One thing that I’ve been struggling a lot with lately is the amount of distractions that come with the growth of an open-source project. In the case of Tuist, those distractions have come in the shape of notifications on GitHub, mentions on PRs, direct messages on Slack, and interesting conversations happening on Slack channels. Working on Tuist lately has felt more like running on a treadmill. I don’t like it. I like being able to craft new features and improvements with no distractions. Just me and Xcode.

To regain the focus that I used to have when the project was small and only a few people were involved, I started taking action. First, I’m time-boxing the time I spend checking notifications, Slack messages, and pings on PRs. I think spending between 15 and 30 minutes per day is enough. If something comes up after that daily time, it has to wait until the next day. Moreover, I’m trying to document and automate as much as I can. I introduced a new tool into the project, fourier, that will become the home for all the utilities necessary to work on the project. Most of the answers on how to do things will be in either the tool or the documentation, so people shouldn’t need me to answer “how do I do X?”. Also, I’m encouraging people to use asynchronous discussion channels like GitHub PRs and issues. Unlike Slack, where people feel the freedom to catch your attention, on GitHub it’s you who decides when it’s time to read the discussion. You are the one controlling your attention.

Let’s see how that goes. One of the reasons why some maintainers give up on open-source projects is that they can’t cope with the maintenance burden. I don’t want the same to happen with Tuist, and I’ll work to prevent it.

]]>
<![CDATA[One thing that I’ve been struggling a lot with lately is the amount of distractions that come with the growth of an open-source project. In the case of Tuist, those d... ]]>
The beauty of a standard command line interface https://pepicrft.me/blog/2021/01/18/standard-automation 2021-01-18T00:00:00+00:00 2021-01-18T00:00:00+00:00 <![CDATA[

There’s something beautiful in entering a directory that contains a project and knowing how to interact with it. It’s like being part of a conversation where the terminal is the channel and you both know the language. You know that build will turn your code into a binary, that test will validate that your code does what it’s supposed to do, and that run will show you what the code does. Since all the projects speak the same language, you can move between them freely and interact with them seamlessly.

Unfortunately, such beauty hasn’t existed for years in the Swift community. Fastlane gives you the tools to define the language; you get words, but you are the one coming up with the language. Reality has proved that defining a coherent language while building an app is not easy. Teams end up with complex, hard-to-optimize logic that is a nightmare to maintain. It’s not Fastlane’s fault; its building blocks are great, but its approach naturally leads to this.

When people ask me why Tuist now provides commands like build and test, my answer is simply that there’s a need in the Swift community for a different approach to interacting with projects. An approach where we write the language, we make it simple and enjoyable to use, and you focus on building great apps.

You describe your projects to us, and we do the rest.

]]>
<![CDATA[There’s something beautiful in entering a directory that contains a project and knowing how to interact with it. It’s like being part of a communication where the terminal is the channel and you bo... ]]>
Reflecting on 3 years at Shopify https://pepicrft.me/blog/2021/01/18/what-i-have-learned-at-shopify 2021-01-18T00:00:00+00:00 2021-01-18T00:00:00+00:00 <![CDATA[

A few days ago, it was my 3rd anniversary at Shopify, and I got the idea of sharing, in a short blog post, the things that I like about Shopify and that have allowed me to grow:

  • Great mission: The company has a clear and realistic mission that they go after, and most importantly, leadership believes in it. I’m noticing more and more companies emerging with CEOs who have a utopian view of the world and throw investors’ money at the problem, dreaming that the world will be shaped as they want it to be.
  • Leadership understands technology: And, therefore, can make informed decisions. When we decided to adopt React Native, it was a deliberate decision that the CEO could see aligning with the company’s direction.
  • They trust you: All that friction you see in other companies where you have to escalate the “can you give me access to” ladder doesn’t exist at Shopify. They trust people, and people trust each other. We have a concept called “trust battery”, and it’s essential to have it fully charged with the people around us.
  • Technology is a means and not an end: Shopify is often criticized for being old-school with their Rails monolith, and very recently, for adopting React Native as the default technology for building mobile apps. However, I learned to appreciate that any technology in the right hands can help you achieve great things. I’ve seen companies writing native Swift apps or complex micro-services backends whose business and user experience are broken. What’s the point of being on the latest technology if you can’t provide excellent value to your users?
  • You have a path for growth: Since I expressed my interest in leading a team, my manager trusted me to give it a shot and start taking steps towards it. Since then, they’ve provided me with resources, guidance, and support to help me grow every day. Similarly, I’ve seen other folks changing teams and paths to explore new areas.
  • Not afraid of change: Unlike many companies that are afraid of change once they find their comfort zone, Shopify likes and embraces change because they accept the world is continually changing, and we need to evolve with it. A couple of examples of that are the adoption of React Native and becoming digital by default. As soon as the pandemic started, the CEO accepted that the new normal wouldn’t be the same and that we should be the first to figure out what it’ll be like, not the last. The whole company is learning how to be remote, and it’s fantastic to see everything they are doing to support this transition.
  • Extremely talented people: Shopify is full of crafters who are passionate about what they do and from whom you can learn a lot. I’ve had the opportunity to learn a lot about people management, designing developer experiences, and managing projects.
  • A great developer experience can make a difference: The opportunity to work on tooling full time is one of the things that got me into Shopify. It’s a company that doesn’t treat tooling as an afterthought. A great tool can save you time and bring you the focus you need to build the product. We have teams dedicated to tooling for different development areas: local environments, cloud environments, test infrastructure, mobile, web…

Overall, I feel valued. I get the space that I need to be creative. I’m surrounded by a fast-moving and inspiring environment that motivates me to continually challenge my static visions of the things around me.

]]>
<![CDATA[A few days ago, it was my 3rd anniversary at Shopify, and I’ve got the idea of sharing in a short blog post what are the things that I like from Shopify and that allowed me to grow: ... ]]>
My first RFC in the React Native project https://pepicrft.me/blog/2021/01/07/first-rfc-in-rn 2021-01-07T00:00:00+00:00 2021-01-07T00:00:00+00:00 <![CDATA[

Today I created an RFC for the first time in the repository 🥳. I’ve been pondering a bunch of ideas for a long time regarding how the experience of building React Native apps could be improved, and I finally gave them structure and formalized them in an RFC.

I’m not sure if the community will be willing to embrace that direction, but I’m convinced the developer experience can improve significantly. It reminds me of the moment I had the realization that Tuist was necessary and set out to build it.

]]>
<![CDATA[Today I created an RFC for the first time in the repository 🥳. I’ve pondering a bunch of ideas for a lon... ]]>
My tech stack in 2020 https://pepicrft.me/blog/2020/12/28/my-tech-stack-2020 2020-12-28T00:00:00+00:00 2020-12-28T00:00:00+00:00 <![CDATA[

I’m a bit reflective today; I guess because we are approaching the end of this so-odd year. Therefore, I’d like to share what has been my preferred tech stack in 2020 and what will most likely continue to be in 2021.

React for building web frontends

I like React’s approach to building declarative UIs. The React movement led to an explosion of community components that make building UIs feel like LEGO, and to great tools and techniques like CSS-in-JS that save you a lot of time and improve the developer experience. I haven’t tried Vue myself; therefore, I can’t say much about it. I’m not a huge fan of Facebook steering the framework, but I’m optimistic it’ll become (if it hasn’t already) a community-driven project.

Gatsby for building static websites

After embracing React, using Jekyll and its partials felt old-school. Introducing dynamic behaviors meant writing imperative Javascript that calls DOM APIs. I wanted to use React’s declarative approach, and GatsbyJS allowed that. When I came across GatsbyJS, there were a bunch of concepts that I had to learn. For example, its approach of decoupling the UI from the data sources using a GraphQL API. The learning curve was a bit steep, but after getting over it, using Gatsby is a pleasure. And because it’s built upon React, I can reuse components and utilities built for React. When I see frameworks like Publish in Swift, a programming language that I’m emotionally attached to, I always think there’s no way I’d give up React’s awesomeness and convenience.

Netlify for deploying static websites

Netlify is an excellent example of an abstraction that improves the experience of putting a static website into production. They provide an environment where my Gatsby websites can be built, and they place the resulting HTML artifacts on a CDN that I can point my domains to. One of its breakthrough features is deploy previews: they deploy PRs to temporary environments so that people reviewing your PRs can check out the changes live. They provide many other features, but I haven’t used them yet.

Rails for building web services

I might be biased here because I work at Shopify, but I enjoy building web services with Ruby on Rails. I can confirm that both the programming language and the framework spark joy when using them. I’m not an expert in either, but I’ve reached a point where I feel fluent, and I can have something up and running quickly. What I also like about Rails and Ruby is that there’s less fatigue compared to Javascript. Suppose you need a database ORM; ActiveRecord is there to help. If you need to expose a GraphQL API, you can use GraphQL Ruby.

These are the Gems that I usually add to projects: Devise for authentication, Pundit for authorization, Sidekiq for running background jobs, GraphQL Ruby for defining a GraphQL API, Webpacker for using React as a frontend that interacts with a GraphQL API, and Rubocop for code linting.
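The gem list above translates into a Gemfile along these lines (a minimal sketch with versions omitted; gem names are the published ones, but your project would pin versions and add more):

```ruby
# Gemfile — the gems mentioned above, as they'd typically appear in a Rails project.
source "https://rubygems.org"

gem "rails"

gem "devise"                   # authentication
gem "pundit"                   # authorization
gem "sidekiq"                  # background jobs (backed by Redis)
gem "graphql"                  # GraphQL Ruby, for defining a GraphQL API
gem "webpacker"                # React frontend talking to the GraphQL API

group :development, :test do
  gem "rubocop", require: false # code linting, run via `bundle exec rubocop`
end
```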

Heroku for deploying web services

I’m not an infrastructure person, so I appreciate services that put my code into production. Netlify is that service for statically-generated websites, and Heroku is its counterpart for long-running services like Rails applications. I can create a new project, link it to a repository that contains a Rails app, and in a matter of minutes, I have a Rails app up and running on production. I can add a PostgreSQL database and a Redis instance that I can point Sidekiq to through add-ons.

As a side note, I have to say I like the run feature, which allows me to open a Rails console with the production instance and debug issues in production:

heroku run rails console

GitHub Actions for continuous integration

Since GitHub introduced it, GitHub Actions has become the go-to continuous integration service. I like it because it’s integrated into GitHub’s UI. The user interface is fluid compared to other services that I’ve used in the past, and it’s straightforward to reuse CI logic across projects by defining actions. It took GitHub a long time to step into this business, but I have to say they beat the market with an extremely high-quality product.

TailwindCSS for styling web interfaces

TailwindCSS is probably the discovery of the year. The framework introduced me to the concept of utility classes in CSS: after learning a set of class names that map to CSS styles, you can easily style your HTML without having to jump back and forth between HTML and CSS files. Moreover, the classes constrain properties such as color and margin to a predefined set of values. As a result, UIs look more consistent and harmonious.

After coming across the framework, I followed its authors, and I’ve been a massive fan of the work that they’ve been doing. Another masterpiece from them is the RefactoringUI book, which teaches you ideas for building beautiful and clean UIs. I also paid for a license for TailwindUI, their set of pre-defined layouts implemented with TailwindCSS.

Ruby for command line tools

Even though I’m building Tuist in Swift to make it easier for users to contribute, Ruby is my programming language for building CLI tools. It comes with the system, I’m incredibly familiar with it, and there are many useful Gems from the community that I can reuse. Moreover, its dynamism makes it very suitable for experimenting with ideas and workflows. Doing something like that in Swift would require going through Xcode and its compilation cycles, which takes the focus away from prototyping and hacking.

VSCode for editing code

I love VSCode. It’s a masterpiece from Microsoft. I can use it with multiple programming languages like Ruby and Typescript, extend it through community-built extensions, and configure it through workspace settings. Even though it’s built with Electron, which inevitably means using more computer resources than natively-built editors, I haven’t tried any other editor that comes close to VSCode. Between developer experience and efficient use of computer resources, I lean towards the former in this domain.

React Native for building apps

I like Swift and Apple’s direction with SwiftUI, but I can’t reuse the work that I do for Apple platforms on other platforms like Android and Windows. Shopify’s adoption of React Native has taught me that you can build great products with React Native; you just need to make the technology an implementation detail and focus on the product. In recent years I’ve shifted from treating technology as a goal to using it as a means to build tools that solve users’ problems. To achieve this, I think React Native is the most suitable option, and I can apply all the concepts and learnings from building web UIs with React.

…and this is my preferred stack. What about yours? If you write a blog post about it and share it on Twitter, don’t forget to tag me (@pepibumur); I’m curious to see what other folks in the industry are using.

]]>
<![CDATA[I’m a bit reflective today; I guess because we are approaching the end of this so-odd year. Therefore, I’d like to share what has been my preferred tech stack in 2020 and what most likely continue ... ]]>
Sparking joy working with Xcode https://pepicrft.me/blog/2020/12/10/sparking-joy 2020-12-10T00:00:00+00:00 2020-12-10T00:00:00+00:00 <![CDATA[

I learned by working with Ruby and Ruby on Rails during my time at Shopify that using tools and programming languages that spark joy is crucial for developers’ motivation. Even though we developers love to understand complexities, we enjoy working with simplicities and conveniences day-to-day.

In the Swift community, we’ve seen a proliferation of tools to help developers with different needs (e.g. generation of Swift interfaces from resources, code linting, dependency management) that inevitably led to a non-cohesive and complex developer experience where Fastlane acted as the glue. On top of that, the authority bias is nudging teams to have more than one dependency manager in their projects to stick to Apple’s recommendations.

Below is a common Xcode project setup with all the elements that are part of it and how they depend on each other. Note the extensive list of elements that you need to work on the project. There are a lot of caveats in such a setup:

  • Reproducibility is harder, which might result in developers spending their time debugging inconsistent results across environments. This can happen, for instance, if Homebrew decides to install a new version of a tool on CI that introduces breaking changes.
  • The setup is difficult to reason about for a new person joining the project. Only the people that have been part of the design can have such an overview. It results in a terrible bus factor, which in large companies translates to the infra team or the go-to person that knows everything about the project setup.
  • Optimizations are not easy. The different pieces are so coupled to each other that introducing optimizations that have a significant effect on developers’ workflows is a challenging task.
  • Because there are many potential points of failure, when errors arise, they are harder to debug. Is this failing because of my CocoaPods version? Is it because of this pod lane that I’m using? Might it be related to the version of Ruby I’m using?

The diagram shows an example of a complex Xcode project setup

That’s a setup prone to errors and stress for developers with which I’d never want to work. We’ve spent most of our time building great tools but not that much thinking about providing a cohesive experience when bringing them together.

For this reason, we continue investing in Tuist. We believe there’s an opportunity for providing a Rails-like experience for developers building apps with Xcode. An experience that combines primitives from Apple like xcodebuild and tools from the community.

Anyone can and should be a project architect. To make this possible, developers need simple setups and APIs to describe their projects. Having an infrastructure team is useful to steward the project’s growth, but there shouldn’t be a strong dependency between feature teams and them. The trap many companies fall into is building that strong dependency, where every time you need to do something that touches the architecture, you have to go through the infra team. The setup must be simple; embracing complexity is the formula for creating an environment in which developers don’t want to work. Tuist owns the complexity and the optimization of workflows to provide simple and efficient workflows through the CLI. A more straightforward setup makes environments more reproducible, and thus developers have to spend less time debugging issues when they arise. To work with Tuist, teams only need to install Tuist, and that’s it. No dependency on Ruby, Homebrew, or Fastlane. It’s you, Xcode, your project, and Tuist.

Developers might see this setup as rigid, like many companies saw and continue to see Rails. But look at companies like GitHub and Shopify that were able to build excellent products in part thanks to the fact that developers could focus on building the product and not on fighting the underlying tooling and frameworks. As apps become larger, the need for a Rails-like foundation becomes more important, and that’s the place I think Tuist can take in the community.

]]>
<![CDATA[I learned by working with Ruby and Ruby on Rails during my time at Shopify that using tools and programming languages that spark joy is crucial for developers’ motivation. Even though we developers... ]]>
Tree-shaking Xcode projects https://pepicrft.me/blog/2020/11/11/tree-shaking-xcode-projects 2020-11-11T00:00:00+00:00 2020-11-11T00:00:00+00:00 <![CDATA[

You might have seen me talking about tree-shaking Xcode projects with no idea of what I meant. This is a concept borrowed from the Javascript land. Over there, it refers to the process of stripping away from the resulting Javascript bundle the bits that are not necessary because no execution paths go through them. The goal is to minimize the size of the served file so that the website opens faster. I liked the idea, and it made me wonder if something like that would be useful for Xcode projects. It turned out it is.

What’s tree-shaking an Xcode project?

Have you tried to open a large project in Xcode? Indexing is not immediate, the list of schemes is probably large and hard to navigate, and Xcode’s features like searching are slower than usual. This is something we are, in fact, experiencing in Tuist’s codebase, and it’s very annoying. What if the generated project focused on a given target and removed everything that is not necessary to work on that target? In a modular codebase, that’s a common thing to do. You work on the Search team, and most of your work is done in Search.framework. There’ll be scenarios where you need to work on core frameworks, but most of the time, that’s not the case. Well… that’s what Tuist does when you run the following command:

tuist focus Search

We traverse your project’s dependency graph and remove the elements you don’t need to work on Search. Let’s look at the example below:

An image that shows how the tree-shaking of projects works with Tuist

We have a simple modular app with a layer of feature frameworks and another layer of utility frameworks. When we focus on Search, we get that target and its dependent targets (e.g. SearchExample, SearchTests, SearchUITests) as sources, and its dependencies as binaries (if they exist in the cache). This means Xcode doesn’t have to index anything related to App, Settings, and Home, and clean builds will only have to compile Search. For the user, that also means they can safely clean their environment (i.e. delete DerivedData) without worrying about it leading to a slow build.

As part of the tree-shaking process, we also delete from the workspace the projects that have no targets after deleting the unnecessary targets, and update the schemes to remove the references to no-longer-existing targets.
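The traversal described above can be sketched roughly as follows. This is a toy model, not Tuist’s actual implementation: the graph shape and target names are made up for illustration, and real target definitions carry far more information than a list of dependency names.

```ruby
# Toy model of tree-shaking a project around one focused target:
# - sources:  the focused target plus the targets that depend on it
#             (tests, example apps)
# - binaries: its transitive dependencies (replaceable by cached binaries)
# - removed:  everything else, tree-shaken out of the generated project
require "set"

# Hypothetical dependency graph: target name => names it depends on.
GRAPH = {
  "App"           => ["Home", "Settings"],
  "Home"          => ["Core"],
  "Settings"      => ["Core"],
  "Search"        => ["Core"],
  "SearchTests"   => ["Search"],
  "SearchExample" => ["Search"],
  "Core"          => []
}

def dependents_of(target, graph)
  graph.select { |_, deps| deps.include?(target) }.keys
end

def focus(target, graph)
  # Collect the focused target and everything that depends on it.
  sources = Set[target]
  queue = dependents_of(target, graph)
  until queue.empty?
    t = queue.shift
    next unless sources.add?(t) # add? returns nil if already present
    queue.concat(dependents_of(t, graph))
  end

  # Collect the transitive dependencies; these become binaries.
  binaries = Set.new
  queue = graph.fetch(target, []).dup
  until queue.empty?
    t = queue.shift
    next unless binaries.add?(t)
    queue.concat(graph.fetch(t, []))
  end

  removed = graph.keys.to_set - sources - binaries
  { sources: sources, binaries: binaries, removed: removed }
end
```

With this graph, `focus("Search", GRAPH)` keeps Search, SearchTests, and SearchExample as sources, turns Core into a binary, and removes App, Home, and Settings, matching the example in the diagram.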

What if I wanted to modify something in Core? We thought about that too: tuist focus supports a list of targets you’d like to focus on. You can run the following command, and you’ll get the sources of Core too:

tuist focus Search Core

Neat, isn’t it? I think tree-shaking is a powerful idea that will boost the productivity of developers working with large Xcode projects. I can’t stress enough how much value this brings to Tuist’s project generation compared to other solutions out there whose main focus is having a new language for .pbxproj files that is less prone to Git conflicts. Tuist goes beyond that and puts itself in the shoes of teams that need a tool that applies these optimizations to make their developers productive and let them focus on building features.

]]>
<![CDATA[Tree-shaking is a concept inspired by Javascript and used by Tuist to generate lean Xcode projects that are processed and compile faster.]]>
Module caching in Xcode projects https://pepicrft.me/blog/2020/11/10/module-caching-in-xcode-projects 2020-11-10T00:00:00+00:00 2020-11-10T00:00:00+00:00 <![CDATA[

As you might know, we’ve been working on a new feature for Tuist, caching, to speed up build times of Xcode projects.

Large companies usually resort to build systems like Buck or Bazel to introduce remote caching into their projects. Those are great build systems, and Apple is bringing in talent that has previously worked on them to improve Xcode’s and the Swift Package Manager’s build systems. However, those build systems are not accessible to small and medium-sized companies because they can’t afford a tooling or infrastructure team to migrate away from Xcode’s build system. Moreover, using another build system has proven to have significant costs for the infra team and inconveniences for the users. Since Xcode doesn’t support replacing its build system, companies have to resort to project generation in combination with some hacks to get Xcode to compile using Bazel or Buck. And with every new Xcode update, they have to do some work to ensure their setup doesn’t break and that developers can always be on the latest Xcode version. Not ideal, is it?

With Tuist, we have taken a simpler approach to caching that is inspired by solutions that we have already seen in the community. Here’s how our implementation relates to tools that you might already be familiar with:

  • Carthage: Carthage takes a different approach to dependencies. Rather than integrating the source code through a project and a workspace, it builds dynamic frameworks that can easily be dragged and dropped into a target, and Xcode will automatically set up the linking build phases. The clear advantage over CocoaPods is that your clean builds don’t compile the source code of those dependencies. However, it requires developers to understand their project’s dependency graph to ensure that the frameworks are copied into the right products. A badly configured project might lead to apps crashing at launch time, or to Apple rejecting your bundles because you copied a framework into another framework. From Carthage, Tuist takes the idea of turning your project’s targets into binaries.
  • Rome: Takes Carthage frameworks and stores them in remote storage. Thanks to this, Carthage frameworks are only built once and shared across all the developers on the team. Nothing changes regarding how those frameworks are integrated into the project. From Rome, Tuist takes the idea of reusing binaries by storing them in remote storage.
  • Swift Package Manager: SPM’s approach to dependencies is similar to CocoaPods’. The main difference is that since it’s developed and maintained by Apple, the integration into Xcode projects is more seamless. Xcode has built-in workflows for resolving Swift Packages when opening a project, and the build system knows how to build and link those dependencies into your app. From the Swift Package Manager, Tuist takes the idea of defining your projects using manifest files written in Swift.

So how does Tuist combine all the above elements to provide caching?

It generates Xcode projects (like CocoaPods) where the targets that you don’t plan to work on are replaced by binaries (like Carthage) that are stored in remote storage (like Rome) that gets populated from CI. All of that happens at project generation time when developers run the following command:

tuist focus MyFramework

It reads as: I’d like to work on MyFramework, please, replace direct and transitive dependencies with binaries, and tree-shake my project to remove elements that are not necessary to work on that target (e.g. other targets, their schemes).
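Conceptually, deciding whether a binary can be reused boils down to content-addressed caching: hash each target’s sources, settings, and the hashes of its dependencies, then look the hash up in storage. The sketch below is illustrative only; the class and field names are invented and don’t correspond to Tuist’s actual code, and a real implementation would talk to remote storage rather than an in-memory hash:

```ruby
require "digest"

# Illustrative content-addressed cache for prebuilt binaries.
# A target's hash covers its sources, settings, and the hashes of its
# dependencies, so a change anywhere upstream invalidates the entry.
class BinaryCache
  def initialize
    @store = {} # hash => path to a prebuilt binary (remote storage in reality)
  end

  def hash_for(target)
    dep_hashes = target[:dependencies].map { |dep| hash_for(dep) }
    Digest::SHA256.hexdigest(
      [target[:name], target[:sources].join, target[:settings].to_s, *dep_hashes].join("|")
    )
  end

  # Returns the cached binary path, or nil on a miss (build from source).
  def fetch(target)
    @store[hash_for(target)]
  end

  # CI populates the cache after building a target.
  def store(target, binary_path)
    @store[hash_for(target)] = binary_path
  end
end
```

Project generation would then replace a dependency with the cached binary on a hit, and fall back to including its sources on a miss.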

Because what you get in the end is a standard Xcode project, you don’t have to worry about future Xcode versions breaking your setup, or about using hacks on the Xcode side to make the developer experience seamless.

The image below represents the layers of indirection that are introduced when using alternative build systems. Note that build files have to be translated into another representation that is then passed to a project generator to get an Xcode project. Moreover, the final Xcode projects need to trick Xcode into using Bazel or Buck as the build system.

An image that shows the setup that companies adopt when using Buck or Bazel

The diagram represents the common setup when using alternative build systems.

In the case of Tuist, there are no layers of indirection. It takes your project definition and generates an Xcode project ready to be used with Xcode’s build system. Because of its simplicity, the setup is easier to reason about, optimize, and debug. Moreover, it makes caching accessible to more users:

An image that shows how caching works with Tuist

The diagram represents how the caching works in Tuist.

Some final words

We’ll never be able to build a solution like Bazel or Buck because we are not experts in build systems, nor do we think it makes sense to take people away from Xcode’s build system. We believe, though, that some of the ideas from Bazel and Buck can inspire future improvements in Xcode.

Apple seems to be betting on evolving its build system into something closer to what Google offers with Bazel. However, it’ll be challenging to enable it in existing projects that might have deviated a lot from a standard Xcode project. My guess on how Apple will proceed is that they’ll first evolve the monolithic .pbxproj format into a new format that abstracts some of the intricacies we have traditionally been exposed to (e.g. linking build phases, build settings) and is less prone to git conflicts. With a more constrained format, it’ll be easier for them to reliably enable build caching because they’ll have a well-defined set of project flavors to optimize for.

I’m very excited to bring this feature to Tuist and democratize caching for medium and small companies. There’s a lot we need to improve, and project scenarios to handle gracefully, but it’s looking promising, and I can’t wait to see more projects using it and making their developers productive through Tuist.

]]>
<![CDATA[Bazel and Buck is the solution large companies have adopted to make Xcode build fast. However, it's complex and not accessible to medium and small companies. In this blog post, I share the approach Tuist is taking and how it's inspired by tools the community is already using.]]>
Growing Tuist's community https://pepicrft.me/blog/2020/10/31/growing-tuist-community 2020-10-31T00:00:00+00:00 2020-10-31T00:00:00+00:00 <![CDATA[

As you might already know, I devised and started working on Tuist a few years ago. I was motivated by the fact that modular Xcode projects were a nightmare to maintain and that existing project generation solutions were taking a direction that would lead them to surface the same intricacies and complexities present in Xcode projects. I envisioned Tuist as more than just a project generator. I wanted it to be a platform that makes development convenient and removes the indirection layers introduced by tools written in different programming languages. Rails was my biggest inspiration. The framework favors convention over configuration to provide a great user experience. Rather than installing a handful of tools, as happens when developing apps with Xcode (CocoaPods, Carthage, SwiftLint, Sourcery, SwiftGen), you install one, and it works. I was thrilled to bring the same idea to the Swift community.

When I embarked on that journey, it was clear that the community would play an important role. That’s one of the things that makes Rails so unique too: it’s made by people who have a lasting commitment to the project. This is rarely seen in the Swift land, where it’s more common to see people come and go. Many clickbait-type projects reach a spike of hype and stars on GitHub, and then they are abandoned or barely maintained. Tuist would be different. It’d have a mission, it’d put UX first, and people would be encouraged to show a lasting commitment to the project. It’d help developers with the challenges they face when scaling up projects. Users would be invited to share their challenges to help shape the direction of Tuist.

In this blog post, I’ll share what has worked well in building the community and the things that still have room for improvement.

What has worked

Documentation for contributors

Joining a community as a contributor might feel intimidating. Where do I start? Projects usually place a CONTRIBUTING.md file in the repository with some basic guidelines, but that’s not enough - people need a walkthrough that explains how the project is architected and how they can clone it and run it locally. We did that in Tuist’s documentation. We realized new contributors appreciate that a lot because they have an exact starting point.

When we see new contributors joining the community, we take the opportunity to engage with them and get some feedback from them to improve the documentation. We don’t treat it as a static piece of the project but rather as a dynamic one that needs to evolve alongside the project and the growth of the community.

We’d like to invest more in it and add a tutorial that guides the user through all the features they have access to with Tuist.

Pairing with newcomers

I learned that introducing people to the project by pairing with them is the most effective way to bring a diversity of ideas to the project and empower them to contribute further. It’s hard to explain how energizing it is to see people contribute to a project for the first time and ship their first feature after one or a few pairing sessions.

Build a modularized architecture

If you build the project as a monolith, familiarizing yourself with the project means familiarizing yourself with the entire monolith. However, if you split it up into smaller feature domains, it’s easy for new contributors to familiarize themselves with a particular area of the project and contribute to it. In the case of Tuist, we invested a lot in that from the very beginning, and it’s paying off. Most of our features are modeled following a functional paradigm: projects are loaded and then passed through a series of mappers representing different features to eventually reach the components that turn them into Xcode projects. Thanks to that, we’ve been able to introduce mappers that bring support for caching, generate type-safe APIs for accessing resources, or turn your project into a visual graph.
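That mapper pipeline can be pictured as a reduce over the loaded project, where each mapper takes a project and returns a transformed copy. This is a toy illustration in Ruby, not Tuist’s actual (Swift) types; the mapper names and the Project shape are invented:

```ruby
# Toy illustration of a functional mapper pipeline: a loaded project
# flows through a series of mappers, each returning a new project value.
Project = Struct.new(:name, :targets, keyword_init: true)

# Hypothetical mapper: tags every target with type-safe resource accessors.
add_resource_accessors = lambda do |project|
  Project.new(name: project.name,
              targets: project.targets.map { |t| t.merge(resources: :typesafe) })
end

# Hypothetical mapper: swaps cached targets for prebuilt binaries.
replace_with_binaries = lambda do |project|
  Project.new(name: project.name,
              targets: project.targets.map { |t| t[:cached] ? t.merge(kind: :binary) : t })
end

# The pipeline itself is just a reduce: each mapper feeds the next.
def apply_mappers(project, mappers)
  mappers.reduce(project) { |acc, mapper| mapper.call(acc) }
end
```

Because each feature is an isolated mapper, a new contributor can understand and change one mapper without touching, or even reading, the rest of the pipeline.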

In the case of Tuist, we followed the uFeatures architecture that is detailed here, and it has worked very well.

Monorepo

One of the areas of projects that is often disregarded is documentation. Getting the documentation out of sync with the project can have a very negative impact on the experience users have with the project. To prevent that from happening in Tuist, documentation (developed as part of a Gatsby website) lives alongside the source code of Tuist in the same repository. Developers are required to update it as part of their work on improvements and new features. Moreover, their changes are automatically built by Netlify, which offers a preview automatically.

If we were not using a monorepo, building a new feature would involve more than one PR with references between them. That leads to “I’ll open a follow-up PR with the documentation update,” which, in the end, doesn’t happen. Thanks to the GitHub Actions API, it’s straightforward to define which workflows should be triggered based on the files changed.

Every improvement or new feature in Tuist must include code changes, unit tests, acceptance tests (if needed), a CHANGELOG update, and documentation. If any of those elements is missing, the PR is not merged until it is there.

Having a Slack group

Even though I’ve been trying to avoid synchronous communication lately, because it makes people believe the answers should be synchronous too, I have to say Slack played an important role in the growth of Tuist’s community. Every once in a while, people join and engage with other users and contributors. I always take the opportunity to engage with them and ask what brought them to Tuist. It’s fantastic to hear first-hand which features of Tuist motivated them to use it. Interestingly, many users like the idea that they can use Tuist to define workspaces, and that there is excellent documentation they can follow.

Engage through Twitter

Since the inception of the project, we’ve had a Twitter account that we use to share updates with users and people interested in the project. We share all kinds of Tuist news, from new releases to tips. Having the account also allows contributors to tag Tuist whenever they proudly share their contributions to the project. Most of the community of developers who work with Xcode is on Twitter, so it’s the place to be if we want them to know about Tuist and give it a try.

Believe in our ideas

The Xcode community has settled on solidified ideas that have been established either by Apple or by the community. This is great because you don’t have the fatigue you’d have in Javascript trying to find, among many options, the solution that works best for you, but it has the downside of constraining new ideas. We’ve experienced that from the very beginning, when people started comparing Tuist with XcodeGen and Fastlane. Yes, we leverage project generation and provide commands that developers would usually define in a Fastfile. Still, we take a different approach with different goals. It’s easy to get distracted by community opinions, but that hasn’t been the case here.
Tuist’s community is not afraid of throwing new ideas and exploring them further. Thanks to that, we have features like project description helpers, module caching, and auto-generation of documentation that wouldn’t otherwise be in Tuist.

Trust people

Shopify has taught me this. One of the best values that you can have in your project/company is trust. It’s a value ingrained in Shopify’s culture, and I wanted it to be part of Tuist’s too. Since the beginning, we’ve trusted people to do all sorts of things in the project: propose and own the implementation of new features, publish new releases, provide support to the community… When you do something like that, they feel part of the project and are empowered to contribute further.

This has the downside that some people might end up proving they are not trustworthy. However, we are lucky that hasn’t been the case in Tuist yet. If that scenario ever arises, we’ll handle it.

Recognize people’s work

We take the time to recognize the work that people do, both privately on Slack and through mentions on Twitter. I’ve noticed some communities have automated this with Slack bots, but honestly, it saddens me that we have reached a point where we need to say thanks through a bot. In some cases, I’ve gone as far as sending the person a little gift (e.g., stickers with a hand-written card and a book about open source).

What hasn’t worked so well

Delegating

Although there are a few maintainers and contributors, there’s still a lot that falls on me, and that sometimes results in a bottleneck when there’s a lot of work in the backlog and I don’t have enough attention left after work to devote to it.

I’m currently seeking domain owners, but it’s hard because Tuist is a side project. People sporadically contribute but can’t commit to a role in the project. This is a classic issue with open source projects that we don’t know how to approach either. And it concerns me because it can lead some of us to burn out and give up.

I recently started building a companion web app that integrates with GitHub, Discourse, and Slack and provides utilities to make this easier. For instance, one of the ideas that I have in mind is tweet requests (TRs), a way for contributors to propose tweets to be shared from Tuist’s main account. I called this app Backbone. It’s in a very early phase, but you can check out the project on this repository.

Vision

I haven’t done a good job sharing the vision of the project. I’ve been hinting at it through Slack messages and posts in the community forum. Still, it’s hard for someone new to the community to imagine the future of Tuist and how the current projects align with it. I guess that’s normal when a project is young. There are many things yet to be defined, but as things start to mature, having a vision makes it easier for teams to align with the upcoming features.

Final thoughts

Tuist is my baby 👶 - I like working on it a lot. We’ve built a community of users, contributors, and maintainers that makes the project unique. We have gone from being merely a project generator to a platform that provides streamlined workflows to focus on the most critical tasks. Since I started building Tuist, it has disappointed me that only the big companies in the industry had access to the features that most projects need to scale up (e.g., easy modularization, caching). I tasked us with democratizing them and making them accessible to anyone.

It’s been three years, and the project keeps moving forward, fueled by great minds and creative people. I don’t know what’ll come next, but I can say with confidence that it’ll help projects of any size with the challenges they face or are about to face.

]]>
<![CDATA[In this blog post, I share my experience building the Tuist community. I talked about the things that have worked well, and the areas where there's still some room for improvement.]]>
The exciting adventure of building a web app https://pepicrft.me/blog/2020/10/02/the-exciting-adventure-of-building-a-web-app 2020-10-02T00:00:00+00:00 2020-10-02T00:00:00+00:00 <![CDATA[

I’ve been playing lately with building a web app that complements Tuist with some features that require storing state on a server. Since I have a mobile development background, being familiar with Swift, iOS, and, very recently, React Native, I’m learning a lot while developing both a web app and a backend that exposes a GraphQL API. This blog post is me reflecting on what I’ve learned so far from the technology choices we’ve made.

For the backend, I chose Rails. It’s easily deployable to Heroku by simply doing a git push. Moreover, I can use a programming language that I love, Ruby, and save a lot of time by adding dependencies that bring functionality that I’d otherwise have to implement myself - we use Devise for authentication, Rolify for defining roles, and CanCanCan to codify permissions when accessing models.

The API is GraphQL. We use the Ruby GraphQL gem, which makes it easy to define a schema and translate it to internal business logic. Unlike standard Rails applications, which tend to put the logic in models or controllers, we use the service pattern heavily. Every unit of business logic is defined in a service that takes the necessary input, performs the operation, and either returns a value or raises an error. That makes those units easy to reuse from other components like controllers. As a result, our models are very lean; they only contain validations and a few callbacks to automatically populate some fields.
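To make the pattern concrete, here’s a minimal sketch of what such a service could look like in plain Ruby. The class and error names are hypothetical, not the actual code from the app:

```ruby
# A plain-Ruby model standing in for an ActiveRecord model.
class User
  attr_reader :name, :email

  def initialize(name:, email:)
    @name = name
    @email = email
  end
end

module Services
  # One unit of business logic: takes the necessary input, performs the
  # operation, and either returns a value or raises an error.
  class CreateUser
    class InvalidEmailError < StandardError; end

    def call(name:, email:)
      raise InvalidEmailError, "#{email} is not a valid email" unless email.include?("@")

      User.new(name: name, email: email)
    end
  end
end
```

Because services are plain objects with a single entry point, they’re equally easy to call from a GraphQL mutation, a controller, or a background job.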

I like this setup because I’m learning a lot about GraphQL. I have to say I can’t imagine myself building REST APIs anymore. There’s one thing that I’d like to read more about, and that’s how to solve the N+1 issue when running queries, because with the GraphQL approach it’s more likely to happen. However, it’s not an issue right now, considering that the amount of data we are sending is minimal and the queries are relatively simple.
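To illustrate the N+1 problem, here’s a self-contained sketch (hypothetical names, with an in-memory store standing in for the database) showing how resolving a field per object multiplies queries, and how batching the ids collapses them into one:

```ruby
# An in-memory "database" that counts how many queries it receives.
class FakeDB
  attr_reader :query_count

  def initialize(authors)
    @authors = authors
    @query_count = 0
  end

  def authors_by_ids(ids)
    @query_count += 1
    @authors.select { |author| ids.include?(author[:id]) }
  end
end

authors = [{ id: 1, name: "Ada" }, { id: 2, name: "Alan" }]
posts = [{ author_id: 1 }, { author_id: 2 }, { author_id: 1 }]

# N+1: a resolver that fetches the author per post issues one query
# for each of the N posts (on top of the query that loaded the posts).
naive_db = FakeDB.new(authors)
posts.each { |post| naive_db.authors_by_ids([post[:author_id]]) }

# Batched: collect the ids first and issue a single query, which is
# what batch-loading libraries do behind the scenes.
batched_db = FakeDB.new(authors)
batched_db.authors_by_ids(posts.map { |post| post[:author_id] }.uniq)
```

In a real app, a library such as graphql-batch applies this idea by deferring field resolution until all the requested ids are known.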

We only use Rails’ server-side rendering solution, ERB, for the authentication workflow, because it’s provided by Devise out of the box. For everything else, we moved the rendering to the client side, using React to describe the views and Apollo to interact with the GraphQL API and cache the responses. Apollo provides a lovely hooks-based interface that makes interacting with the API a pleasure. Moreover, we use a code generation tool that turns our GraphQL queries and mutations into Typescript code, so we don’t have to write network code at all. Thanks to that, our focus on the frontend is on what data to fetch and how to represent it.

Last but not least, we are styling the UI using TailwindCSS. Ever since I came across it for the first time, I’ve been using it in every project that involves styling HTML. Styles are applied through classes with semantic meaning, limiting each attribute to a limited set of values. Thanks to that, it’s easier to achieve a visual consistency that otherwise wouldn’t be possible if we had to pick values for every new HTML element. And that goes without mentioning the relief of not having to think about naming classes. I got a license for TailwindUI, which provides a set of beautiful components built upon TailwindCSS.

I’m enjoying and learning a lot in the process of building this web app, and I’m taking the opportunity to learn about designing for the web: patterns, semantic hierarchies, and how to create components that look clean and modern.

Do you have experience building web apps? If you don’t mind sharing your stack, I’d love to hear about it on Twitter.

]]>
<![CDATA[I’ve been playing lately with building a web app that complements Tuist with some features that require storing state in a server. Since I have a mobile development b... ]]>
Generating Typescript code from a GraphQL schema https://pepicrft.me/blog/2020/09/30/graphql-codegen 2020-09-30T00:00:00+00:00 2020-09-30T00:00:00+00:00 <![CDATA[

Today, I learned about a tool called GraphQL Code Generator that turns a GraphQL schema into typed models and utilities for interacting with a GraphQL API. In my case, I’m using it in a React application with Apollo as the client. Using the tool is as simple as adding a configuration YAML at the root of the project:

schema: schema.graphql
generates:
  app/javascript/graphql/types.ts:
    documents: 'app/javascript/**/*.graphql'
    plugins:
      - typescript
      - typescript-operations
      - typescript-react-apollo
    config:
      reactApolloVersion: 3

And then running yarn graphql-codegen. The tool outputs a .ts file that contains all the necessary code for interacting with the API. For example, the snippet below shows how to fetch the current user by using the generated code:

import { useMeQuery } from 'graphql/types';

const MyComponent = () => {
    const { data, loading, error } = useMeQuery();
    if (loading) return <div>Loading…</div>;
    if (error) return <div>Something went wrong</div>;
    // Render the fetched data; a real component would read typed fields off `data`.
    return <div>{JSON.stringify(data)}</div>;
};

Gone are the days of Objective-C and Swift when I had to write the client-side models manually using the API documentation. Who wants to do that again after seeing such a powerful workflow? By the way, Shopify has a similar tool that generates models in Swift & Kotlin - it’s called Syrup and it’s open source.

]]>
<![CDATA[Today, I learned about a tool called GraphQL Code Generator turns a GraphQL schema into typed models and utilities for interacting with a GraphQL A... ]]>
What I like from Ruby and Rails https://pepicrft.me/blog/2020/09/30/what-i-like-from-ruby 2020-09-30T00:00:00+00:00 2020-09-30T00:00:00+00:00 <![CDATA[

The more I use Ruby and Rails, the more I like them. I’ve played with Typescript lately, and it continues to feel heavy: parentheses and brackets everywhere, layers of indirection through tools that adapt the Javascript to the browser or to your preferred way of writing it. It’s a powerful programming language, but it doesn’t spark the same joy that Ruby does.

Ruby is lean. There’s an interpreter in your system that you pass your code to. Bundler ensures that the directories of your dependencies are loadable, and that’s it. No Babel, Webpack, Typescript… It simply works. That’s what you want at the end of the day: not spending time figuring out issues or how to configure underlying tools.

The only reason why I’d use Javascript is to be able to describe a website using React’s approach to representing state and encapsulating dynamic behaviors through hooks. That’s why I use Gatsby for creating statically generated sites over alternatives like Jekyll, or the much-talked-about (in the Swift community) Publish, which would lock me into Xcode.

Going back to Ruby and Rails, what do I like about them?

  • I only need the interpreter and the dependency manager to run a project.
  • I can write tests in plain Ruby classes using the language’s standard library.
  • Bundler’s approach to structure dependencies is more sensible and prevents the “delete node_modules” issues.
  • There’s less library fatigue. There are fewer options that are better tested, and that makes deciding for a dependency easier.
  • Being able to open a Rails console in a remote server with ActiveRecord models loaded is damn amazing.
  • Thanks to its dynamism, you can build plugin systems that otherwise wouldn’t be possible with statically compiled languages like Swift.
  • Companies like Shopify are using it at scale, and IT WORKS. Some internal tooling to support the scale leverages the dynamism of the language.
  • The language and its ecosystem feel more harmonious. Working with Javascript is sometimes stressful because you need to go deep into an endless rabbit hole of patches over patches.

This is my preferred stack when building software these days:

  • Static websites: GatsbyJS with Typescript
  • CLI tools: Swift if it’s for macOS environments, and Ruby otherwise.
  • Apps for Apple platforms: Swift (I’m planning to learn SwiftUI at some point).
  • Web APIs: Ruby on Rails.

And you? What language/technology do you like the most and why? Let me know on Twitter.

]]>
<![CDATA[The more I use Ruby and Rails, the more I like it. I’ve played with Typescript lately, and it continues to feel heavy: parenthesis and brackets eve... ]]>
Modularization in open source projects https://pepicrft.me/blog/2020/09/29/modularization-in-open-source 2020-09-29T00:00:00+00:00 2020-09-29T00:00:00+00:00 <![CDATA[

I recently came across a blog post from Shopify where they share how they are componentizing the main Rails application into smaller pieces with clearly defined boundaries and loose coupling between them. This made me think about the uFeatures architecture that I proposed back when I was an iOS engineer at SoundCloud, and that I naturally carried over into Tuist.

Typically, open source Swift CLIs are organized in two targets: one that represents the executable (i.e. main.swift), and another one, typically suffixed with Kit, that contains all the business logic of the tool. The main motivation for doing so is being able to implement tests for the business logic. However, since everything lives within the same boundaries, there’s a huge risk of the business logic growing into a large group of strongly coupled components, and an architecture that is hard to reason about, maintain, and evolve. It’s a good starting point, but not a good idea long-term, because it’ll compromise developers’ efficiency when contributing to the codebase and complicate onboarding new contributors to the project.

As I mentioned earlier, Tuist follows the uFeatures architecture. There are Tuist-agnostic targets like TuistSupport (inspired by Rails’ ActiveSupport) and TuistTesting that contain utilities transversal to all features, and core utilities: abstractions built upon foundational APIs and extensions. There’s a TuistCore target that contains models and business logic that is core to Tuist; for example, the dependency graph and the models that represent the projects. This target also acts as a dependency inversion layer so that feature targets don’t have dependencies among them. Thanks to this, we can build a feature without having to build the others, which makes iteration cycles faster when working on individual features. Then features are organized horizontally. Most of them represent the different command namespaces exposed by the CLI. In some cases, like automation commands, they are grouped under TuistAutomation. Cloud-related utilities live in TuistCloud. This is great for new contributors because if they want to fix or improve something in the tuist build command, they only need to onboard on the TuistAutomation target. Isn’t it great?

Last but not least, we have the TuistKit target, which glues all the features together into a command line interface that is hooked from the entry main.swift file. Commands are classes responsible for parsing the CLI arguments and throwing errors when they are incorrectly used, and they delegate the business logic to services. For example, there’s a GenerateCommand and a GenerateService.

One might think that this is over-engineering the project, but I’d certainly disagree. Defining clear boundaries in a codebase by leveraging Swift’s access levels leads to a better architecture, which in turn eases contributions and the addition of new features. Starting the modularization long after creating the project is a hard challenge to undertake because the code will most likely be strongly coupled. We tried to do that at SoundCloud and, from what I know, there’s still a lot of code living in the main app that is hard to extract.

I can’t imagine Tuist being a monolith codebase these days.

]]>
<![CDATA[I recently came across a blog post from Shopify where they share how they are componentize the main Rails application into smaller pieces with clearly defined boundaries and loose coupling between ... ]]>
Finding focus https://pepicrft.me/blog/2020/09/07/focus 2020-09-07T00:00:00+00:00 2020-09-07T00:00:00+00:00 <![CDATA[

One of the things that I struggle a lot with these days is having focus. Despite my several attempts to mitigate distractions, they always find their way to make it into my attention span. The result of that is that I feel stressed, and when I’m stressed I can’t think clearly. I feel like I’m jumping from one thing to the other without being able to do deep work in any of them.

One side of me thinks that the solution to this is removing those distractions. For example, applying some ideas from the minimalism movement: be only in the strictly necessary Slack channels, don’t spend so much time on social networks, tidy up the desktop and phone setup, and remove any app clutter. That has helped, but it’s not enough; I still feel distracted and often stressed.

What else can I do? I think I have to learn to accept that distractions will always be there and that I have to become better at saying yes and no to things. I struggle with saying yes, this is important and therefore requires my attention, and no, this is not relevant right now and therefore I should let it go.

I’m also trying to get comfortable with not reading Twitter often. I got used to being a passive consumer of everything that’s going on in the world, and that’s not sustainable. My brain is exhausted from keeping up with everything that is happening. Instead, I’m teaching myself to be more offline than online, and more active than passive. For example, I’m trying to read more paper books and newspapers, and write more often on my blog or on random pieces of paper. It’s very tough because my brain somehow got used to being distracted all the time, but when I get into the mood of being offline I quite like it.

It’ll be a long and tough re-education process but this is what I’m up to these days: trying to be less stressed to have more focus and be able to do deep work again.

]]>
<![CDATA[One of the things that I struggle a lot with these days is having focus. Despite my several attempts to mitigate distractions, they always find their way to make it into my attenti... ]]>
Pairing sessions to introduce people to Tuist and open-source https://pepicrft.me/blog/2020/08/22/pairing-tuist 2020-08-22T00:00:00+00:00 2020-08-22T00:00:00+00:00 <![CDATA[

I recently started having pairing sessions with developers interested in contributing to open source; it’s something that usually intimidates people, but that becomes easier if someone guides you through your first contribution. You have a person that can answer any question you might have and give you an overview of the project and the decisions made in the past.

I’ve had two of them so far and I love it. Getting new people to contribute to Tuist brings new and more diverse ideas to the table. Moreover, they bring more energy, which is the fuel that moves the project forward.

To make the first-time experience great, I pick a feature that I know is implementable end-to-end in one session. For example, in the first session we worked on a tuist lint code command to lint projects’ code using SwiftLint, and in the second one we implemented a tuist doc command to auto-generate documentation from targets.

Doing these sessions made me realize what a great idea it was to follow the uFeatures architecture in Tuist. It’s very easy to add a new feature alongside the others and build its business logic by composing existing pieces of business logic. Moreover, it’s easy to compare the existing architecture with what they know from building iOS apps. For example, I tell them that the CLI is the app delegate, that commands are like views, and that since views should not have logic, we extract the business logic into services.

If you maintain an open-source project I’d strongly recommend doing something like this.

If you are reading this and would like to pair on the project too you can let me know and I’ll be happy to schedule a session and do a bit of hacking together.

]]>
<![CDATA[I recently started having pairing sessions with developers interested in contributing to open-source; it’s something that usually intimidates people, but that becomes easier if someone guides you t... ]]>
Thinking in terms of problems https://pepicrft.me/blog/2020/08/03/thinking-in-terms-of-problems 2020-08-03T00:00:00+00:00 2020-08-03T00:00:00+00:00 <![CDATA[

One of the things that I find the most challenging these days when building tools for developers is thinking in terms of problems.

Very often Tuist’s users ask for features that they have seen in Xcode and that they’d like to see in Tuist too: is there support for script build phases? Can I run a command right after generating the project? It’s tempting to say that we don’t support it yet, but that we can add it. However, we’d end up with the same concepts and complexities that motivated them to use Tuist in the first place.

The approach that I take instead is asking them why: why do you need a script build phase, or why do you need to add this build setting? Thanks to this, we have been able to simplify some of Xcode’s complexities, like defining dependencies. If we understand what our users’ motivations are, we can provide them with simpler and more optimized solutions. And this is something that excites me a lot about the way we are building Tuist. It’s all about helping teams with the challenges that they face. The how is up to us to figure out.

For example, I know that one of the challenges teams face is slow builds. Some teams went straight into adopting build systems like Bazel or Buck because they saw that it worked for other companies. However, they might not have realized that the fact that it worked for those companies doesn’t mean it’ll work for them too. This often leads to teams going too deep into rabbit holes, or what’s worse, introducing layers of complexity that make projects hard to maintain and work with. If you have seen projects using CocoaPods, Carthage, and the Swift Package Manager for managing dependencies, you probably know what I’m talking about.

As one of the maintainers of Tuist, one of the roles that I set for myself is evangelizing this idea of how to build tools. I’d love people to think of Tuist as a product; a product that solves concrete problems that we understand well. If we want to build something that developers love to use, we need to understand why we are building it in the first place.

  • We built updates into Tuist because the existing solutions for managing the installation of system dependencies (e.g. Homebrew) might yield non-deterministic results that cause frustration for developers.
  • We built project generation because Xcode makes it difficult to define a modular project consistently, which is crucial for sharing code and keeping build times low with architectures like uFeatures.
  • We are adding caching because developers use ⌘+K often and end up wasting a lot of time doing clean builds with Xcode.
  • We are adding automation because Fastlane’s approach in practice results in large and complex Fastfiles that are hard to maintain.
  • We are synthesizing interfaces for resources to prevent runtime errors when accessing non-existing resources.

If you are also working on tools for developers, I’d recommend adopting this mindset. Listen to your users, understand their needs, and build the best solution for them. Resist the temptation of building just what they ask for.

]]>
<![CDATA[One of the things that I find the most challenging these days when building tools for developers is thinking in terms of problems. Very often Tuist’s users ask for features... ]]>
The beauty of not expecting something in return https://pepicrft.me/blog/2020/08/01/the-beauty-of-doing-things-for-people 2020-08-01T00:00:00+00:00 2020-08-01T00:00:00+00:00 <![CDATA[

Yesterday I had a thought-provoking chat with an acquaintance. He thinks that when we connect with someone, it’s because they are a means to achieve goals - for example, a business partnership. He was, in fact, trying to do that with me, and I felt really disappointed. I was invited to a casual lunch, then spent a nice afternoon with them having coffee and some drinks, until at some point he realized that I was too naive to get the game that he was trying to play.

I feel very uncomfortable with this attitude and the more time I spend in Germany, the more I realize that this is a cultural thing from Spain. It’s not the first time I’ve noticed this pattern: people playing the game of being your friend, but in reality treating you as a tool. It’s ok if you play the same game, but not so cool if you are out of the game.

After that chat, I couldn’t help reflecting on myself and whether I behave the same way. I think in my case I’m more driven by the beauty of building connections with people. I think that’s the reason I’m so engaged devoting my free time to Tuist. Some people would perceive it as a waste of time, but I see it as fulfilling because I get to know and talk with people that I otherwise wouldn’t have been able to. In fact, the other day I met Marek, also a core contributor to Tuist, and we had breakfast together in Berlin. For me, that has more value than money, success, likes, follows, stars, and any other kind of interest that people might chase these days.

This is something that I learned from my parents and that I’ll never forget. Happiness is not about success and money; it’s about being healthy and being a friend to your friends.

What I’m learning, though, is how to distance myself from those people that treat me or other people as means. I’ve been there. I’ve suffered a lot from it, and it’s consumed a lot of my energy. It’s easy to fall into the trap of only seeking success and feeding your narcissistic desires.

And my random thought about people ends here while I’m somewhere in the countryside of a small town in the south of Spain called Murcia.

]]>
<![CDATA[Yesterday I had a thought-provoking chat with an acquaintance. He thinks that when we connect with someone is because they are a means to achieve goals - for example a business par... ]]>
A shift towards product development https://pepicrft.me/blog/2020/07/13/shifting-towards-product 2020-07-13T00:00:00+00:00 2020-07-13T00:00:00+00:00 <![CDATA[

Working on building tools for developers has helped me realize that what I like even more than coding is going through the product thinking process. That’s why I’m so engaged building Tuist, and recently Galaxy.

I used to be excited by playing with technology itself. SwiftUI? Oh! I want to test that out and see how it feels. A new reactive framework? I want to add it to my app and see how it compares to RxSwift. However, that’s no longer the case. I see technology as a means to provide something to users and tackle problems/needs they might have. And the side effect of this shift is that I don’t worry anymore about having to keep up with the latest. Instead, my focus is on users: what do they need?

I enjoy seeing how developers work at Shopify and wondering what kind of tools we could build to help them. I also enjoy seeing developers join Tuist’s Slack group and bring new, interesting challenges I hadn’t thought about before.

These are questions that I often ask myself:

  • How can I make the inconvenient steps of developing apps with Xcode convenient?
  • How should Tuist’s website be designed to convey the ideas behind the project?
  • What can I do to build a healthy and engaging community of users?
  • What can I do to reach those developers that still struggle to scale up their Xcode projects?
  • How can I solve challenging problems like reliable team signing of apps or faster builds?
  • How can I make sure we don’t disregard little details that make the user experience using Tuist great?

In a recent interview at Shopify, someone asked me what I like from my job. I did not expect that question, but I could answer it without thinking twice: I like to make complex things simple.

That’s what I’ve enjoyed doing at Shopify and with Tuist, and that I’ll continue exploring further in the next few years.

By the way, I just got a book on web typography because I want to understand how different typefaces influence the way a design is perceived.

]]>
<![CDATA[Working on building tools for developers has helped me realize that what I like even more than coding is going through the product thinking process. That’s why I’m so engaged building ]]>
Transitive React Native dependencies https://pepicrft.me/blog/2020/07/01/react-native-transitive-dependencies 2020-07-01T00:00:00+00:00 2020-07-01T00:00:00+00:00 <![CDATA[

Today I learned about how dependencies are organized by NPM and Yarn. Let’s say we have the following scenario of dependencies:

  • A -> B -> C (3.2.1)
  • A -> C (1.2.3)

Javascript dependency managers will structure the dependencies following the structure below:

node_modules/
  a/
  b/
    node_modules/
      c/ #3.2.1
  c/ # 1.2.3

In this scenario, package managers like Bundler or the Swift Package Manager would error, but NPM and Yarn do not. Javascript bundlers like Webpack and Metro might be able to resolve those and generate valid Javascript code that runs successfully (perhaps with a larger size than expected).

The problem comes when the C dependency is a React Native dependency that includes native code. The React Native CLI uses CocoaPods and Gradle to link the native code that is distributed as part of the NPM package. In the above scenario, we can’t link both versions of the C dependency so the CLI decides to link only the direct dependencies.

That means that adding B as a dependency of my React Native app also means that I have to add C. Otherwise, the app will crash when it tries to access the non-existing native code from C.

I find it very weird that the app needs to know about transitive dependencies as well, but I can’t think of a better way for the CLI to solve it. I guess one thing it could do is extend its logic to look up transitive dependencies as well, detect conflicts, and fail if it finds any.
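As a thought experiment (not the actual React Native CLI code), that conflict check could look something like the sketch below: walk the full dependency tree rather than just the direct dependencies, group native packages by name, and report any that appear with more than one version.

```ruby
# Recursively collect the versions of every native package in the tree.
# Each dependency is a hash: { name:, version:, native:, deps: [...] }.
def collect_native(dep, acc = Hash.new { |hash, key| hash[key] = [] })
  acc[dep[:name]] << dep[:version] if dep[:native]
  dep.fetch(:deps, []).each { |child| collect_native(child, acc) }
  acc
end

# A conflict is a native package that appears with more than one version,
# which is exactly the case the linker cannot handle.
def conflicts(tree)
  collect_native(tree).select { |_name, versions| versions.uniq.size > 1 }
end

# The scenario from the post: A -> B -> C (3.2.1) and A -> C (1.2.3),
# where C is a native dependency.
app = {
  name: "A", version: "1.0.0", native: false,
  deps: [
    { name: "B", version: "1.0.0", native: false,
      deps: [{ name: "C", version: "3.2.1", native: true }] },
    { name: "C", version: "1.2.3", native: true },
  ],
}
# conflicts(app) reports C with both versions, so the CLI could fail early
# instead of linking only one copy and crashing at runtime.
```

A real implementation would read the resolved tree from node_modules (or the lockfile) instead of an in-memory hash, but the shape of the check would be the same.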

This is just me discovering another mysterious quirk of React Native development.

]]>
<![CDATA[Today I learned about how dependencies are organized by NPM and Yarn. Let’s say we have the following scenario of dependencies: A -> B -> C (3.2.1)... ]]>
Streamlining app development https://pepicrft.me/blog/2020/07/01/streamlining-app-development 2020-07-01T00:00:00+00:00 2020-07-01T00:00:00+00:00 <![CDATA[

One thing that I noticed after Shopify’s commitment to React Native is that it fosters a culture of turning ideas into mobile apps. Doing app development is no longer a thing that only mobile developers do. In hindsight, it was a good company decision, but it presents another set of challenges that my team, React Native Foundations, will have to figure out how to overcome. Those challenges have to do with steps further down the process that are not obvious if you are not a mobile developer per se: How do I share my app with another person? How do I run the app on my device? How can I upload my app to the Google Play Store and the App Store?

Yesterday, while thinking about all of this, I came up with the idea of grouping the phases of developing apps into 4 categories:

  • Start: Starting a project happens when a team has an idea and they want to make it tangible. Traditionally they used the React Native CLI, but unfortunately, that’s not enough. Creating a new project also involves setting up continuous integration and signing. What we are doing here, and we’ll share more about it on the company’s engineering blog, is moving the creation of projects to a web app. Developers give their app a name, select what functionality they’d like to opt into, and the service takes care of the rest. Cool, isn’t it?
  • Develop: Once the project is created, developers need to interact with it from the terminal and their editor; they need to build, test, run, and lint their code. This is something that the React Native CLI provides, but that we are wrapping inside an internal CLI tool that minimizes the amount of configuration required by providing a more opinionated development experience. The plan is also to integrate the tool with our internal infrastructure to provide useful workflows.
  • Share: Once the app is in a usable state, developers usually want to share it with other people. In concrete terms, it would be something like uploading the app to Google Play Store and TestFlight. However, at Shopify we have Shipit Mobile and Tophat, which make the sharing process very convenient without having to depend on a third-party provider.
  • Release: Last but not least, once teams feel the app is ready to be rolled out to the final users, they need to sign it and upload it to Google Play Store and App Store Connect. This is something for which they can use our internal Shipit Mobile platform.

It’s exciting being able to translate the above user intents into workflows that developers can follow. In the past few years we have been building individual tools that we are slowly bringing together to provide a more cohesive experience that developers enjoy.

]]>
<![CDATA[One thing that I noticed after Shopify‘s commitment to React Native is that it fosters a culture of turning ideas into mobile... ]]>
WWDC's FOMO https://pepicrft.me/blog/2020/06/25/wwdc-fomo 2020-06-25T00:00:00+00:00 2020-06-25T00:00:00+00:00 <![CDATA[

I’m avoiding opening Twitter these days. Receiving WWDC news through people racing to be the first to publish the clickbait type of tweet makes me a bit anxious. I used to have the energy to be part of that race without suffering from FOMO, but it’s become unsustainable for me - it’s not just anxiety; I feel that I’m wasting my time watching or reading about “the new things”, even if I don’t really need them.

In the past few months, I’ve worked on turning around the way I consume tech. Rather than letting my excitement decide where to spend my time, I consume content as I need it for my day-to-day tasks. For example, reading about people talking about SwiftUI, especially now around WWDC, made me feel that I should read and talk about it too. However, more thorough thinking stopped me from doing it. I used that time to learn things that I might need for improving Tuist, which is what I enjoy working on these days.

It’s an uncomfortable stance at first, but I believe in its long-term benefits for my mental health. Being ok with not knowing what was presented during WWDC allows me to use my time more wisely and have mental space that otherwise I wouldn’t have.

]]>
<![CDATA[I’m avoiding opening Twitter these days. It makes me a bit anxious receiving WWDC news through people racing to be the first one to publish the cli... ]]>
First thoughts on Sorbet https://pepicrft.me/blog/2020/06/03/first-thoughts-on-sorbet 2020-06-03T00:00:00+00:00 2020-06-03T00:00:00+00:00 <![CDATA[

We started using Sorbet to add types to Galaxy’s codebase. Types are great for catching type-related issues statically, preventing them from blowing up in production. This tiny blog post contains my impressions from using it for the first time:

  • Sorbet has a great adoption process. It can load your codebase, analyze it, and add sigil annotations to the files.

  • After adopting it, I was able to run srb tc successfully. Sorbet flags some of the files as ignored, and lets you gradually change the granularity of the type checks. That means you can adopt types at your own pace.

  • The typecheck (srb tc) command runs incredibly fast and that makes it a good candidate to be part of your local development workflows.

  • I don’t like the syntax for adding type annotations, but I don’t dislike it either. It feels a bit detached from the implementation code. For example, for annotating a method, I’d expect something along the lines of:

    def my_method(x: String): String
    end
  • It can use runtime reflection to generate Ruby Interface files for third-party gems.

Overall, the impression has been quite positive. I don’t work on many Ruby projects, but if I had to, I’d definitely set them up with Sorbet. If you are a Ruby developer and you haven’t checked it out yet, I’d recommend giving it a try in one of your projects.

]]>
<![CDATA[We started using Sorbet to add types to Galaxy’s codebase. Types are great to catch type-related issues statically, a... ]]>
Working on new features or tackling technical debt https://pepicrft.me/blog/2020/05/18/new-features-technical-debt 2020-05-18T00:00:00+00:00 2020-05-18T00:00:00+00:00 <![CDATA[

One of the things that I find the hardest when working on Tuist these days is finding a good balance between adding new features and tackling technical debt. The most exciting part is always building new things. Indeed, yesterday I came across a piece of code to authenticate users with Apple Developer Portal using the internal API that Fastlane has always used. Part of me was eager to add that logic to Tuist for future ideas that we have in the backlog. The other part of me was thinking that I should rather spend time fixing issues and working on some technical debt tickets that are necessary before continuing the work on some features like cache. What should I do?

I’m not sure. Most of the time I lean towards the former because I’m devoting my free time to the project and I want to work on exciting things. However, I do it thinking that developers are getting a bad impression of Tuist due to tiny flaws that shouldn’t be there. I like building great user experiences, and flaws defeat any great work we might have put into building Tuist.

I don’t have a perfect framework for this yet, so I’ll continue exploring and finding a more sustainable relationship with open source.

]]>
<![CDATA[One of the things that I find the hardest when working on Tuist these days is finding a good balance between adding new features and tackling technical debt. The most... ]]>
Add and remove footer using NSBox https://pepicrft.me/blog/2020/05/10/adding-an-add-remove-bar 2020-05-10T00:00:00+00:00 2020-05-10T00:00:00+00:00 <![CDATA[

If you use macOS, you have probably noticed that many apps have the following UI component in their settings:

Screenshot showing the UI control that many apps use to add or remove items from a list

I had to add one of those to the settings view of Angle, and then I realized that it’s not a pre-defined component that you can drag & drop and use. Does that mean we have to implement a custom view for it? That’s right! And that’s what I ended up doing.

It took me several iterations until I got it right. Since I’m sure I’m not the first one coming across this need, I’ll leave the code snippet here:

import Foundation
import AppKit

class AddRemoveFooter: NSBox {

    // MARK: - Attributes

    fileprivate var addButton: NSButton!
    fileprivate var removeButton: NSButton!

    // MARK: - Init

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        setup()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setup()
    }

    // MARK: - Internal

    func setAddAction(_ action: Selector?, target: AnyObject?) {
        addButton.action = action
        addButton.target = target
    }

    func setRemoveAction(_ action: Selector?, target: AnyObject?) {
        removeButton.action = action
        removeButton.target = target
    }

    // MARK: - Fileprivate

    fileprivate func setup() {
        setupStyle()
        setupButtons()
    }

    fileprivate func setupButtons() {
        contentView = NSView()
        contentView!.translatesAutoresizingMaskIntoConstraints = false
        NSLayoutConstraint.activate([
            contentView!.leadingAnchor.constraint(equalTo: self.leadingAnchor),
            contentView!.topAnchor.constraint(equalTo: self.topAnchor),
            contentView!.bottomAnchor.constraint(equalTo: self.bottomAnchor),
            contentView!.trailingAnchor.constraint(equalTo: self.trailingAnchor),
        ])

        addButton = NSButton(title: "+", target: nil, action: nil)
        addButton.bezelStyle = .shadowlessSquare
        addButton.translatesAutoresizingMaskIntoConstraints = false

        removeButton = NSButton(title: "﹣", target: nil, action: nil)
        removeButton.bezelStyle = .shadowlessSquare
        removeButton.translatesAutoresizingMaskIntoConstraints = false

        contentView!.addSubview(addButton)
        contentView!.addSubview(removeButton)

        NSLayoutConstraint.activate([
            addButton.leadingAnchor.constraint(equalTo: contentView!.leadingAnchor, constant: 0),
            addButton.topAnchor.constraint(equalTo: contentView!.topAnchor, constant: 0),
            addButton.bottomAnchor.constraint(equalTo: contentView!.bottomAnchor, constant: 0),
            addButton.widthAnchor.constraint(equalToConstant: 30)
        ])

        NSLayoutConstraint.activate([
            removeButton.leadingAnchor.constraint(equalTo: addButton.trailingAnchor, constant: -1),
            removeButton.topAnchor.constraint(equalTo: contentView!.topAnchor, constant: 0),
            removeButton.bottomAnchor.constraint(equalTo: contentView!.bottomAnchor, constant: 0),
            removeButton.widthAnchor.constraint(equalToConstant: 30)
        ])
    }

    fileprivate func setupStyle() {
        self.boxType = .custom
        self.alphaValue = 1
        self.borderColor = NSColor.gridColor
        self.borderType = .lineBorder
        self.borderWidth = 1
    }

}
]]>
<![CDATA[If you use macOS, you have probably realized many apps have the following UI component on their settings: [Screenshots that shows the UI ... ] ]]>
My first coding video on Youtube https://pepicrft.me/blog/2020/05/07/my-first-coding-youtube 2020-05-07T00:00:00+00:00 2020-05-07T00:00:00+00:00 <![CDATA[

I never thought I’d end up doing this, but today I recorded and uploaded a video to YouTube that is meant to be the first of a series about Tuist. I recorded myself with Photo Booth, the screen and the voice using QuickTime, and edited it all with Final Cut.

<iframe width="560" height="315" src="https://www.youtube.com/embed/wCVPWJvJGng" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

]]>
<![CDATA[I never thought I’d end up doing this, but today I recorded and uploaded a video to Youtube that is meant to be the first of a series about Tuist. I recorded myse... ]]>
To build, or not to build https://pepicrft.me/blog/2020/05/06/to-build-or-not-to-build 2020-05-06T00:00:00+00:00 2020-05-06T00:00:00+00:00 <![CDATA[

These days I’m a rollercoaster of emotions ― I guess as a result of COVID19 and spending so much time at home. In particular, I’m thinking a lot about Tuist and my devotion to it. I really like working on it ― building something valuable for the users, something simpler and more intuitive than what they’d get exposed to in Xcode. However, there’s also another version of myself telling me that I’m wasting my time building something while Apple is building the Swift Package Manager and everyone is waiting for the “official” thing to meet everyone’s needs.

Another negative thought that I’ve had lately is: what if I end up wasting my time replicating, feature after feature, what Xcode provides? That has never been the goal for Tuist, but the more I work on it, the more I realize people want to see in their projects what they see in Xcode. I don’t think Tuist should do that because that’d bring the same accidental complexity Xcode ended up with. But that means spending a lot of energy conveying Tuist’s ideas and convincing people to go down a different path. I don’t like pushing ideas onto people, or at least feeling that I’m doing it. I tend to have a lot of ideas, which I codify into tools that I build, or guidelines that I publish, like the microfeatures one. If people like them, fine ― if they don’t, that’s fine too. However, once they are on board with some original ideas, shifting course or gently pushing back is an uncomfortable thing for me to do. I guess it’s something I’ll have to learn, considering I’ve been envisioning a handful of Tuist’s core ideas.

What I’m doing a lot these days, which helps a lot, is looking at the project from a positive angle:

  • The opportunity the project gave me to connect with very talented people with great values.
  • The opportunity to re-imagine existing workflows that people have accepted for years and challenge myself with simplifying them.
  • Building simple things in a world of complexity gives me a huge sense of accomplishment.
  • By building this project I’m impacting people’s lives through the companies that have already adopted the project.
  • Having the opportunity to work on building a community and a product that people can talk about and engage with.

And well, those are my thoughts about my relationship with Tuist on the morning of May 6th, 2020. I love this project and I’ll continue building great stuff into it.

]]>
<![CDATA[These days I’m a rollercoaster of emotions ― I guess as a result of COVID19 and spending so much time at home. In particular, these days I’m thinking a lot about Tuist ]]>
Cognitive overhead https://pepicrft.me/blog/2020/05/02/cognitive-overhead 2020-05-02T00:00:00+00:00 2020-05-02T00:00:00+00:00 <![CDATA[

Bootstrapping and publishing an app to the App Store is not a straightforward process. I tried to do it myself yesterday and a lazy me got stuck when I had to create signing artifacts, write automation scripts, and set up things on the App Store Connect side.

It made me think that Apple is imposing, perhaps without being aware of it, a barrier for newcomers to iOS development. As a newcomer, you want to code a few views and get the app on TestFlight so that you and others can try it out. This is what you need to do beforehand:

  • Understand how signing works and what certificates and provisioning profiles are.
  • Know how to set up the app’s build settings to sign the app successfully.
  • Figure out the difference between building, archiving, and exporting an app.

Many iOS developers nowadays don’t have a good grasp of how those things work because they are typically hidden behind an automation layer ― or, in other words, some Fastlane files.

With the aim of streamlining this process, making it possible to sign and upload the app without the cognitive overhead that the current process requires, I’ll try to leverage Tuist’s foundation to provide a very easy workflow:

  • tuist init
  • tuist connect setup
  • tuist release

That’s how I imagine the process being for users, so I’ll start designing everything from there. If you are into tools and frameworks development, I’d recommend starting the design from the experience that you’d like to provide to your developers. Otherwise, you might end up with something that is not user-friendly.

I’ll keep you posted on the progress that I’m making towards this.

]]>
<![CDATA[Bootstrapping and publishing an app to the App Store is not a straightforward process. I tried to do it myself yesterday and a lazy me got stuck when I had to create signing artifacts, write automa... ]]>
Graphed knowledge https://pepicrft.me/blog/2020/05/02/graphed-knowledge 2020-05-02T00:00:00+00:00 2020-05-02T00:00:00+00:00 <![CDATA[

Most of the note-taking apps that we can find out there are designed around the same organizational principle: notes are linearly organized and grouped into higher-level abstractions, folders.

Unfortunately, our knowledge is not linear, hence using those apps to dump our brain into the cloud requires a pre-conversion, which takes you away from your root thought and perhaps the opportunity to connect it with other thoughts and ideas.

Some people might not find this annoying, but I certainly do. The way I think is pretty much how the Internet and many things on this planet work: as a network of interconnected thoughts where the close ones have something in common.

What I would expect from an app where I can dump my thoughts and ideas is therefore a simple interface to build a graph where the nodes are units of knowledge or ideas.

There are apps like Notion that make that possible with the use of hyperlinks, but the interface is fairly overwhelming. Moreover, the experience on mobile is terrible as a consequence of using webviews.

There are also apps around the concept of mind-mapping. They provide a visual interface to modify the graph. Although that might be handy for short-term graphs, I don’t think it’s the most suitable interface for quick brain-dumps. I think the graph must exist, but it should be an implementation detail.

How do I imagine the app then? I imagine a native mobile app ― by native, I mean that it uses native primitives and patterns. Most of the time I’ll be in brain-dump mode, so as soon as I open it, I get a form. Moreover, I’d get a default list of labels and other ideas to connect them with.

The other mode the app would have is journey. I could get lost in the graph and revisit those notes that I left for my future self. Pretty much like Pinterest, but more knowledge- and idea-oriented. Isn’t it beautiful?

I started building it with my wife and sister-in-law. My wife will focus on envisioning and designing the product. My sister-in-law and I will do the coding part. All the projects will be open source in the GitHub organization logosapp.

]]>
<![CDATA[Most of the note-taking apps that we can find out there are designed around the same organization principle: notes are linearly organized and grouped into higher-lever abstractions that are fol... ]]>
Catching crashes at launch time on Android React Native apps https://pepicrft.me/blog/2020/04/20/catching-react-native-launch-crashes-on-android 2020-04-20T00:00:00+00:00 2020-04-20T00:00:00+00:00 <![CDATA[

One thing that I noticed about React Native is that, with the setup that most teams have on CI, launch-time crashes can go unnoticed. Those crashes often happen when the contract between React Native and native is not met. That scenario is not caught when transpiling the Javascript or running tests on either the Javascript or the native side.

What’s the consequence of that? Crashes landing on master, developers frustrated because the app doesn’t launch after rebasing changes from master, or even worse, users getting an app that doesn’t launch.

At Shopify, I tasked myself to put a system in place to catch those errors on CI. In this short blog post, I’ll share what we ended up doing.

Since we use the Firebase Test Lab, whatever we build needs to be packaged as a test. After a bit of reading, because I’m not very familiar with Android as a platform, and in particular with how testing works on it, I managed to implement the following test:

import androidx.test.ext.junit.runners.AndroidJUnit4
import androidx.test.rule.ActivityTestRule
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class LaunchTest {
  // Launches MainActivity before the test runs.
  @get:Rule var rule = ActivityTestRule(MainActivity::class.java, true, true)

  @Test
  fun default() {
    // Give the activity 20 seconds to boot (and crash, if it's going to).
    Thread.sleep(20000)
  }
}

As you can see, it does nothing but launch the main app activity and wait for 20 seconds. I first tried subscribing to the React Native loading events, but I couldn’t find a public interface for that. 20 seconds should be enough time for an app to boot on an Android emulator. If the test fails because it takes more than 20 seconds to boot, there’s probably something else to look at, because that’s a terrible experience for the user.

The test passed for an app that launched successfully, but it also passed for an app that was supposed to crash. Why was that?

Disabling the developer support mode

As you might know, React Native has a developer support mode that is enabled when the app is compiled for debug. That mode prevents the app from crashing and shows a red error screen instead. Because of that, the activity was not crashing, causing the test to report a false positive. The first 2 options, which I ended up discarding, were the following:

  • Use the release variant: Although that could have probably worked, it’s not common to run tests using a release configuration. Moreover, we’d have had to sign the app before sending it to the test lab, which is something that we didn’t want to do.
  • Add a debugTesting variant: This variant extended from debug and set a build config variable that we could read from the Application to disable the developer support mode. However, that resulted in compilation issues that bubbled up from React Native dependencies.

What I did in the end was define a custom test runner that leverages shared preferences to pass some variables to the application when it’s run from the test:

import android.app.Application
import androidx.test.platform.app.InstrumentationRegistry
import androidx.test.runner.AndroidJUnitRunner

class LaunchTestRunner : AndroidJUnitRunner() {
  override fun callApplicationOnCreate(app: Application?) {
    // Flag the launch test through shared preferences so the Application
    // class can disable React Native's developer support mode.
    val preferences = InstrumentationRegistry.getInstrumentation().targetContext.getSharedPreferences("TESTING", 0)
    val editor = preferences.edit()
    editor.putBoolean("IS_LAUNCH_TEST", true)
    editor.commit()
    super.callApplicationOnCreate(app)
  }
}

Thanks to that, we could adjust the logic in the application class to read the value and toggle the developer mode accordingly:

override fun getUseDeveloperSupport() = BuildConfig.DEBUG && !applicationContext.getSharedPreferences("TESTING", 0).getBoolean("IS_LAUNCH_TEST", false)

Moreover, we had to change the testing configuration to use our custom test runner:

testInstrumentationRunner 'com.shopify.app.LaunchTestRunner'

After that, the test was passing when the app launched successfully, and failed when the application crashed.

By default, when building a React Native app for debug, it doesn’t bundle the Javascript and the resources because it reads them from a local HTTP server that runs alongside the application. Since that’s not what we want, before building the app we run the following command:

react-native bundle --platform android --dev false --entry-file index.js --bundle-output app/android/app/src/main/assets/index.android.bundle --assets-dest app/android/app/src/main/res --config metro.config.js

In a follow-up blog post I’ll talk about how we achieved a similar thing on iOS. In that case, we didn’t have to implement an XCTest test; instead, we added a Rake task that built the app and attempted to launch it on an iOS simulator using the simctl tool.
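To give a rough idea of the iOS-side check, here is a hypothetical Ruby sketch of the kind of helper such a Rake task could call. It only assembles the simctl invocations; the device, app path, and bundle identifier are made-up placeholders, not our actual values:

```ruby
# Build the `simctl` commands a launch check could run. Each would be
# executed with a shell helper that raises on a non-zero exit status,
# failing the task when the app cannot launch on the simulator.
def launch_check_commands(app_path:, bundle_id:, device: "booted")
  [
    "xcrun simctl install #{device} #{app_path}",  # install the .app bundle
    "xcrun simctl launch #{device} #{bundle_id}",  # launch; fails if the app crashes at boot
  ]
end

commands = launch_check_commands(
  app_path: "build/MyApp.app",        # made-up path
  bundle_id: "com.example.MyApp"      # made-up bundle identifier
)
puts commands
```

Because simctl launch exits with a non-zero status when the app fails to start, the Rake task fails on CI in the same situations the Android instrumentation test does.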

]]>
<![CDATA[One thing that I noticed about React Native is that with the setup that most teams have on CI launch-time crashes can go unnoticed. Those crashes often happen when the contract bet... ]]>
Control and innovation https://pepicrft.me/blog/2020/04/17/control-innovation 2020-04-17T00:00:00+00:00 2020-04-17T00:00:00+00:00 <![CDATA[

I saw a tweet this morning where the author was hoping for Apple to announce a new product in the domain of CI. Apple acquired BuddyBuild two years ago, and since then they seem to have been working on something secret that they’ll release at some point ― perhaps a CI platform that integrates with Xcode.

I’m not sure why, but that made me think about Apple’s obsession with control and with limiting the environment and its tools. Unlike more open ecosystems and communities like Javascript’s or Ruby’s, where developers feel free to innovate and build solutions for the problems and challenges that they encounter, in Apple’s, many people dream of that innovation coming only from Apple.

People might criticize communities like Javascript’s or Ruby’s and find them overwhelming, but they are so open that innovation comes in every shape: tooling, libraries, paradigms… React is a good example. With all the support from the community, we have seen its ecosystem mature very quickly. Building static sites with Gatsby, or building web apps that follow the JAMstack philosophy with RedwoodJS, is mind-blowing. In Apple land, we just saw SwiftUI last year. I haven’t tried it myself, but it seems that it’s still not mature enough, and since the framework is closed-source, filing radars is the only contribution that you can make to it. That, and doing a bit of evangelization in the shape of talks, posts, and books.

Until Apple relaxes its closedness and obsession with control, I doubt we’ll see Vapor taking off, more companies adopting Bazel thanks to it integrating seamlessly into Xcode, or Swift being used more broadly, not just for writing apps or tools for those apps.

I’ll continue using Swift and building things for the Apple ecosystem because I have an emotional connection to it, but I sometimes wish building Tuist didn’t feel like fighting the ecosystem, and rather like building an extension of it.

]]>
<![CDATA[I saw a tweet this morning where the author was hoping for Apple to announce a new product in the domain of CI. Apple a... ]]>
Anxiety-free working https://pepicrft.me/blog/2020/04/15/anxiety-free-working 2020-04-15T00:00:00+00:00 2020-04-15T00:00:00+00:00 <![CDATA[

I find it really hard to work these days without feeling anxious. A Slack ping here, an interesting tweet there, some emails to answer, articles to read that are piling up… With that setup, not only can’t I concentrate, but I deliver poorly. All my mental energy is dissipated and I look like a zombie at the end of the day.

To fix that, I’ve made the following adjustments to the way I approach spending time with computers:

  • I only check social networks when I’m mentally exhausted (i.e. at the end of the day).
  • If I think there’s something worth sharing, I use Buffer instead of opening Twitter and being trapped by the unceasing stream of new things.
  • I prefer asynchronous communication over real-time tools like Slack. I’m moving Tuist’s discussions to GitHub issues and a community forum. At work, we are trying to use GitHub issues and documents more.
  • I reduced the Slack groups I’m part of, especially the community ones. I’m already doing my part of helping the community by building Tuist.
  • I time-box the time that I dedicate to certain tasks like reading email, answering messages on Slack, or doing open source.
  • I minimize my go-to programming languages and tools and am ok with not keeping up with updates. I became a goal-oriented engineer rather than a technology enthusiast. Swift and Ruby are just technologies that enable me to achieve certain goals.

From all of the above, the ones that have helped me the most are being ok with not keeping up with things, and moving away from the quick and unceasing communication that happens on platforms like Twitter or Slack.

I’m way more relieved now, and I feel I can deliver much better things without feeling anxious at all.

]]>
<![CDATA[I find it really hard to work these days without feeling anxious. A Slack ping here, an interesting tweet there, some emails to answer, articles to read that are piling up… With that set up, not on... ]]>
Keeping it simple https://pepicrft.me/blog/2020/04/11/keeping-it-simple 2020-04-11T00:00:00+00:00 2020-04-11T00:00:00+00:00 <![CDATA[

If there’s something that characterizes my approach to problem solving these days, it’s simplicity. Working on acquiring a product mindset over the last 2 years has helped me realize how obsessed we developers are with configurability. Why is that? I think it has to do with trying to model the world, as it is, with software ― and the world is complex. We want to have flags, arguments, and options to model all possible scenarios we might encounter; we want every single detail of our software to be configurable. I’ve seen developers anticipating those scenarios too: what if teams want to do X, what if we want this feature to behave differently,…

This is a constant when working on Tuist. In this case, developers often ask for features that mimic Xcode’s complexity, and that’s precisely the reason why we are providing Tuist as an abstraction: to conceptually compress Xcode’s complexities and intricacies. My first thought is often: if we add X, Y, and Z, which is what Xcode provides, we’ll end up with the same thing, but in a different language. Teams might need that, and that’s fine, but unfortunately, that’s not what Tuist is aiming for. We aim to challenge projects’ complexity and nudge them to be simpler. The adoption of Tuist might require simplification on the user side, but believe me, both your team and Xcode will be so thankful for that.

Something that works for me to challenge developers’ requests is asking them whys until I reach their core motivation or root problem. They typically come with a solution in mind, without really understanding what they need that for ― they haven’t spent time going through those whys themselves.

I’m aware that this mindset when developing a product like Tuist might mean not pleasing everyone ― but that’s fine because that’s not our goal. The goal is to remain small and stay true to our ethos of keeping things simple. That’s what makes users of Tuist love it, and most importantly, enjoy scaling up their projects.

If you are a developer working close to product, whether that product is a tool for developers or a user-facing app, I’d encourage you to work on having this mindset of understanding the motivations behind features, and striving to keep them simple.

]]>
<![CDATA[If there’s something that characterizes my approach to problem solving these days is simplicity. Working on acquiring a product mind-set in the last 2 years has helped me realize h... ]]>
Diving into Nix https://pepicrft.me/blog/2020/04/09/nix 2020-04-09T00:00:00+00:00 2020-04-09T00:00:00+00:00 <![CDATA[

At Shopify, the dev-infra team has been working on using Nix from one of our internal tools, dev. The tool is responsible for setting up developers’ environments, as well as providing a standard CLI for automating projects like Rails, iOS, or Android apps. As you probably know, setting things up in a developer’s environment is wild — you don’t know what to expect. It’s hard to set things up deterministically and reproduce one environment in another. Homebrew, for instance, tries to do the best it can, but since it treats the environment as a global space into which to dump things, like a singleton class that anyone can modify, it often results in hard-to-debug errors.

The first time I heard about Nix was in this blog post from Pinterest, but it didn’t catch my attention until now. I started reading about it and watching some internal videos that Burke is creating to evangelize the idea. The more I read about it, the more amazed I am by it. These are the ideas that struck me:

  • Environments are defined in directories with symlinks to other directories that represent nodes of a dependency graph. Each node has a unique hash based on its content, the input, and the output.
  • Every modification of your environment is tracked and can be rolled back, akin to how Git works.
  • If one of those nodes needs to be built locally, the output artifacts can be shared remotely and pulled from other environments to speed things up.
  • The dependency graph extends to components that are very core to the system. That prevents, among other things, macOS upgrades from breaking the user environment.
  • You can pull and run a package without polluting the environment.
  • Nix provides its own expressive language that prevents developers from doing operations that might introduce side effects.

I’ll keep reading about it. I think Tuist could benefit from some of its ideas. For example, the idea of minimizing the IO and side effects, as well as the way it models the dependency graph.

I hope everyone is safe in these difficult times. Stay at home!

]]>
<![CDATA[At Shopify, the dev-infra team has been working on using Nix from one of our internal tools, dev. ... ]]>
We need more crafters https://pepicrft.me/blog/2020/03/28/we-need-more-crafters 2020-03-28T00:00:00+00:00 2020-03-28T00:00:00+00:00 <![CDATA[

I think the technology industry needs more crafters. People that have a genuine and lasting love for what they do.

Our industry is filling up with people that are constantly trying to catch the latest trend, or create the newest product/project so that people have something to talk about. In most cases, they don’t have a genuine interest in the domain; they are just feeding their narcissistic needs. There’s a positive side to that — we are constantly exploring new territories that otherwise would remain unexplored. However, there are important drawbacks:

  • Since we don’t have a genuine interest in the problem domain, we never deeply engage with our craft because we are constantly thinking about what’s next.
  • It creates anxiety: the anxiety of figuring out how to stay relevant and catch up with everything.
  • It distances us from the recipients of our craft, who just become a means to feed our egos.

It’s easy to follow that path. I’ve done it myself, and I don’t like it. I prefer to keep my feet on the ground and nurture the love for my craft. Outside the context of Shopify, my craft is Tuist.

There are a handful of people in the community whom I admire for their lasting and deep love for the craft that they do. Off the top of my head, I really admire Brent Simmons’s ingrained passion for RSS readers and NetNewsWire, and DHH’s (and Basecamp’s) passion for building Rails and helping teams collaborate. There are probably more, but those are the ones that popped into my mind. They have seen lots of shiny RSS readers being created, and many better-than-Rails frameworks, but there they are. After many years in the industry, they continue doing what they like the most.

]]>
<![CDATA[I think the technology industry needs more crafters. People that have a genuine and lasting love for what they do. Our industry is filling with people that are constantly t... ]]>
A better signing experience in Xcode https://pepicrft.me/blog/2020/03/08/signing-tuist 2020-03-08T00:00:00+00:00 2020-03-08T00:00:00+00:00 <![CDATA[

A few days ago, Marek decided to take on a proposal that I made for Tuist a while ago: management of certificates and provisioning profiles.

As happened with the definition of dependencies, dealing with certificates and provisioning profiles is something that annoyed me a lot. It takes time to understand all the concepts and get them right, and it can take even longer to understand issues when they arise. In my experience, there’s usually a go-to person on each team to help debug and solve signing issues. That’s pretty bad. Configuring signing should be straightforward, and issues should be easier to debug and fix.

Fastlane helped with automating the generation of certificates and profiles, but it doesn’t prevent developers from applying the wrong settings to their projects.

How can Tuist do things better? It can make certificates and profiles an implementation detail of signing, like we did with linking build phases being an implementation detail of dependencies. They are part of the repository, encrypted using a team key that will need to be present in the local and CI environments.

At project generation time, Tuist will decrypt and validate them, and configure both the environment and the project for signing to work. If something is missing or invalid, we’ll fail early to prevent developers from facing signing issues in Xcode.

Moreover, to eliminate the need to configure anything on the user end, we’ll establish the following naming convention:

  • Certificates: Configuration.p12 (e.g. Debug.p12)
  • Profiles: Target.Configuration.mobileprovision (e.g. MyApp.Debug.mobileprovision)

Thanks to that convention, no configuration will be needed, and Tuist will know which certificates and profiles to take from Tuist/Signing for each of the targets that are part of the dependency graph.
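
Following that convention, a hypothetical Tuist/Signing directory for a target named MyApp with Debug and Release configurations would look like:

```
Tuist/Signing/
├── Debug.p12
├── Release.p12
├── MyApp.Debug.mobileprovision
└── MyApp.Release.mobileprovision
```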

Project generation is a powerful tool to help teams with their scaling issues, yet we are just starting to see its real benefits. Check out the docs to learn how to adopt it in your projects.

]]>
<![CDATA[A few days ago, Marek decided to take on a proposal that I made for Tuist a while ago, management of certificates and provisioning profiles ]]>
From iOS engineer to a T-profiled techie https://pepicrft.me/blog/2020/03/06/t-profile 2020-03-06T00:00:00+00:00 2020-03-06T00:00:00+00:00 <![CDATA[

One of the things that excited me the most about joining Shopify back in 2018 was the opportunity to grow and learn from the challenges and the talent of the company. When I joined, my background was mainly iOS development with a bit of excitement for tooling. I was very comfortable with Swift; I knew the common SDKs, kept up with the latest updates, and read every trendy article that came out. I loved the platform (and I still do), but there was an ingrained curiosity in me that was pushing me to explore the unknown.

Even though curiosity was knocking at the door, something was holding me back. I had to answer the question of whether I wanted to be a Swift specialist, or broaden my skills, not just to include other programming languages, but also areas like design, product, or even people management. I ended up pursuing the latter: venturing into trying new things.

At work I learned Ruby and Ruby on Rails. I learned how to work in large and distributed organizations and support them with tooling. I learned how to communicate better and how to build a trust battery with other people. I’m also learning how to manage people and how to help them grow and have an impact in their careers. In my spare time, I learned React and TypeScript and played with Gatsby and theme-ui, both of which I fell in love with. I learned how to use Swift to implement a command-line tool and how to build a great product that cares about the user experience. I learned about design by designing the brand and the website of Tuist. Also with Tuist, I learned how to build a healthy community of users and contributors who engage with challenging problems and are thrilled to help others.

In hindsight, moving on from the label “iOS Developer” is one of the best things I could have done in my career. These days I’m not the best in any of those areas, nor do I want to be. However, I know them well enough that I can explore challenges and ideas from many angles by myself. I feel empowered to build things, and I guess that’s where my motivation for building Tuist comes from. It’s not just about the challenge, which motivates me a lot, but having the opportunity to think of Tuist as a product, as a community, as a philosophy of working. It’s incredibly exciting!

Another advantage of moving on is being able to come up with better and more creative solutions as a result of being more resourceful. If I had remained an iOS developer, I’d probably have tried to push Swift to places where the language is not the most suitable option. For example, I’d have tried to use it to build my website, or to implement a web service. You can definitely do it, but after learning about other options, I can choose the ones that I think are a better fit for what I’m trying to do. For instance, if I had to build a website, I’d use something like GatsbyJS or NextJS, and if I had to build a web service, I’d use Rails. Swift was a good option for Tuist to encourage contributions, and I’d continue to use it for building apps.

Being a T-profiled techie (I originally wrote “engineer,” but that’s a constraining label that I’m trying not to use anymore) might feel exhausting because you are constantly pushing yourself outside your comfort zone, but it’s definitely worth it. It’s the same as learning a new language; it helps you consolidate the languages that you know, and find interesting connections from which you can learn a lot.

]]>
<![CDATA[One of the things that excited me the most about the opportunity to join Shopify back in 2018 was the opportunity to grow and learn from the challenges and the tal... ]]>
Generation of Swift interfaces to access resources https://pepicrft.me/blog/2020/02/25/generate-swift-interfaces 2020-02-25T00:00:00+00:00 2020-02-25T00:00:00+00:00 <![CDATA[

Many of you might already be familiar with SwiftGen, a tool that generates Swift code to access resources in a type-safe manner. Having a type-safe API is something Android developers have had for a long time, and that Apple has never added to Xcode.

I think it’s a wonderful idea because we delegate to the compiler the task of checking whether a resource exists or not. It prevents apps from crashing at runtime, and thus improves the stability of the apps.

Why am I bringing this up? Because I like the idea so much that I’m pondering integrating SwiftGen into Tuist. SwiftGen suggests adding a new build phase to every target for which we’d like to generate the interfaces. However, I’m planning to do it differently. If users of Tuist enable this feature, the interfaces will be generated at project generation time. The generated project will already contain the resources and their interfaces, ready to be bundled and compiled.
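
As a sketch of the idea (hypothetical names, not SwiftGen’s actual output), the generated code turns stringly-typed lookups into compile-time-checked symbols:

```swift
// Hedged sketch of the kind of interface such a tool generates (names
// hypothetical): a missing resource becomes a compile error instead of a
// runtime crash from a stringly-typed lookup like UIImage(named: "logo").
enum Asset: String {
    case logo
    case background

    // In a real project this would load the resource from the bundle.
    var name: String { rawValue }
}

print(Asset.logo.name)
```

If you rename or delete an asset, the corresponding case disappears and every usage fails to compile, which is exactly the safety the post describes.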

I might give this a shot later this week. I had a look at the SwiftGen project, and it exports a target that we can link against to use its API. Moreover, the license is compatible with Tuist’s, so there shouldn’t be any issue. I’m planning to set up this first integration defaulting to a conventional format for the interfaces, and maybe revisit this in the future to add a certain level of configurability.

]]>
<![CDATA[Many of you might already be familiar with SwiftGen; a tool that generates Swift code to access resources in a type-safe manner. Having a type-saf... ]]>
A standard CLI for Xcode projects https://pepicrft.me/blog/2020/02/19/standard-cli-for-xcode-projects 2020-02-19T00:00:00+00:00 2020-02-19T00:00:00+00:00 <![CDATA[

There’s an idea that I’d love Tuist to move towards: providing a CLI that is standard across all the projects defined using Tuist. This is not a new idea; we can see it in frameworks like Rails, which, thanks to being opinionated about the structure of projects, can be opinionated about the CLI that users interact with most of the time. We can also see it in the Swift Package Manager, which is opinionated about the structure of packages.

What I find beautiful about this idea are 2 things:

  • Developers don’t need to maintain a translation layer like Fastlane that maps intents to calls to Apple tools. This often becomes a common source of issues and complexity in projects.
  • They can learn a concise list of commands that they can use to interact with any project defined with Tuist.

My motivation for building new features into Tuist usually comes from trying to iron out some inconveniences, or from creating new workflows as I’d like them to be. For instance, implementing project generation in Tuist, embracing Swift as an interface language, and abstracting the definition of dependencies was motivated by the fact that having more Xcode projects at SoundCloud was becoming painful and error-prone. I implemented an API for describing dependencies that was simple and understandable by anyone.

It’s a similar scenario when it comes to automation. The industry accepted Fastlane as the go-to solution to define a CLI for projects. It does a really good job, and we owe a lot to it, but using it at scale results in a lot of duplication and complexity spread across Fastfiles that no one wants to maintain. For that reason, I think Tuist could go one step further and make things simpler by leveraging project generation and the foundation that we have built. The way I picture myself as a developer working on a large project is the following:

I’d cd into the directory that contains the project that I plan to work on. tuist focus would give me an Xcode project that I can use to edit the code, and I could use tuist build, tuist test, and tuist lint to make sure that the code does what it’s supposed to do and that it follows the styling conventions.

As simple as that. I can go to any directory that contains a Project.swift file, and run any of those commands knowing that Tuist will know how to proceed.
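
Sketched as a terminal session (the directory name is hypothetical; the commands are the ones from the workflow above):

```
$ cd Features/Search   # any directory containing a Project.swift
$ tuist focus          # generate an Xcode project to edit the code
$ tuist build          # compile
$ tuist test           # run the tests
$ tuist lint           # check styling conventions
```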

I’m so excited about this that today I laid the first stone towards this vision.

]]>
<![CDATA[There’s an idea that I’d love Tuist to move towards: provide a CLI that is standard across all the projects defined using Tuist. This is not a new idea; we can see it... ]]>
Evolving Tuist's architecture https://pepicrft.me/blog/2020/02/10/evolving-tuists-architecture 2020-02-10T00:00:00+00:00 2020-02-10T00:00:00+00:00 <![CDATA[

I’m flying back from Tokyo and took the opportunity to code a bit on Tuist. Since I don’t have an Internet connection to get distracted with, I decided to work on something that doesn’t require one: improving the project architecture.

I’m quite happy with how the project has evolved so far with the help of everyone involved. The slow yet steady pace of adoption in the community made it easy to keep an eye on the project’s architecture while introducing new features. Doing so is crucial to having a healthy codebase that has little technical debt and allows adding new features easily.

Over the project’s lifetime, we moved from a single-target project to a modularized one, perhaps because I was heavily influenced by my work at SoundCloud, where I introduced the idea of Microfeatures. Although the main goal of modularizing the codebase was to improve developers’ productivity, it allowed us to identify and define different areas of responsibility that were represented as frameworks. Teams worked independently but highly aligned thanks to shared utilities that were core to SoundCloud’s business domain.

I believe the benefits of having a modularized architecture for Tuist are the following:

  • Have owners and experts in different domains of the codebase. For instance, there can be an owner of the generation of Xcode projects. They’ll make sure that manifests are properly translated into Xcode projects and that the generation of Xcode projects is fast.
  • Ease first-time contributions because new contributors don’t have to get familiar with the whole codebase, just the domain that they are interested in contributing to.
  • Better design, because adding a new feature is not just adding an internal class that other classes can depend on. It requires defining where the feature should be implemented (i.e. which target), how the feature interacts with others, and what its public interface will be. At least to me, doing this type of engineering work is beautiful.

So far the areas of responsibilities, and therefore frameworks, that we have identified in the project are the following:

  • Support: Contains Tuist-agnostic utilities. For example, there’s a utility for interacting with the file system, FileHandler, and another one to output information to the user, Printer.
  • Support testing: Contains Tuist-agnostic utilities for testing purposes. It also includes XCTest extensions for tests to use.
  • Core: Contains utilities and models that are core to the business logic of Tuist. For example, the Project model is ubiquitous to all the features of the project and therefore needs to be defined here.
  • Loader: Contains the logic responsible for reading the manifest files and generating an in-memory dependency graph that is used later on to generate the Xcode projects.
  • Generator: Contains the logic that translates the in-memory graph into valid Xcode projects that developers can use to work on their features.

All targets have an associated -Testing target, which provides test data and mocks to the targets that depend on them. This is another idea that I “stole” from my time at SoundCloud and that I really like because it facilitates future testing work. Writing a test and realizing that mocks for your test subject’s dependencies are already defined is priceless. Some people prefer to use tools like Sourcery for this type of work, but I’m a bit old-school here.

There’ll soon be another domain with its own target, Linting, whose logic is currently implemented as part of Loader. Linters make sure that the project is in a valid state. Otherwise, they output errors and warnings to the users, and depending on the severity, they fail the project generation. The goal here is to save developers some time debugging issues in their projects.

In a nutshell, we can summarize Tuist’s project generation as a sequence of 4 steps:

  1. Loading
  2. Linting
  3. Transformation
  4. Generation

If we translate that to code, we might end up with something like:

func generate(load: (AbsolutePath) throws -> Graph,
              lint: (Graph) throws -> [LintingIssue],
              transform: [(Graph) throws -> Graph] = [],
              generate: (Graph) throws -> Void)

Beautiful, isn’t it? We are not there yet, but that’s the idea. Once we get there, I’d love to explore the idea of allowing developers to define their own transformations, either locally or imported from third-party Swift packages.

There could be a transformation that adds a SwiftLint build phase to all the targets:

final class SwiftLintTransformer: TuistTransformer {
  func transform(graph: Graph) throws -> Graph {
    // Traverse projects' targets and add the build phase
  }
}
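
Putting the pieces together, the four-step pipeline could be sketched like this (types and names heavily simplified for illustration; this is not Tuist’s actual code):

```swift
// Minimal, self-contained sketch of the Loading -> Linting ->
// Transformation -> Generation pipeline. Graph and LintingIssue are
// stand-ins for the real models.
struct Graph { var targets: [String] }
struct LintingIssue { let description: String }

func generate(path: String,
              load: (String) throws -> Graph,
              lint: (Graph) -> [LintingIssue],
              transform: [(Graph) -> Graph] = [],
              generate: (Graph) -> Void) rethrows {
    var graph = try load(path)                 // 1. Loading
    guard lint(graph).isEmpty else { return }  // 2. Linting
    for transformation in transform {          // 3. Transformation
        graph = transformation(graph)
    }
    generate(graph)                            // 4. Generation
}

// Usage with stubbed steps: a transformation adds a test target to the graph.
generate(path: ".",
         load: { _ in Graph(targets: ["App"]) },
         lint: { _ in [] },
         transform: [{ Graph(targets: $0.targets + ["AppTests"]) }],
         generate: { print($0.targets) })
```

Because every step is just a function from and to the graph, third-party transformations would slot into the `transform` array without the pipeline knowing anything about them.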

Architecting code and projects is a pleasing exercise, and I love it!

]]>
<![CDATA[I’m flying back from Tokyo and took the opportunity to code a bit on Tuist. Since I don’t have Internet connection to get distracted with, I decided to work on something that doesn’t require Intern... ]]>
Seeking immutability https://pepicrft.me/blog/2020/01/07/path-to-inmutability 2020-01-07T00:00:00+00:00 2020-01-07T00:00:00+00:00 <![CDATA[

I recently opened up this PR on Tuist that turns the models that represent projects into structs. For some reason, which I don’t remember, I had the not-so-brilliant idea of defining them as classes. While that has been working fine, they don’t really need to be classes, and moreover, they’ll cause trouble as we start optimizing things by introducing parallelization.

Classes don’t prevent mutation, so as soon as we add threads to the mix, we might start seeing race conditions or crashes derived from the mutation of instances from different threads.

Interestingly, as part of this work I found out that we were mutating an instance of Target to change its infoPlist attribute to point to an Info.plist generated by Tuist. After turning the models into structs, Xcode complained about that not being possible. What I ended up doing was changing the implementation to create a copy of the target with that attribute changed instead.
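
The struct version of that change can be sketched as follows (the model is heavily simplified; the attribute names match the post, everything else is illustrative):

```swift
// With a struct, instead of mutating a shared instance we derive a copy
// carrying the new value, so other threads holding the original are safe.
struct Target {
    var name: String
    var infoPlist: String?

    // Returns a copy of the target pointing at a different Info.plist.
    func with(infoPlist: String) -> Target {
        var copy = self
        copy.infoPlist = infoPlist
        return copy
    }
}

let target = Target(name: "App", infoPlist: nil)
let derived = target.with(infoPlist: "Derived/Info.plist")
print(target.infoPlist ?? "none")   // the original is untouched
print(derived.infoPlist ?? "none")
```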

Next week we are meeting to sync up on the state of the project, and optimization is one of the topics. I can’t wait to see this area being explored further. Since the inception of the project, this is an area that we have disregarded, perhaps to avoid introducing premature optimization.

Two years later, and after some large projects started to use Tuist, we realized there’s a lot to improve, so I feel it’s now the right time to tackle that problem.

]]>
<![CDATA[I recently opened up this PR on Tuist that turns models that represent the projects into structs. For some reason, which I don’t remember, I ha... ]]>
Social anxiety https://pepicrft.me/blog/2020/01/02/social-anxiety 2020-01-02T00:00:00+00:00 2020-01-02T00:00:00+00:00 <![CDATA[

I keep falling into the same trap over and over: believing that I should be active on social media to stay connected to others and have a prosperous professional future. My relationship with social networks is a rollercoaster. There are times when I avoid using them, and there are other times when I’m very active on them.

Not using them feels great; I am more present and sleep better. I don’t have to think about what creative bits to share next on Twitter, what photo to take to amaze people around me, or what project to work on to create some hype and make developers think that I do cool stuff too. I have the mental space to listen to myself, to know what I want and how I feel, and to look after important things such as my health and my family, which I tend to disregard when I’m on social media.

When I’m on social media, I feel anxious. I feel anxious when I see people traveling and having fun while I’m just “enjoying” a rainy day in Berlin. I also feel anxious when I see developers on Twitter talking about and sharing things that I don’t have the bandwidth to play with. All those things happen so fast and consume the scarce attention and energy that is left for me after 8 hours at work. I don’t sleep well. I go to sleep with the phone and wake up with it too. Every moment of boredom is filled with infinite scrolling through timelines that bring me more anxiety.

I’ve tried to have a healthy relationship with social networks, but I can’t. I have a personality that doesn’t make that easy, and social networks, whose business models rely on exploiting our vulnerabilities and capturing our attention, don’t help.

For that reason, I’m starting 2020 by taking another break. Hopefully, an I-don’t-know-the-end type of break. I have my personal website where I can share my thoughts, collect the stuff that I find useful on the Internet, and write down the books that I read. It’s my own space on the Internet where I don’t feel the itch of social validation. It won’t be easy, mainly because of the sense of loneliness that the break creates. Still, I’m hopeful that this time I’ll overcome the difficulties and have a healthier life without social networks.

]]>
<![CDATA[I keep falling into the same trap over and over: believing that I should be active in social media to be connected to others and have a prosper professional future. My relationship with them is a r... ]]>
Wrapping up 2019 https://pepicrft.me/blog/2019/12/30/wrapping-up-2019 2019-12-30T00:00:00+00:00 2019-12-30T00:00:00+00:00 <![CDATA[

Following every year’s tradition, I’m writing a wrap-up post for this year, 2019.

2019 has been the year when María José and I got married. I proposed to her in January and celebrated the wedding in September. I have no words to describe how happy we were that day. I won’t forget the moment I was standing waiting for María José to come and seeing both families and friends from Spain and Berlin celebrating such a beautiful moment with us.

2019 was also the year that María José and I decided to invest in a property in Berlin. Since we had some savings to spend, and the real estate market is growing fast in Berlin, we thought it’d be a good idea. I was a bit hesitant at the beginning because I pictured us being tied to a bank forever. Still, we were quite optimistic that things would turn out well, and that it would become our source of income when we retire (we don’t know if there’ll be a public pension by the time that happens). We haven’t got the keys yet, but we have already started working with an interior designer who is helping us materialize our ideas for the apartment. We’ll move in between June and July next year. That means we’ll stay in Berlin a bit longer, and we are excited about that because these days, Berlin feels like home.

At the beginning of the year, we traveled to Thailand, Laos, Cambodia, and Malaysia. It was the first time María José traveled outside Europe and my first time visiting Laos and Malaysia. It was such an adventure, going from the chaos of Bangkok to the beautiful nature of Laos and getting amazed by the cocktail of religions in Kuala Lumpur. It was perhaps a bit long because we spent 4 weeks traveling, and the last 2 were a bit of “I miss my routine.”

The professional side of things has been hectic too. At Shopify, I changed my career path and became the engineering manager of the team that I was part of as an individual contributor, mobile tooling. I’m so grateful that my manager and the company supported me and allowed me to give it a try and manage the team. I still have many things to learn, but it’s exciting, and I can’t wait to keep learning next year. I also have the opportunity to do a bit of project and product management, which I realized I like a lot.

In my spare time, I kept pushing some side projects. We took Tuist to its first major version, 1.0, and got companies like Bloomberg, mytaxi, Sky, and SoundCloud to use the tool to scale up their Xcode projects (you can read more about it here). Next year I’d like my team to spend some time assessing and hopefully introducing it at Shopify, where mobile teams have started facing some scaling challenges.

I also worked on Angle with two good friends of mine. Angle is a macOS app to streamline the process of testing software projects. The first version focuses on iOS and Android apps, but the idea is to extend it to web apps and backend services. It integrates with GitHub and takes care of setting up the local environment for running builds. We plan to release it publicly in Q1 next year.

And last among side projects, but not least, I started working on Indie Social. It’s a specification for the format and structure of social publications on statically generated websites. After several attempts to give up on social networks that are built upon surveillance capitalism, I realized that one of the reasons I end up going back is that I don’t have a convenient way to publish content. Having a specification allows developers to build plugins for static website generators like Jekyll or Gatsby, and clients to publish content (e.g. an iOS app). Moreover, I want to do my bit to help people own their content on the Internet.

What’s coming in 2020?

I plan to keep working on balancing my personal and professional life. That’s something that I still struggle with. As I mentioned earlier, we’ll get the keys to the apartment in July, so I’ll work with María José on building the interiors and creating the cozy space where we see ourselves living.

On the health side, I plan to work out more regularly. In fact, I started running more frequently because I have two marathons planned for next year, Prague and Berlin. I want to rebuild the habit that I had back in 2015, when I ran 4 times a week.

I’ll keep traveling, starting with a trip to Japan in February with María José, my sister, and her boyfriend. India and Chile might be next, but it’s still early to know what we’ll do further next year.

I want to learn more about product design and illustration. Before I took the logical and engineering path in my life, I used to draw a lot. These days, when I open Sketch or take the iPad, I can spend hours quietly drawing and come up with a decent piece of art. I want to take the hobby further and learn skills and tools from creative people.

At work, I’ll keep learning to be a manager. One of my first tasks for next year is growing the team, which I’ve never done before. Moreover, the entire team needs to figure out what developing using React Native means, and leverage that to build first-class tools and services for supporting the development with that technology. In that regard, I might attend some React & React Native conferences next year to get familiar with the technology and the community.

As for the side projects, I’ll start working on Tuist 2.0, whose primary focus will be automation and productivity. My main goal will be building Galaxy, a service to speed up Xcode builds by providing caching. We started doing some groundwork, and it looks promising. I also plan to release the first version of Angle and the Indie Social specification.

]]>
<![CDATA[Following every year’s tradition, I’m writing a wrap-up post for this year, 2019. 2019 has been the year when María José and I got married. I proposed to her in January and... ]]>
Signing with Xcode on CI https://pepicrft.me/blog/2019/12/27/signing-xcode-app-on-ci 2019-12-27T00:00:00+00:00 2019-12-27T00:00:00+00:00 <![CDATA[

Today, I worked on automating the release process of Angle on CI. At first, I thought it’d be a straightforward task, but it turned out not to be. Most Xcode projects don’t have to deal with this because they use Match, which abstracts away all those intricacies, but Angle doesn’t use Match, to keep the tooling stack as lean as possible.

For those of you who run into this need in the future, this is the TL;DR version of what needs to be done:

  1. Create an unlocked keychain into which the signing certificates will be imported. If we use the system one, the OS will prompt the user to confirm access, and since there’s no user interface on CI, the build will get stuck.
  2. Import the certificates and their private keys into the keychain created in the previous step.
  3. Allow codesign, the signing tool from Xcode, to access the certificates without prompting the user for the password.
  4. Copy the provisioning profiles into the directory where Xcode reads them from.
  5. Run xcodebuild using the OTHER_CODE_SIGN_FLAGS build setting to indicate the keychain from which the certificates should be read.

Since we are using Ruby and Rake for automation in the project, the following sections include some Ruby snippets that I extracted from the project.

Executing commands in the system

The method below wraps the execution of system commands using Ruby’s system method, adding support for printing the command that is executed:

def execute(*args, print_command: false)
  puts("Running: #{args.join(" ")}") if print_command
  system(*args) || abort
end

Creating the Keychain

The snippet below includes a method to initialize a temporary keychain where we’ll import the signing certificates. Note that after creating the keychain, we use set-keychain-settings to set the timeout after which the keychain locks again; 1 hour in our case. Moreover, we unlock the keychain so that processes can access it without the OS prompting the user for the password. The set_key_partition_list lambda configures the keychain to give Apple’s tools access to the keychain:

def with_keychain
  keychain_name = "ci.keychain"
  keychain_password = "ci"

  puts("Creating Keychain for signing")
  execute("security", "create-keychain", "-p", keychain_password, keychain_name)
  execute("security", "list-keychains", "-s", keychain_name)
  execute("security", "set-keychain-settings", "-t", "3600", "-u", keychain_name)
  execute("security", "unlock-keychain", "-p", keychain_password, keychain_name)

  set_key_partition_list = ->() {
    execute("security", "set-key-partition-list", "-S", "apple-tool:,apple:,codesign:", "-s", "-k", keychain_password, keychain_name)
  }

  yield(keychain_name, set_key_partition_list)
ensure
  execute("security", "delete-keychain", keychain_name)
end

Note that we ensure that the temporary keychain is deleted once the given block finishes executing.

Installing certificates and profiles

Once we have the keychain created, we can proceed with installing the certificates. We can do that using the security import command. As we can see in the code snippet below, we need to use the -T argument to indicate which executables will have access to the certificate.

In the case of provisioning profiles, we need to copy them to ~/Library/MobileDevice/Provisioning Profiles, the directory where Xcode reads them from.

The snippet below iterates through all the certificates and profiles under the certificates/ directory, installing the certificates and copying the provisioning profiles respectively:

def install_certificates(keychain: nil)
  puts("🔑 Installing certificates and copying provisioning profiles")
  files = Dir.glob(File.join(__dir__, "certificates/*"))
  files.each do |file|
    if file.include?(".p12") || file.include?(".cer")
      security_import_command = [
        "security", "import",
        file,
        "-P", "",
        "-T", "/usr/bin/codesign",
        "-T", "/usr/bin/security",
      ]
      security_import_command.concat(["-k", keychain]) unless keychain.nil?
      execute(*security_import_command)
    elsif file.include?(".provisionprofile")
      profiles_path = File.expand_path("~/Library/MobileDevice/Provisioning Profiles")
      copy_to_path = File.join(profiles_path, File.basename(file))
      FileUtils.mkdir_p(File.dirname(copy_to_path))
      FileUtils.cp(file, copy_to_path)
    end
  end
end

Archiving & exporting the app

Once we have the temporary keychain with the right content in it, it’s time to archive and export the app. Remember, archiving generates a .xcarchive file that we then need to export and sign, obtaining a .app as a result.

For archiving the app we run xcodebuild passing the configuration, the path where we’d like to export the .xcarchive file, as well as the OTHER_CODE_SIGN_FLAGS build setting with the value --keychain keychain_name. That way we can indicate Xcode to use a keychain other than the default one.

Exporting is very similar, with the difference that we need to pass the -exportPath to indicate the path where the app should be exported, as well as -exportOptionsPlist pointing to a .plist file with options to export the app.

def archive_and_export(keychain: nil)
  puts("📦 Archiving app")
  archive(keychain: keychain) do |archive_path|

    puts("👩‍💻 Exporting app")
    export(archive_path: archive_path, keychain: keychain) do |app_path|
      yield(app_path)
    end
  end
end

def archive(keychain: nil)
  Dir.mktmpdir do |dir|
    archive_path = File.join(dir, "Project.xcarchive")
    arguments = [
      "-configuration", "Release",
      "-archivePath", archive_path,
      "clean",
      "archive"
    ]
    arguments << "OTHER_CODE_SIGN_FLAGS='--keychain #{keychain}'" unless keychain.nil?
    xcodebuild(*arguments)
    yield(archive_path)
  end
end

def export(archive_path:, keychain: nil)
  Dir.mktmpdir do |dir|
    export_options_path = File.join(dir, "options.plist")
    File.write(export_options_path, export_options)

    arguments = [
      "-exportArchive",
      "-exportOptionsPlist", export_options_path,
      "-archivePath", archive_path,
      "-exportPath", dir,
      "MACOSX_DEPLOYMENT_TARGET=#{MACOSX_DEPLOYMENT_TARGET}",
    ]
    arguments << "OTHER_CODE_SIGN_FLAGS='--keychain #{keychain}'" unless keychain.nil?

    xcodebuild(
      *arguments,
      project: nil,
      scheme: nil
    )

    app_path = Dir.glob(File.join(dir, "*.app")).first
    yield(app_path)
  end
end
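The export method above relies on an export_options helper that isn’t shown in this excerpt. Here’s a minimal sketch of what such a helper might return, assuming Developer ID distribution for a macOS app; the method value, keys, and team ID are illustrative placeholders, not the post’s actual values:

```ruby
# Hypothetical helper returning the export options plist as a string.
# "developer-id" and the teamID placeholder are assumptions; adjust them
# to your own distribution method and team.
def export_options
  <<~PLIST
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>method</key>
      <string>developer-id</string>
      <key>teamID</key>
      <string>XXXXXXXXXX</string>
    </dict>
    </plist>
  PLIST
end
```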

Putting it all together

Combining all the previous snippets, the resulting code looks like the one below. Note that with_keychain yields both the keychain name and the set_key_partition_list lambda, and that archive_and_export expects a block that receives the exported app’s path. Beautiful, isn’t it?

with_keychain do |keychain, set_key_partition_list|
  install_certificates(keychain: keychain)
  set_key_partition_list.call

  archive_and_export(keychain: keychain) do |app_path|
    # Distribute the signed app at app_path here.
  end
end

Some final notes

The part that took me the most time to figure out was giving xcodebuild and Sparkle access to the keychain without the OS prompting for the password. Initially, I thought unlocking the keychain would be enough, but I was wrong in that assumption. When adding items to the keychain, we need to use the -T argument to indicate the applications that have access to the item. Moreover, we also need to use the set-key-partition-list subcommand of security, which sets the partition IDs of the keys that can sign (-s) in a specific keychain.

]]>
<![CDATA[Today, I worked on automating the release process of Angle on CI. At first, I thought it’d be a straightforward task, but it turned out not to be so. Most Xcode proj... ]]>
Moving Pods to Packages https://pepicrft.me/blog/2019/12/24/moving-pods-to-packages 2019-12-24T00:00:00+00:00 2019-12-24T00:00:00+00:00 <![CDATA[

Today, I decided to move all of Angle’s dependencies that were defined as CocoaPods Pods to Swift Packages. It was my first hands-on experience with Xcode’s integration with the Swift Package Manager, so here are my thoughts:

  • I had no issues. I went through all the dependencies, got their URLs, and added each dependency from the frameworks linking build phase. Xcode was able to resolve the dependencies, list the linkable targets, and configure the project seamlessly. I barely had to touch anything in the project.
  • Some Pods had no Package.swift file, so I ended up using Carthage for those. I compiled a .framework file and then dragged and dropped the framework into the project.
  • After moving all the dependencies, I deleted the workspace generated by CocoaPods and all its references from the project. The project built successfully and ran without any issues.

Apple did a really good job here.

By the way, I’m very excited to continue working on Angle, another side project I’m involved in and to which I couldn’t contribute much because I was mainly focused on Tuist. Last weekend I worked on creating the documentation website with Gatsby. It already contains the getting-started steps in case you want to give it a try.

Image that shows how dependencies are defined as packages in Angle

]]>
<![CDATA[Today, I decided to move all Angle‘s dependencies that were defined as CocoaPods Pods to Swift Packages. It was my first-hand exp... ]]>
Adding bits of reactive programming to Tuist https://pepicrft.me/blog/2019/12/08/reactive-tuist 2019-12-08T00:00:00+00:00 2019-12-08T00:00:00+00:00 <![CDATA[

Until now, most of the code in Tuist has followed an imperative approach: the logic executes on the process’s main thread. There were some exceptions, like the execution of other system tools that ran in their own processes, blocking Tuist’s main one.

As we add more features, doing everything on the main thread is no longer a good idea for performance reasons. A piece of business logic that doesn’t make sense to run sequentially in a single thread is the upload of cached xcframeworks for Galaxy. Let’s say our project has 20 frameworks in total that can be cached: why not upload them in parallel?

It seems like a straightforward task, yet there’s a little detail that might go unnoticed if you are not familiar with writing CLIs: we need to block the main thread until all the asynchronous tasks complete before continuing the execution. Otherwise, the process exits and the tasks get canceled.
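Tuist’s actual implementation is Swift, but the constraint is language-agnostic. Here’s a minimal Ruby sketch, with made-up framework names, of blocking the main thread until parallel uploads complete:

```ruby
# Sketch: spawn one thread per simulated upload and block the main thread
# until all of them finish. Without joining the threads, the process would
# exit and the in-flight uploads would be canceled.
frameworks = ["CoreKit.xcframework", "Networking.xcframework"]
uploads = frameworks.map do |framework|
  Thread.new do
    sleep(0.01) # stand-in for the actual upload work
    "uploaded #{framework}"
  end
end
results = uploads.map(&:value) # Thread#value joins the thread and returns its result
```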

Apple’s SDKs provide APIs for synchronizing asynchronous work, such as dispatch semaphores and groups. However, the resulting code might not make it evident how the asynchronous execution takes place. Conversely, reactive frameworks make that easy thanks to operators that can be chained and that describe how asynchronous units of work are combined. Let me include some pseudocode to represent that:

let result = combineLatest([a, b, c]).wait()

For precisely that reason, we are starting to introduce some bits of reactive programming into Tuist using RxSwift. It’s not a binary decision; in other words, it doesn’t mean we go from an entirely imperative codebase to a fully reactive one. Reactive programming makes sense in areas where asynchronous work is required, and where that asynchronous work might need to synchronize with other asynchronous work.

At this point, you might wonder: why not Combine? Well, Combine requires macOS 10.15, and we might have users on older versions of macOS. A migration to Combine will definitely happen gradually once we drop support for macOS 10.14. Until then, I think it’s fine to start with RxSwift.

This decision reminds me of SoundCloud, where we took a binary approach and reactive code ended up all over the place. I aim to do things differently here and keep the usage of reactive programming scoped only to the areas where it makes sense.

I hope you all have a great week. I just arrived in Ottawa, and it’s cold over here. On Wednesday, I plan to experiment with porting Shopify’s native Xcode projects to Tuist. Wish me luck!

]]>
<![CDATA[In this blog post I talk about a recent decision that we made to start using reactive programming to model asynchronous tasks in Tuist.]]>
Working on a new website for Tuist https://pepicrft.me/blog/2019/12/06/new-tuist-website 2019-12-06T00:00:00+00:00 2019-12-06T00:00:00+00:00 <![CDATA[

Over the past few weeks I’ve been working on a new website for Tuist. This is the third iteration of the website, and I never get tired of going through them; it only takes learning a few design tricks and some inspiration from other websites to start seeing the current website as old and unrepresentative.

For someone like me, with little experience in creating products, a task like this one isn’t straightforward; it’s challenging. I have to think about the message I’d like to convey, the content I’d like to present, and the combination of fonts, colors, and sizes that best represents the tool. The website and the GitHub repository are most of the time what users first see when they come across your projects. If their style and content are not appealing, developers will probably not give the project a try.

What most developers do is use markdown files in their repositories. I’ve seen many creative and well-organized markdown files that convey a project’s message well. In the case of Tuist, we started with something like that, but at some point we realized markdown was not enough. I can describe in paragraphs how life-changing Tuist can be for projects, but without a more visual and dynamic explanation, users probably won’t get the idea. That’s where having your own website comes in handy. You are in full control, not only of the message, but of the way the message is presented. You can summarize a bunch of paragraphs in a beautiful animation that leaves developers eager to try the tool.

Anyway, that’s where I’m mostly spending my time these days. For this new iteration I used Figma for the design. It really feels like GitHub, but for design. I can define shared components and colors that I can reuse across all the designs, share them with other people and get their input through comments, and all of that in the cloud. As if that weren’t enough, I just came across this new feature that they launched 🤯. In case you are interested in giving feedback on the design, here is the link.

Moreover, we are moving away from Docz for generating the documentation. We’ll merge the documentation into the website built with Gatsby. The reason we are doing this is that it gives us control over the design of the website and lets us make sure it complies with search engines’ algorithms. My goal is that each documentation page has the right metadata headers and HTML semantics for the content to be well indexed by search engines like Google.

That’s what I’m pretty much up to! On Sunday I’m flying to Canada to spend some days with the team to come up with a roadmap for next year ✈️.

]]>
<![CDATA[An update on what I'm up to these days with Tuist. In particular, I talk about the new website that I'm designing and implementing for the project.]]>
Creating experiences https://pepicrft.me/blog/2019/12/03/creating-experiences 2019-12-03T00:00:00+00:00 2019-12-03T00:00:00+00:00 <![CDATA[

There’s something in Tuist that keeps me very engaged with the project. Never before had I had that experience with other projects; I think it has something to do with the fact that with Tuist I’m creating new experiences and workflows that would otherwise be unimaginable.

After years of iOS development, I feel we have been accommodating our workflows and tools to what we were given by Apple. With Tuist I’m taking down those constraints. I imagine how I’d like the experience to be and then craft it as such.

If I’d like to describe dependencies as .target("Core"), I codify it. If build times are slow, I think of ways to make them fast and add them to Tuist. I can push myself outside my comfort zone of just developing, and work on designing a brand or a website. The same goes for writing documentation and building a community of enthusiastic developers who share the vision of the project. That’s damn exciting, and I can’t stop doing it.

It’s also exciting to think of solutions for problems that I experienced myself. For instance, project generation is a solution to a problem: Xcode projects exposing too many complexities, which hinders their maintenance and growth. Nowadays I look at Tuist’s API and keep thinking through ideas to make it simpler and nicer. I’m not building a project generator; I’m building a tool to improve developers’ experience of using Xcode.

And I do all of this in the open because I’d like anyone to take part and help others. I worked on a solid foundation to make this possible, and now I’m starting to see the results.

Creating experiences in the open is awesome.

]]>
<![CDATA[Picked up my phone and dumped some thoughts on why I'm so engaged and excited to build Tuist.]]>
I am X https://pepicrft.me/blog/2019/11/24/i-am-x 2019-11-24T00:00:00+00:00 2019-11-24T00:00:00+00:00 <![CDATA[

Leonardo di ser Piero da Vinci was an Italian polymath interested in invention, drawing, painting, sculpture, architecture, science, music, mathematics, engineering, literature, anatomy, geology, astronomy, botany, paleontology, and cartography. Like him, many other geniuses of the Renaissance had interest in so many areas of knowledge. None of them put a label on themselves constraining their area of expertise. If they just found geology interesting, they grabbed a bunch of books to read and learn about it.

Fast forward to today and, talking about our industry, things have changed a lot. Developers of a given programming language, let’s say Swift, are just Swift developers. Commonly, when things arise in other domains, they refuse to step outside their comfort zone. I think we developers have found a comfort zone in which we feel like experts and for which we get paid a lot compared to other industries. I believe that’s one of the reasons why companies move slowly. They have unnecessary friction coming from having too many technology-X developers and very few Leonardo da Vincis. Those are the developers who opt for opening tickets when they find issues in the tools that they use, and who do the same when there’s some work that involves backend development. Those tickets can live in the backlog for days and weeks until someone picks them up.

When it comes to cross-discipline work, things are even worse. Don’t ask a developer to do a bit of design, or, the other way around, ask a designer to do a bit of coding. It’s true that those are quite different disciplines, but imagine how powerful it’d be if we trained our brains to know a bit of both, if we could express our ideas by combining bits of design and code. One of the people in our industry I admire a lot, because he’s able to combine all disciplines to build great products, is Tom Preston-Werner. Founder of GitHub and Chatterbug, he does design, development, and product brilliantly. In collaboration with other folks, he created the platform on which most of us spend hours and hours, the platform that stood out for its simplicity and social ideas that were later copied by competitors. Imagine if, in its early days, he’d had to look around for designers and product experts to materialize his ideas around how social software development should be. I bet GitHub wouldn’t be what it is today.

How does all of that relate to me? Since I joined Shopify, I’ve pushed my boundaries to do more work with other programming languages and domains. These days you can find me writing Ruby, using Ruby on Rails to build web apps, JavaScript to do a bit of automation, or even Ansible to do a bit of DevOps. I might not be an expert in all of those, but when a task that involves cross-domain work arises, I’m able to deliver it entirely. With Tuist, I’m going even further. I’m reading a lot about design and product to come up with an identity for the tool. I’ve redesigned the website a few times, changed the logo a few more, and worked on defining the culture of the organization to ensure it’s healthy and collaborative. It’s sometimes exhausting, but worth it because the resulting product becomes an accurate reflection of the vision.

I believe our industry needs more crafters with Leonardo da Vinci’s mindset. We need people who love the craft and whose main goal is not just making money. We need people who find absorbing knowledge exciting and who don’t set boundaries for themselves. The world would have better and more human tools. Designers would go beyond placing pixels or defining colors to understand the moral implications of their designs. Developers would show more empathy for others and build for everyone instead of for the latest device on the market.

I think we can all start by turning “I am X” (where X can be replaced with Swift developer, frontend designer, product manager, CEO) into “I can make X real”. What we need to get there is up to us to figure out.

]]>
<![CDATA[This is a short reflection on something that is common in our industry: professionals labelling themselves and limiting their area of influence.]]>
Module caching with Galaxy https://pepicrft.me/blog/2019/11/18/module-caching-with-tuist 2019-11-18T00:00:00+00:00 2019-11-18T00:00:00+00:00 <![CDATA[

As I have mentioned in other blog posts, the idea of building Tuist came from the motivation of making extending projects an easy task. I was a bit fed up with having to manually create projects, or being one of the few people on the team who could add projects without breaking anything in the graph. Back then, I made two conscious decisions that in hindsight were great ideas: using Swift as the manifest language, and making dependencies between projects a first-class citizen. While the first gave us validation of the manifest syntax and allowed reusing pieces of manifest content easily, the latter made it easy to structure your project in multiple directories and let Tuist figure out the generation of targets, projects, workspaces, and all the settings that are necessary to get the linking right.

With project generation doing its job right, and even though we don’t support all types of products, I feel it’s time for me to take Tuist to a whole new level and tackle another challenge I was also fed up with back during my time at SoundCloud: why do I have to compile everything every time my local cache is invalidated? Most of the time you are working on one module of your dependency graph, and you don’t plan to change the others. In that scenario, it’d be great to only compile the module I’m working on, and get pre-compiled versions of the transitive and non-transitive dependencies. Wouldn’t that be nice?

If you know about the Buck and Bazel build systems, what I described is a rough idea of what they do. The idea of using one of those came up back when I was at SoundCloud. We also pondered it at Shopify, but we never embarked on implementing it because, based on what we heard from other companies, it’d require a lot of effort not only to make it work, but also to get developers to use it. They are generic build systems, so their focus is to get the build process right, not to ensure that they integrate well with the tools developers like to use, which in the case of iOS means Xcode. Some companies that could invest time into implementing them ended up building tools to generate user-friendly Xcode projects. For instance, Pinterest implemented and open-sourced two tools: XCHammer and PodToBuild. The first generates Xcode projects from your Bazel files, and the second generates Bazel files from your CocoaPods specs.

The spectrum between not having caching and having caching beyond local incremental builds is very wide. If you are not a Google, a Facebook, a Pinterest, or an Airbnb, you will never consider adding Buck or Bazel to your projects. If you use Carthage, Rome might be an option for external dependencies, but what if most of the code lives in projects within the same repository? Would you move them to different repositories to benefit from Rome? Maybe you would, but I personally wouldn’t. Having multiple repositories and dealing with versioning and dependencies is not something that I want to be doing in my day-to-day job. Development cycles need to be fast, and for that reason I don’t think Rome is suitable for local frameworks.

With this scenario in mind, and having a very solid foundation in Tuist that we can leverage, I’d like Tuist to help all its users by providing caching. How would that work? Roughly:

  • A CI pipeline would compile all the cacheable modules. For modules that can target multiple architectures (simulator or device), we’d compile them as xcframeworks.
  • The compiled modules would be uploaded to a cache with a hash that uniquely identifies them based on their settings, their files, and the hashes of their dependencies.
  • Developers generating a project would get pre-compiled modules for the dependencies of the project in the directory where they are positioned.
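To make the second point concrete, here’s a rough Ruby sketch of how such a cache key could be derived. The function and its inputs are hypothetical, not Tuist’s actual implementation:

```ruby
require "digest"

# Hypothetical: derive a module's cache key from its build settings, its
# source files' contents, and the cache keys of its dependencies, so any
# change in those invalidates the cached artifact. Inputs are sorted to
# keep the key stable regardless of enumeration order.
def module_cache_key(settings:, files:, dependency_keys:)
  digest = Digest::SHA256.new
  settings.sort.each { |key, value| digest << "#{key}=#{value}" }
  files.sort.each { |path, contents| digest << path << contents }
  dependency_keys.sort.each { |key| digest << key }
  digest.hexdigest
end
```

Two modules with identical settings, sources, and dependency keys map to the same key, which is what would let a CI-built artifact be reused locally.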

It’s the same idea as Bazel and Buck, but leveraging Xcode’s projects and build system instead of a different build system that can’t be integrated well into Xcode. I already started working on it, and I can’t wait to share a first iteration of this feature. By the way, its name will be Galaxy. Any Tuist user will be able to opt in and take their productivity to a whole new level.

If you would like to test it out in your own projects as soon as it’s available, join our Slack channel and let us know. Your feedback will be very much appreciated.

]]>
<![CDATA[Tuist has been of great value for developers who found it difficult to extend their Xcode projects because Xcode exposed a lot of complexity to them. Having conceptually compressed those difficulties by leveraging project generation, it's time for Tuist to tackle a new challenge: reducing compilation times.]]>
Static site generators https://pepicrft.me/blog/2019/11/04/static-site-generators 2019-11-04T00:00:00+00:00 2019-11-04T00:00:00+00:00 <![CDATA[

The range of options we find nowadays to create static sites is endless. I remember back when I started as a developer and set up my personal website and blog using Jekyll. I was not familiar with HTML & CSS, but the number of themes out there was so large that I could pick my favorite, customize it, and get a beautiful website where I could dump my thoughts and learnings.

Things have changed a lot since then. Every programming language has its own static site generator, including Swift. Each tries to take advantage of the language it’s developed in. For instance, the Swift static site generator that people are starting to talk about seems to have a strongly typed API. That’s awesome! But like anything in software, it’s all tradeoffs.

In the years I’ve spent iterating on my website, which is now built with Gatsby by the way, I have learned the most important traits that make the experience of developing for the web a joy: instant feedback, composability, and extendability.

  • Instant feedback: This means getting changes instantly reflected in the web browser without having to wait for a transpiler or compiler to do some heavy-lifting work. CMD + S and CMD + Tab are your best friends. This can make a huge difference in a developer’s productivity because the process of laying out a web page is very iterative. Gatsby does an excellent job here: it detects when files that impact the site have changed and reloads the content automatically without losing the currently loaded context.
  • Composability: Large sites are broken down into smaller composable pieces, components. A component is a combination of a layout (HTML), styles (CSS), and some interactivity (JS). Frameworks like Vue or React make defining atomic components easy, and they integrate seamlessly with bundling tools such as webpack, which is one of the building blocks of Gatsby. The JavaScript community is years ahead of any tool being built in not-so-web-oriented programming languages like Rust or Swift. Babel, webpack, or Emotion are good examples of tools and frameworks without which building for the web nowadays would be an inconvenient task.
  • Extendability: And last, but not least, having different options for easily extending your website’s functionality. In the case of Gatsby, there are many hooks you can subscribe to in order to override any step of the generation logic, themes to include functionality that other developers have packaged and distributed as NPM packages, and React components and libraries that can be easily imported and used.

I’m glad to have chosen Gatsby as the framework for my personal website because it provides the aforementioned traits and more. I can package a feature that I’ve built for this website, like the journal section that I call micro-blogging; use a library like Theme-UI to make the blog themable; use Gatsby plugins like this one that makes the website work offline and be more resilient to slow connections; or add React components to a markdown file using MDX.

It’s great to see, though, that there are people pushing the boundaries of static site generators in other programming languages like Swift. And it’s also great to have types and APIs that you are already familiar with. However, the value of having a compiler that ensures I close a <h1> with a </h1>, compared to the value you get from a framework like Gatsby that can leverage years of development for the web, is not worth it, at least for me.

Hope you have a wonderful week 👋! If you notice some oddness on the website, it’s because I’m overhauling it a bit.

]]>
<![CDATA[In this blog post I talk about what traits I expect a static site generator to have, and why I believe Gatsby is a more suitable option than other alternatives in the industry.]]>
Better relative paths https://pepicrft.me/blog/2019/10/31/better-relative-paths 2019-10-31T00:00:00+00:00 2019-10-31T00:00:00+00:00 <![CDATA[

One of the best decisions we made when we envisioned Tuist was using Swift as the language for describing projects. Its compiler can validate the syntax, users get documentation right in the code editor, Xcode, and very soon it’ll be possible to define paths that are relative to Tuist-specific directories.

One of the concerns that surfaced when we proposed introducing project description helpers was being able to define paths relative to those helpers, or, why not, relative to the root directory of the project. The good thing is that with some tricks and some Swift interfaces, we found a solution for that. Let’s say there’s a helper that generates a standardized feature framework project:

// FrameworkHelper.swift

func framework(name: String) -> Project {
  // Your initialization logic here
}

With some rare exceptions, the Info.plist files that framework targets point to usually have the same content. That makes them a clear candidate to be reduced to a single file that we can place in the root directory of our project and reuse across all targets. With the current Tuist API, developers would have to write something along the lines of:

// FrameworkHelper.swift

let path = "../../Shared/Framework.plist"

It’s a bit ugly, and the helper is making an assumption about the directory where the manifest using the helper is located.

Thanks to some work that we are doing, developers will be able to do something along the lines of:

// FrameworkHelper.swift

let path = Path.from(root: "Shared/Framework.plist")

The definition is shorter, and there are no assumptions; it can be used from anywhere because the root directory of the project will remain the same.

The feature work is still in progress, but you can check out the pull request and give us feedback.

]]>
<![CDATA[We are providing a new API from Tuist to define relative paths and this blog post describes the motivation behind it and the solution that we are adopting.]]>
Project description helpers https://pepicrft.me/blog/2019/10/10/manifest-helpers 2019-10-10T00:00:00+00:00 2019-10-10T00:00:00+00:00 <![CDATA[

Yesterday I had a pairing session with a good friend of mine who started introducing Tuist into his company’s project. He brought up an interesting need that had been in the backlog for quite some time but that no one had embarked on building: reusing code across manifests.

I was so eager to build that feature into Tuist that I immediately proposed pairing on it. It was a very fruitful exercise; we couldn’t finish, but we got a fully functional prototype. If I was excited before building it, my excitement skyrocketed when I saw it working.

In a nutshell, the solution we adopted was compiling all the files under Tuist/ProjectDescriptionHelpers into a module, and then linking the manifest against it. I think it’s a simple solution, yet a powerful one.

With this feature, developers can not only reuse code across manifests, but also leverage any language abstraction to codify their projects: structs, classes, functions, enums, generics…

Here are some things that the new feature enables:

  • Have a factory of projects or targets that, given a name and some basic attributes, returns a standard project.
  • Define how build settings are generated and combined. If you never liked how xcconfigs or build settings are flattened, you can define your own merge logic.
  • Projects can be defined in one line of code. I can’t stress enough how great this is for easing the maintenance and creation of new projects.

Can’t wait to finish it up and bring it to users in the next version of Tuist. Until then, there are some tests to add and documentation to write.

Happy coding!

]]>
<![CDATA[In this blog post I talk about a beautiful abstraction which Alex and I came up with to push Tuist's awesomeness even further.]]>
Abstractions https://pepicrft.me/blog/2019/10/03/abstractions 2019-10-03T00:00:00+00:00 2019-10-03T00:00:00+00:00 <![CDATA[

Most developers are familiar with the concept of abstractions. They are often used to turn a domain language into another language that suits a problem, or set of problems, better. Moreover, they can be leveraged to simplify things or extend the abstracted element’s functionality. We have seen that over the years in open-source libraries that aimed to simplify system frameworks such as UIKit, Foundation, or CoreData. Alamofire, MagicalRecord, and SnapKit are good examples of abstractions that turned intricate APIs into a beautiful experience for developers. Those libraries sparked joy, which is a key element of staying engaged and motivated when we craft software. Let’s be honest: no one wants to spend their time understanding complex interfaces, figuring out how to use them safely, or working on repetitive tasks.

When abstractions are validated, they end up inspiring the evolution of the abstracted layers. We’ve seen Apple evolve their frameworks, taking abstraction ideas from the community and from other programming languages. UIKit will eventually be a vintage framework because developers already have a simpler, more beautiful, and declarative alternative: SwiftUI.

Most of the abstractions that we see in the Apple ecosystem are code abstractions; there are plenty of libraries out there that you can use to improve the coding experience of the developers in your team. However, code is not the only element that requires abstraction. If you work on a large project, you probably know the domain that I’m talking about: Xcode projects.

Apps are no longer single-target projects in Xcode. Even apps that are not structured in modules (frameworks or libraries) have targets for dependencies, extensions, and apps for other platforms. Xcode projects evolved to support the unceasing growth of the Apple ecosystem, and that resulted in an interface for defining your projects, the Xcode project, that exposes a lot of complexity. That complexity makes it a good candidate upon which to build an abstraction. Tuist was conceived to provide that abstraction.

Tuist provides an abstraction that compresses Xcode’s complexities and allows developers to codify conventions. It does so by leveraging project generation and Swift.

Working on Tuist, I have come across many developers who compare Tuist to XcodeGen and understandably conclude that Tuist is just another project generation tool. While it also generates projects, there are two subtle differences that I think make Tuist a more suitable option for some projects.

The first of the differences has to do with conceptual compression. Despite how tempting it might be to translate concepts and ideas one-to-one from Xcode projects to Tuist abstractions, we know that it results in leaking complexities, and therefore we make an effort to compress concepts and provide a simple interface with good defaults. XcodeGen has good abstractions too, but it stays closer to the Xcode project’s domain.

When I was an iOS developer at SoundCloud, I remember being constantly annoyed because breaking down the app required familiarity with many Xcode-specific concepts and a fair amount of manual work. Only a few people on the team knew how to add more targets, and when issues arose, the rest of the team couldn’t even guess what the cause might be. The bus factor was low, and the complexity of the project grew significantly.

Creating a new project or target meant having answers for the following questions:

  • How do I name it?
  • Where do I place it?
  • Where do I place it in the dependencies graph?
  • What should be its configuration?
  • How do I make sure it’s consistent with other targets?
  • How do I make sure I’m not breaking anything?
  • How should I set up CI?
  • Should it be a framework or a library?
  • Can it be a library but with access to resources?

What we did at SoundCloud, and what I’ve seen in other teams, was document all of that. In practice, that translates into one or two people who know how to answer those questions correctly, with the rest of the team deferring to them. It might feel great if you are one of those people; I actually was. But as I mentioned earlier, that keeps the bus factor low and adds dependencies that can slow down other people’s work.

For that reason, making things easier and making opinions on the Xcode projects domain codifiable was one of the motivations to build Tuist. For instance, one of the features I put a lot of effort into simplifying was the definition of dependencies. With Tuist, defining a dependency is as simple as declaring what depends on what; the build settings and phases required for that are an implementation detail. Moreover, unlike XcodeGen, which uses YAML to describe projects, I decided to use Swift for a few simple reasons: it validates the syntax, users get inline documentation while editing their projects, and most importantly, it makes codifying opinions easy. For example, the targets that are part of a project can be defined by a function that acts as a factory of targets:

func target(name: String) -> Target {
  // Initialize and return a target that follows the team's conventions
  return Target(name: name, product: .framework)
}

Doing that with YAML would require extending the specification to support it, which would end up leaking complexity into the tool’s domain.

It might not be obvious from the code snippet above, but it’s a powerful idea that makes codifying opinions possible. Here are some ideas:

  • All targets follow the same naming convention: Prefix\(name)
  • They have the same file structure: sources under Sources/ and tests under Tests/.
  • They run Swiftlint to lint the code.
  • They depend on a set of foundation frameworks.
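The opinions above can be codified in plain Swift. Below is a minimal, self-contained sketch (the Target type, prefix, and framework names are illustrative, not Tuist’s actual API) of a factory function that applies a shared naming prefix, file layout, lint script, and foundation dependencies in one place:

```swift
// Minimal stand-in for a project-description target (illustrative only).
struct Target {
    let name: String
    let sources: String
    let scripts: [String]
    let dependencies: [String]
}

/// Every feature target gets the same prefix, file layout,
/// lint phase, and foundation dependencies.
func featureTarget(name: String) -> Target {
    Target(
        name: "SC\(name)",                     // shared naming convention
        sources: "Sources/\(name)/**/*.swift", // shared file structure
        scripts: ["swiftlint"],                // lint every target
        dependencies: ["Core", "Networking"]   // shared foundation frameworks
    )
}

let search = featureTarget(name: "Search")
print(search.name) // prints "SCSearch"
```

Because the conventions live in one function, changing them in one place updates every target that is created through it.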

Being able to codify opinions makes consistency easier and, as a result, scaling up your projects a pleasant experience.

The second difference with XcodeGen is that Tuist is not just a project generator. Project generation is the first step toward providing an easy, standard command-line interface for interacting with projects. That interface hasn’t been built yet because we are focusing on project generation, but we’ll start working on it afterward.

Since comparisons often help with understanding concepts, you can think of it as Fastlane, but without having to write Fastfiles. Fastfiles exist because users need to describe how their intents map to system commands. That approach has some important pitfalls at scale:

  • Not all lanes run on CI for every code change. As a consequence, lanes might break without anyone noticing until someone has to run them, for example when releasing the app. That’s very frustrating.
  • Keeping the mapping (the Fastfile) well structured and clean in large projects with many contributors has proven to be an arduous task. Developers see those files as a canvas where they can dump code, hacks, and any snippet they find on the Internet. As long as they can run fastlane my_lane, that’s all most developers care about. Not only that, but it’s easy to end up with inconsistent lane names. Is it build_release or release_build? Or maybe just release?

SoundCloud’s Fastfiles were large and complex, and so are Shopify’s. It just happens, and it’s really hard to prevent without conventions and a way to enforce them. If we think about Rails, which has been a huge source of inspiration for Tuist’s design, any developer working with the framework knows that databases can be migrated with rails db:migrate, or that the server can be started with rails server. For Xcode projects, we don’t have such a user-oriented interface that reflects user intents. One might argue that there are tools like xcodebuild or simctl, but they are closer to the system than to the user, and they are not pleasant to use. Imagine we could just run tuist keys setup and get the environment set up with the right certificates in the keychain to sign the app.

Tuist will provide a standard and user-friendly CLI with the most common tasks. It’s possible to do this reliably and with barely any input from the user because they describe the project to us, and therefore we know how to interact with it (the contract). These are some commands that are good candidates to be implemented first:

  • Build: Builds your app.
  • Test: Runs tests.
  • Run: Builds and runs the app on the destination platform (e.g. the simulator).
  • Release: Archives, exports, and uploads apps. For frameworks, it’d build and export a distributable framework.
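To illustrate why the manifest acts as a contract, here is a toy sketch (the Project type and the generated commands are hypothetical, not Tuist’s implementation) of how a described project lets a CLI map a user intent to an underlying system invocation without any extra configuration from the user:

```swift
// Minimal stand-in for what a manifest describes (illustrative only).
struct Project {
    let name: String
    let workspace: String
    let scheme: String
}

// Hypothetical mapping from a user intent to a system command;
// a real implementation would shell out to xcodebuild.
func command(for intent: String, in project: Project) -> String {
    switch intent {
    case "build":
        return "xcodebuild -workspace \(project.workspace) -scheme \(project.scheme) build"
    case "test":
        return "xcodebuild -workspace \(project.workspace) -scheme \(project.scheme) test"
    default:
        return "unknown intent"
    }
}

let app = Project(name: "App", workspace: "App.xcworkspace", scheme: "App")
print(command(for: "build", in: app))
```

The user only expresses the intent (build, test); the workspace, scheme, and flags come from the project description.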

Graph is a command that has already been built; it exports a representation of the dependency graph. Surprisingly, Xcode doesn’t provide that, and I believe it’s crucial for making informed decisions about the project structure.


This is where we are heading, and I’m very excited about the journey. Tuist gave me the opportunity to meet very talented developers who share the same vision and who are bringing a lot of enthusiasm and ideas to the project. I applied one of Shopify’s main principles, trust, and that resulted in a very healthy and collaborative space that I’m very proud of. It’s a pleasure working with Kassem, Olliver, Marcin, Marek, and so many other contributors and users who gave Tuist a try.

I had a few downs with the project, mainly because this is a long-term bet in a space where things move fast and Apple’s tooling always has more trust from the community. What keeps me on track and motivated is having the opportunity to explore new ideas, build them into Tuist, and help companies and projects scale up.

]]>
<![CDATA[In this blog post I talk about abstractions in the Xcode projects domain and how Tuist leverages the concept to conceptually compress intricacies of Xcode projects that developers are often confronted with.]]>
Keeping up with dependencies updates https://pepicrft.me/blog/2019/09/10/auto-updating-dependencies 2019-09-10T00:00:00+00:00 2019-09-10T00:00:00+00:00 <![CDATA[

Keeping dependencies up to date is an important task that we shouldn’t disregard, because updates sometimes bring security patches and improvements that we might want to benefit from in our projects. Unfortunately, most teams don’t pay enough attention to this and just focus on developing the product on top of a specific version of frameworks and libraries.

The good thing is that keeping up with upstream changes has never been easier. In my projects, I configure Dependabot, a tool that GitHub recently acquired, whose role is to automate the process of updating dependencies while letting you make the final decision of testing and merging the PR.

I set up Dependabot in all my repositories, open source and private. It requires no configuration to start working, which is great, and it allows you to customize things like the frequency at which dependencies are updated.

One feature that I requested on Twitter, and that they mentioned they are working on, is support for updating Swift Package dependencies. The Swift community will be so grateful to see Swift Packages supported. I can’t wait to use it to update Tuist’s dependencies.

]]>
<![CDATA[A brief reflection on Dependabot, a tool that was recently acquired by GitHub and that helps automate the process of updating dependencies on your GitHub repositories.]]>
What's coming to Tuist https://pepicrft.me/blog/2019/09/09/whats-coming-to-tuist 2019-09-09T00:00:00+00:00 2019-09-09T00:00:00+00:00 <![CDATA[

I continue to be excited about Tuist. It’s one of those things that brings me a lot of joy when I work on it. What excites me the most is the challenge of abstracting the Xcode intricacies that arise when Xcode projects grow. I relate Tuist to Rails, a framework that I had the chance to learn and fall in love with very recently. Rails provides beautiful and simple abstractions for most of the features that are required nowadays to build web apps. What Tuist’s abstractions will look like is still to be defined, but I’m glad that we are on the right track to define them.

One of the things that we plan to work on, and that Kas is leading, is morphing the architecture of the project into a more modular approach. That’ll make the code safer and easier to understand, and it will increase the extensibility of the project generation. He pushed the idea that we are pondering to the tuist-labs repository that we created to explore ideas outside the main repository.

Regarding features, these are the ones that I’ve been pondering lately and that I’d like to see in Tuist:

  • Swift interface for accessing resources: I think Tuist could leverage SwiftGen to generate type-safe Swift code to access resources. The generated Swift code would be added automatically to the target the resources belong to.
  • Support for resources in libraries: Libraries can’t contain resources, but we could enable that by generating a resources bundle and providing a Swift interface to access the resources from the right directory within the product bundle. I’ve seen projects working around this with a build phase that copies the resources into the right directory. That’s not easy to maintain, though.
  • Shared manifest files: One of the advantages of writing the manifests in Swift versus defining them in YAML is that we can leverage the Swift compiler to make things like reusing pieces of the manifest possible. Imagine if adding more modules to your project were just a matter of calling a function, module(name: "Settings", dependencies: ["DesignKit"]), defined in a Shared.swift. That’d remove one of the biggest barriers to growing the number of modules in a project.
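The shared-manifest idea can be sketched in a few lines of plain Swift. The Module type and the module(name:dependencies:) signature below are illustrative, not Tuist’s final API:

```swift
// Minimal stand-in for a manifest module description (illustrative only).
struct Module {
    let name: String
    let dependencies: [String]
}

// A helper that would live in a shared file (e.g. Shared.swift),
// so adding a module to any manifest is a one-line function call.
func module(name: String, dependencies: [String] = []) -> Module {
    Module(name: name, dependencies: dependencies)
}

let settings = module(name: "Settings", dependencies: ["DesignKit"])
let core = module(name: "Core") // no dependencies
```

Because the helper is ordinary Swift, the compiler validates every call site, something a YAML specification can’t do.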

Stay tuned, because those and more features will land soon in Tuist. If you would like to use Tuist in your projects, or even contribute, you are welcome to do so. I’d gladly walk you through the project so you have a solid understanding before starting to use it or contributing.

Have a great week!

]]>
<![CDATA[A Monday blog post with some reflections about the current state of Tuist and its future.]]>
A period of changes https://pepicrft.me/blog/2019/08/29/a-period-of-changes 2019-08-29T00:00:00+00:00 2019-08-29T00:00:00+00:00 <![CDATA[

It’s been a while since I wrote my last blog post here. During this time there have been some important changes in my life and professional career, and I felt like writing them down in an unstructured blog post.

The first of those changes is that I was given the opportunity to give management a try. Changing paths in my career is something I’ve always wanted to try, and my team, especially Mark, gave me the opportunity to do so. It’s been a few weeks of learning a lot and getting used to it. I’m starting to feel my ability and time to code decreasing as I do a little bit of everything. The part I find most exciting about this job is that I’ll have the opportunity to enable my peers to achieve their goals.

The change didn’t come alone. A person I worked with was moved to another team temporarily to help tackle a pressing company-wide issue. That means our team went back to two people, one of them a new individual contributor (IC) who had just joined. We had to assess our resources and the projects we were championing, and cut down some workload to help the team accommodate all the changes.

On the personal side of things, I’ve been dealing with bureaucratic paperwork for the apartment that María José and I bought in Berlin. We are very excited to be on this journey together and really looking forward to getting the keys in March next year. Luckily, the amount of paperwork has decreased significantly, and these days we just get letters from Deutsche Bank with the recent movements in our bank accounts. Also, in the vein of organizing things, our wedding is in less than a month, so we’ve been working hard to make sure everything is ready for it.

On another topic, I’ve been pondering again the idea of quitting social networks. I keep thinking they are not doing my brain any good and that they instill something in me that I don’t like: constantly seeking recognition and approval. For some people that might be fine, but I don’t want my brain to only be OK when it gets doses of those. I closed, again, my Facebook account and reduced my usage of Twitter. The time I used to spend on those platforms, I now spend drawing or simply doing nothing. This re-education process won’t be easy, but I think my brain will be thankful for it.

And last but not least: damn, I’m having a lot of fun playing with Gatsby and theme-UI. It reminds me of the early days of learning iOS or Swift, when everything was new and joyful. I’m playing with them to polish Tuist’s website, and I’m even considering drawing the website’s illustrations. Why not? I like to develop, but I also like shaping areas of the project that have nothing to do with development.

Hope you are all having a lovely week.

]]>
<![CDATA[Dumping some thoughts on what the last month has been like for me, personally and professionally.]]>
Project generation https://pepicrft.me/blog/2019/07/22/project-generation 2019-07-22T00:00:00+00:00 2019-07-22T00:00:00+00:00 <![CDATA[

Last week I published a thread on Twitter in which I shared what I think is the value of generated Xcode projects. I’ve been a huge advocate of generated Xcode projects since I worked at SoundCloud, where I realized the maintenance cost that modular Xcode projects bring.

In this blog post, I’d like to expand on each of the advantages mentioned in the thread and relate them to the features that we are building into Tuist.

1 - Focus

Large apps often resort to modularization to scale up. Codebases are broken down into smaller modules with clear boundaries and responsibilities. In my experience with modular Xcode codebases, they are usually organized in multiple directories and Xcode projects, each of them manually maintained.

The more Xcode projects you have, the more time you will need to spend maintaining them and figuring out issues that might arise as a result of the accidental complexity.

Tuist abstracts the low-level intricacies and handles them for you. For example, the dependencies are described semantically and not in terms of build phases or build settings:

let app = Target(name: "App", product: .application, dependencies: [
  .target("Profile")
])
let profile = Target(name: "Profile", product: .framework, dependencies: [
  .target("Utilities")
])
let utilities = Target(name: "Utilities", product: .staticLibrary)

Furthermore, it generates Xcode projects with just the pieces that the developer needs to do their work. The distractions are taken away to help developers focus and find joy in scaling their project up.

2 - Environment

How often have you tried to compile an app, only to have it fail after some time because a necessary certificate is not present in your keychain? The more dependencies the project has on the environment, the more likely that scenario is. Using SwiftGen to generate code from your resources, or Carthage to embed dynamic frameworks, creates an implicit dependency: if those tools aren’t available, the compilation might fail.

Teams overcome this problem by including in the project’s README.md a list of steps to execute before opening the project in Xcode. There are two caveats with this approach: it’s hard to ensure that developer environments are configured consistently (e.g. with the same version of a tool), and it’s hard to notify developers when one of the dependencies requires an update (e.g. a new certificate to be installed in the keychain).

Tuist provides a command, tuist up, to verify and configure the environment. Teams just need to describe the configuration in a Setup.swift file:

import ProjectDescription

let setup = Setup([
    .homebrew(packages: ["swiftlint", "SwiftGen", "Carthage"]),
    .carthage(platforms: [.iOS])
])

Moreover, and this is not implemented yet, it’ll provide an interface to describe code signing. It’ll use that description to install the right certificates in the keychain, place the provisioning profiles in the right directory, and configure the Xcode project during generation.

Tuist is stricter than Xcode when validating projects and the environment. If it knows the project won’t compile, it fails immediately. Developers’ time is precious and shouldn’t be wasted.

3 - Misconfigurations

The growth of Xcode projects comes with complexity, and when things become complex, it’s easier to make mistakes. A wrong build setting or a missing argument in a script build phase can be the source of compilation and App Store validation errors.

Xcode runs weak validations on projects. It assumes developers know what they are doing and relies heavily on components like the build system or the app uploader to catch issues. There are two drawbacks to that approach:

  • It might take some time. For example, if a dynamic framework is mistakenly copied into another framework, the error will only show up when the app is being uploaded to the store.
  • Most of the time, the errors say nothing about their root cause. For instance, if you try to link an iOS framework against a macOS one (something that Xcode allows), the compilation will fail with a framework not found error message.

Tuist is stricter in this regard and runs validations during project generation. If it knows something won’t compile, it fails and tells developers why. We understand that configuring a large project can be hard, and we want developers to get it right at any scale.
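As a toy illustration of this kind of upfront validation (not Tuist’s actual linting code), a generation-time check can walk the dependency graph and report platform mismatches before the compiler ever runs:

```swift
// Minimal dependency-graph node (illustrative only).
enum Platform { case iOS, macOS }

struct Node {
    let name: String
    let platform: Platform
    let dependencies: [Node]
}

// Recursively collect human-readable issues instead of letting the
// build fail later with an opaque "framework not found" error.
func lint(_ node: Node) -> [String] {
    node.dependencies.flatMap { dep -> [String] in
        let issue = dep.platform != node.platform
            ? ["\(node.name) links against \(dep.name), which is built for a different platform"]
            : []
        return issue + lint(dep)
    }
}

let core = Node(name: "Core", platform: .macOS, dependencies: [])
let app = Node(name: "App", platform: .iOS, dependencies: [core])
print(lint(app)) // reports the App -> Core platform mismatch
```

The point is the timing: the check runs while the project is being generated, so the developer gets an explanation instead of a late, cryptic linker error.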

4 - Consistency

Consistency is crucial to scaling up apps. Without it, working across multiple projects becomes more difficult: jumping between projects requires extra effort to understand how they differ from each other. Moreover, inconsistent projects are more error-prone.

Although ensuring consistency is easier when all the Xcode projects are part of the same repository, Xcode doesn’t help much with it. The only feature that helps, by reusing build settings across projects, is .xcconfig files.

Consistency can also manifest in:

  • The list of targets of each of the projects.
  • The list of target build phases and their names.
  • The list of project schemes and their configuration.

The way Tuist helps make projects more consistent is by providing a programmable interface where developers can leverage Swift features like functions. The definition of a project can be the result of calling a function:

func frameworkProject(name: String) -> Project {
  // Targets
  let framework = Target(name: name, product: .framework)
  let unitTests = Target(name: "\(name)Tests", product: .unitTests)
  let uiTests = Target(name: "\(name)UITests", product: .uiTests)
  let targets = [framework, unitTests, uiTests]

  return Project(name: name, targets: targets)
}

let searchFramework = frameworkProject(name: "Search")
let coreFramework = frameworkProject(name: "Core")

How beautiful is that? Using Swift over declarative formats like YAML makes it possible without having to reinvent the wheel.

5 - Complexities

One of the most important lessons a developer can learn is KISS (keep it simple, stupid). I believe the same applies to Xcode projects. In this case, the complexity is hard to avoid because it’s Xcode that exposes it.

After creating a project, Xcode leaves developers with the responsibility of maintaining it and keeping it up to date. That has proven not to be an easy job. For instance, with the recent introduction of support for Swift packages in Xcode, many developers are still figuring out how the new Tetris piece fits into their overly complex projects, which are perhaps already using CocoaPods or Carthage.

Tuist has taken a stance similar to the one Rails has in the web ecosystem: complex tasks are abstracted and made easier, and only if necessary, developers can manage intricacies by themselves. It defaults to simplicity and prevents the complexity of the projects’ structure from growing proportionally with the number and size of the projects.

Believe me, seeing how easy it is to describe the structure of a large project also makes scaling apps an enjoyable task.

6 - Workflows

Many projects depend on tools like CocoaPods, SwiftGen, or Sourcery being run before opening the project. If developers forget to run them, they might get errors. Sometimes the errors are obvious, like your Podfile.lock being out of sync, but other times they are not. Some teams automate all these tasks with Fastlane lanes, which call the underlying system commands:

lane :bootstrap do
  cocoapods
  swiftgen
  sourcery
end

Installing the team’s certificates and provisioning profiles is another example. Many teams in the industry use Fastlane for that, but again, we are putting developers in the position of having to remember to run fastlane match and of knowing which certificates and provisioning profiles they need for the job at hand.

What if all those tasks were beautifully integrated into the process of generating a project? That’s what Tuist aims for. It determines which tasks need to be executed and runs them as part of the project generation. The idea is that developers don’t have to think about any of that. They just need to remember one easy command:

tuist generate

7 - Conflicts

Frequent Git conflicts are perhaps one of the most annoying things about working on large Xcode projects. The likelihood of conflicts is proportional to the number of people contributing to the project and, in the case of Xcode, to the size of the project. Xcode projects have a monolithic structure: most of their content lives in a single file, the project.pbxproj, and any change to the project through Xcode gets reflected in that file.

If many branches are being merged into your project, having to rebase often to resolve Git conflicts can be very annoying, even more so if CI takes a long time every time you rebase and push changes to the remote.

Tuist diminishes conflicts because the Xcode projects don’t need to be part of the repository.

8 - File patterns

Xcode projects hold references to the files and folders that are part of them. Because of that, it’s very common to end up with a file and folder hierarchy in the project that is inconsistent with the structure on the filesystem. This has improved in recent Xcode versions, but it’s still annoying to have to drag and drop files into the Xcode project to use them from targets.

Tuist makes that much easier by using glob patterns. Rather than referencing files individually, we can define a glob pattern, for example Sources/**/*.swift, and Tuist will expand the pattern and add the resolved paths to the project. This makes it easier to define conventions for the folder structure. For example, the function below ensures that all targets, regardless of the project they belong to, have their sources in the same directory.

func target(name: String) -> Target {
  return Target(name: name,
                sources: "Sources/**/*.swift")
}

9 - CLI

Xcode provides xcodebuild, a command-line tool to interact with projects. Both its input and output are so verbose that most developers wrap it with tools like Gym or xcpretty. Moreover, common use cases like building, signing, and publishing the app to the App Store require interacting with other CLIs besides xcodebuild. Most projects solve this with Fastlane, but that creates a new contract between the Fastfiles and your projects that can break easily, presenting developers with failing lanes that they need to debug and fix. Have you ever tried to release an app and run into issues because someone changed the signing settings of the project and forgot to update the lane that configures the environment for signing?

Tuist knows your projects and will leverage that information to offer a simple set of commands. Positioned in a directory where a project is defined, I could execute something like:

tuist build

And that would build all the targets of the project in the current directory. If building requires installing CocoaPods dependencies or generating code for your resources with SwiftGen, Tuist will do it as part of the command execution. The idea here is to remove the need for a tool like Fastlane, which, in my experience, results in complex Fastfiles that grow proportionally with the number of Xcode projects. Tuist embraces KISS.

10 - Caching

At some point in the growth of a project, build times start affecting developers’ productivity. We push code to GitHub and it takes over 20 minutes to compile. We consider using Carthage to precompile the dependencies, and that gives us a bit of breathing room that is insignificant compared to the compilation time of the project. We hear that Buck and Bazel help mitigate the issue, but our team is so small that we can’t invest the time and resources to replace our build system entirely. We hope for Apple to release new versions of the Swift compiler with magic flags that speed up our builds, but that’s too hopeful; they optimize for the majority of their user base, which is small and medium-sized apps.

One of Tuist’s goals is to help with this need that projects have when they scale. The idea is very simple: every module (a framework or a library) is hashed, compiled, and uploaded to cloud storage for every commit built on CI. When developers want to work with the project locally, Tuist generates the dependency graph and then generates the project using precompiled modules for the targets they don’t plan to work on. For instance, let’s say we have an app that depends on a dynamic framework, Search, which depends on another framework called Core. Since we only plan to work on the app at the moment, Tuist will give us a project that contains a target with the source code of the app, which links against Search and copies both Search and Core into the products directory.
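The hashing idea can be sketched as follows; this is a toy illustration of content-based cache keys, not Tuist’s implementation. A module’s hash covers its own inputs plus its dependencies’ hashes, so any upstream change invalidates the downstream cached artifacts:

```swift
// Minimal module description (illustrative only).
struct Module {
    let name: String
    let sources: [String]          // file contents, simplified
    let dependencies: [Module]
}

// Tiny FNV-1a hash: stable across runs, unlike Swift's randomized Hasher,
// which matters because cache keys must survive between CI and local builds.
func fnv1a(_ text: String) -> UInt64 {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in text.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    return hash
}

// A module's cache key combines its own inputs with its dependencies' keys.
func moduleHash(_ module: Module) -> UInt64 {
    let deps = module.dependencies.map { String(moduleHash($0)) }.joined()
    return fnv1a(module.name + module.sources.joined() + deps)
}

// Changing Core changes Search's hash too, invalidating Search's cache entry.
let core = Module(name: "Core", sources: ["func log() {}"], dependencies: [])
let search = Module(name: "Search", sources: ["struct Search {}"], dependencies: [core])
```

On a cache hit for a target the developer doesn’t plan to touch, the generated project can reference the precompiled binary instead of the sources.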


All of that makes me very excited when I work on Tuist. I believe working on a large Xcode project should be as fun as working on a small one. Over the years, I’ve seen tools like Fastlane helping small and medium projects, and tools like Buck and Bazel helping large ones, but there’s a space in the middle of that spectrum where projects end up hacking their way to scale. I dream of a Rails for app development with Xcode: a tool that provides simple abstractions and makes it easier to enforce practices at any level of the project.

If that sounds exciting and you would like to take part in this journey, you can start by joining our Slack channel and reading the documentation.

]]>
<![CDATA[This blog post describes the advantages of dynamic over static Xcode projects, and how Tuist leverages project generation to help teams overcome the challenges associated with scaling up projects.]]>
Adding error tracking to a CLI written in Swift https://pepicrft.me/blog/2019/07/16/adding-error-tracking-to-a-swift-cli 2019-07-16T00:00:00+00:00 2019-07-16T00:00:00+00:00 <![CDATA[

Software is written by imperfect creatures, humans, and as a consequence that imperfection manifests in the software in the shape of bugs. Languages like Swift can help us write bug-free software, but they’ll never be able to help us get rid of bugs entirely.

When bugs happen, it’s crucial to be notified automatically with the right information to help us debug and fix them quickly. That’s why platforms like Crashlytics or Sentry exist. They provide an SDK to add to your projects that collects handled and unhandled errors and reports them to a web service. If you are an iOS developer, you are most likely familiar with them.

Tuist didn’t have error reporting, making it hard to know when errors happened and why. Moreover, we were relying on users to tell us about bugs: if they didn’t create GitHub issues, we had no way of knowing that they were hitting bugs while using the tool.

I recently worked on adding error reporting to Tuist, which turned out not to be a straightforward task. This blog post is a summary of all the steps I followed. Developers building command-line tools with the Swift Package Manager might find it useful.

Download the dynamic framework

The first thing we need to do is pull the dynamic framework of our error tracking platform. Most services provide a dynamic framework; if not, you can ask them for it. Here’s, for instance, the list of Sentry releases, which have the dynamic framework attached.

Setting up error tracking with a static framework is also possible, but in this post I’ll focus on the dynamic approach.

Place the framework under ./Frameworks/ (e.g. ./Frameworks/Sentry.framework).

Generate an Xcode project that links against the framework

Once we have the framework, we need to tell the Swift Package Manager to configure the generated Xcode project to use it. Although there isn’t a public API that we can use from our project’s Package.swift file, there’s an undocumented one that we can leverage. Create a file, MyTool.xcconfig, where MyTool is the name of your tool, and add the following content:

LD_RUNPATH_SEARCH_PATHS = $(inherited) $(SRCROOT)/Frameworks
FRAMEWORK_SEARCH_PATHS=$(inherited) $(SRCROOT)/Frameworks
OTHER_LDFLAGS = $(inherited) -framework "Sentry"

  • LD_RUNPATH_SEARCH_PATHS: Defines a list of directories where the dynamic linker can look up the linked frameworks. We are adding $(SRCROOT)/Frameworks, which is the Frameworks directory relative to the path where the generated Xcode project is.
  • FRAMEWORK_SEARCH_PATHS: Defines the directories that contain the frameworks to be linked during the compilation process.
  • OTHER_LDFLAGS: With this setting we include the linker flag -framework Sentry to link against the Sentry framework. You’ll need to replace Sentry with the name of your framework.

With the file MyTool.xcconfig in the project directory, we can run the following command:

swift package generate-xcodeproj --xcconfig-overrides MyTool.xcconfig

Notice the --xcconfig-overrides argument, which tells SwiftPM to use a different xcconfig file for the generated Xcode project.

Try to import your framework and use its API. The Xcode project should compile:

// main.swift
import Sentry

// Your setup logic

Build the Swift package from the terminal

If you try to run swift build at this point, it’ll fail. Although the generated Xcode project includes the build settings to link against the framework, SwiftPM doesn’t use the Xcode project to compile your tool, and therefore it doesn’t know how to link the framework.

Fortunately, the swift build command accepts arguments to be passed to the compiler:

swift build \
  --configuration release \
  -Xswiftc -F -Xswiftc ./Frameworks/ \
  -Xswiftc -framework -Xswiftc Sentry

If we run that command, we should get the tool compiled and linked dynamically against the framework.

Add the runtime path

If we distribute the binary under .build/release/MyTool, users will get an error when they try to run it from the terminal. Since the framework is dynamically linked, the dynamic linker will try to load the framework at runtime and will fail because it won’t be able to find it.

To fix the issue, you need to make sure of two things:

  • The framework is copied as part of the installation.
  • The directory where the framework is placed is part of the binary’s runtime search paths.

If we assume we’ll copy the framework into the /usr/local/Frameworks directory, we can run the following command to add that directory to the runtime search paths:

install_name_tool -add_rpath "/usr/local/Frameworks" "/path/to/MyTool"

After running that command, and having the error tracking framework in that directory, you should be able to run the tool successfully.
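Putting the last two steps together, an install script might look like the sketch below. All names and paths (MyTool, Sentry.framework, the install prefix) are hypothetical, the build output is simulated in a scratch directory so the sketch is self-contained, and the install_name_tool invocation is only printed because the tool exists solely on macOS:

```shell
#!/bin/sh
# Sketch only: MyTool, Sentry.framework and the prefix are placeholders.
set -e

# Simulate the build output in a scratch directory for demonstration.
SCRATCH="$(mktemp -d)"
BUILD_DIR="$SCRATCH/.build/release"
PREFIX="$SCRATCH/usr/local"
mkdir -p "$BUILD_DIR/Sentry.framework"
printf 'fake-binary' > "$BUILD_DIR/MyTool"

# 1. Copy the binary and the framework to their final locations.
mkdir -p "$PREFIX/bin" "$PREFIX/Frameworks"
cp "$BUILD_DIR/MyTool" "$PREFIX/bin/"
cp -R "$BUILD_DIR/Sentry.framework" "$PREFIX/Frameworks/"

# 2. Add the frameworks directory to the binary's runtime search paths.
#    On a real install (macOS only) this step would be:
#    install_name_tool -add_rpath "$PREFIX/Frameworks" "$PREFIX/bin/MyTool"
echo "Would run: install_name_tool -add_rpath $PREFIX/Frameworks $PREFIX/bin/MyTool"
```

In a real script you would of course point BUILD_DIR at the actual swift build output instead of simulating it.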

Conclusions

As we have seen, the process is not as straightforward as it is in other programming languages. The complexity comes from the fact that SwiftPM doesn’t provide an API to link against existing pre-compiled dynamic frameworks, nor a way to handle the installation of the tool and its dependencies in the user’s environment.

I hope you found the blog post useful and that it encourages you to add error tracking to your command line tools written in Swift.

]]>
<![CDATA[Trying to add error tracking to Tuist, I realized how the process is not very straightforward. This blog post describes the process that I followed to help other Swift developers add error tracking to their CLI tools.]]>
Derived Info.plist files https://pepicrft.me/blog/2019/07/12/derived-info-plist 2019-07-12T00:00:00+00:00 2019-07-12T00:00:00+00:00 <![CDATA[

Today I found some time to do some work on this PR, which allows Tuist users to define the content of their Info.plist files in the project manifest. Although it doesn’t add much value compared to having the content in an Info.plist file, it opens the door to a powerful abstraction: inheriting product-based default values that developers can extend with their target-specific keys.

let infoPlist = InfoPlist.default(extend: [
  "CFBundleShortVersionString": "1.0",
  "CFBundleVersion": "1"
])
let target = Target(name: "MyFramework", infoPlist: infoPlist)

As we can see, we just need to override the values that our target is interested in providing; the rest is provided by Tuist. As part of the project generation, Tuist creates the file at /path/to/project/Derived/InfoPlists/MyFramework.plist.
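Conceptually, the generation of such a file boils down to a dictionary merge. Here’s a rough sketch (my illustration, not Tuist’s actual implementation) of how target-specific keys could win over the product defaults:

```swift
import Foundation

// Illustrative only: product defaults merged with target-specific overrides.
let productDefaults: [String: Any] = [
    "CFBundleDevelopmentRegion": "$(DEVELOPMENT_LANGUAGE)",
    "CFBundleExecutable": "$(EXECUTABLE_NAME)",
]

let targetOverrides: [String: Any] = [
    "CFBundleShortVersionString": "1.0",
    "CFBundleVersion": "1",
]

// Keys provided by the target take precedence over the defaults.
let merged = productDefaults.merging(targetOverrides) { _, override in override }
// 'merged' would then be serialized into Derived/InfoPlists/MyFramework.plist.
```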

The Derived directory only contains the Info.plist files for now, but we may store more types of files in the future. For instance, I’m considering integrating SwiftGen into Tuist, storing the generated code under Derived/SwiftGen.

It’s exciting to see Tuist abstracting all the complexities and, most importantly, to see how the architectural decisions that we made are enabling this effort.

]]>
<![CDATA[In this mini-post I talk about some recent work that I've done in Tuist to support defining Info.plist files in the manifest file.]]>
Running system tasks with Swift and Foundation https://pepicrft.me/blog/2019/07/10/running-system-tasks-on-macos 2019-07-10T00:00:00+00:00 2019-07-10T00:00:00+00:00 <![CDATA[

Have you ever tried to use Foundation’s API on macOS to run system tasks? Perhaps you know the class Process, which is provided exactly for that purpose. I tried to use it in Angle, a side project that I’m developing with some friends, and I struggled to use it properly. These are the issues that I ran into:

  • When the application that triggers the process gets killed, the process remains running. After digging around, I found that there’s a private method, setStartsNewProcessGroup: to tell the process whether it should terminate when the process that triggers it finishes. Why is that method private?
  • It doesn’t ensure that the standard output, error, and completion events are serialized in the same order. That may result in events coming in the wrong order, for example, a process that completes before the standard error message arrives. Projects like ShellOut, which wrap Process to provide a more convenient API, have to add extra synchronization to ensure the events come in the right order.

The best implementation that I’ve found so far to run tasks in the system is the one from the Swift Package Manager, which, interestingly, doesn’t use Process but its own implementation. Unfortunately, you can’t/shouldn’t copy-paste the class into your project.

It’s unfortunate that those issues haven’t been tackled. The API is not very user-friendly and there’s a lot of room for improvement to make it more straightforward to use. If we look at Ruby’s API, this is how it looks:

require 'open3'

# Launch the process and capture the standard output in a variable.
developer_path = `xcode-select -p`

# Launch the process and forward the standard output and error.
system("xcodebuild", "build", "-project", "MyProject.xcodeproj") || abort

# Launch the process and capture the standard output and error
stdout_str, stderr_str, status = Open3.capture3("xcrun", "simctl", "list", "devices", "-j")

# Launch the process and call the block with the standard streams and a wait thread.
Open3.popen3("xcodebuild", "build", "-project", "MyProject.xcodeproj") {|stdin, stdout, stderr, wait_thr|
  pid = wait_thr.pid
  exit_status = wait_thr.value
}

As you can see, we have several options from which we can choose depending on what we’d like to do with the process:

  • Fire and forget.
  • Fire and collect its output.
  • Fire and notify me when an event (e.g. standard output data) has been sent.

The most reliable abstraction that I’ve found is ReactiveTask, which is developed by the Carthage team and used by Carthage. It provides a beautiful reactive API using ReactiveSwift, with which you can use reactive operators and subscribe to the events that you are most interested in. If there’s ever a good use case for the reactive paradigm, this is one: a process is an operation that starts, sends a bunch of events, and then completes.

Unfortunately 😕, Angle already has RxSwift as a dependency, and I doubt it’s a good idea to add another reactive library to the stack. For that reason, I started developing an internal implementation similar to ReactiveTask’s, but using RxSwift. It’s still WIP, but if I’m happy with the result, we might open source it.
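For the simple “fire and collect its output” case, a blocking wrapper avoids the ordering problem entirely. The following is a rough sketch (my own illustration, not ReactiveTask’s or SwiftPM’s implementation): reading both pipes to EOF before calling waitUntilExit guarantees we never observe completion before the last output chunk.

```swift
import Foundation

// Sketch of a blocking wrapper around Process.
func capture(_ arguments: [String]) throws -> (status: Int32, stdout: String, stderr: String) {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
    process.arguments = arguments

    let outPipe = Pipe()
    let errPipe = Pipe()
    process.standardOutput = outPipe
    process.standardError = errPipe

    try process.run()

    // Note: sequential reads are fine for small outputs; large interleaved
    // streams need concurrent readers to avoid filling a pipe buffer.
    let outData = outPipe.fileHandleForReading.readDataToEndOfFile()
    let errData = errPipe.fileHandleForReading.readDataToEndOfFile()
    process.waitUntilExit()

    return (process.terminationStatus,
            String(data: outData, encoding: .utf8) ?? "",
            String(data: errData, encoding: .utf8) ?? "")
}

// Example usage:
// let (status, output, _) = try capture(["xcode-select", "-p"])
```

It’s synchronous and won’t stream events as they happen, which is exactly the gap the reactive wrappers fill.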

I wonder if I’m the only one having this experience with the Process class, or whether other developers are struggling with the same issues. If you are one of them, I’m curious to know how you overcame them.

Have a wonderful week!

]]>
<![CDATA[In this blog post, I talk about my experience using one of Foundation's APIs, Process.]]>
All you need is tools (talk) https://pepicrft.me/blog/2019/07/09/all-you-need-is-tools-talk 2019-07-09T00:00:00+00:00 2019-07-09T00:00:00+00:00 <![CDATA[ ]]> <![CDATA[This post contains the video of the talk that I gave at AltConf about why I think it's important investing into tooling and some tips to build great tools.]]> How I feel working on Tuist https://pepicrft.me/blog/2019/07/03/how-i-feel-working-on-tuist 2019-07-03T00:00:00+00:00 2019-07-03T00:00:00+00:00 <![CDATA[

It’s been a long time working on Tuist, and despite some downs, I continue to be excited about the project. In hindsight, these are the factors that contribute to that:

  • I had the opportunity to meet and work with enthusiastic people who bring a lot of ideas and good vibes to the project. They found a lot of value in the project and are giving a lot back. Seeing them contributing to the project is very inspiring. For instance, this week I sent them some Tuist stickers and a small letter thanking them for their contributions.
  • It’s an interesting challenge in a domain where most of the standards, frameworks, and tools are given by Apple. It sometimes feels like rowing a small boat next to a cruise ship, but that’s fun too. The fact that we are a tiny project allows us to be closer to developers and have a more iterative process to help them scale.
  • Swift is relatively new in the command line tooling space. As a consequence, there are APIs like Process that are not as friendly as they could be. Exploring that territory and pushing the language forward feels great.
  • We started building a new element to take Tuist and teams’ productivity to the next level, Tuist Galaxy. We aim to build a web platform that integrates with developers’ workflows and their tools (e.g. Xcode, GitHub, Slack) and provides them with insights and useful information to ensure their projects are in a healthy state. Leveraging the web to make that possible is an interesting task.
  • We are not just building software, but a product. That means design, documentation, community, website, and so many other things from which I’m learning a lot.

The only thing that I wish I could start doing is dogfooding. It’s hard to get ideas, or build the ones that I have, if I don’t use the project as part of my day-to-day job.

Anyway, I’m happy with what we’ve achieved and looking forward to the months and years ahead.

]]>
<![CDATA[A random reflection about Tuist and why I'm so glad to be working on it.]]>
The tale of Xcode projects' complexity https://pepicrft.me/blog/2019/06/23/the-tale-xcode-projects-complexity 2019-06-23T00:00:00+00:00 2019-06-23T00:00:00+00:00 <![CDATA[

The CEO of our company wants the product to have an iOS app. We embark on building it, so we start by creating the project: we open Xcode, select new project, and then Xcode dumps the following into a local directory:

Project tale 1

What a beautiful greenfield. We click run and the simulator opens almost instantly with our app running in it. We probably don’t know about many of those files that were created or the content in them, but who cares? As long as we can compile it, it’s all fine.

A few weeks later, the need for adding dependencies comes up. Someone in the team decides to introduce CocoaPods. By then the project has got a bit more complex; there were a few build flags added to speed up compilation, and some build phases to customize the build process a bit. CocoaPods tries its best to integrate the dependencies into the project but fails at it. We blame CocoaPods because we believe it’s CocoaPods’ fault. We don’t realize that Xcode exposes so much complexity that CocoaPods can’t define a contract between the Pods project and ours. Our project became complex; that’s normal, and it’s not our fault. After Stackoverflowing a bit, we find the hack that makes the CocoaPods integration work. Awesome! Now we have external dependencies and we can add more if we need to.

Time goes by, the project continues to grow, and a few months later, someone sees that modularizing the project helps with having clearly defined boundaries between features and better compilation times. The modularization requires creating a few frameworks, and therefore new targets with some files to configure them. It doesn’t sound that hard. A few weeks later, the project is modularized. Some features have been moved into their own framework; others remain in the main app because their code is so tangled with the foundation of the project that it’s impossible to extract them. Perhaps without realizing it, we end up with similar projects and targets (build phases & settings) that barely reuse anything.

A year after we added the first line of code to the project, someone mentions the idea of replacing CocoaPods with Carthage because they heard it’s a better option. Someone said something about the source code being pre-compiled, and therefore faster builds. That sounds too good to be true, and it shouldn’t be that much work according to the README.md. We add a few Carthage dependencies and our project doesn’t compile; we added them as transitive dependencies of one of our frameworks and forgot to copy them into the app’s bundle. Again, Stackoverflow has the solution to our problem: just a few tweaks and the project compiles. Since we want to be safe, we add the copy-frameworks build phase that Carthage suggests to all the targets. Nothing can be wrong if the app compiles and CI is green. Well… it is all fine until we try to release the app and Apple realizes that we are embedding frameworks that contain other frameworks. What is this, inception?

The time to migrate the version of Swift comes. We want to try the latest and re-invented APIs that Apple presented at the last WWDC. We don’t want to be that old-school project or company that uses Swift 3. Oh nice! Xcode suggests doing the migration for us. They must know what they are doing… We are too naive. Xcode assumes that our project is simple, but it’s not. After clicking the magic button, our project not only doesn’t compile, but leaves us with over a hundred errors that are caused by who knows which flag. What do we do? Hopefully we use a version control system, so we revert the changes that Xcode introduced and do the migration manually. It turns out to be more painful than how Apple presented it during WWDC. Using the latest Swift version is worth the effort, so we spend all the time necessary to do the migration. Yay! After a week, we can consider the migration complete.

It’s 2019, the flying bird knows how to drop packages in projects. It flies over our project, but it’s confused. It doesn’t know where to drop them. How did we end up in this situation?

Project tale 2

Takeaways

Is there any part of the story that resonates with you? It’s easy to end up with a lot of accidental complexity if we are not aware of the implications of each of the changes that we are adding to our projects. Xcode projects are monoliths and barely allow reusing their pieces. Complexity makes the projects hard to maintain and migrate. We can see that when developers use Xcode’s feature to migrate projects. Has it ever worked for you? Perhaps if it’s a single-target application. We all know that a one-target project is how we start, but we eventually end up with many targets (libraries, extensions, apps for other platforms).

What can we do if we don’t want to be there? The first option would be to wait for Apple to rethink the format of the projects like they do with hardware. I had some hope for this year, but nothing was presented. Instead, they keep extending the project format, this time with the support for Swift packages. You probably didn’t realize it, but they leveraged the closed nature of Xcode to make a seamless integration of dependencies possible. They did what CocoaPods tried for a long time but couldn’t, because Apple didn’t allow them to. If Apple ever rethinks the format, I’d hope it follows two principles:

  • Default to simplicity but be open to configuration: Only expose the details that developers are interested in. Otherwise, fall back to sensible defaults.
  • Allow reusability of project elements: Make it possible to define elements like build phases or schemes in one place, and use them from multiple projects and targets.

The second option would be using Tuist, an open source tool that I’m a maintainer of. It makes the definition of the projects more declarative and abstracts developers from all the intricacies that are often the source of the aforementioned issues. Only if they need to, developers can have fine-grained control over the low-level project configuration.

Until Apple decides to invest in the experience of developers scaling their projects, you can give Tuist a try. I’d love to help you set it up if you are interested in using it.

]]>
<![CDATA[This is a short story of how Xcode projects can end up being very complex and hard to maintain.]]>
The urge to be the first https://pepicrft.me/blog/2019/06/20/the-urge-to-be-the-first 2019-06-20T00:00:00+00:00 2019-06-20T00:00:00+00:00 <![CDATA[

I have the impression that there’s an increasing urge to be the first in our industry. The first one to write a blog post about SwiftUI, a book, a podcast, a talk, an opinion to be pushed onto others. I’m honestly exhausted when I look at Twitter these days. I haven’t had time to watch the talks or read the official documentation, but I feel I know a lot about SwiftUI. I know a lot through the lenses of those who are running the race to be the first one, to be assigned the label of expert on the technology. For God’s sake!

Good for them. They are responsible for their time and how they want to use it. What bothers me, though, is that they are normalizing “being exhausted”. They don’t understand that we are not running that race, but they somehow want us to feel part of it. No, that’s the marathon they decided to run, not us. Some people praise that behavior; I’ll never do so.

There’s beauty in going slowly, in learning things as we come across them. I’m learning to ignore all those unhealthy attitudes. I don’t want to stress out because I can’t participate in discussions about SwiftUI, nor do I have an opinion about it. It’s ok.

We often talk about burnout and how to overcome it, yet we barely talk about what leads us to it. Meditation is great, but in my opinion, the proliferation of meditation apps is a symptom of something not working in our industry. As much as I can, I’ll reject and call out unhealthy behaviors like this one that might lead to burnout. We, as a community, should not accept such things and, most importantly, should reject them to help others not end up exhausted in their lives.

]]>
<![CDATA[Just an observation of a trend that I've seen in our industry: developers rushing to gain the label of expert in a given technology.]]>
Speaking at AltConf https://pepicrft.me/blog/2019/06/08/altconf-2019 2019-06-08T00:00:00+00:00 2019-06-08T00:00:00+00:00 <![CDATA[

A few days ago I gave a talk at AltConf in San Jose. Even though it’d been a while since I last gave a public talk, it went very well.

Seeing familiar faces in the crowd and talking about something that I’ve been working on for a long time brought me a lot of confidence. I spoke relaxed, not feeling nervous at all, and I was even able to make some jokes and demo Tuist, the open source tool that I’m most proud of maintaining.

This reinforced something that I’m a huge advocate of: sharing without having experienced something yourself is like teaching someone life lessons you haven’t lived. Learning is a long process. We can read many books on a topic, but we’ll struggle to assimilate the knowledge if we don’t get our hands dirty.

I felt my hands were dirty, perhaps not as much as they could be, but enough, I believe, to share some lessons and give the attendees some takeaways for their projects.

Moreover, it was great to meet people with whom I’d only talked over Slack or GitHub. It’s so much nicer to talk to people in person. That made me think that I should jump more into calls to see other developers’ faces and have spontaneous chats. I’m so grateful for having connected with those people thanks to open source projects.

I’m almost landing in Zurich after having flown from San Francisco. The WWDC break is over. I’m excited to see all the improvements that are landing in Apple’s OSes, although I’m not coding apps anymore. There’s been a fair number of improvements that we’ll leverage at Shopify to make developers productive, and that’s exciting.

I wish you all a great weekend.

]]>
<![CDATA[A brief post talking about my experience speaking at AltConf 2019 in San Jose.]]>
Abstracting Info.plist files https://pepicrft.me/blog/2019/05/08/infoplists 2019-05-08T00:00:00+00:00 2019-05-08T00:00:00+00:00 <![CDATA[

If you have worked with Xcode projects before, you might know what Info.plist files are. For those of you who are not familiar with them, they are plain XML files with key-value pairs that define app settings such as the icon or the build number. Below you’ll find an example of the structure of the file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDevelopmentRegion</key>
    <string>$(DEVELOPMENT_LANGUAGE)</string>
    <key>CFBundleDisplayName</key>
    <string>MyApp</string>
    <key>CFBundleExecutable</key>
    <string>$(EXECUTABLE_NAME)</string>
    ...
    <key>AppIdentifierPrefix</key>
    <string>$(AppIdentifierPrefix)</string>
</dict>
</plist>

Info.plist files are created when developers create a new target (e.g. a new iOS application). Most of the entries in the file are necessary, but they are barely modified after the file is created. The entries that developers change the most are the version and build numbers. For others, like the main storyboard or the icon, Xcode provides a UI to change the value.

As you might know, one of the aims of Tuist is abstracting away the details that we believe developers shouldn’t be exposed to. I think those Info.plist values that are barely touched are a perfect candidate for abstraction.

I’ve been thinking about how that abstraction would be, and this is the idea I came up with:

enum InfoPlist {
    case file(Path)
    case dictionary([String: Any])
    static func productDefaults(extend: [String: Any]) -> InfoPlist
}
  • file: If the user’s project already has an Info.plist file and they’d like to continue maintaining it themselves, this is the best option. The target will be generated to point to the local file. Notice that this is the only option that Tuist currently offers.
  • dictionary: If the user wants to define the content of the file in the project manifest and let Tuist generate the Info.plist file, this is the option. When Tuist generates the project, it’ll also generate the file right next to the project, in a directory that contains this and other generated files.
  • productDefaults: As I mentioned earlier, developers barely modify the values in that file. The ones that they usually modify when they release a new version of the app are the build and version numbers. This option tells Tuist to use the default values of the target product, allowing the user to extend them as needed.

Below you’ll find some examples of how the definition of the target would look. Note that, for simplicity, the targets don’t take all the arguments that are required:

// Existing Info.plist file
let watchApp = Target(name: "MyWatchApp", infoPlist: .file("./WatchApp.plist"))

// Generated Info.plist file with the given dictionary
let framework = Target(name: "MyFramework", infoPlist: .dictionary([
  "CFBundleExecutable": "$(EXECUTABLE_NAME)",
  "AppIdentifierPrefix": "$(AppIdentifierPrefix)"
]))

// Generated Info.plist file extending the platform default values.
let app = Target(name: "MyApp", infoPlist: .productDefaults(extend: [
  "CFBundleInfoDictionaryVersion": "6.0",
  "CFBundleVersion": "1.0"
]))

One of the most beautiful features of Tuist, in my humble opinion, is that manifests are written in Swift. That allows us to take advantage of extensions and standard library protocols to simplify the interface:

extension InfoPlist: ExpressibleByDictionaryLiteral {
    init(dictionaryLiteral elements: (String, Any)...) {
        self = .dictionary(Dictionary(uniqueKeysWithValues: elements))
    }
}

extension InfoPlist: ExpressibleByStringLiteral {
    init(stringLiteral value: String) {
        self = .file(value)
    }
}

Thanks to those extensions, we can turn the examples into:

// Existing Info.plist file
let watchApp = Target(name: "MyWatchApp", infoPlist: "./WatchApp.plist")

// Generated Info.plist file with the given dictionary
let framework = Target(name: "MyFramework", infoPlist: [
  "CFBundleExecutable": "$(EXECUTABLE_NAME)",
  "AppIdentifierPrefix": "$(AppIdentifierPrefix)"
])

By giving Tuist more control over those files, we can run validations and verify that they contain the required attributes with the right values. As an example, watchOS extensions have a strict requirement when it comes to the bundle id of the extension. Since that’s an attribute configured in the Info.plist, we could verify that the value is right according to the watch app they are associated with.

Although developers can git-ignore those files because they get generated automatically when they run Tuist, I’d encourage them to keep them in the repository so that developers can check out any revision of the project and compile it with Xcode without having to install Tuist.

An idea that I’m still pondering is how to structure the directory that contains the generated files. Info.plist files will be the first ones living in this directory but I’m sure they won’t be the only ones. Here’s a rough idea that I just had:

./Project.swift
./Generated
   InfoPlists/
     App.plist
     MyFramework.plist

I’ll sleep on the idea and experiment with it to see how it feels in practice. If you have an opinion on it and don’t mind sharing, I’d appreciate it a lot. Don’t hesitate to ping me on Twitter, send me an email, or join Tuist’s Slack, where you can talk to other contributors and maintainers.

I wanted to start coding on my flight ✈️ to Alicante but unfortunately, I forgot to run swift package generate-xcodeproj and now I can’t fetch the dependencies to do some coding 😕.

]]>
<![CDATA[In this blog post I talk about an idea that I'm pondering for Tuist. In the aim of abstracting implementation details from Xcode projects, I think there's an opportunity for Tuist to abstract `Info.plist` files, which are barely modified by developers after they get created.]]>
T-shaped engineers https://pepicrft.me/blog/2019/04/13/t-shaped-engineers 2019-04-13T00:00:00+00:00 2019-04-13T00:00:00+00:00 <![CDATA[

How often have you heard “I’m an iOS engineer”? Every time I hear it, I wonder if that means the engineer is only able to code iOS apps, or that it’s their area of expertise but they are open to engineering for other platforms and with other programming languages.

I used to fall into the former group. Doing something other than iOS or Swift did not motivate me, and I ended up procrastinating when the opportunity to do something else came up. Fortunately that changed and, honestly and professionally speaking, it’s made a huge difference in my day-to-day job.

When I open the laptop every day, I do it without knowing whether that day I’ll be doing Ruby, Javascript, or Swift. I don’t know if I’ll build software for Android, iOS, or React Native. Every day is uncertain. I push myself out of my comfort zone and broaden my points of view and skills. I like it.

I’ve been in teams with experts in different domains, and when some work escaped the domain they considered themselves experts in, they refused to do it. They’d rather ping the developer who works actively on the API, or open an issue for someone to fix that Ruby open source tool that saves them so much time. Aren’t we all engineers capable of leveraging our skills to solve engineering problems?

This continues to be a pattern in many projects and companies. Shopify follows a different approach here. It aims to hire T-shaped engineers: engineers who are really good at one thing, but capable and willing to learn and do many other things. I believe that’s crucial for a company to move fast. The last thing that you want for your project is engineers who add roadblocks when they are asked to do something different.

As an example, I’ve seen my team at Shopify working on the following tasks:

  • Build a command line tool in Ruby
  • Develop an internal web app to coordinate releases to the store.
  • Automate the provisioning of CI hosts to run mobile builds.
  • Investigate and help speed up builds.

Unfortunately, this setup might not be possible in small companies. Constantly jumping into new domains is expensive because developers need to gather context and learn about every new domain they jump into. That requires time.

I’m glad that Shopify believes in this powerful idea, and I’m seeing it materialize in my team. I still consider myself mostly experienced in Swift, yet I love to try and learn other programming languages and paradigms like Ruby, JavaScript, or React.

If you work on a large project, and having specialized engineers is a source of friction when shipping, you’d better revisit your hiring process. T-shaped engineers are a wonderful asset.

]]>
<![CDATA[Specialized engineers usually refrain from working on tasks outside their comfort zone of familiar programming languages and tools. While I believe this might be a good setup for projects that are just starting, I think building teams with t-shaped engineers is crucial for the long-term success of the project.]]>
Xcode updates are fun https://pepicrft.me/blog/2019/04/12/xcode-updates-are-fun 2019-04-12T00:00:00+00:00 2019-04-12T00:00:00+00:00 <![CDATA[

Last week we pushed the latest Xcode version to all the CI hosts. It’s an exciting thing because you see the company’s projects keeping up with the tools, but updates are painful. There hasn’t been an Xcode update that didn’t require work beyond installing it. We cross our fingers every time, hoping for the update to be straightforward, but we know that’s being too optimistic. It never happens.

With Xcode 10.2, teams migrated their apps pretty quickly and we did our job installing Xcode on the CI hosts. Things seemed to be running smoothly until yesterday, when one of the teams tried to release a new version of their app to the store and the build failed. It’s been two days of debugging and, guess what, we haven’t been able to figure it out yet. The app gets archived, but when we try to export it, the xcodebuild command gets killed by some other process that is out of our control.

We have tried many things, but the more we try, the more confused we are:

  • It works locally but not on CI.
  • One project succeeds but the other fails.
  • The archive from CI can be successfully exported locally.

At this point I wish I had access to xcodebuild’s source to track down where the issue is coming from, but I know that won’t be possible. xcodebuild is a black box, whether we like it or not.

Just today, I attended a React conference in Amsterdam, and one of the things that I observed is how great it is that the JavaScript industry has access to the source code of most of the tools that they use. The software that they use might be buggy, but at least they have the option to dive deep into the code and help solve the bugs.

We are not that lucky. Trial and error is our best friend for understanding the closed tooling that Apple provides. I remain optimistic, though, and hope for a future where tools are open like Swift, which would help make our Xcode updates painless.

How has your Friday been?

]]>
<![CDATA[In this blog post I talk about Xcode updates and how painful they can sometimes be.]]>
Reliably installing Xcode https://pepicrft.me/blog/2019/04/06/reliably-install-xcode-versions 2019-04-06T00:00:00+00:00 2019-04-06T00:00:00+00:00 <![CDATA[

Apple hasn’t traditionally done a good job of providing convenient channels for distributing their development tools. A good example of that is Xcode. If you want to install it on your laptop and you have an App Store account, it’s fine: you just need to search for it, click download, and voilà 🎉! You can start coding straight away.

However, if the environment where you are trying to install Xcode doesn’t have a GUI, the installation process is not straightforward. First of all, one needs to know the link to download Xcode from. You can authenticate on the developer portal, go to the downloads section, and copy the link of the Xcode version that you wish to download. Not too much of a hassle, but it’s one step that needs to be done manually. Optionally, you can depend on a third-party website like xcodereleases, which provides an up-to-date list of all the Xcode versions with their download links.

Compared to other apps that you can drag & drop into the system /Applications directory and start using right away, Xcode requires some additional steps:

  • Agreeing to new or not-yet-accepted licenses.
  • Installing missing components.
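Both of those steps can be scripted. Below is a minimal sketch; the two xcodebuild flags are real, but the commands are only printed rather than executed, since running them requires Xcode to be installed and sudo rights for the license step.

```shell
# Sketch: the two post-install steps as xcodebuild invocations.
# Printed rather than executed, since they need Xcode (and sudo) present.
post_install_steps() {
  echo "sudo xcodebuild -license accept"  # agree to new or not-yet-accepted licenses
  echo "xcodebuild -runFirstLaunch"       # install the missing components
}
post_install_steps
```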

xcode-install, a well-known Ruby tool from the community, has been the tool most teams resort to, including Shopify. It drives the whole process, asking the user for input when necessary (like providing the two-factor authentication code to authenticate access to the download page). The tool makes the process more convenient, but in my experience, it doesn’t provide the level of reliability and convenience that one expects when upgrading Xcode versions. It’s not something that we have to do every day, but when it has to be done, it’s fairly frustrating to see the tool dump errors onto the terminal. For that reason, I embarked on developing a new command line tool in Swift to help developers install and upgrade Xcode versions:

  • The tool will be written in Swift. No more dependency on Ruby or on transitive native dependencies whose compilation might fail, as stated in xcode-install’s README.
  • It’s distributed as a self-contained binary. Just install it anywhere on the system and run it.
  • No more authentication required from the installer. The list of versions will be synchronized periodically and placed on the repository as a JSON file, downloads.json. If all of a sudden Apple decides to introduce three-factor authentication on short notice, users of the tool won’t have to change anything; we’ll deal with that for them.
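To illustrate that last point, here is a sketch of how the tool (or anything else) could consume such a file. The schema below is hypothetical; the real downloads.json may use different field names.

```swift
import Foundation

// Hypothetical schema for downloads.json; the real file may differ.
struct XcodeDownload: Codable {
    let version: String
    let url: URL
}

// Decode a (sample) synchronized list and pick the entry to install.
let json = """
[{"version": "10.2", "url": "https://example.com/Xcode_10.2.xip"}]
"""
let downloads = try! JSONDecoder().decode([XcodeDownload].self, from: Data(json.utf8))
print(downloads[0].version) // prints "10.2"
```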

It’s still a work in progress, but you can look into it in this repository.

I hope you are having a great weekend 🥘🌴

]]>
<![CDATA[I started developing a tool, install-xcode, that aims to help developers install and upgrade Xcode versions easily. In this blog post I talk about the motivation behind building it and some of the design principles that I'm embracing.]]>
macOS development and being comfortable https://pepicrft.me/blog/2019/03/29/macos-development 2019-03-29T00:00:00+00:00 2019-03-29T00:00:00+00:00 <![CDATA[

How has your week been? It’s Friday, I’m having a sip of the first morning coffee, and I thought it’d be a good idea to write something down on my blog. For years, I’ve been focusing on building iOS apps and tools around them. Developing for macOS always felt like a remote thing to me, and I somehow postponed it every time the idea of doing something for macOS came to my mind.

Have you ever procrastinated to avoid pushing yourself out of your comfort zone?
That’s exactly what happened to me with macOS. I felt comfortable with iOS and its frameworks, so the idea of confronting new frameworks and paradigms held me back from trying.

I recently started building an app with some friends, Angle, an app that will make it easier for software teams to collaborate when building features. This time I decided it was time to get my hands dirty with macOS. To my surprise, it’s not as different as I thought it would be. There’s nothing to be scared of. I actually realized that I enjoy it a lot. It reminds me of 8 years ago, when I was playing with iOS for the first time. Some APIs are different, some have weird intricacies that are exposed and barely documented, but it’s something one can easily learn with a bit of effort.

How many things are we missing because our comfort zone is holding us back from experimenting with new things? macOS was just one example, but I think this happens to me in other areas of my life. When one grows, you and others put labels on you. You are the person x, that has personality y, that knows z, and that likes w. I have them. I also have prejudices about myself. I know what I’m good at and the areas where I’m not that good. I avoid the latter; no one wants to admit that they are bad at something. I’d rather remain comfortable around the things in my domain.

Introducing myself to macOS has helped me realize that it’s a stupid idea to get stuck in your comfort zone. There are many things to explore and learn, and even though we have grown a personality, and with it a comfort zone, that does not mean that we cannot expand it. I’ll work more and more on feeling uncomfortable.

Have a great weekend!

]]>
<![CDATA[I've been avoiding macOS development for no reason. This blog post is a short reflection on why I think I've been doing it.]]>
Interacting with Xcode projects in Swift https://pepicrft.me/blog/2019/03/15/xcodeproj 2019-03-15T00:00:00+00:00 2019-03-15T00:00:00+00:00 <![CDATA[

There are some scenarios where it might be useful to do some automation on Xcode project files, for example, to detect references to files that don’t exist, or invalid build settings. Even though you could check those things by parsing the file yourself and traversing the structure, you can do it more conveniently with xcodeproj. It not only provides you with an API in Swift, but also ensures that your changes to the project file are persisted in the format that Xcode uses.

In this blog post, I’ll talk about the project and its structure, and jump into some examples where you might consider introducing some automation in your projects with xcodeproj.

Note that the APIs might have changed since I wrote this blog post. If the examples don’t run as expected, I recommend checking out the documentation in the repository.

Xcodeproj, a monolithic format

The Xcode project, which has the extension .xcodeproj (where the name of the library comes from), is a folder that contains several files that define different components of the project. One of the most interesting and complex files is project.pbxproj. You might be familiar with it if you have run into git conflicts on Xcode projects before. This is a property list file, like Info.plist, but with a subtle difference that made implementing xcodeproj a challenge: the file has some custom annotations that Xcode adds throughout the file to make the format more human-readable and (I’m guessing) to facilitate resolving git conflicts. Since the format is not documented, the library required several iterations to approximate Xcode’s format accurately.

The pbxproj file contains a large list of objects, which in xcodeproj are modeled as PBXObject classes. They represent elements such as build phases (PBXBuildPhase), targets (PBXNativeTarget) or files (PBXFileReference). Those objects get a unique identifier (UUID) when Xcode creates them, and it’s used to declare references between objects. For example, a target has references to its build phases using their UUIDs as shown in the example below:

buildPhases = (
  OBJ_593 /* Sources */,
  OBJ_599 /* Frameworks */,
);

The example above is from a project generated by the Swift Package Manager (SPM). SPM has its own convention for naming UUIDs, which doesn’t match Xcode’s default.

For projects like SwiftPM or Tuist, which leverage project generation, it’s crucial to generate the UUIDs deterministically. In other words, every time a project is generated, its objects always get the same UUIDs. Otherwise, Xcode and its build system would invalidate the cache and cause builds to start from a clean state. xcodeproj uses the object’s attributes and the attributes of its parents to make the generated identifier deterministic. Moreover, the generated identifiers are better aligned with the format that Xcode uses.
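As a toy illustration of the idea (not xcodeproj’s actual algorithm), a stable identifier can be derived by hashing an object’s attributes together with those of its parents, so the same input always yields the same 24-character id:

```swift
import Foundation

// Toy sketch: derive a stable 24-character identifier (the length Xcode uses)
// from an object's attributes and those of its parents. The hash (FNV-1a)
// and the field choice are illustrative, not xcodeproj's real scheme.
func deterministicUUID(attributes: [String]) -> String {
    var hash: UInt64 = 0xcbf29ce484222325
    for byte in attributes.joined(separator: "/").utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3
    }
    let hex = String(format: "%016llX", hash)
    // Repeat the 16 hex digits to reach Xcode's 24-character length.
    return String((hex + hex).prefix(24))
}
```

Because the same attributes always produce the same identifier, regenerating the project leaves Xcode’s build cache intact.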

An undocumented format

Unlike Android applications, which are built with Gradle, a build system that is extensively documented, the format of Xcode projects lacks documentation. Apple expects Xcode to be the interface to the projects. Consequently, they barely put effort into documenting the project structure or making it more declarative and git-friendly.

So if the format is not documented, how were we able to develop a Swift library that works as an alternative interface to the projects? First and foremost, thanks to the pioneering work that the fantastic CocoaPods team did in that area. They developed the first ever library to read, update and write Xcode projects, xcodeproj. The library is written in Ruby and is a core component of CocoaPods.

The work of understanding the format consisted pretty much of reverse-engineering how Xcode maps actions to changes in the project files. To give you an example, let’s say that we’d like to understand how linking a new library is reflected in the project files:

  1. Commit the current state of the project so that we can use git to spot the diff.
  2. Change the target settings to link the library.
  3. Run git diff and see what changed.
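The loop can be reproduced with any file; here is a sketch using a toy stand-in for project.pbxproj (the file contents are illustrative):

```shell
# Reverse-engineering loop, sketched on a toy stand-in for project.pbxproj.
demo="$(mktemp -d)"
cd "$demo"
git init -q
# 1. Commit the current state of the project.
printf 'buildPhases = (\n);\n' > project.pbxproj
git add project.pbxproj
git -c user.email=demo@example.com -c user.name=demo commit -qm 'baseline'
# 2. Simulate the change Xcode would make (e.g. linking a library).
printf 'buildPhases = (\n  OBJ_593 /* Sources */,\n);\n' > project.pbxproj
# 3. Inspect what changed.
git diff --name-only
```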

CocoaPods already did some of that work, but that did not prevent us from having to do it as well. For instance, we wanted to expose as optionals the attributes that are optional in projects. How did we know which attributes were optional? We removed them from the project and tried to open the project with Xcode. If Xcode was able to open the project, that indicated that the attribute was optional. If Xcode crashed, it meant that the attribute was required. Can you imagine doing that with every attribute of each object? It was a vast amount of work, but luckily something that we don’t have to do often, because new Xcode versions barely introduce new attributes.

Hands-on examples

I could write a blog post explaining each of the objects and weary you with theory, but I thought it’d be better to take you through some practical examples that you can write yourself to get familiar with the objects. Before we dive into them, we need to create a new Swift executable package where we’ll add xcodeproj as a dependency. Let’s create a folder and initialize a package:

mkdir examples
cd examples
swift package init --type executable

The commands above will create a manifest file, Package.swift, and a Sources/examples directory with a main.swift file where we’ll write our examples. Next up, we need to add xcodeproj as a dependency. Edit the Package.swift and add the following dependencies to the dependencies array:

.package(url: "https://github.com/tuist/xcodeproj.git", .upToNextMajor(from: "6.5.0")),

Replace 6.5.0 with the version of xcodeproj that you’d like to use.

Alternatively, you can use swift-sh, a handy tool that facilitates the definition of Swift scripts with external dependencies. The only thing you need to do is install the tool, which can be done with Homebrew by running brew install swift-sh, and create a Swift script where you’ll code the examples:

#!/usr/bin/swift sh

import Foundation
import xcodeproj // tuist/xcodeproj
import PathKit // kylef/PathKit

That’s all we need to start playing with the examples.

Example 1: Generate an empty project

In this example, we’ll write a few lines of Swift to create an empty Xcode project. Exciting, isn’t it? If you ever wondered what Xcode does when you click File > New Project, you’ll learn it with this example. You’ll realize that, after all, creating an Xcode project is not as complicated as it might seem; you could write your own Xcode project generator. Let me dump some code here and walk you through it right after:

import Foundation
import PathKit
import xcodeproj

// 1. Create the pbxproj
let pbxproj = PBXProj()

// 2. Create groups
let mainGroup = PBXGroup(sourceTree: .group)
pbxproj.add(object: mainGroup)
let productsGroup = PBXGroup(children: [], sourceTree: .group, name: "Products")
pbxproj.add(object: productsGroup)

// 3. Create configuration list
let configurationList = XCConfigurationList()
pbxproj.add(object: configurationList)
try configurationList.addDefaultConfigurations()

// 4. Create project
let project = PBXProject(name: "MyProject",
buildConfigurationList: configurationList,
compatibilityVersion: Xcode.Default.compatibilityVersion,
mainGroup: mainGroup,
productsGroup: productsGroup)
pbxproj.add(object: project)
pbxproj.rootObject = project

// 5. Create xcodeproj
let workspaceData = XCWorkspaceData(children: [])
let workspace = XCWorkspace(data: workspaceData)
let xcodeproj = XcodeProj(workspace: workspace, pbxproj: pbxproj)

// 6. Save project
let projectPath = Path("/path/to/Project.xcodeproj")
try xcodeproj.write(path: projectPath)

Let’s break that up and analyze it block by block:

  1. A PBXProj represents the project.pbxproj file contained in the project directory. The constructor initializes it with some default values expected by Xcode and an empty list of objects.
  2. PBXGroup objects represent the groups that one can see in the project navigator. Projects require two groups to be defined: the mainGroup, which represents the root of the project and to which other groups will be added as children, and the productsGroup, which is the group where Xcode creates references for all your project products (e.g. apps, frameworks, libraries).
  3. Projects and targets need what’s called a configuration list, XCConfigurationList. A configuration list groups configurations like Debug and Release and ties them to a project or target. The call to the method addDefaultConfigurations creates the default build configurations, represented by the class XCBuildConfiguration. An XCBuildConfiguration object has a hash with build settings, and a reference to an .xcconfig file, both optional.
  4. Next up, we need to instantiate a PBXProject, which contains project settings such as the configuration list, the name, the targets, and the groups.
  5. Last but not least, we need to create an instance of XcodeProj, which represents the project that is written to disk. If you explore the contents of any project, you’ll see that it contains a workspace. Therefore, the XcodeProj instance needs the workspace attribute to be set with an object of type XCWorkspace.
  6. Changes need to be persisted to disk by calling write on the project and passing the path where we’d like to write it.

Notice that the objects that are created to be part of the project need to be added to the pbxproj.

If you run the code above, you’ll get an Xcode project that opens in Xcode. However, it does not contain any targets or schemes that you can work with. The goal of this example was to give you a sense of what the xcodeproj API and Xcode projects look like. Using xcodeproj to generate your company’s projects would require much work, so unless there’s a good reason for it, you can use tools like XcodeGen or Tuist instead. Those tools allow you to define your projects in a different format, for example, YAML or Swift, and they convert your definition into an Xcode project. The resulting definition file is much simpler and more human-readable than Xcode’s .pbxproj.

Example 2: Add a target to an existing project

Continuing with examples that help you understand the project’s structure, we’ll add a target to an existing project. Like I did with the preceding example, I’ll introduce you to the code first:

import xcodeproj
import PathKit

// 1. Read the project
let path = Path("/path/to/Project.xcodeproj")
let project = try XcodeProj(path: path)
let pbxproj = project.pbxproj
let targetName = "MyFramework"
let pbxProject = pbxproj.projects.first!

// 2. Create configuration list
let configurationList = XCConfigurationList()
pbxproj.add(object: configurationList)
try configurationList.addDefaultConfigurations()

// 3. Create build phases
let sourcesBuildPhase = PBXSourcesBuildPhase()
pbxproj.add(object: sourcesBuildPhase)
let resourcesBuildPhase = PBXResourcesBuildPhase()
pbxproj.add(object: resourcesBuildPhase)

// 4. Create the product reference
let productType = PBXProductType.framework
let productName = "\(targetName).\(productType.fileExtension!)"
let productReference = PBXFileReference(sourceTree: .buildProductsDir, name: productName)
pbxproj.add(object: productReference)
pbxProject.productsGroup?.children.append(productReference)

// 5. Create the target
let target = PBXNativeTarget(name: "MyFramework",
buildConfigurationList: configurationList,
buildPhases: [sourcesBuildPhase, resourcesBuildPhase],
productName: productName,
product: productReference,
productType: productType)
pbxproj.add(object: target)
pbxProject.targets.append(target)

try project.write(path: path)

  1. The first thing that we need to do is read the project from disk. XcodeProj provides a constructor that takes the path to the project directory; xcodeproj decodes the project and its objects. Notice that we are assuming that the pbxproj contains at least one project. Unless the project has been messed up, that’s always the case.
  2. Like we did when we generated the project, targets need configurations. We are not defining any build settings, but if you wish, I recommend exploring the constructors of the classes. You’ll get to see all the configurable attributes.
  3. A target has build phases. xcodeproj provides classes representing each of the build phases supported by Xcode, all of them following the naming convention PBX<Name>BuildPhase. In our example, we are creating two build phases, for the sources and the resources.
  4. Targets need a reference to their output product. It’s the file that you see under the Products directory when you create a new target with Xcode, and it references the product in the derived data directory. Since we are creating the target manually, we need to create that reference ourselves. For that, we use an object of type PBXFileReference. The reference is initialized with two attributes: the name, and the sourceTree, which defines the parent directory or the association with its parent group. You can see below all the possible values that sourceTree can take. In the case of the target product, the file must be relative to the build products directory. Don’t forget to add the product as a child of the project’s products group.
public enum PBXSourceTree {
  case none
  case absolute // Absolute path.
  case group // Path relative to the parent group.
  case sourceRoot // Path relative to the project source root directory.
  case buildProductsDir // Path relative to the build products directory.
  case sdkRoot // Path relative to the SDK directory.
  case developerDir // Path relative to the developer directory.
  case custom(String) // Custom path.
}

With all the ingredients to bake the target, we can create the instance and add it to the project. Write the project back to disk and open it. Voilà 🎉! A new target shows up in your project.

Note: A pbxproj can contain more than one project when an Xcode project is added as a sub-project of another project. In that case, Xcode adds the project as a file reference and then adds the reference to the pbxproj.projects attribute.

Example 3: Detect missing file references

If you have resolved git conflicts in your Xcode projects before, you might already know that sometimes you end up with build phases that reference files that don’t exist. Most times, Xcode doesn’t let you know about it, and you end up with a project in a not-so-good state. What if we were able to detect that before Xcode even tries to compile your app?

import xcodeproj
import PathKit

let path = Path("/path/to/project.xcodeproj")

let project = try XcodeProj(path: path)
let pbxproj = project.pbxproj
let pbxProject = pbxproj.projects.first!

/// 1. Get build phases files
let buildFiles = pbxproj.nativeTargets
  .flatMap({ $0.buildPhases })
  .flatMap({ $0.files })

try buildFiles.forEach { (buildFile) in
  /// 2. Check if the reference exists
  guard let fileReference = buildFile.file else {
    fatalError("The build file \(buildFile.uuid) has a missing reference")
  }

  /// 3. Check if the reference points to an existing file
  let filePath = try fileReference.fullPath(sourceRoot: path.parent())
  if filePath?.exists == false {
    fatalError("The file reference \(fileReference.uuid) references a file that doesn't exist")
  }
}
  1. Projects have an attribute, nativeTargets, that returns all the native targets of the project. From each target, we can get its list of build phases by accessing the attribute buildPhases. Build phases are objects of the type PBXBuildPhase, which exposes an attribute, files, with the files that are part of the build phase. Build phase files, build files, are represented by the class PBXBuildFile.
  2. The first thing that we do is check whether the file reference exists. Notice that we are accessing the file attribute of the build file. That’s because a build file is a type that works as a reference to a file from your project groups. The same file, represented by its PBXFileReference object, can be referenced from multiple build phases, resulting in multiple PBXBuildFiles but just one PBXFileReference. If the reference doesn’t exist, it probably means that we didn’t resolve a git conflict correctly and removed a file reference that was still being referenced by a build file.
  3. After checking that the file reference exists, we check that it points to an existing file. We can do that by obtaining the absolute path, calling the method fullPath on the file reference. Notice that we need to pass a sourceRoot argument, which is the directory that contains the project.

Example 4: Detecting if an Info.plist is being copied as a resource

Another typical scenario when working with Xcode projects is adding a file to the copy resources build phase that shouldn’t be there. An excellent example is copying the Info.plist file. Have you been there before? Fortunately, we can leverage xcodeproj to detect that as well.

import xcodeproj
import PathKit

let path = Path("/path/to/project.xcodeproj")

// Read the project
let project = try XcodeProj(path: path)
let pbxproj = project.pbxproj
let pbxProject = pbxproj.projects.first!

try pbxproj.nativeTargets.forEach { target in
  // 1. Get the resources build phase
  let resourcesBuildPhase = try target.resourcesBuildPhase()

  resourcesBuildPhase?.files.forEach { buildFile in
    guard let fileReference = buildFile.file else { return }

    /// 2. Check if the path or name references an Info.plist file
    if fileReference.path?.contains("Info.plist") == true ||
        fileReference.name?.contains("Info.plist") == true {
      fatalError("The target \(target.name) resources build phase is copying an Info.plist file")
    }
  }
}
  1. We can obtain the resources build phase of a target by calling the convenience method resourcesBuildPhase(). If the build phase doesn’t exist, it returns nil.
  2. As we did in the previous example, we get the file reference of each build file, and we check whether the name or the path contains Info.plist. If they do, we let the developer know.

Note that checking whether the file name is Info.plist is not enough, because it might be a file that is not the target’s Info.plist. If we wanted to be more precise, we’d need to check whether it references the same file as the INFOPLIST_FILE build setting. For the sake of simplicity, we only check one thing in the example.

Ensuring a healthy state in your projects

As we’ve seen, with a few lines of Swift, we can implement relatively simple checks that can be run as part of the local development or as a build step on CI to make sure that your projects are in a good state. Thanks to xcodeproj you can do it in a language that you are familiar with, Swift.

Having your projects in a good state is crucial to make your builds reproducible and avoid unexpected compilation issues that might arise later as a result of a bad merge that went unnoticed.

Projects powered by xcodeproj

Before closing the blog post, I’d like to give you some examples of tools that leveraged xcodeproj to make your life easier as a developer.

  • Tuist: Tuist is a tool that helps you define, maintain, and interact with your Xcode projects at any scale. It’s the project that motivated the development of xcodeproj.
  • Cake: A delicious, quality‑of‑life supplement for your app‑development toolbox.
  • XcodeGen: A Swift command line tool for generating your Xcode project
  • AutoEnvironment: Tool to automatically generate Environment.swift based on Xcode project.
  • Deli: Deli is an easy-to-use Dependency Injection (DI) framework.
  • Accio: A dependency manager driven by SwiftPM that works for iOS/tvOS/watchOS/macOS projects.
  • xccheck: A diagnostic tool for Xcode projects.
  • expel: Automatically move your Xcode project build settings to xcconfig files.
  • xcodemissing: A tool to find and delete files that are missing from Xcode projects.
  • xcodeproj-modify: Adds a Run Script phase to an Xcode project.
  • Templar: A template generator.

I hope after reading this blog post you have a better sense of how Xcode projects are structured, and how even though Xcode doesn’t expose any API for you to read/update your projects, you can leverage a tool like xcodeproj to do so.

]]>
<![CDATA[This blog post is an introduction to the format of Xcode projects and xcodeproj, a Swift library that helps read and update Xcode projects in Swift. The post contains a few hands-on examples for developers to experiment with the library.]]>
Open source and trust https://pepicrft.me/blog/2019/03/14/open-source-and-trust 2019-03-14T00:00:00+00:00 2019-03-14T00:00:00+00:00 <![CDATA[

As I wrote in this blog post, one of the things that I value the most about writing software is being able to meet and collaborate with people. Coincidentally, I recently had the opportunity to meet like-minded developers who started getting involved in Tuist.

I’m a person who trusts other people by default, and therefore that’s what I did, since I could see they were really interested in the project and wanted to contribute to make it work for their projects. Seeing someone that you don’t know trusting you might take you by surprise at first, but it gives you the necessary confidence to feel part of the project. One might think that I’m risking the project by doing that; however, that’s a risk I’m willing to take. The most fruitful collaboration, and thus contributions, happen when there’s trust among the people involved in the project. If by any chance a person turns out to be untrustworthy, we can take a step back. Luckily, I haven’t had an experience like that (yet).

I’ve personally tried to contribute to projects where the maintainers didn’t seem interested in receiving external contributions. I could feel it in the friction that they added and, sometimes, a lack of empathy toward a new contributor. They turned all the energy that I had for the project into a pure lack of interest. I experienced the same when I came across a project that was driven with a fair amount of ego and where recognizing each other’s work was inconceivable.

With Tuist, I did things differently and I’m glad that I made that decision:

  • Repositories are not under my GitHub user, but under an organization to which contributors and maintainers belong.
  • Anyone can join the Slack channel and talk to other users of the project.
  • Developers get push access after their first PR gets merged.
  • Praises and thanks have space in pull requests and issues.
  • We default to trust and let people prove us wrong.

Trusting by default has allowed me to work with Oliver, Marcin, and Kassem and some other like-minded developers. We can brainstorm and overcome challenges together. Challenges that many developers are facing and for which unfortunately, Apple hasn’t offered a solution yet.

If you maintain an open source project and would like it to thrive, I recommend defaulting to trust when new contributions land in the project. It’s a subtle thing, but it has a huge impact.

]]>
<![CDATA[Trust is key for open source projects to thrive. In this blog post I explain what trust has meant for Tuist.]]>
Automated tests for a Swift CLI tool with Cucumber https://pepicrft.me/blog/2019/03/13/cucumber 2019-03-13T00:00:00+00:00 2019-03-13T00:00:00+00:00 <![CDATA[

As you might already know, I’m devoting part of my free time to building Tuist, a command line tool that helps Swift developers maintain their Xcode projects regardless of their scale. Since we added the first Swift file to the project, having a good test suite has been one of our key design principles, to ensure that features do what they are supposed to do and that new versions are backward compatible (unless it’s impossible to achieve). If companies and developers start using Tuist on a daily basis, the last thing that we want is to disturb their work as a consequence of a buggy or breaking update.

There’s nothing more annoying than not being able to do your work because the tool that you are using doesn’t work as expected.

The project initially contained a target with a decent list of unit tests. This allowed us to test each piece of code in isolation, but it didn’t bring enough confidence for us to release new versions of Tuist. If unit tests were not enough, what else could we do? We could have adopted a manual approach: before releasing a new version, we could have asked users to try it before it goes out into the wild. But beta testing is a tedious process, requires an effort on the users’ side, and slows down the release due to all the back and forth that it entails.

The approach that we decided to take, and about which I’d like to talk, is based on the Ruby BDD testing framework Cucumber. Before I jump into details about why we made that choice, I want to show you an example of a test run output:

[Screenshot of a Cucumber test run]

As you can see in the example above, the steps of the scenario that we are testing can be read as if you were reading a story. At the end of the day, they are user stories. We describe the scenarios as a set of steps that are in fact sentences, and Cucumber maps those steps into Ruby code that gets executed. Cucumber offers the expressiveness that XCTest doesn’t. The latter is, in my opinion, more suitable for unit tests where having a more verbose API and output makes more sense.
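To give a feel for that mapping without pulling in Cucumber itself, here is a toy reimplementation of the idea: a regex pattern is registered for a plain-English step, and matching a sentence dispatches to the associated Ruby block (the tuist command string below is illustrative):

```ruby
# Toy sketch of Cucumber's step dispatch. Real Cucumber provides
# Given/When/Then; this reimplements the idea in a few lines.
STEPS = {}

def step(pattern, &block)
  STEPS[pattern] = block
end

step(/^I initialize a (\w+) application named (\w+)$/) do |platform, name|
  # In a real step definition, this would shell out to the tuist binary.
  "tuist init --platform #{platform} --name #{name}"
end

def run_step(text)
  STEPS.each do |pattern, block|
    match = pattern.match(text)
    return block.call(*match.captures) if match
  end
  nil
end

puts run_step("I initialize a macos application named Test")
```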

Before introducing Cucumber to the project, I was a bit skeptical about adding Ruby code to the mix. I’m comfortable writing Ruby, but what about contributors to the project? It turns out that people can quickly understand how Cucumber works. Whenever we see a use case with potential to be tested, we include a fixture project that we run the tests on. We keep track of all the fixtures in this README, where each fixture includes a description of what the project is like.

Here are some examples of fixtures that we run automated tests on:

  • ios_app_with_static_libraries: This application provides a top level application with two static library dependencies. The first static library dependency has another static library dependency so that we are able to test how tuist handles the transitiveness of the static libraries in the linked frameworks of the main app.
  • app_with_frameworks: A slightly more complicated project that consists of an iOS app and a few frameworks.

The automated tests see the Swift Package Manager as a tool that builds the object under test, the Tuist binary. The only interactions that they can have with Tuist are through the CLI, the standard output and error, and the generated output artifacts. Something like this could also be achieved by defining a tests target with the Swift Package Manager, but I find it odd that the test runners, SwiftPM or Xcode, would run tests that depend on themselves. When we describe the tests, we put ourselves in someone else’s shoes:

I’m a user who has just installed Tuist and I’d like to initialize a project.

I’d expect to be able to initialize the project with Tuist and get an Xcode project with a target that I can build. We’d describe that scenario in Cucumber like this:

Feature: Initialize a new project using Tuist

  Scenario: The project is a compilable macOS application
    Given that tuist is available
    And I have a working directory
    When I initialize a macos application named Test
    Then tuist generates the project
    Then I should be able to build the scheme Test

Imagine that we introduce a change that doesn’t break the project generation, for which we already have unit tests, but for some reason the generated Xcode project cannot be built. Do you think that would be a good experience for the user? I doubt it. Luckily, our automated test would fail immediately and raise a flag.

I believe being able to release new software versions with confidence is crucial to moving fast without breaking things. When developers use your software, they trust the software and the people behind it. They use it because it brings value to them, and they’d like to continue using it as long as it continues to work reliably. If we are not able to meet that expectation and release with confidence, we are putting ourselves at risk of breaking the trust between our users and us.

This is just an example of how we are bringing that confidence to Tuist, and it’s certainly not the only one. The next time you merge a PR or release a new version, ask yourself if you feel confident enough. If you don’t, you’d better adjust things in your project.

]]>
<![CDATA[In this post, I explain how we are able to introduce changes and release new versions of Tuist with the confidence of not introducing bugs or breaking things.]]>
Software and people https://pepicrft.me/blog/2019/03/01/software-and-people 2019-03-01T00:00:00+00:00 2019-03-01T00:00:00+00:00 <![CDATA[

I’m currently flying back to Berlin, somewhere over the Atlantic ocean. A perfect time (without an Internet connection) for some reflection. The one that I made this time has to do with my motivations when it comes to writing software. Guess what? One of the things that motivates me the most about writing software is getting to know and meet people that I would not meet otherwise.

I’ve been in Ottawa for 2 weeks. It’s the city where most of my team works from, so I try to visit them as much as I can. Most of my interactions with them happen over Slack or GitHub. You probably know what your colleagues are on those platforms: users. They have an avatar, a name, and write more elaborate messages than the ones a bot could ever write. Some people like to see their colleagues as just more intelligent bots. I don’t. I spend many hours a day working with computers (at the very least 8), so I don’t want to feel like I’m talking to a computer 8 hours a day.

That’s why whenever I have the opportunity, I break the ice and try to make the impersonal personal. I propose activities to my colleagues that have nothing to do with work. If people are hesitant about it, I completely understand and I don’t insist. People might want to keep some distance from the people that they work with, and that’s understandable.

This time in Ottawa I learned that people in my team love philosophy and skiing and that some love traveling the world. I know more about them, and therefore, I can have more casual conversations with them about the things that they love the most. Isn’t it great?

I experience the same in the open source space. Recently, I met wonderful people who happened to have an interest in an open source project that I gladly bootstrapped, Tuist. I invited Kassem, Marcin, and Oliver to the organization and Slack. Since then, we’ve been collaborating, having ideas together, proposing improvements. Isn’t it a beautiful experience? Software enabled that. I’m starting to appreciate more the opportunity to meet people thanks to software.

I don’t imagine myself writing software without any human component around it. I think I’d end up writing software that only machines would find useful, not humans. It’d be depressing, and that’s why I guess I push back when we treat each other in such impersonal ways. I also take the initiative to keep myself and the people around me from seeing software, and the people that make it possible, as just bytes, patterns, paradigms, architectures, technologies…

I firmly believe that great software is a result of humans being humans. Machines and the code that we put on them allow it, but that’s just the means.

I’m glad that software development gave me that opportunity and I’ll do my best to keep it alive. Thanks to software, I worked with amazing people, some of whom I consider great friends. I’m grateful to have the opportunity to work with lovely people at Shopify, from whom I can learn a lot and with whom I can build incredible things. And last, but not least, I’m glad that open source is connecting me with people from so many different backgrounds who are helping me grow personally and professionally.

]]>
<![CDATA[A reflection on what's one of the most important things to me when building software, the people that make it possible.]]>
Turning negativism into positivism https://pepicrft.me/blog/2019/02/21/turning-negativism-into-positivism 2019-02-21T00:00:00+00:00 2019-02-21T00:00:00+00:00 <![CDATA[

Today, while I was having a chat with the person who used to be my manager, he brought up an idea that resonated a lot with me: try to always turn sources of negativity into opportunities to bring positivity into the team. Let me give you an example.

My team at Shopify builds tools and infrastructure for mobile developers, and as part of our job we have to do some support work and respond to requests which are sometimes outside our area of responsibility. It’s easy to feel annoyed by that and start complaining about your colleagues, believing that they see your team as being there to put out fires and have answers for everything. However, you can take the opportunity to show empathy, put yourself in the other person’s shoes, and think about whether, within your domain, there’s something you could do to make that person’s life easier.

It helps you build trust between you, your team, and the other person. Moreover, you are preventing you and your team from entering a spiral of negativity that can drain your energy. I think it’s an easy exercise to do, yet one with beneficial results for you and for others.

It’s a powerful idea that applies not only to work but to life. In a world where negative things inevitably happen, it’s a simple exercise that we should do as much as we can.

The next time I start feeling any annoyance or negativity around me, I’ll stop and ask myself: What’s the most positive thing I could take out of this?


I hope you are all having a wonderful week. I’m spending a couple of weeks in Canada 🇨🇦, catching up and working closely with my colleagues, whom I mostly see on Slack or GitHub.

]]>
<![CDATA[Short reflection on how beneficial it can be turning negativism into something positive.]]>
Deep linking into macOS apps https://pepicrft.me/blog/2019/02/13/deep-linking-into-macos-apps 2019-02-13T00:00:00+00:00 2019-02-13T00:00:00+00:00 <![CDATA[

Today I happened to play with deep links on macOS. Having worked a lot on the iOS platform, I assumed things would be similar. To my surprise, they weren’t.

As you might know, websites can include a manifest file that associates a website with a given application on iOS. When the webview detects that the website is deeplinkable, it suggests that the user continue the navigation inside the app. Seamless, isn’t it? There could arguably be more flexible and better ways of dealing with deeplinks on iOS, but I think the approach that iOS currently provides is enough for the needs of most projects. The example below shows the format of that apple-app-site-association manifest file:

{
    "applinks": {
        "apps": [],
        "details": [
            {
                "appID": "com.mycompany.App",
                "paths": ["*"]
            }
        ]
    }
}

Things are pretty different on macOS. First of all, macOS apps can’t be associated with a website through a manifest. What they can do, though, is define URL schemes that they support. When a webview tries to load a URL whose scheme matches any of the schemes defined by the installed apps, it’ll launch the app, passing the URL to it. One might think that’s all one needs, right? Let me refute that with an example of a bad user experience.
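For reference, a macOS app declares the schemes it handles under the CFBundleURLTypes key inside its Info.plist’s top-level dictionary. A minimal sketch, where the identifier and scheme name are placeholders:

```xml
<key>CFBundleURLTypes</key>
<array>
  <dict>
    <key>CFBundleURLName</key>
    <string>com.mycompany.App</string>
    <key>CFBundleURLSchemes</key>
    <array>
      <string>scheme</string>
    </array>
  </dict>
</array>
```

With this in place, the system launches the app for any URL of the form scheme://… if, and only if, the app is installed.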

A user navigates a website or a service that can deeplink into a macOS client. At some point in the navigation, the site triggers the deeplink, and it turns out that the user doesn’t have the app installed. The system errors because the request was not handled by anyone, and the browser can’t show an error because it doesn’t know what happened after the link was fired off.

The browser can just trigger the request, hoping for the OS to handle it gracefully. Unfortunately, the system interrupts the flow when there isn’t an app to process the request.

One solution to this problem could come from Apple. They could align macOS with iOS and allow defining associations between websites and desktop apps. Given that they are putting effort into aligning macOS with the other platforms, I would not be surprised if that happens any time soon.

Another solution, though not perfect, can be implemented using some Javascript and HTML. The first thing that I tried was changing the website location using Javascript and detecting, with a timeout, whether the browser was able to replace the location. That solution worked for Chrome and Firefox but not for Safari. Regardless of whether the system is able to handle the request, Safari navigates to an invalid page, in which our Javascript session with the timeout gets wiped out.

After some reading, I found a little hack that also works in Safari. We can embed an <iframe> element in which we load the deeplink. By doing that, we trigger the processing of the deeplink by the system and prevent Safari from navigating to an invalid page. Not ideal, but it works. If you are familiar with React, here is how I ended up wrapping everything into a component:

class RedirectShowPage extends React.Component {

  render() {
    let body = (
      <p>
        <b>Redirecting to the app...</b>
        <br />
        If you don't have the app installed, you can download it using the
        following <a href="link_to_app">link</a>.
      </p>
    );
    const path = this.props.path;
    const location = `scheme://${path}`;
    return (
      <Page>
        {body}
        <iframe ref="iframe" style={{ visibility: "hidden" }} src={location} />
      </Page>
    );
  }
}

I tried to figure out if it’d be possible to detect when the deeplink could not be handled, but unfortunately, that’s something neither the browser nor the system can help us with.


]]>
<![CDATA[Some comments on the state of the art of macOS handling deeplinks.]]>
GitHub as an organization hub https://pepicrft.me/blog/2019/02/08/github-organization-hub 2019-02-08T00:00:00+00:00 2019-02-08T00:00:00+00:00 <![CDATA[

An idea that I’ve been pondering lately is using GitHub for organizing myself. In the last few years, I’ve tried several approaches to organize myself: Trello boards, todo apps such as Todoist, plain text files. None of them worked well for me. I realized that most of my work happens on GitHub, so I found it very annoying to have to create references between the tasks and the work on GitHub. For example, if there is an open source issue that I want to address the day after, I have to create a task and then make sure that I don’t forget to add the link to the issue; otherwise the task would lack some context.

I tried those services that leverage webhooks to synchronize worlds (Zapier is an example), but it felt like too much to add another tool to the mix. Moreover, if you have ever worked with webhooks, you might know how unreliable they are. You create an issue on GitHub and it doesn’t show up in the todo app, or you complete the task and the GitHub issue doesn’t get closed. Thinking about potential synchronization issues made me feel uncomfortable.

Today I read this announcement from GitHub, where they announced personal projects. You can use projects, a feature that has been on the platform for a long time, but associated with your user. You can link the projects to the repositories you are involved with. In my case, I can see three projects: personal, Shopify, and open source:

  • Personal: I’d create a repository, perhaps named todo, where I can have all life-related tasks. I’d move all my tasks from Todoist into that repository. I get exactly the same features that I use from Todoist: labels, comments, attachments. Furthermore, I don’t have to pay for any subscription because GitHub now offers private repositories for free.
  • Shopify: I’d link this project to all the internal repositories that I work on the most. We are already using GitHub issues to keep track of the work that we are doing or that needs to be done, so I only need to create tasks for those issues and move them along the project boards. Re-organizing these tasks is something that I’d do first thing in the morning: What should I focus on today?
  • Open source: In this one I’d group all the open source work that I do, pretty much the repositories from Tuist.

It might sound geeky to use GitHub as a platform for self-organization, but I don’t see why not. I think GitHub has all the elements that are necessary for that. I’m even considering building a simple todo client for iOS, written in React Native, that uses GitHub as a backend. Can you imagine an app like Todoist that persists your tasks into a GitHub repository/project?

I’m going to set up everything in the next few days and see if it works, or maybe the idea is not as great as I thought it would be.

How do you organize yourself?

]]>
<![CDATA[With the recent GitHub announcement of personal projects, I'm considering using GitHub as a todo platform where I can not only keep track of work-related tasks, but also personal ones. In this brief blog post I talk about how I used to organize myself, and why I think GitHub projects might suit my needs well.]]>
The motivations behind building Tuist https://pepicrft.me/blog/2019/02/02/tuist-motivations 2019-02-02T00:00:00+00:00 2019-02-02T00:00:00+00:00 <![CDATA[

It’s been a long time working on xcodeproj and Tuist, two open source projects that I’m very proud of. Before I started working on xcodeproj, the library that Tuist depends on, I’d worked on other open source projects that were Swift libraries. They had a much smaller scope and were intended to be used as helpers and time savers in iOS apps.

The motivation to build Tuist came to me when I was working at SoundCloud. We handled the growth of the project, which brought an increase in build times, by splitting up the project into smaller projects and frameworks. It was back then when I realized how hard and cumbersome maintaining Xcode projects can be. Before I moved on from the company, there were around 8 frameworks, each with 3 targets within a project. Most of them were very similar, but we could barely reuse anything between them, just the build settings. Not only was it cumbersome, it was also an error-prone setup. From time to time, we got linking errors and compilation issues caused by small changes that seemingly had no relation to the errors raised. I then set out to build Tuist, with the goal of making it easier to work with Xcode projects at any scale and with any level of experience.

I wanted to build Tuist in Swift, but I needed a way to read, update and write Xcode projects, like CocoaPods had with xcodeproj. Unfortunately, there wasn’t such a thing in the Swift community, so I worked on the Swift sibling of that library, xcodeproj. It was while I was working on the library that Yonas opened a PR on the repository. How come someone was already using the library when it was not ready to be used yet? After a quick chat where I asked him what led him to use the project, I realized that he was building an Xcode project generator, something that, to some extent, overlapped with what I wanted to achieve with Tuist.

I felt a wave of sadness. Someone had had a similar idea while I was doing the groundwork for that idea to become real. After sleeping on it for a few nights, I decided to pause the idea of Tuist and rather focus on releasing the first version of xcodeproj, which would support the generator that Yonas was building, XcodeGen, as well as any other tools that the community decided to build. During that time I realized two things. The first one was that, even though XcodeGen had things in common with the ideas that I had for Tuist, the goals were completely different. The second realization was that by giving up on the idea of building Tuist, I lost the motivation for building xcodeproj. I was building a library with the plan of using it, but I had decided not to.

An inner voice told me to make Tuist real, but another pushed the idea back so as not to seem like competition with XcodeGen. The evolution of XcodeGen continued to prove to me that we were aiming for different goals and principles and that there was no reason to feel bad about building Tuist. On one side, XcodeGen was coming up with a declarative interface for defining projects, taking the opportunity to simplify and ease the definition of projects. On the other side, Tuist aimed to abstract the complexities of Xcode projects at any scale, and to make it easier not only to generate projects but also to build, test and release them.

Another area in which both projects diverge is the design principles. XcodeGen prefers configuration over conventions. Most of the attributes that are supported by Xcode projects are also supported by XcodeGen. If you need a tool that helps you define and generate Xcode projects with no conventions imposed by the tool, XcodeGen is your tool. On the other side, Tuist compares to Rails. It comes with simple but strong conventions, weakly held, which are favored over configuration. These conventions abstract developers from common complexities, like linking dependencies, and encourage them to follow good practices.

As you might know, many people criticize Rails for being so opinionated. I’ve seen similar comments on issues and pull requests in the Tuist repository. They are understandable, so I take the opportunity to introduce those people to XcodeGen and encourage them to use it.

I’ve had a few downs while working on Tuist, times when I thought I’d rather stop working on it and work on a project that people would like to use. Seeing people decline to use it, or consider the use of other tools, can be demotivating. It took me some time to accept that as something normal and not let it impact my motivation for the project. It also took me time to accept that working on a long-term project, like Tuist, means that you get a moment of hype when you first release and announce it, when your motivation goes to its highest peak, and then it fades away. I was too used to the excitement of publishing tiny libraries and frameworks fairly frequently, which helped feed my ego.

I’d love Tuist to be thoroughly designed and to continue to stay firm in its conventions. I’m trying hard to get people involved with the project and make them feel as much a part of it as I do. A few days back, I felt glad when I checked my GitHub notifications on Octobox and saw that contributors had been working on new features for the project while I’d been on vacation. It’s a moment when you see that all the work that you did bootstrapping the project, creating a website, writing documentation and some other tasks, pays off.

I’m very excited to see all the ideas and features that are in the backlog for Tuist. To give you a sneak peek of what’s coming, a contributor is working on supporting static transitive dependencies. This will allow configuring dependencies and their dependencies as static libraries with resources that will get automatically bundled and copied into the right product directories. Changing between dynamic and static linking will be easier than ever after we merge this feature in.

Moreover, two contributors are working on re-structuring the project targets to make the project generation logic reusable. They found it practical and they’d like to extend it with their own additions. The current generation logic is very coupled to Tuist’s API, so they are making it more agnostic.

I’m working on adding two commands, build and test, so that you can build and test your projects by just running tuist build and tuist test in the directory where the project is defined. Most of the xcodebuild arguments will be inferred for you. What excites me the most about these features is that developers won’t have to maintain Fastfiles or depend on Fastlane anymore. Another thing that I find exciting is that I’m porting the well-known xcpretty to Swift. I’ll most likely extract it into a separate repository so that developers can use it in their own projects.

Slowly but steadily, the baby is growing. We want to make sure that we are taking steps in the right direction and that we continue to be aligned with our values. Moreover, we keep improving our test suite, to which we recently added acceptance tests with Cucumber that test real scenarios. With projects and teams already depending on Tuist in their workflows, it’s crucial to make sure the tool works reliably and that minor versions don’t break their projects.

It seems like it was yesterday when I ran swift package init to bootstrap the project. I can’t wait to see all the features that we’ll land on the project.

]]>
<![CDATA[Tuist is my most beloved open source project. In this blog post I touch on the motivations that led me to build it.]]>
Wrapping up 2018 📦 https://pepicrft.me/blog/2018/12/23/wrapping-up-2018 2018-12-23T00:00:00+00:00 2018-12-23T00:00:00+00:00 <![CDATA[

It’s been almost 365 days since I made a similar reflection on 2017. This time it’s 2018’s turn, a year with significant changes in my life, new countries that I visited for the first time, and some ups and downs that made me stronger and helped me grow as a person. In this blog post, I’d like to reflect on some remarkable things that happened to me this year.

🛒 Shopify

I started the year by joining the mobile tooling team at Shopify. Having done iOS development for the last 4 years, this was a shift in my career, where developers became my users, and I’d write Rails and Ruby instead of Swift. During this year, I was part of the design and development of internal tools that are used across the mobile teams at the company. Moreover, my Ruby code, which initially looked pretty much like Swift, improved a lot. Also, I got to learn more about Rails, which we use to build an internal website that I’m looking forward to telling you more about soon.

I’ve also learned a lot about how to work with a remote team. Working remotely was an exciting thing that I had read a lot about. However, I was not fully aware of all the challenges that it comes with. I feel I’m better now at communicating and coordinating work with the rest of my team.

In retrospect, this was a great step in my career, and I can’t wait to continue to learn and grow.

🧠 Psychologist

As I already wrote in my “What a psychologist helped me realize” blog post, I went through some issues related to stress and anxiety. I was not balancing my time well, and I was putting too much time and energy into work-related stuff. Besides my full-time job, I was doing open source, side projects, writing blog posts. I ended up losing motivation for things that I used to like a lot, like doing sport. Everything that I did outside work had to do with work in some way or another. I read books related to software engineering, I talked to my friends about stuff that I did at work…

It wasn’t until I got some professional help that I realized how poorly I was managing my time. Moreover, I learned about the importance of being assertive, setting expectations, defining short-term goals and celebrating often.

Although I still struggle a bit nowadays, it got much better. I’m more aware of and present with my feelings, and I take more time to make more accurate decisions. Getting professional help is something that I’d recommend to anyone feeling mentally or emotionally unstable. Our brains are a mystery, and we’d better be taught how they work.

✈️ New countries

I had the opportunity to visit 3 countries for the first time:

  • Canada 🇨🇦: I flew there for the first time for the Shopify interviews. Since then, I’ve been to Canada several times throughout the year and visited cities like Ottawa, Montreal, and Toronto.
  • Latvia 🇱🇻: Despite it being not that far from Berlin, I had never been to Latvia before. I visited it to attend the DevTernity conference with some friends.
  • Macedonia 🇲🇰: I went on a 4-day trip with Maria José to celebrate my birthday. We rented a car and drove from Skopje to the beautiful Ohrid, and from there to Thessaloniki, northern Greece.

🚶‍♂️ Walked “El Camino”

It had been on my list for a long time, and I finally set out to do it this year. I’m glad that I made this decision. El Camino is a unique experience that is hard to describe. You pause your life for a few days, weeks or months to connect with nature and with people you have never met before. You have time to think about yourself, about your concerns, your motivations, your life. You come across a lot of good-hearted people willing to share stories and happiness with you.

I definitely plan to do it again in the near future. I took the photo below when I was around 40 km from Santiago.

📣 Conferences

In 2017, I took a break from attending and speaking at conferences. Moreover, with the other organizers of ADDC, we made the decision for me to step down as an organizer. It was a great experience during the two years that I was involved, and I’m glad the event ran successfully. I’m sure the event will continue to grow and innovate.

After this year-long break, I feel more energized and motivated to prepare a talk for a conference. We’ve built some internal tools at Shopify to overcome some scalability challenges, so I might prepare the talk around that topic.

🇩🇪 German

I set learning German as a goal for 2018, and I think I miserably failed at it. First of all, I think it was a mistake to define “learn German” as a goal. What is learning German? Being proficient? Being able to have a casual conversation or make a phone call? Since learning languages is not something that I particularly like, I kept procrastinating and leaving my lessons for later. As a consequence, I kept running into awkward situations where I needed to talk to someone in German, and I didn’t know how to say a word. One side of me thinks that I should speak the language because I don’t know how long I will be in the country; the other side says that I should instead put my energy into something that really motivates me.

I feel terrible because I think that as a person living in Germany, I should speak the language of the country. Living in a globalized world is excellent, but I shouldn’t disregard the beauty of having different cultures and languages, and thus, I should strive to keep them.

🏃‍♂️ Exercise

I’m not very proud of how I’ve been exercising this year. The first half of the year I got a personal trainer to help me prepare for the marathon in Berlin which, unfortunately, I had to abandon after some pain in my muscles. After that, I haven’t trained regularly, barely once or twice a week. Moreover, I gained some extra kilos which I’m struggling to get off my body. I’d like to get back the motivation that I had two years ago when I was exercising 4 times a week, but I don’t know how.

I’ve been thinking about this a lot, and I think that I just need to find a running buddy that I can work out with. Two years ago I was mostly working out with my flatmate. When I was not feeling motivated, he was the source of motivation. When he wasn’t, I was the source. Moreover, I wasn’t putting too much energy into work, so it was easier for me to stop thinking about work and enjoy the moment of going for a run.

🥅 Goals for 2019

  • Nurture other areas of my life: I started doing it this year, and I’d like to continue doing it more next year. It’s been 5 years with too much focus on software development while disregarding other areas that are equally important.
  • Completely abandon social networks: In my experience, they bring more negativity than positivity into my life. In particular Twitter, the one that I use the most, which I experience as a bloated stream of information and a race to see who gets the most attention. I ended up doing the same, which caused some anxiety and valuable time of my life being spent scrolling through their infinite timeline. Moreover, I don’t want to be part of a platform where users are used as a means or where the voice of assholes gets echoed.
  • Get the running routine back: I’d like to exercise around 3–4 times a week regularly, even when I’m traveling. Exercise makes me feel more energized and confident, so I have no excuse not to make an effort to bring the routine back.
  • Have a routine for learning German: I replace the ambitious “I want to learn German” with simply wanting to be consistent at learning it, regardless of how long it takes to become proficient. I’ve been procrastinating on it too much, and I think it’s time to settle down.
  • Devote more time to María José and grow the relationship: I feel I disregarded the relationship a bit by unbalancing my life’s priorities. I think a relationship is like a flower: if you don’t put enough water and love into it, it wilts. While María José has been very supportive and kind-hearted throughout the year, I think it’s time for me to contribute.

Can’t wait to see what 2019 holds for my life. I hope you are all having a wonderful Christmas time 🎄.

Give and share some love with your loved ones and never stop doing it. This world needs more love and more humanity than ever.

]]>
<![CDATA[A retrospective on what 2018 has been]]>
All you need is tools 🛠 https://pepicrft.me/blog/2018/11/25/all-you-need-is-tools 2018-11-25T00:00:00+00:00 2018-11-25T00:00:00+00:00 <![CDATA[

Almost a year ago, I joined the Mobile Tooling team at Shopify. It’s a team that focuses on developing tools and infrastructure that mobile developers can leverage to build and release high-quality apps. It was the first time I had the opportunity to work full time on tooling, something that I’d previously had the opportunity to experiment with in the open source space.

You can read Mobile Tophatting at Shopify and Scaling iOS CI with Anka to have an idea of the things that my team is building at Shopify.

Tooling is an area that is often disregarded. Since it doesn’t contribute directly to the product, most companies would rather have developers building the app instead of building the tools that are necessary for that. What many companies don’t realize is that having great tooling and infrastructure is vital for projects to move forward steadily. It has a significant impact on developers’ productivity and motivation, and on the quality of the product that is delivered to the end user.

In this post, I’d like to talk about why I think investing in tooling is crucial, some recommendations based on my experience, and bad practices that you should avoid in your tools.

Why do we need tools?

🚗 More automation

When you work on a project, it’s common to end up doing a lot of manual and repetitive tasks. The first time you have to do something again, you don’t realize it’s the second time, but when you do it one more time, you notice there is a pattern and an opportunity for automation. Manual and repetitive work should be avoided because it is error-prone and computers are better than us at it. When those tasks are automated, not only are they more robust, but they also save us a lot of time.

For instance, at Shopify developers used to spend a lot of time checking out PR branches and compiling the app to try out other developers’ changes. We spotted that and provided them with a command that they could use to download the app and launch it in a local emulator/simulator in a matter of seconds.

Like we do by abstracting code when it’s duplicated in several places, we should automate manual tasks that repeat over time. Developers’ time is a valuable asset that you do not want to waste on repetitive work.

At Shopify, we use Ruby for most of that work, but one can choose the language one feels most comfortable with. Swift, Go, Rust, or Kotlin are examples of languages that you could use as well. Shopify is a company that bet on Ruby, and we have a lot of libraries and knowledge that we can leverage in our tools, not to mention all the open source projects in Ruby, like Fastlane.

✨ Reliability

Murphy’s law says that anything that can go wrong will go wrong. Things can fail at any time, perhaps as a result of flaws in the code or in the environment where your app is running. When that happens, developers retry, because most of the time a retry fixes the issue. What if we could detect flaws or infrastructure failures and perform those retries automatically? Luckily, that’s something one can do with tools. Tools run commands in the system, so we can know at any time how the execution is going and inspect the result when it completes. With that information, we can provide a mechanism that retries the failures automatically.

We recently implemented automatic test retries for Android and iOS. We leveraged Gradle and xcodebuild respectively, analyzed the output of those commands, and retried the tests that could potentially be flaky. As a consequence, the stability of the pipelines improved and we could surface flakiness issues for developers to tackle.

We discourage bad test practices that make tests flaky but admit that some testing scenarios bring a lot of value and come with some inherent flakiness.
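
The retry mechanism can be sketched as follows. This is a simplified illustration, not Shopify’s actual implementation: the block stands in for running xcodebuild or Gradle and parsing which tests failed, and the history it collects is what lets you surface persistent flakiness afterwards.

```ruby
# Hypothetical sketch: run a test command up to `max_attempts` times,
# recording the failures seen on each attempt. The block is a stand-in for
# invoking the build tool and returning the list of failed tests.
def run_with_retries(max_attempts: 3)
  history = []
  max_attempts.times do |attempt|
    failed_tests = yield(attempt) # e.g. parse xcodebuild/Gradle output
    history << failed_tests
    return { success: true, attempts: attempt + 1, history: history } if failed_tests.empty?
  end
  { success: false, attempts: max_attempts, history: history }
end
```

Tests that fail on one attempt but pass on a retry are exactly the ones worth reporting to their owners as flaky.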

📲 Better insights

Tools are an excellent opportunity to surface insights about the project and the code. For example, if our tool abstracts the compilation of an iOS app and we have detected some warnings that could become issues in the future, we can expose them on GitHub and hint to the developer how to fix them.

By having control over the tooling, we can also raise awareness when teams don’t follow good practices and conventions that are standard across the organization. We can raise an error if we come across an unacceptable practice, or output a warning if it’s not that critical. In this way, we can guide developers to improve the quality of their code and projects.

If your project is on GitHub, its Checks API is fantastic for this. You can report insights directly to GitHub, and they will show up on the developers’ PRs. You can add inline annotations and even send a Markdown file as a report. We recently added an integration with that API to our CI infrastructure. Now, all the projects at Shopify can leverage that integration to generate insights when the pipelines are run.
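
A report to the Checks API boils down to a POST with the check’s conclusion and its annotations. The sketch below uses only Ruby’s standard library; the check name, repository, and annotation contents are made up for illustration, and the Accept header shown was the one required while the API was in preview.

```ruby
require "json"
require "net/http"
require "uri"

# Hypothetical sketch of the payload a tool could send to GitHub's Checks API.
def build_check_run_payload(sha:, conclusion:, annotations:)
  {
    name: "project-insights",        # made-up check name
    head_sha: sha,
    status: "completed",
    conclusion: conclusion,          # e.g. "success", "neutral", "failure"
    output: {
      title: "Project insights",
      summary: "#{annotations.length} potential issue(s) found",
      annotations: annotations       # inline annotations shown on the PR diff
    }
  }
end

def post_check_run(repo:, token:, payload:)
  uri = URI("https://api.github.com/repos/#{repo}/check-runs")
  request = Net::HTTP::Post.new(uri)
  request["Authorization"] = "token #{token}"
  request["Accept"] = "application/vnd.github.antiope-preview+json"
  request["Content-Type"] = "application/json"
  request.body = JSON.generate(payload)
  Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
end
```

Each annotation points at a file and line range with a level (`notice`, `warning`, or `failure`) and a message, which is what makes the insight show up right on the diff.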

Recommendations

🔋 Build trust with the users of your tools

If you want the developers to use your tools, you need to build some trust. Here are some ideas worth practicing:

  • Be supportive, especially in the early stages of the adoption of your tools.
  • Collect as much feedback as possible.
  • Be responsive when things are not working.
  • Be honest when setting expectations.

One thing that you might struggle with at the beginning is the fact that only a few developers thank you when things are working, yet a lot approach you when they can’t do their work because of your buggy tool. What has worked for me is being empathetic, telling them how sorry I am about the tool not working as expected, and doing my best to provide them with a solution as soon as possible. If the solution takes me some time, I think of a workaround that allows them to continue with their work.

👀 Observe

Although ideas sometimes come from developers, you can also spot areas to improve and propose tools yourself. Developers are so focused on building the product that they barely step back to see how they could improve their workflows with tooling. You are in a position to do it, so don’t miss the opportunity. You can look at what developers complain about on Slack, tasks that take way too much time, or common patterns that could be abstracted.

When you come up with an idea for a tool, make a proposal, present it to your team, and prioritize it in your team’s backlog. You might feel tempted to jump right into coding, but don’t let the excitement shift your focus.

📈 Incremental interface

As soon as developers start using your tools, they’ll depend on their interfaces. Be mindful when you evolve an interface: embrace semantic versioning and avoid introducing many breaking changes. Before introducing a breaking change, consider whether a non-breaking change could help developers transition towards the new interface. If you release many breaking changes and force developers to change their code with every new version, you’ll frustrate and disappoint them, and that’s the last thing we want.
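
One common way to do this is to keep the old entry point around as a deprecated shim over the new one for a release cycle. A minimal sketch, with hypothetical method names:

```ruby
# Hypothetical sketch: the old interface keeps working for a release cycle
# while nudging users towards the new one, instead of breaking them outright.
class BuildCommand
  # New interface, introduced in a minor release.
  def run(configuration: "Debug")
    "building with #{configuration}"
  end

  # Old interface, kept as a thin deprecated shim over the new one.
  def build(config = "Debug")
    warn "[deprecated] `build` will be removed in 2.0.0; use `run(configuration:)` instead"
    run(configuration: config)
  end
end
```

Users see the warning on every invocation and can migrate at their own pace; the actual removal then lands in the next major version, as semantic versioning promises.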

If you are not sure about the interface that you designed, validate it first. You can find a project and work with the team behind it to validate your assumptions and introduce the necessary changes to make the interface convenient and flexible for other teams to start using it.

✅ Metrics

Measure your tools. Understand how developers are using them and how efficient they are. Measure whether they behave as expected or have bugs that need to be tackled. Collect metrics that help developers improve their workflows. For example, if your tool compiles a project, measure how often the project gets compiled, the average compilation time, how many times it fails, and which tests fail the most.
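
At its simplest, this can be a small helper that times each invocation and aggregates counts, failures, and durations. A sketch, not tied to any particular metrics backend:

```ruby
# Hypothetical sketch: wrap tool invocations to collect simple metrics
# (number of runs, failures, and average duration).
class Metrics
  def initialize
    @durations = []
    @failures = 0
  end

  def measure
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    yield
  rescue StandardError
    @failures += 1
    raise # the caller still sees the failure
  ensure
    @durations << Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  end

  def report
    {
      runs: @durations.length,
      failures: @failures,
      average_seconds: @durations.empty? ? 0.0 : @durations.sum / @durations.length
    }
  end
end
```

A real setup would ship these numbers to a dashboard, but even a local report answers questions like “how long does the average compile take?”.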

Measurement not only helps you improve your tools but also helps teams improve their workflows. It’s an effort where everyone wins.

🚀 Sell the tool

You don’t want to implement a tool that no one wants to use, do you? Once the tool is built, market it internally and do some work to get teams to use it. Don’t just publish it in a GitHub repository and expect the users to grow as if by magic. Talk to entrepreneurs about this, and I’m sure they’ll give you plenty of tips on what they do to get more users into their products.

Like any good product, make sure it’s documented, has a memorable name, and provides easy instructions to get on board. Don’t fall into the developer trap of thinking the only important thing in a tool is the code. If it doesn’t appeal to developers from the outside, they won’t use it.

Fastlane is an excellent example of a well-marketed tool. You can use it as a reference to market yours.

❇️ Integration tests

Tools often have a contract with other tools and services. For example, when you wrap the compilation process of an iOS app, the tool needs to meet the contract with xcodebuild. The most reliable way to know if the contract is properly met is by running a test that exercises the integration. The same goes for when your tool interacts with HTTP APIs. Although you can use the documentation to implement the request generation and the response parsing, you’d better test it with real data coming from the API. As opposed to apps, where the contract with internal backend APIs is more predictable and changes are communicated beforehand, the environment and the services that tools depend on are not that predictable.

As an example, at Shopify we run our actions that wrap xcodebuild and Gradle against real builds of fixture projects. If they interact with any REST API, we record real responses using VCR and use those responses for the tests.
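
VCR records HTTP interactions to “cassette” files and replays them on later runs. The toy stand-in below illustrates that record-once/replay-afterwards idea with Ruby’s standard library only; VCR itself does this per request, with far more care.

```ruby
require "json"

# Toy stand-in for what a tool like VCR does: the first run performs the real
# call and records the response to a cassette file; later runs replay the
# recording, so integration tests don't depend on the live service.
def with_cassette(path)
  return JSON.parse(File.read(path)) if File.exist?(path)

  response = yield # the real HTTP call happens only once
  File.write(path, JSON.generate(response))
  response
end
```

The recorded cassette gives the test real response shapes without flakiness from the network, and re-recording it is how you notice when the remote contract changes.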

👎 Bad practices

💔 Carelessness

For some reason we developers don’t treat tooling with the same level of care as the main code of the project. I thought it was because the tools are usually written in a language other than the project’s, but a colleague of mine proved me wrong. In my experience across different projects, the tools are usually not that well structured, don’t include tests, and the code is a bit unmanageable. If you need an example, think about Fastfiles or the bash scripts someone added to the project a while ago: code that has grown and been tweaked several times until it became the fragile piece your team depends on.

I think tools should be treated with the same care as the project’s code. They should be structured and tested regardless of the language they’re written in. If you don’t put love into that code, no one in your team will, and all of you end up depending on a piece of code that can break at any time. Do you need an example? Have you ever experienced a painful release because the script responsible for pushing the app to the store broke without anyone on the team noticing?

🐞 Deficient error handling

How often have you seen a tool blow up with an error that dumps the whole stack trace? Is that valuable for the developer using the tool? No; it’s handy for the developer who writes the tool. The users of a tool need to know what happened and why they couldn’t achieve what they were trying to achieve. What caused the problem is an implementation detail that shouldn’t be exposed to the user. If you are writing the tool, it takes more work to handle all the possible scenarios, but by doing so you’ll offer the users a better experience, and they’ll be thankful for it. What’s more, if you output a stack trace, they’ll think the source of the problem is in the tool itself.
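
One way to structure this is to distinguish errors that are meant for users from unexpected ones, and only treat the latter as internal bugs. A hypothetical sketch; the error messages, hints, and issue-tracker placeholder are made up:

```ruby
# Hypothetical sketch: user-facing errors carry a plain-language message and
# an optional hint; anything else is an internal bug.
class UserError < StandardError
  attr_reader :hint

  def initialize(message, hint: nil)
    super(message)
    @hint = hint
  end
end

def report_error(error)
  if error.is_a?(UserError)
    lines = ["Error: #{error.message}"]
    lines << "Hint: #{error.hint}" if error.hint
    lines.join("\n")
  else
    # Unexpected failure: ask the user to file a bug instead of dumping internals.
    "Unexpected error. Please report it on the issue tracker (#{error.class})"
  end
end
```

The tool’s entry point rescues at the top level, prints the formatted message, and exits non-zero, keeping the stack trace for a verbose or debug mode.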

💥 Noisy output

In comparison to non-CLI software, like web or mobile apps, the interaction with tools happens through the standard input and the standard output: what users type and what they see in the terminal. There’s only one channel to communicate things to the user, so we have to use it well. We might feel tempted to dump everything, but that results in a bad experience because we may present information that is irrelevant to the user. Not showing enough information is as bad as showing too much. If we show nothing, the developer might think that the process got stuck and that they should interrupt it. Too much information might make the developer feel overwhelmed. Whenever you plan to add output, answer this question: “Is this useful for the user?” If it’s not, don’t output it. See the terminal as a constrained canvas where strokes are expensive.
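
In practice that means routing all output through one place that knows the user’s chosen verbosity. A minimal sketch with made-up levels and messages:

```ruby
# Hypothetical sketch: a single output funnel that stays quiet by default
# and only shows detailed messages when the user opts in (e.g. --verbose).
class Output
  LEVELS = { error: 0, info: 1, debug: 2 }.freeze

  def initialize(verbosity: :info)
    @verbosity = verbosity
    @lines = []
  end

  # Keeps the message only if it's at or above the chosen verbosity.
  def log(level, message)
    @lines << "[#{level}] #{message}" if LEVELS[level] <= LEVELS[@verbosity]
  end

  attr_reader :lines
end
```

With this shape, progress and errors always reach the user, while the firehose of debug detail is one flag away instead of drowning every run.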

🤲 Open source and third-party tools

Last but not least, I’d like to touch briefly on the usage of open source and third-party tools. When you build tools, you don’t need to build everything yourself. Does your tool need error reporting? Consider third-party error tracking services that offer SDKs in multiple programming languages. Do you need to process some snapshot test images? You can find plenty of libraries out there that help you with image processing.

Don’t spend your time building something that someone has already built. If it’s reliable and helps you solve your particular problem, it’s worth a try. It might happen that the dependency doesn’t fit your needs exactly as you planned. In that case, you can either contribute upstream to the project or fork it and modify it according to your needs.

For instance, at Shopify, most of the automation used to be done with Fastlane. It saved us a lot of time and allowed teams to build their tools easily by combining the lanes that Fastlane provides. However, we’ve reached a point where we need more standardization and reliability, which is unfeasible to achieve with Fastlane. We are replacing some of its features with a solution more tailored to our needs. Some of the tools that we build use Fastlane internally, but how the framework is used is up to us, not to the users of our tools.

As you have seen, working on tooling is exciting and challenging. The process of bringing tools into developers’ workflows goes from the conception of the idea to the marketing step, including the design, proposal, and development of the tools. Not only do you grow as a developer, but you also learn how to collaborate with other teams in your company, how to listen to them, and how to leverage your experience to provide the best solution for their problems and needs.

If you found this exciting and you’d like to talk further, don’t hesitate to leave a comment or write me an email. We, mobile toolers (if that’s an accepted word), go through similar struggles, so the more we share, the more we can learn from each other.

Have a nice week!

]]>
<![CDATA[In this post I talk about why investing in good tooling is crucial for projects to move steadily.]]>
What a psychologist helped me realize https://pepicrft.me/blog/2018/10/06/what-psychologist-helped-me-realize 2018-10-06T00:00:00+00:00 2018-10-06T00:00:00+00:00 <![CDATA[

A while ago I wrote a blog post about the stress that I was suffering in my life. Back then, I decided to get some professional help, which turned out to be a great decision. Not only did she help me manage the stress, but I also came to understand better how our brain works. I’m so glad that I made that decision, and I encourage anyone suffering from stress or emotional issues to get help as well.

I learned about balancing things in life, emotional intelligence, social conduct, communication… Many things that I assumed were right in my life were not well taken care of. I had been so focused on work for the last 4 years that I neglected other important aspects that contribute towards one’s happiness and a sense of purpose and fulfillment.

I still remember my first session with the psychologist. She asked me how much time I work daily. I told her 8 hours, but that I also spend time doing open source. I had been cheating myself by putting open source outside the work bucket, and then she asked me: What is open source? Isn’t that work? Interesting… I hadn’t looked at it as work, but in the end, it is work. Then we talked about the things that I did before or after work, for instance, reading or listening to music or podcasts. She asked me what kind of reading and podcasts. When I answered that question, I realized that most of the reading and podcasts had a connection with work because all of them were technical.

I was investing so much time in work that other areas of my life were being ignored. I was not working on my social life, taking care of my relationship and family, exploring, or just dedicating time to myself. How did I end up like this? I think it was the wrong idea of having to reinvent myself as the industry evolves. Maybe seeing everyone around me creating, reinventing themselves, trying the latest framework and programming language made me think that I had to do the same to be accepted by the industry, find opportunities, and take part in the new and fresh things that everyone is talking about. That idea is wrong. I don’t need to reinvent myself, listen to podcasts, check Twitter, or attend conferences. The only thing that I need is to find something that motivates me, learn about it, whether that is a programming language, a framework, or a hobby, and not feel I need to play with the thing everyone around me is playing with. In that regard, I admire people in our industry like Tom Preston-Werner and DHH, who found their passion in Ruby and Rails, became good at them, and focused on creating things with value for society.

I learned that I can’t and don’t have to know everything. When I accepted that, I stopped listening to podcasts, attending conferences, and reading blog posts so often. Ignoring those things decreased my anxiety and my worry about missing out. I had more focus and more time that I could invest in something else. The psychologist showed me an analogy between those areas and a table with 4 legs. The only way a table can stay steady is if the 4 legs are strong. If one of the legs is weak or flawed, the table will fall down. Our life has 4 legs as well: ourselves, family, friends, and work. If we don’t balance them, one of them grows stronger than the others, and the table falls: meaningless relationships, arguments with our partner, ill health. The strongest leg of my table was work. Even when I dedicated time to the other legs, I did it in the context of work: I attended tech meetups to meet new people, or hung out with people I had met at conferences or at work and ended up talking about work.

I started working on the other three legs by dissociating them from the work leg. It was an uncomfortable feeling at first, but a pleasing one once I got the hang of it. I started painting, which is something that I used to do when I was young. I realized that the exercise of painting has the power of disconnecting my mind. I do it old style, with colors, brushes, and paper. I could do it with an iPad, but I would be connected to work and technology in some way (push notifications, email, Twitter), and the experience wouldn’t be the same. I encourage anyone to find an analog activity and practice it, but without tracking it or writing about it on Twitter or Facebook. I think disconnection and boredom are feelings we should find time for in our busy world and industry.

When was the last time you felt boredom? I was presented with that question and didn’t have an answer for it. It had been such a long time without feeling bored that I didn’t remember how it feels. Boredom became so uncomfortable that I reached for my phone to avoid dealing with it. I didn’t realize that experiencing boredom is an important thing to do. Many people call it meditation nowadays, but I think the industry has misused that term and made us believe that playing a 5-minute audio on an app is meditation. That’s just pausing the chaos and noise around us for 5 minutes, which will slap us in the face afterward. We are offered tons of options to celebrate our achievement by sharing the disconnection with the world on social networks. That’s not meditation; that’s a product. I tried to find more of those moments. I’m glad that we moved into a new apartment with a balcony. I got one of those beach chairs, and I can lie on it looking at the sky and the planes landing in Berlin. No phone, computer, talk, or thoughts whatsoever. I do the same thing when I’m on public transport or riding my bike. I let my thoughts wander; I look at the things around me, the people, the buildings, nature, everything.

Another domain of psychology that we focused on is assertiveness. Assertiveness is the ability to stand up for your own rights while respecting others’ rights. I realized that I’m a person who respects everyone’s rights, but when it comes to asserting my own, I often don’t do it (I’m passive in my assertiveness). For example, if I’m with friends and I have a different opinion that I’d like to share, I’d be assertive by sharing my thoughts (the right to be heard and taken into account). Being passive when it comes to assertiveness leads to things like interpersonal conflicts, depression, anger, or having a poor image of ourselves. After recognizing that, I worked a lot on being more assertive. Whenever I feel I have a right to something, I simply assert it. It was hard at the beginning, but once I experienced the benefits of being an assertive person, it became more natural.

Last but not least, I was introduced to the notion of emotional intelligence, something I hadn’t heard about before. I used to think that one becomes successful in life by working on one’s intellect. That’s not necessarily true, because there’s another intelligence, the emotional one, that we are not told about and thus barely work on. It’s been shown that good emotional intelligence is crucial to being emotionally stable and making great decisions in life. When I reflected on the intelligence I had worked on over the last few years, I realized that it was mainly the intellectual one. As a consequence, when I was presented with hard and deep emotions, I felt overwhelmed and responded fast, often without reasoning.

Reasoning about my thinking, emotions, and time management has significantly improved my life. Work is no longer my main focus, and I don’t let it invade other areas of my life. I learned how to make better decisions and deal with emotions that used to affect me. All of that thanks to the professional help of a psychologist who understands how we humans think and behave. Our brain 🧠 is like a muscle that we need to take care of. Take care of it ❤️, take care of yourself.

If you are experiencing similar things in your life and would like to chat about it, don’t hesitate to let me know. I’ll be glad to share with you the ideas that worked for me.

]]>
<![CDATA[It's been a few months going through a therapy that has helped me understand how my brain works and where the stress that I used to experience came from.]]>
GitHub workspaces using email https://pepicrft.me/blog/2018/09/06/github-workspaces-using-email 2018-09-06T00:00:00+00:00 2018-09-06T00:00:00+00:00 <![CDATA[

In my effort to move away from Gmail, I took the opportunity to set up my new email account in such a way that I could better organize my work on GitHub.

There’s a feature that GitHub has never had and that I’d find very useful: workspaces. I used to look at the notifications to know what I should focus on. If you are part of multiple organizations, all those notifications show up in the same place. As a result, when I open the notifications page to plan my open source work, I end up looking at work stuff, and the other way around.

Luckily, most email providers let you define rules for your emails. With rules, you can match certain incoming emails and define an action for them, for example archiving or deleting them.

Thanks to rules, I was able to have workspaces for GitHub right in my inbox. This is what I did:

  1. Add an email address per workspace to your GitHub account.
  2. Configure the notifications per organization and forward them to the right email account.
  3. Create a folder per workspace. In my case, those were “shopify”, “work”, and “tuist”.
  4. Define a rule that matches the recipient against the email addresses above and moves the email to the right folder.
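
If your provider supports Sieve filters (many non-Gmail providers do), the rule from step 4 could look something like the following. The addresses and folder names here are made up for illustration:

```sieve
# Hypothetical Sieve rule: route GitHub notifications into per-workspace
# folders based on the recipient address they were sent to.
require ["fileinto"];

if address :is "to" "github-shopify@example.com" {
  fileinto "shopify";
} elsif address :is "to" "github-tuist@example.com" {
  fileinto "tuist";
}
```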

Simple, isn’t it? With this little tweak I can access a notifications-free GitHub, focus on my work, and check the notifications in my email when I need to plan the work.

]]>
<![CDATA[With this simple tweak I managed to have a notifications-free GitHub dashboard with workspaces right on my email.]]>
Open source mindfulness https://pepicrft.me/blog/2018/08/19/open-source-mindfulneess 2018-08-19T00:00:00+00:00 2018-08-19T00:00:00+00:00 <![CDATA[

As a developer who likes and believes in the benefits of making our software open, I have devoted a vast amount of time to building open source libraries and tools in the open. It helps me learn and grow as a software engineer and experiment with things I can’t experiment with at work. In most cases, those contributions happen before/after work, when you need to care about other important things in life like family, friends, and health. Yes, those things are more important than work or open source, regardless of how much you like it.

One of the things that I find most difficult about working on open source is doing it with mindfulness. I tend to get trapped by the joy and spend too much time thinking and working on it. As a result, I reach burnout points where I start wondering why I’m working on the project, what its value is, what if I spent my time on something else, what if no one uses it. I even lose my motivation for it.

Getting burned out from open source work is nothing new. If you do a bit of research, you can find a lot written about the topic.

In this blog post, I’ll walk you through some of the principles I’m sticking to in order to have healthier open source contributions:

Responsibilities

Open source is something that I do in my free time, something I’m not getting paid for and do for fun. I dedicate the amount of time I consider healthy and balanced with my other responsibilities. If I assume too many responsibilities and devote too much time to it, I stop having fun and start worrying too much about the project.

Vision

When I start an open source project, I write down what I’m aiming for with it: which problem I’m trying to solve, and whether it’s a short- or long-term project. By doing so, I avoid wrong expectations, and I can use it to steer the project. For example, if it’s an experimental project that I don’t plan to maintain, I reflect that somewhere in the README.

Having a vision for the project is useful in discussions, where you need to justify why you are making individual decisions.

No

This is something I struggle with a lot because, overall, I don’t know how to say no. Luckily, I’m getting better at it with some work. Some examples of when I have to say no are:

  • When a feature request is outside the scope of the project.
  • When the code quality doesn’t follow the project standards.
  • When tests are missing.

Not being assertive in an open source project might lead to frustration and low self-esteem.

Goal

A project without a goal is hard to steer. When I work on a new project and tell people about it, I hear lots of different opinions along the way: people who like the project and start supporting it, and people who don’t believe in the idea and think it’s not worth spending time on something like that. If I dreamed up the project and envisioned it, I appreciate the feedback a lot, but I’d like to prove to myself whether it was a wrong decision or something that brings value to developers.

I’ve sometimes been too concerned about what other developers thought about a project and ended up losing my motivation for it.

Priorities

When I work on a new task, I prioritize the ones that either add the most value to the project or help achieve its next milestone. Along the way, ideas and external contributions come up. I add them to a processing backlog and decide which level of attention they need. Concentration is gold nowadays, so even if I feel tempted to shift my focus to them, I try not to.


Those are some principles that I recently applied to have a healthy relationship with my open source projects. It’s hard, especially because it’s something that I enjoy doing, but it’s something that I have to do if I don’t want to burn myself out.

Are you an open source contributor/maintainer? I’d love to know how your relationship with open source looks.

]]>
<![CDATA[Not being mindful when contributing to and maintaining open source projects might lead to burnout or low self-esteem. In this blog post I talk about some principles that I applied to have a healthier relationship with open source.]]>
Why am I obsessed with developers being productive using Xcode? https://pepicrft.me/blog/2018/07/23/obsessed-xcode 2018-07-23T00:00:00+00:00 2018-07-23T00:00:00+00:00 <![CDATA[

These days I’m attending Shopify’s RnD summit, where different teams give talks and workshops about some areas of the company. In today’s talks, there was a topic that resonated with me: being obsessed with developer productivity. It made me think about a tool which I’ve been building for a few months, Tuist, and what led me to devote part of my free time to building it.

The name of Tuist comes from merging build and test.

I’ve been doing iOS development for most of my career. I started building apps myself, then I jumped into building them within a team, and eventually I ended up researching and working on how to architect them to make sure that they scaled well with the growth of the team and the project. If you are building a simple app with a few targets, it’s probably enough to use Xcode and maybe some automation tool like Fastlane that takes some manual work off your plate. If you have the size of a company like Facebook, you can create a team of engineers to address your project needs. Unfortunately, not all companies are like Facebook, able to create a team focused on making sure that the project scales and developers stay productive working on it.

As an example, at Shopify there’s a team focused on that, mobile tooling, which I belong to. We are not too large, but large enough to have an impact on developer productivity. We maintain part of the CI infrastructure as well as the tooling and processes that developers use to interact with their projects.

One of the things that I like the most about working in tech is the ability to help other people. This was a great opportunity for me to take action and help developers focus and be productive on what they like the most: building great apps. I hated spending time waiting for the compiler to finish, trying to understand the errors that Xcode threw, or doing a lot of manual work to add new projects/targets. I wondered so many times why I had to do all those things myself when a tool or Xcode could take care of them.

A few years later, Xcode remains the same, except for some nice features like refactoring and dark mode. Jokes aside, Xcode keeps being optimized for simple apps. The reality, though, is that apps are more complex. One might think that we could just keep the structure of our apps simple, but complexity is unavoidable if your project is large. Sooner or later you add targets for other platforms, like a watchOS application, or other teams want to reuse some of the classes that are already built, forcing you to create a framework for that.

Apps become complex, and the tools that we are given don’t help us deal with that complexity.

Things are great when we create new projects with Xcode. We are given an assistant where we can select the type of application that we’d like to create or the testing targets that we’d like to add. Those projects come with an implicit configuration that we are sometimes not aware of, but which is crucial for things to work. As we keep adding and changing things, it’s very likely that we touch that configuration, resulting in our setup breaking and developers spending time figuring out why. Have you been there before?

Let me give you an example to better illustrate what I’m talking about. If you have an iOS app and add a watchOS one, Xcode adds the following things:

  • A watchOS app and extension targets.
  • The extension target has the same bundle id as the app, with .watchkitextension appended.
  • The iOS app has a new build phase that embeds the watchOS app inside the product’s $(CONTENTS_FOLDER_PATH)/Watch folder.
  • The watchOS app has a target dependency on the extension. It also has a build phase that embeds the extension into the plugins folder.

If any of the settings above are changed or removed by mistake, you’ll get weird errors when trying to run the watch app. Should Xcode make those things more explicit? I think it should. Could Xcode catch those issues? It could. But why doesn’t it? 🤔.

I strongly believe that the setup and maintenance of projects should be taken off the developer’s plate by making things more explicit. Furthermore, any misconfiguration should be detected as early as possible and surfaced to the user with understandable messages that facilitate the fix. The way Tuist solves this problem is by providing a new project format. The tool generates Xcode projects from it and offers a convenient and easy-to-use set of tools that developers can use from their terminal.

Another thing that motivates me about building projects like Tuist or xcodeproj is that I’m also opening up new APIs that teams can leverage to build their own tools. When I look at the Android ecosystem, I’m jealous of the tools they are given. The build system, Gradle, allows defining the build process in Kotlin and extending it through plugins. It also offers convenient command line tools whose interface depends on the current directory. For example, if there’s a module that can be built for release, you can execute ./gradlew assembleRelease and you’ll get your release build. Imagine if you had such a thing in Xcode: a wrapper around xcodebuild that figures out the arguments for you to keep the interface as simple as possible. This is also something I’m aiming for with Tuist.

While working on this project I had a few low points. One of them was feeling that I might be working on something that no one would use. In my experience working with iOS developers, it’s hard to convince them to use something that doesn’t come from Apple. Some are often hopeful that Apple will make their lives easier, and skeptical about using non-official tools such as CocoaPods, Carthage, or similar. They see the usage of third-party tools as a risk that they are not willing to take. Apple has a strong influence on how it wants its ecosystem to be, and that makes it difficult for projects like Tuist to find motivation and support from more people.

I’m very optimistic about the future of Tuist and the opportunity to have an impact on the apps being built nowadays. Before writing the first line of code of Tuist, I had chatted with some iOS developers who told me about their struggles working in their codebases. That was around a year ago, and nothing has changed so far. I’m working on Tuist with the uncertainty of Apple extending the scope of the Swift Package Manager, but even if that happens, I’ll be happy to have planted a seed for what the future of productivity-focused tools for Xcode should look like, and to have learned a lot along the way.

If Tuist sounds interesting and you would like to give it a try or contribute to the project, just let me know. I’m eager to hear about your experience using Xcode and get ideas for things Tuist could help you with. There’ll be an official blog post and documentation once the first version gets published. In the meantime, feel free to head over to the organization on GitHub or check out the project repository.

If making mobile developers productive sounds interesting to you and you have some experience with Ruby, drop me a line.

]]>
<![CDATA[In the last months I've been investing a big chunk of my free time in building tools to make developers productive with Xcode. In this blog post I reflect on what led me to start working on that tool, Tuist, and how I'm addressing some of the challenges that come up when using Xcode at scale.]]>
Open Source https://pepicrft.me/blog/2018/04/29/open-source 2018-04-29T00:00:00+00:00 2018-04-29T00:00:00+00:00 <![CDATA[

I’ve been wondering how many of the things I do, I do because everyone else does them. With everyone sharing how they do things and pitching us their library, their work style, or even their tools, I think software engineers, myself included, are strongly biased by external opinions. Are you using VIPER because it’s a good fit for your project’s needs, or because you saw a few companies using it? Are you using that library because it’s saving you time, or because you saw an example of how to use it and thought you could replicate it?

Answering those questions has been eye-opening, and it made me realize that some of my opinions and decisions were not made by myself. If you have never done this exercise, I encourage you to include an extra question when you have to make decisions:

Am I doing this because everyone’s doing it?

I don’t like doing something when I don’t feel passionate about it. It’s easy to follow a trend and end up doing what everyone else is doing, but I prefer not to. Following trends and pre-formed opinions doesn’t help you build your own; you just repeat what others are saying. Sometimes the conclusion you reach is the same as everyone else’s, and that’s fine, but I think there’s considerable value in spending the time to form it yourself.

I asked myself the same question about open source. I realized that I prefer working on open source software over closed source. Open source has become an important thing in the software industry, and companies that traditionally didn’t do any open source have started doing it. Am I doing this because of the trend? Why am I doing it in the first place?

This question led me to an answer which I think describes well what open source means to me and why I devote part of my time to it: interaction with people. I like working with people more than with computers. Unfortunately, that’s something you don’t get a lot of if you work in tech, where your best friend is your computer. That changes if you do open source, because all of a sudden you are connected with people from all over the world working together towards the same goals.

For me, the beauty of open source is creating community. When I envision a new open source project, I like spending some time thinking about how I’d love the community to be. One of the first things that I do is create an organization they can feel part of. I could create the repositories under my GitHub profile, but it wouldn’t be fair, in the sense that they’d be contributing to a repository that belongs to me. I want them to feel they are contributing to something that also belongs to them. I use the Aeryn tool from the Moya community to invite contributors after they merge their first PR. That makes them feel part of the family and encourages them to continue contributing to the project.

Interacting with people is harder than doing it with computers and software. Different profiles of developers will land on your project:

  • A developer who is interested in contributing altruistically because they like the idea behind the project.
  • A developer who contributes towards their own interests.
  • A developer who uses it extensively and reports a lot of bugs and great ideas.
  • A developer who will make you feel like you are serving them and should do whatever they ask you to do.

Working in open source has helped me improve my communication and interaction skills. Dealing with all different profiles and choosing the right language for every interaction is not easy. It’s a challenge, but one of those challenges I enjoy going through. I still have a lot to learn, but I’m enjoying the process.

As an example, I recently started working on a new open source project, xcbuddy. I’ve spent a few years working as an iOS engineer and suffered the pain of using Xcode at scale. Instead of continuing to complain, I decided to build a tool that helps teams overcome the most common challenges when scaling their projects. The first version of the tool is not ready yet (there will be a more extensive blog post about it), but I’ve already established the foundation of the community around it. This is what I did:

  • Tweeter 🐦: I built a simple service that publishes tweets with the most recent updates of the project. More importantly, it announces when a new contributor has joined the organization, which makes them feel part of the community. I think inclusion is not only about having a markdown file in the repository that says your project/organization is inclusive. It’s all about little actions, like this one, that make everyone feel welcome and part of the project.
  • Slack-free 🔇: Slack is great for real-time communication but, in my opinion, not a good fit for open source projects, where communication happens mostly asynchronously. Moreover, it’s easy to lose track of discussions when there’s a lot going on.
  • Spectrum 👥: Spectrum is a tool for communities which I recently came across. I’m still evaluating whether it’s worth using or whether we should keep things simple and use GitHub for everything. I decided to create a community on Spectrum because there are things like introductions, ideas, or off-topic threads that are not suitable for GitHub issues.
  • Website 🌎: I’m designing and developing a website, not only to explain to developers what the tool is about, but also to praise and thank contributors for the work they are doing. I think everyone deserves recognition for their work because, without contributors, most of the open source projects that we use nowadays wouldn’t exist.

When it comes to building open source projects, the essential element for me is the people who make them possible. Although I might be the primary driver when the project takes its first steps, most projects keep moving thanks to the energy and passion that external contributors bring. Sometimes you feel down, or a bit burned out after a lot of work on the project, and it’s energizing to see the project grow thanks to the work of people you welcomed some time ago. I still have a lot to learn, and things to improve, like how to be more transparent or how to manage my time better, but this ride is fascinating and I’ll continue doing more work in the open.

I’d like to thank people like @orta because they are an excellent source of inspiration for me when it comes to creating communities and doing open source by default.

]]>
<![CDATA[In this blog post I talk about why I work on open source projects and what the most important elements are for me when starting an open source community.]]>
On having focus https://pepicrft.me/blog/2018/04/11/on-having-focus 2018-04-11T00:00:00+00:00 2018-04-11T00:00:00+00:00 <![CDATA[

There’s something that has been happening to me lately that I’m struggling with: having focus. While this wasn’t a problem a few years ago, when I was able to sit down and work on one thing at a time without distractions, I can’t do that anymore. It might be that I’m getting older, or that I ambitiously pushed myself beyond my limits. The fact is that this has started affecting me: I’m losing motivation for things and feeling exhausted with technology overall.

I found myself involved in multiple projects at the same time. I’m terrible at saying no, especially if it’s an exciting project where I can contribute. I’m also bad at processing ideas. Every time I have an idea, I get overexcited instead of sleeping on it and adding it to a backlog in case it’s something feasible with high potential to be worked on in the future. Imagine that happening every day, and your list of active projects growing endlessly. You open your laptop in the morning, try to plan your day, and you don’t know which project you should focus on. Should I work on this one because today I feel like it? Or should I work on the other one that has a couple of open issues waiting for my feedback?

Moreover, like most developers, I’m forced to work on the ongoing project of renovating myself. You can’t just learn a language X and expect that knowledge to be enough for your whole career. Language X evolves, and with it other technologies and languages emerge. There are trends that you need to follow, even if you don’t want to, because that’s the only way you’ll have decent work opportunities in the future. For example, if you were an Objective-C developer until Swift came out, and you didn’t invest time in learning Swift, you would very likely be missing a lot of opportunities nowadays. That applies to any language and technology: the JavaScript developer who hasn’t learned declarative UIs using components, or the Android developer who is still using Java.

With so many projects to dedicate time to and a lot of distractions, a lack of focus is a natural consequence. These are the things that I’m currently doing to bring that focus back and feel less overwhelmed:

  • Slow down: This might seem obvious, but my personality leads me to the opposite. I’m a person who reacts quickly without thinking thoroughly. Changing this is tough because there is an inner Pedro whose excitement leads him to answer without thinking twice.
  • Notifications-free phone setup: I decided to leave my iPhone at home and go with an old Nokia that only lets me make and receive calls and SMS. That’s all I need. Notifications are rarely important, and they consume a lot of energy and focus from me. All work-related emails and Slack messages will be accessible only from my computer, and the same goes for the social networks that I barely use.
  • A side project at a time: Instead of kicking off and dragging along multiple projects at the same time, I’ll devote my time to a single project. I’ve become less active in open source than I used to be, and I’m focusing on my first software product, which I’ll share with you soon.
  • It’s ok not to stay up to date with technologies: There are new technologies, blog posts, and projects coming out every day, so I changed my mindset from having to be aware of all of them to filtering the ones that really matter to me. As an example, I don’t follow the evolution of Swift actively; I only skim through the release notes when there’s a new version.
  • Spend less time with technology: I love technology, but like any other thing, too much of it creates an addiction, and that’s not good. Instead, I’m devoting more time to myself and to the people around me. Spending more time with people helps you make your software more human-driven.

I think it’s important to have these retrospectives with ourselves, where we can see how technology is impacting us and how healthy our relationship with it is. We spend an insane amount of time with it, and that time is increasing every day. We shouldn’t allow technology to take control over our focus, whatever our goal is, whether that’s being happy or being an astronaut.

If you are a developer who has experienced something similar, I’m curious to know how you overcame it and how you manage to stay focused in a world where it’s easy to get distracted.

]]>
<![CDATA[I'm struggling to have focus nowadays. In this post I describe why it's so hard for me to focus, and the things that I'm doing to overcome the problem.]]>
Hallo Ruby, wie geht's? https://pepicrft.me/blog/2018/02/23/hallo-ruby 2018-02-23T00:00:00+00:00 2018-02-23T00:00:00+00:00 <![CDATA[

It’s been a long time since I last built something in Ruby. Most of the work that I did with the language while I was an iOS developer consisted of changes to either the CocoaPods Podfile or the Fastfile. I became super optimistic when Swift came out, and even built some command line tools and libraries, contributing to the community. However, as I wrote in one of my posts, I decided to devote most of my time to interpreted and community-driven languages like Ruby or JavaScript. Two things motivated me to make this decision:

  • Ruby is the primary programming language at Shopify, my current company. That was a decision made a while ago, and all the internal tools and libraries are built in Ruby. It didn’t make any sense to push for another language that couldn’t fully leverage the existing stuff or be integrated easily.
  • I’ve been doing Swift since it was released. Most of my open source projects are written in Swift, and I like the language. It’s beautiful and well designed. Things are moving fast, and the community is very involved in the development and the decisions that are being made. However, I did a retrospective on what doing Swift means to me as a developer, and I realized that I was limiting the scope of the software that I build, contributing to an ecosystem that is controlled by a company with a lot of power, Apple. By doing Ruby and JavaScript, programming languages mainly driven by communities, I’d make my software accessible from any platform or ecosystem. Anyone can access a website, but not everyone has access to an iPhone or a macOS device.

Since I joined Shopify, I’ve been doing mostly Ruby. There are great engineers with a lot of experience here, and it’s a fantastic opportunity for me to learn. It felt bizarre the first time that I tried to write some Ruby after some time off. I’d like to outline some of these strange feelings:

  • Types: This was probably the most painful thing for me. I wasn’t aware of how much I was used to types. I just used them, and I safely wrote my apps or Swift scripts. There are no types in Ruby. You call a method that takes a few arguments with some names, but you don’t know which types the method implementation is expecting. Should this argument that I’m passing be a String? Should it be an array of Strings? With Swift you can write code with a lot of confidence, but with Ruby, most of the time you have to dig into implementations to know what types a method expects. The Ruby projects I’ve seen seem to leverage documentation to offer type information.

  • Where should I validate my data: Our software gets input and returns output. If we take a mobile app, for instance, the inputs are user interactions, and the output is the views presented on the screen. When an input is received, the action propagates through our software to produce the output. In every step of the propagation, we have the type system and the compiler to make sure that all the pieces of our software match. They won’t let us ship to production or the store if there’s a mismatch. With Ruby, validation happens as the action propagates through the system. If the system hasn’t been designed properly, it’ll result in a runtime error, and the system will need to recover from it. That scares me, and to be honest, I don’t know the best approach to minimize it yet. It doesn’t make sense to validate the inputs in every method, because it’s slow and we shouldn’t assert for something that we’ve wrongly implemented. But if we only validate the input, we can expect some runtime errors to blow up unless we thoroughly test all kinds of integrations in our software.

  • Code organization: In Ruby, there’s this notion of modules. Modules are used, among other things, to create namespaces. That’s an idea that didn’t take me much to swallow because there’s something similar in Swift, but how to split modules and classes across different .rb files is another story. I’ve checked multiple open source projects, and each of them does it differently. I’ve seen some using require_relative, and others using an umbrella entry-point Ruby file that defines the project hierarchy, autoloading all the components. I’ve stuck with the latter, and it’s working well for me. I’ve read that autoload is not thread-safe and that it might be deprecated, but since I don’t have any thread-safety requirements in the tools that I write, it’s safe to use. This is an example of what an umbrella Ruby file looks like:

module Catalisis
  module Builder
    autoload :Project, 'catalisis/builder/project'
  end
end
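
The first two points above, missing type information and where to validate, are related in practice: Ruby codebases often document the expected types in comments and validate values only at the system’s boundary, failing fast with a clear error. A minimal sketch of that idea, with YARD-style comments (the method and its names are illustrative, not from any real project):

```ruby
# Ruby won't check types at compile time, so document them and
# validate only at the entry point of the system.

# @param environment [String] one of "staging" or "production"
# @param tags [Array<String>] labels attached to the deploy
# @return [String] a human-readable summary
def deploy(environment, tags)
  valid = %w[staging production]
  unless valid.include?(environment)
    # Fail fast here instead of letting a bad value propagate and
    # blow up deep inside the system as an obscure runtime error.
    raise ArgumentError,
          "environment must be one of #{valid.join(', ')}, got #{environment.inspect}"
  end
  "Deploying to #{environment} (#{tags.join(', ')})"
end

puts deploy("staging", ["ios", "release"])
# => Deploying to staging (ios, release)
```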

I’m getting used to the things that I mentioned above. When you spend some years with a language, you tend to replicate its patterns and styles in the new one. For example, instead of using the naming convention that is commonly used in Ruby projects, I literally brought over the one that Swift recommends, where if something can be encoded, it should be called Encodable.

There are things from writing software in Ruby that I’m enjoying a lot:

  • Minitest/Guard: You can see how mature the language and the community are when you start using the tools. One that surprised me a lot is Guard with its Minitest plugin. When you write tests, it detects the files that you changed and runs the tests for those files immediately. You can focus on writing your code/tests, and there’ll be a parallel process running and telling you whether everything passes or there’s an issue. You can do proper TDD.

  • Editor: The language is not tied to any editor/IDE, so you can choose whatever works best for you. If you prefer something closer to the Xcode experience, you can try any IDE from the JetBrains suite. If you are a minimalist and just want a simple text editor with syntax highlighting and pluggable add-ons to customize your workflows and make you more productive, you can use Sublime, Atom or VSCode. I personally use VSCode. It’s simple, fast and extensible. It has just what I need to work on my projects.

  • Libraries: Whatever you can imagine can very likely be found as a gem. The community has been building libraries for a long time, which saves you a considerable amount of time.

  • Distribution: Your projects can be distributed as gems. Users can install them on their system, and all the necessary components will be installed to get them working. This is not officially supported by Swift and its package manager, so you have to resort to non-official tools. I wouldn’t be surprised if Apple included this as a feature in the Swift Package Manager, but even then, pre-compiled binaries would break with new versions of Swift because the language hasn’t reached ABI stability yet.
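
As a sketch of the distribution point above, this is roughly what a gemspec for a command line tool looks like. All the names here are hypothetical, and a real gem also needs a matching lib/ and bin/ layout:

```ruby
# Hypothetical gemspec: declaring an executable is what lets
# `gem install mytool` put a `mytool` command on the user's PATH.
spec = Gem::Specification.new do |s|
  s.name        = "mytool"           # hypothetical gem name
  s.version     = "0.1.0"
  s.summary     = "A CLI tool distributed as a gem"
  s.authors     = ["Jane Doe"]       # placeholder author
  s.files       = Dir["lib/**/*.rb"] # everything under lib/ ships in the gem
  s.executables = ["mytool"]         # installs bin/mytool
end
```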

I hope you enjoyed this brief reflection. If you are more familiar with Kotlin or JavaScript, you can replace Swift and Ruby with Kotlin and JavaScript, and the points mentioned above should apply as well. As I mentioned earlier, making the software that I write more accessible is one of my key motivators, and having the opportunity to do it at Shopify is one of the best decisions I’ve recently made. You can expect more Ruby OSS coming from me from now on :).

If you have had a similar experience transitioning from a static to a dynamic language, I’d like to hear about it. Don’t hesitate to leave a comment right below.

]]>
<![CDATA[It's been a long time since I last coded something in Ruby. In this blog post I talk about why I started using it again, and how it feels after spending a few years working with a compiled language like Swift.]]>
Thoughtful usage of technology https://pepicrft.me/blog/2018/02/14/thoughtful-usage-of-technology 2018-02-14T00:00:00+00:00 2018-02-14T00:00:00+00:00 <![CDATA[
  • I struggle to concentrate when I read.
  • I struggle to listen when someone is talking to me.
  • Doing something that doesn’t involve technology is something that I don’t feel like doing.
  • Receiving notifications and updating everyone about what I’m doing has become part of me.
  • The streams of information are flooding my attention, and it tires me daily.

Like many other things in life, too much of something is not healthy. A cup of coffee a day is not bad; drinking five can be dangerous for your health. Playing video games once a week is not bad, but doing it as a daily routine can have a very negative impact on you. Although technology is useful, and it’s enabling many things that were impossible a few years ago, I think the fact that I’m fully immersed in it every day is having a not-so-good impact on me. This is something that I had realized, but I had never taken any action. I feel that technology is like candy: you consume it, you enjoy the moment, and you don’t realize how bad a lot of it can be until you see the long-term impact it’s having. Candy makes you fatter, and it doesn’t happen overnight. You enjoy every single candy that you put in your mouth. They come in beautiful colors because that tricks your brain; they are sweet, and your mouth likes that pleasure. Do that every day, and sooner or later your body will protest. Our brain is the result of millions of years of evolution. Technology has rapidly evolved in less than 50 years. Do you think our brains have been able to keep up with the high-speed evolution that surrounds us? I don’t think so.

I work in technology. I’m a software engineer, so working with technology is part of my job. I spend my days using apps, exploring the internet, reading news, checking my email, downloading the latest updates, buying the latest technology. It feels so exhausting when I write it down… On one side I like it, because technology helps solve real-world problems (not only the ones that people working and living in the Bay Area have), but on the other side, I think there’s beauty in living a technology-free life and doing things without it.

When I look at all the innovation that is coming, the new technologies companies are investing money in, it scares me. It scares me that technology is evolving with a lot of side effects on humans along the way. We are becoming addicted to it; we don’t know how to live without it. We’ve come to think that everything around us is about technology. Technology is always there to surprise us, to give us a new thing, new feelings that we haven’t experienced before, new gadgets to try out, problems that didn’t exist before. We’ve unconsciously become non-conformist people. We are always expecting technology to feed our need for having more, experiencing more, being more connected. Technology has learned how to prove to us that we need it, and that scares me even more. In recent weeks, I’ve been detoxing from it, and I realized how much stuff that I thought I needed is useless. Call me a hipster or a retro person, but I’m trying to live like I used to live before technology invaded us. I don’t feel excited when Amazon says it plans to remove cashiers from supermarkets, as if the future were technology controlling everything. I like going to the supermarket and talking to the cashier about the news, or even some gossip. I love smiling at each other and keeping that smile for the rest of the day. I don’t feel thrilled either when Elon Musk says he’s planning trips to Mars and sending cars to space. We have a beautiful planet, with a lot of problems, and we are destroying it slowly. Why don’t we invest money in using technology to solve the real world’s problems?

When we are told that VR is the future and that social networks make people more connected, I always wonder how likely those statements are to turn out true. How can you dare to say that you are making people more connected if you are introducing something that has never been tried before and that radically changes the way people interact? VR will be the future in a couple of years, and then what? Will there be some other thing that becomes the new future? What if we froze time for a while and reflected on where technology has driven us: what we learned, what we failed at, and the direction technology should take? The world is moving too fast to stop; the machine is running, and it’s impossible to halt it. Can you imagine Facebook pausing everything they are doing to reflect on the impact Facebook is having on people’s lives?

I’m optimistic, though. I also look at technology from another angle. I see projects coming up that shine with human values, so there’s some hope. I’m using technology, but not in the same way I used it before. Our relationship is simpler: I use technology only when I think it offers me value, and I don’t feel addicted to it.

]]>
<![CDATA[Technology is having a not-so-good impact on me. In this post I reflect on my relationship with it and how I'm trying to use it more thoughtfully.]]>
I gave up using Medium https://pepicrft.me/blog/2018/01/31/gave-up-medium 2018-01-31T00:00:00+00:00 2018-01-31T00:00:00+00:00 <![CDATA[

A week ago I decided to remove all my publications from Medium. I’ve been using Medium together with my blog to publish articles, and also to find content from other publishers. I like how clean the design of the platform is, and how easy it is to discover new content based on other publications that you liked. However, several reasons made me decide to stop using it, remove my posts and focus on my personal blog instead. These are the reasons:

  • Pull vs push: I became anti-push products and technologies. For me, push products are those that push content to you, instead of waiting for you to come to the product whenever you want. In other words, opening a website to read something because you want to read at that particular moment, instead of getting notifications every time there’s something new that “you might like”. I hate that products try to guess what and when I might need something. I hate that from Facebook, I hate that from marketing emails, I hate that from Amazon. Medium does it as well, and I don’t like it. We have a saying in Spanish: “don’t want for others what you don’t want for yourself”. Since I don’t like being pushed new content, I don’t want others to feel the same when I publish an article. We are overwhelmed by a lot of streams of information every day, and I don’t want to add to that stream. I don’t want to be feeding machine learning engines and recommendation algorithms to spread my blog posts around the world. I don’t care if I don’t reach that many people, or if my blog posts are not read by as many people as they used to be. I care more about people’s time, and since I can’t change the way most of these products work, I’ll stop being part of them. The people who like reading what I write can always use a convenient web feed format called RSS. I’m using it a lot these days, and I have it set up on my desktop and mobile phone to read whenever I feel like reading something.

  • Content: Ultimately, I don’t find many good blog posts on Medium. The homepage is full of sensationalist and superficial blog posts that have no value at all: “7 tips to have a successful life”, “Why you should start your own company”, “How meditation changed my life”, “8 reasons why you should invest in Bitcoin”, “Why you should use Kotlin”. I’m not sure if it’s their recommendation algorithm not working well with my profile, but I don’t like most of the posts that I find there. As I said, I set up my RSS client again, and I subscribed to the blogs whose content I like because it’s well written, with strong arguments, and with a lot of value.

  • Claps: Most of the things on the Internet are built around the human need to be recognized, and Medium is no exception. It’s all about the impact: the likes that you got on your publication, the people who watched your Instagram stories, the retweets on your last tweet, or the stars on your GitHub repository. On Medium the impact is measured in claps, which are in some way a mechanism to hook publishers into the platform and get them to write more. Have you ever thought about the profound effect this subtle thing has on your mind? You end up publishing more because you want to get more claps the next time. That doesn’t work for me, and even though I’ve fallen for it in the past, I don’t want to publish driven by recognition on a platform anymore.

  • Jekyll: I like the flexibility Jekyll offers me. I can write my blog posts using Markdown, easily add code examples, add Ruby code to automate the project generation, or use JavaScript to add components to the website. As a developer, having such flexibility is something that I like a lot. Moreover, using only Jekyll, I don’t have to maintain the blog posts in two places. I didn’t have anything automated, so when I wrote a blog post, I copied and pasted it into Medium. When there was a typo, I had to fix it on both sides. I don’t have to do that anymore. There’s a single source of truth, and I have full control over it. It’s an open source git repository on GitHub, and I can deploy the site to any hosting service. The content that I write is not used to run a business. It’s content that I write because I like sharing the work that I do and the things that I learn. It’s not a business; it’s knowledge that should be shared on the open Internet.
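
The kind of automation mentioned above can be as small as a short Ruby script. Here is a sketch that scaffolds a new post with the dated filename and front matter Jekyll expects (the slug rules and front matter fields are just one reasonable convention, not something Jekyll mandates):

```ruby
require "date"

# Sketch: compute the filename and front matter Jekyll expects for a post.
def new_post(title, date: Date.today)
  # Turn the title into a URL-friendly slug.
  slug = title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  front_matter = <<~MD
    ---
    layout: post
    title: "#{title}"
    date: #{date}
    ---
  MD
  ["_posts/#{date}-#{slug}.md", front_matter]
end

path, _content = new_post("I gave up using Medium", date: Date.new(2018, 1, 31))
puts path # => _posts/2018-01-31-i-gave-up-using-medium.md
```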

]]>
<![CDATA[I gave up using Medium and here are the reasons that led me to make the decision.]]>
The hermeticism and rigidity of Xcode and its projects https://pepicrft.me/blog/2018/01/28/xcode-rigidity-hermeticism 2018-01-28T00:00:00+00:00 2018-01-28T00:00:00+00:00 <![CDATA[

If you work with Xcode, you are most likely familiar with its hermeticism. Compared to other ecosystems, like Kotlin’s, where the build system (Gradle) is independent of the IDE, in Xcode everything is bundled together and not well documented. Xcode projects have build settings and build phases that are the input to the build system that Xcode uses. Have you ever searched for what each of the build settings means? You’ll most likely end up on StackOverflow or some random website where someone tried to figure out what those settings are for. The documentation is terrible, and developers have to do some reverse engineering to understand what they are for. When I see other build systems like Gradle, where you have total flexibility over the build process and everything is documented, I feel jealous. I wish Apple had something like that for Xcode. I’m optimistic, and I believe it’s going to happen sooner or later, but I think we are far from having it.

Besides the hermeticism of the build system, another thing that annoys me is its rigidity. Build settings and build phases are the only input, and they are very static; you can’t do much with them. For example, if you want to link a library/framework only when some conditions are met, you cannot. You can write your own scripts and hook them in as Xcode build phases, but they cannot participate in the build of the source code. Only build settings and sources build phases can determine what needs to be built and how. If you add a custom build phase that links a library conditionally, it breaks the scheme’s “Find implicit dependencies” feature, because Xcode is not aware of your custom linking. Bad, isn’t it?

Although this works for most projects, as soon as you need to optimize the way your project is built, you are fucked. Companies like Pinterest, Uber, or Facebook have moved to other build systems like Bazel and Buck. Besides the powerful features that they get from them, these build systems are very flexible, especially Bazel, so you can customize any step of your project’s build. One important difference between Buck and Bazel, for instance, is that Bazel allows you to define custom build steps using a programming language similar to Python. For companies like Shopify, where a lot of engineers build the app every day and our CI infrastructure compiles every commit that is pushed to the git repository, it’s essential that we have a fair amount of flexibility. We’ll soon work on having incremental builds on CI. The idea is to dynamically share build artifacts across the pipeline’s builds, copying only the artifacts that are necessary so that Xcode doesn’t recompile the frameworks/libraries that don’t need to be compiled. To do that, you need to know how Xcode manages the derived data directory, and how the build system turns the input (build settings, build phases, source files, and resources) into intermediate and final artifacts. Does Xcode use the file modification date to determine what needs to be built? Is it necessary to copy the intermediate files if there are some final ones? Well, we don’t know. With Bazel and Buck, not only do they know what output is generated from some given input, but you know too.

Another rigid component of Xcode is its projects. When a company has a few Xcode projects to maintain, it’s important to be consistent and share as much as possible across all of them; it makes maintenance easier. Someone might argue that sharing is possible using .xcconfig files, and it’s true, but only partially. .xcconfig files allow you to reuse build settings, but if you want to reuse build phases or the structure of targets and schemes, you cannot. We have a few modules at Shopify that are shared across all the company’s iOS applications. They have similar build settings and the same targets and schemes structure, but they don’t share anything. If we want to update the deployment target or add a new target to each of them, we have to go one by one, updating them manually. While this is something we can do when there are 4 or 5 modules, it becomes a pain in the ass when there are 10 or 20. It’s easy to forget something, and suddenly the project doesn’t compile. XcodeGen is an open source tool that helps you overcome this issue. Projects are defined in YAML, so you can use all the reuse options that the YAML format offers. It also provides a more flexible way to define and share your build settings. I’ve used it to describe a modular app that I’ll build in the workshop that I’m giving at Mobos, and I was able to have a repository with no Xcode project in it, sharing configuration across all the modules that are part of the project. Wouldn’t it be great if Apple followed a similar approach and provided something like this? Imagine something like SPM’s Package.swift but for app projects: Project.swift.
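As a rough sketch of what such a spec can look like (the project, target, and setting names below are made up for illustration, not taken from a real project), an XcodeGen-style YAML file lets several targets reuse one group of build settings:

```yaml
# Hypothetical XcodeGen-style project spec; names are illustrative.
name: MyApp
settingGroups:
  shared:
    SWIFT_VERSION: "4.0"
    IPHONEOS_DEPLOYMENT_TARGET: "10.0"
targets:
  Core:
    type: framework
    platform: iOS
    settings:
      groups: [shared]
  MyApp:
    type: application
    platform: iOS
    settings:
      groups: [shared]
    dependencies:
      - target: Core
```

Because the .xcodeproj is regenerated from this file, bumping the deployment target in the shared group propagates to every target at once instead of requiring manual edits project by project.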

As I said, I’m optimistic. Apple open sourced Swift and is open sourcing components of Xcode and its build system. I’m not sure if having it open source will make it more flexible, but at least there will be an open space where we’ll be able to participate in discussions about the future of the build system. Software engineers will be able to bring ideas to the Xcode build system, and other build systems will be able to borrow some from Apple’s.

]]>
<![CDATA[Xcode and its projects are not as flexible as they could be, which makes it hard for companies to optimize their workflows and processes. In this post I'll analyze some of the things that I would improve from its build system and projects.]]>
This app could not be installed at this time https://pepicrft.me/blog/2018/01/20/watchapp-and-xcode-nightmare 2018-01-20T00:00:00+00:00 2018-01-20T00:00:00+00:00 <![CDATA[

I’ve spent a whole Sunday trying to get an Xcode project running. The project contains an iOS and a watchOS app sharing code using frameworks. Moreover, I’ve automated the generation of the Xcode projects using XcodeGen. Everything seemed to be fine; I was able to generate the projects, compile the modules individually, and run their tests, but at some point I got stuck at something that, after a few hours, I still can’t understand. Whenever I tried to run the iOS app or the watchOS application, I got the error that you can see in the screenshot below:

Xcode error saying that the app could not be installed this time

“This app could not be installed at this time.” At first I thought my compiler had become time sensitive, but after a while, nothing changed; Xcode kept complaining about the same thing. The second thing that I tried was creating the project manually. I dragged and dropped a bunch of stuff, updated the project build settings, and to my surprise, the same thing happened 🙄. I’ve never had a good experience working on watchOS applications using Xcode. I haven’t done it for a while, but it seems that there hasn’t been much improvement. I think this is the second most frustrating issue that I’ve gotten from Xcode so far. The first one is, of course, Segmentation Fault. I love that one, especially when you have to debug it and you end up reverting all the code that you recently added.

I try to be optimistic, but lately, working with Apple’s tools has been very frustrating. I feel they are investing a lot of effort in pushing Swift but forgetting about some elemental things, like the editor that most people use to develop in Swift.

Anyways! I was building an app that I plan to build in a workshop that I’m giving at Romobos. The preparation of the workshop was going very smoothly until I came across this nice Xcode gift. I haven’t been able to understand the issue so far, and I think I’ll end up building a today extension rather than a watchOS application. If you are interested in the project and would like to try it yourself, you can check out this git repository https://github.com/pepicrft/xcode-modular-apps-workshop and go to the tag 0.6.0.

]]>
<![CDATA[Because sometimes Xcode cannot install your apps, and you have to figure out why.]]>
Random thoughts on a Friday night in Ottawa https://pepicrft.me/blog/2018/01/19/random-reflection 2018-01-19T00:00:00+00:00 2018-01-19T00:00:00+00:00 <![CDATA[

It’s January 19th, and I’m in Canada right now. It’s been a fascinating beginning of the year, starting with my onboarding at Shopify. I had forgotten how it feels to start a new job, with a lot of tools and processes to learn and a lot of people to meet. It requires a lot of energy, and I tend to get overwhelmed, but I’m trying to take it easy this time, step by step. What excites me the most about joining this new company is that I’ll be able to dive deeper into how to make developers productive working on mobile codebases. I’ve been learning a lot about it in the last couple of years, and I love building tools to make developers’ lives easier. Shopify has a team for that, and I’m glad to be part of it. I’m excited about all the stuff that I’ll have the opportunity to learn here.

This post is somehow unique. I couldn’t come up with a name that summarizes what I’m going to write about (because I don’t know), but I just wanted to write down some thoughts that have been in my head for quite some time. Do you know those moments in life when you ask yourself so many questions that they don’t allow you to see any light at all? That’s sort of how I’m feeling. Professionally speaking, I’ve had a very intense career since I became a developer. I taught myself most of the stuff that I know nowadays. That requires being very active: reading a lot of books and tutorials, using social networks very actively (especially Twitter, where you can get the most recent tech news), getting involved in open source projects, and working on your own. Sometimes I was like a horse: I was moving fast, but I didn’t have time to genuinely enjoy the journey, that moment when you can sit back, relax, and celebrate your achievements. My achievements quickly faded away because I was already thinking about the next step in the journey. I think this goes pretty much with my personality, and the people that know me well will most likely agree on that. I’m working a lot to slow down my life, but I’m finding it very hard. One side of me says that I should stop because otherwise I’ll run into trouble soon; the other side tells me that I should continue because that’s the only way to be successful in life. It’s like having a devil and an angel on your shoulders pushing you in opposite directions. That’s my daily dilemma.

I’d love to slow down and have more focus in my life. Moving fast with no focus brings me stress and anxiety. It makes me feel bad about myself and about the people around me. I don’t want that. I want to feel good and enjoy my work and the people around me. It’s hard, isn’t it? I don’t know if you have ever been in that situation, but I struggle to make another version of me. As a consequence, things that I used to like, I don’t enjoy anymore. I used to run a lot, a lot. It was my mindfulness moment of the day. When I ran, I didn’t think about anything; I focused on my breathing, my steps, the things around me. Nowadays, when I go out for a run, I keep thinking about my everyday struggles. They are with me all day long. It’s so exhausting that I don’t like running anymore. It’s not the running; it’s me. I go to the cinema, I spend some time with the family, and the horse Pedro keeps looking straight ahead and moving. I don’t know a better way to describe it, but I think the horse metaphor is very representative.

There are a lot of great resources out there with people talking about similar struggles. We live in a very competitive environment, with an insane amount of information being thrown at us every day. We sometimes push ourselves so hard because we think that the only thing we need in our life is success. We want to have the best job, high recognition, and to be the expert in some given area. We are surrounded by such positiveness that we demand more of ourselves than we should. And to me, it’s not surprising that such a thing happens. You search the Internet, and you find tons of articles titled like “X ways to be successful in Y”. You open Instagram or Twitter, and everyone is so happy and has such a great life that they become a source of inspiration for you. You lose personality; you want to meditate because person X said that meditation is what makes them a happy person, or you want the new thing Y because you saw on Instagram that a lot of people are enjoying it. I’ve been there, I’m still there, and sometimes I have zero time to listen to myself, to hear my feelings, to reflect on my thoughts, to understand what motivates me or what doesn’t. I just let myself be influenced by the competitive and overwhelming environment. Isn’t it sad? I lost some personality.

One thing that I’m currently doing to help me get some focus is decluttering my life. I’m getting rid of all the things that I don’t need and which don’t add anything other than noise. There’s a buzzword for that, minimalism, but it’s not something that has been recently invented. People were minimalists in the past; they had all they needed to have a happy and quiet life. Before capitalism and the Internet revolution, people cared more about each other and themselves (something that we are losing more and more nowadays). Anyway, that’s another interesting discussion topic. Simplifying one’s life is not only about materialistic elements but also non-materialistic ones. I’m getting rid of the gadgets that I don’t need in my life and replacing the expensive ones with cheaper versions (I’m replacing my iPhone with a cheap Xiaomi that is more than enough for my needs). I’m closing the accounts that I don’t use, leaving Slack organizations in which I don’t participate, and reducing the tools that I use, leaving only the ones that are indispensable. I shut down my Instagram account and cleaned up my Facebook one. I’d love to close it as well, but I find the events section very useful, especially when you live abroad. I have a list of all the clothes that I’m going to give away and stuff that I’m going to sell in secondhand Facebook groups.

Getting rid of the stuff that I don’t need is making me feel more relaxed. I don’t have to think about those things anymore; I can open the laptop and have just the few apps that I need, not a bunch of apps with badges trying to catch my attention. I have more time to read, to learn languages, and to learn new technologies, without any distraction at all. It feels great, and above all, I have time to think about myself. Maybe that’s what made me sit down and start writing this post today. Ideally, I’d like my life to be a continuous technology detox, where I can control when it’s time to think about technology. I want to read more, and I don’t mean tweets or Medium articles, I mean proper books. Wake up and, with a relaxed mind, start reading and not feel that the time is passing. Read pages without thinking about any notification coming in or meetings in the calendar waiting for my attention. I think the last time I had that feeling was in high school, when there was no phone and no distractions to think about. Don’t you miss those days as well?

Minimalism is giving me time and space to think selfishly about myself, about this horse that has been moving fast for some months/years. There’s something that comes up every time I stop to reflect on myself, and it’s the impact of the work I’m doing. Open sourcing used to be my means of helping people, giving them some piece of software that they could use to build software with good intentions. It felt good helping other developers, talking to them, and iterating on the projects, but at some point, I came across reality. It’s a competitive industry, where not many people want to share efforts towards making useful open source tools; instead, they individually work on their own. I found out that communities in open source are rarely communities; they are just a bunch of people working with the same programming language, reading the same articles, and watching the same videos, but with different interests in the end. It was hard for me to understand why there was such controversy in the CocoaPods team when another dependency management tool, Carthage, came out. I understand their position much better now. That’s what I meant by communities formed by individuals with different interests, rather than shared interests. That disappointed me a lot. For me, one of the charming things about open source software is sharing efforts with other people and working together towards the same goals. In the competitive society we live in nowadays, it’s becoming impossible, and I can’t believe I’d say this, but I’m not as motivated as I was before to work on open source projects.

I’ve also been there myself. I’ve mistakenly built projects thinking about my own interests: Core Data wrappers, Xcode project parsers. I’ve contributed to making everyone more confused and adding some fragmentation. I feel terrible about it. Diverging interests in open source communities result in fragmentation, and we see it everywhere: Buck & Bazel, Yarn & npm. It’s been a tremendous recent lesson for me.

Another thing that I had some time to think about is the ego. I started writing about it on this blog, but I stopped and reflected on it instead. As I mentioned, I was very active working on open source projects, writing articles, and sharing stuff on Twitter. People used my projects, replied to my posts, asked me questions, and I felt that I was important. That wasn’t me; that was the ego speaking in my name. When my ego felt important, it wanted to feel more important, do more, publish more, and be more active. That’s a shit-ton of energy for the sake of being recognized. Has it ever happened to you? Open source can be good candy for your ego, because you expose your project, and the person who worked on it, publicly to be used and liked. I knew the ego was speaking in my name because things that didn’t use to be vital for me, like the stars on GitHub or the projects using mine, started to be more important. When I decided to work on a new open source project, I started working on the facade rather than the core. I managed to control it, and I’m working hard to change my mindset. Rather than publishing things with the mere intention of being used or liked, because that increases my dopamine levels, I’d rather do it anonymously and participate less in social discussions.

We were told during the Shopify onboarding that sometimes we have to unlearn things to learn new things. That resonated with me, and I felt that I have to unlearn a few things to progress in my career (or at least in the way that I want to):

  • I want to unlearn my individual perception of open source projects and work towards having a collective one.
  • I want to unlearn all the stuff that fed my ego, and learn how to work towards having a great impact anonymously.
  • I want to unlearn the importance of being liked and learn the significance of the work that I do.

And last but not least, there’s another thought that has been on my mind for quite a long time. I’ve been doing Swift/Objective-C for a few years, and even though Swift is open source, I’m not as motivated by it as I used to be. I feel like its scope is limited to an operating system and that it’s constrained by Apple’s interests. When I look at other languages such as Kotlin, Ruby, Go, or JavaScript, I feel that they are more community driven and that there are fewer interests behind them. People experiment with them and try to push them to their limits, like JavaScript on mobile apps with React Native or on desktop with Electron, or Kotlin being transpiled to JavaScript or compiled to run on iOS devices. I see a lot of criticism in the iOS/macOS ecosystem when intruder languages make their way into the closed ecosystem that Apple provides. To be honest, that makes me feel sad. Rather than opening the ecosystem, allowing languages such as Kotlin or JavaScript, or opening APIs so that other companies can provide their own IDEs, they prefer to keep things private.

I’m considering a significant turn in my career, and I’ll most likely focus on JavaScript/Ruby. Ruby because it’s a very mature language, with a mature ecosystem and community, and it’s the primary programming language at Shopify. It’s an excellent opportunity for me to learn from the people that I work with and to work on projects that are not tied to the Apple ecosystem but can be used on many other operating systems. JavaScript for similar reasons, and because I’d love to do some web development with frameworks such as React, and it can even be used to build mobile apps. In the end, we are all software engineers, and focusing on such a closed environment narrows our points of view on the problems around us. I’d like to break that barrier and look at those issues from a broader and more open angle.

]]>
<![CDATA[I sat down after work and thought about some things that have been on my mind for some time. I wrote them in this blog post, which, if I had to summarize it, talks about minimalism, open source, ego, and career paths.]]>
Wrapping up 2017 📦 https://pepicrft.me/blog/2017/12/25/wrapping-up-2017 2017-12-25T00:00:00+00:00 2017-12-25T00:00:00+00:00 <![CDATA[

2017 is almost over. What a year! Like I did last year, I’d like to reflect on what 2017 has been for me and write down the highlights in this post. I don’t have any structure for it other than a list of things that come to my mind while I’m writing:

  • Budapest 🏰 - I lived in this city from January to July. María José, my girlfriend, was living and working there, and I asked the company to let me work remotely. They allowed me to do so, and I moved there. Budapest is a lovely city with a wonderful tech ecosystem growing up. I got to make some friends in the city and even organized a Meetup for runners. What I liked about Hungary is that people preserve values that have been lost in other countries, like appreciation and gratitude. Eventually, María José found a job in Berlin, and we decided to move there. I had fallen in love with Berlin, and I missed it so much that I found it hard to get used to Budapest.

  • ADDC 🎤 - It was the first time in my life that I organized a conference. ADDC took place in Barcelona, and we brought app designers and developers from all over the world to learn and connect. I learned that organizing a conference is not as easy as it might seem; it requires a lot of energy, coordination, and a good team. The first edition of ADDC was very successful, and we are very excited to work on the second edition of the conference, which will take place in July. We got a lot of feedback that we’ll use to make ADDC better. You can check out the conference website.

  • German 🇩🇪 - My German got terribly stuck. Since I moved to Budapest, I hadn’t put much effort into it. When we moved back, I realized that I should continue learning it and invest more energy. In November, María José joined Chatterbug, a company which offers an effective method to learn languages (she is working on the Spanish curriculum). I signed up, and I have gained some discipline, taking lessons almost every day. I hope I have good news about it in next year’s wrapping-up post.

  • Social Networks 👨‍👨‍👦‍👦 - I learned how to use social networks more thoughtfully. I was somewhat addicted to them, and most of my time was spent scrolling through their infinite timelines or stories lists. I don’t like the social interactions that happen on social networks, and I decided to invest that time in something more valuable to me, like reading or meeting people outside the Internet. It was tough because sometimes I felt the urge to check my social profiles, but by disabling push notifications, uninstalling the apps, and using extensions on Google Chrome like News Feed Eradicator, I gained a lot of discipline. Although I still spend some time on them nowadays, it’s nothing compared to the time I spent one year ago.

  • Twitter 🐦 - I changed the way I use Twitter. Rather than keeping it open to see what’s going on at any time and participating in any active discussions, I only check the timeline at the end of the day. When I find something interesting to read, I use Instapaper to read it later. I tweet much less and try to clean up my tweets from time to time. When there’s a hot topic everyone is talking about, I try not to open Twitter. I’m not used to processing so much condensed information, and trying to keep up with everything makes me feel anxious.

  • From music 🎵 to e-commerce 🛒 - As I announced a few days ago, I moved on from SoundCloud, and I will be joining Shopify in January. I grew a lot as an engineer during my time at SoundCloud. It’s a fantastic company with a great culture and a lot of talent to learn from. I became very interested in scalability and contributed to modularizing the codebase to make developers more productive working on new features. I wrote some guidelines in this repository and gave a talk at Mobiconf about how we built features at SoundCloud. This passion for scalability led me to Shopify and its Developer Acceleration team, where I’ll be working on tools and processes to scale the company’s mobile apps. Can’t wait!

  • Open source ✏️ - I’ve been a big fan of the open source philosophy since I started my career. I like sharing what I learn with others and working with other developers to build tools that can be leveraged to create apps. One of the open source projects I’m most proud of is xcproj, a Swift library to read and update Xcode projects. xcproj became part of an open source organization, xcode.swift, which I’m part of and where I got to know amazing people with whom I share my daily struggles working with Xcode. We all help each other and build open source tools in Swift. In 2018 I’ll continue contributing to xcode.swift, helping developers that want to use or participate in any of the open source tools that we are building.

  • Homo Deus 🐒 - It’s my favorite book of 2017. If you haven’t read it yet, I’d recommend it to you. Before reading Homo Deus I read another book by the same author, Sapiens. It was fascinating to know how our history influenced how humans are nowadays. Homo Deus is the continuation of that book and talks about how humans are using technology towards becoming superhumans.

  • React ⭐️ - I’m not married to any technology or programming language, so when there’s something that I find interesting to learn, I just read about it. By understanding other platforms, technologies, or languages, you gain different points of view when you need to solve problems. Moreover, you know the good and bad things about each of them and have enough grounds to participate in discussions. React was one of these technologies that I wanted to learn, and I ended up learning the basics. Together with React, I also learned a bit about TypeScript (I had missed having types).

  • Running 🏃 - I used to run more a few years ago, and this year I miserably failed at trying to get in shape again. I didn’t run as much as I wanted, and I gained some extra kilos as a result. I participated in Berlin’s marathon, but it was the hardest one for me since I hadn’t trained enough.

  • WWDC and my first time in the US 🇺🇸 - In June I visited the USA for the first time. I attended Apple’s developer event, WWDC, in San José. I took the opportunity to visit the famous San Francisco, took a photo with the bridge in the background, visited Facebook’s office, and saw Apple’s UFO-like campus. What’s more, we drove to Las Vegas and flew over the Grand Canyon in a helicopter. The Grand Canyon’s views from the helicopter were stunning!

  • Coffee ☕️ - I reduced my coffee intake. It was tough because I went from drinking around 5 cups every day to not drinking any. The first two days I had extreme headaches, but after that, I had much better and deeper sleep.

  • Other 😜

    • I learned how to think slower and be less hot-headed.
    • I learned how to not give a fuck about what people say about me.
    • I lost the fear to share my opinion even if it goes against others’.

Goals for 2018

  • Learn German and try to get to a B1 level.
  • Lose the extra kilos that I gained and commit to a running routine.
  • Read a bit more than what I did this year (preferably non-techie books).
  • Learn Kotlin and how it can be used to build iOS apps.
  • Organize the second edition of ADDC.
  • Visit a new country (Iceland and Japan are on my mind).
  • Write an open book.
]]>
<![CDATA[A retrospective on what 2017 has been]]>
Linting your Xcode projects with xclint https://pepicrft.me/blog/2017/11/02/xclint 2017-11-02T00:00:00+00:00 2017-11-02T00:00:00+00:00 <![CDATA[

In this post I’ll talk about a tool that I have recently released, xclint, which validates the structure of your Xcode projects, offering insightful warnings about things that might be wrong in the project structure.

Xcode projects are hard to work with, especially when there is a team behind them using git. It’s very easy to mess things up and end up with a project which Xcode might be able to read and compile, but that internally is not in a good state:

  • There are entries that are duplicated (a common issue when solving git conflicts in the project file).
  • There are elements that refer to others that don’t exist anymore.
  • There are attributes that are missing.
  • Some file references point to files that don’t exist anymore.

Keeping Xcode projects in a healthy state is very important, and unfortunately, there was no tool that helped you validate that. CocoaPods, for instance, throws a warning when you run pod install if there are multiple elements with the same identifier.

Since xcproj opened an API to read any Xcode project, I decided to leverage it to build a command line tool, xclint, that we could use to validate the state of our Xcode projects.

Install & Usage

You can install the tool using Homebrew:

brew tap xcodeswift/xclint git@github.com:xcodeswift/xclint.git
brew install xclint

Or run it using Mint 🌱:

mint install xcodeswift/xclint

Its usage is very simple, you just need to pass one parameter, which is the Xcode project that you want to validate:

xclint MyProject.xcodeproj

The screenshot below shows an example of the tool output when there are validation errors:

xclint output when there are warnings

What’s next

The tool currently supports validation of missing references and attributes, but there are things that we’d like to support in future versions:

  • Detect multiple elements with the same reference.
  • Spot files that are referred from the project that are missing.

Moreover, we plan to support CocoaPods, so that you could install the tool using CocoaPods and use the binary from a project build phase.

Feedback

It’s the first version of the tool, 0.1.0, so it’s very likely that you’ll encounter some errors. If so, don’t hesitate to open issues on GitHub with all sorts of issues that you found using the tool. Moreover, if you have any ideas of things that we could validate or features that we could add to the tool, feel free to open issues or pull requests with your proposals. You’re very welcome!

]]>
<![CDATA[In this post I talk about a tool I've been working on that allows you to check the state of your Xcode projects, finding missing references and duplicated files.]]>
Consistent vs convenient https://pepicrft.me/blog/2017/10/30/consistent-vs-convenient 2017-10-30T00:00:00+00:00 2017-10-30T00:00:00+00:00 <![CDATA[

Have you ever used programming paradigms like functional or reactive programming? Have you tried Redux, the revolutionary approach to modeling how state is contained and flows in your app? I find it great that companies and open source organizations try to solve issues that we developers have to face on a daily basis by introducing new concepts to the industry. We’ll see more of those coming, and what’s cool nowadays won’t be in a matter of months or years. Do you think reactive programming is the coolest thing ever? Let’s see in a few years whether it really was, or whether there was still room for even cooler alternatives.

If there’s something I’ve learned in my short career as a software engineer, it’s that there’s no perfect paradigm, pattern, or architecture, and that it all depends on our problems or particular needs. Something that is convenient for other projects and teams might not be convenient for us. Have you worked on a codebase where someone introduced a new concept or library, like a reactive programming library, and you ended up very concerned when that thing spread all over the codebase? There are libraries that we introduce that stay isolated, and that we can easily abstract behind an interface, but there are others that, as soon as we use them, spread very quickly across the codebase. They are like viruses: as soon as you open the door to them, they have everything they need to spread around.

I have nothing against the paradigms mentioned above. I think each paradigm that changes the way we code should be used consciously, and we should regularly evaluate whether the choice we made works out for us and is not becoming a burden that we need to carry.

I wondered why those elements end up behaving like viruses. There must be something, in the team or the project, that helps them spread that fast. I think the reason is that they present a dilemma to developers who need to work with code that already includes the virus. They need to decide between the consistent solution and the convenient one, and most of the time we lean towards the consistent solution. You’ll understand this better with a couple of examples.


Example 1 - Reactive programming

Let’s say there’s a component A whose interface is mostly implemented with reactive observables (even if the methods are just simple getters that return values synchronously). You need to extend that interface to expose a property whose value you need from outside. Will you expose an observable, or just a getter? For your particular use case it may make more sense to go with a simple getter, since you don’t need any reactive machinery like scheduling on threads or bindings. But you’ll most likely be influenced by the existing API and do it with observables, since you want to be consistent with the code that is already there. Little by little, what was an isolated interface becomes powerful enough to force the usage of observables far beyond its closest neighbors.
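The dilemma can be sketched in a few lines of Swift. The types here are hypothetical: `Observable` stands in for a reactive type (an RxSwift-like observable, for example) and is kept minimal so the example is self-contained.

```swift
// Minimal stand-in for a reactive observable type.
struct Observable<T> {
    let produce: () -> T
    func subscribe(_ handler: (T) -> Void) { handler(produce()) }
}

final class ComponentA {
    private let name = "component-a"

    // The consistent choice: wrap a synchronous value in an observable,
    // because the rest of the interface already looks like this.
    var nameObservable: Observable<String> {
        return Observable(produce: { self.name })
    }

    // The convenient choice: a plain getter is all the caller needs.
    var nameValue: String {
        return name
    }
}
```

Both expose the same value, but every consumer of `nameObservable` now has to subscribe, which pulls the reactive library one step further into the codebase.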

Example 2 - Redux store

Redux was introduced in the codebase, and we’ve already moved a lot of state to the store: the user’s session, the navigation state, the settings, the search results presented in the search view. Now we need to work on a new view whose simple state we need to keep in memory. Again, it might be more convenient to keep it in a view model next to the view, but since some views already do it in Redux, we do the same, and we add more complexity to the store’s state.
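A sketch of this second dilemma, again with hypothetical types: the global state keeps absorbing concerns that could have stayed local.

```swift
// The store's state already aggregates unrelated concerns.
struct AppState {
    var sessionToken: String? = nil
    var searchResults: [String] = []
    // The consistent choice: the new view's simple state lands here too,
    // growing the global state a little more.
    var newViewIsExpanded: Bool = false
}

// The convenient choice: a small view model that owns the state
// right next to the view that uses it.
final class NewViewModel {
    var isExpanded: Bool = false
}
```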


These are two examples to illustrate what I’m talking about, but you can think of any other element you might introduce in a codebase: VIPER, declarative programming, MVVM, Realm. When we developers have to choose between consistency and convenience, we usually lean toward consistency. By doing that, we push the new element beyond its scope and turn it into a burden the team has to carry. Usually, by the time we realize how heavy the burden is, it’s too late to step back.

I’m a bit skeptical when it comes to introducing such elements in codebases that many people work on. It requires keeping under control the hype those new technologies, libraries, and paradigms come with. In my experience, the closer you stay to the language and the system frameworks, the more familiar your team is with the codebase. They feel more confident when they need to make changes or propose improvements, and they don’t have to go through any cumbersome dilemma that the codebase exposes them to. If you are not sure whether a new thing could potentially be a “virus”, I recommend talking to other teams that have experience using it. Talk to as many teams as possible before making the decision, because you won’t find this kind of information on documentation websites, in READMEs, or even in tech talks.

]]>
<![CDATA[I analyze in this post why some decisions that we make in our projects might turn into viruses that spread all over the codebase.]]>
Modular Xcode projects https://pepicrft.me/blog/2017/09/29/modular-xcode-projects 2017-09-29T00:00:00+00:00 2017-09-29T00:00:00+00:00 <![CDATA[

Building modular projects with Xcode requires a good understanding of the project structure and its foundational concepts. The project structure is something we don’t usually care much about until we start growing the project by adding more dependencies. Even then, most projects use CocoaPods, which does the setup for us, or Carthage, which doesn’t, but makes it as easy as adding a couple of changes to your project’s build phases. When the configuration becomes more complicated, it’s very likely we get confused, because we never fully grasped all the elements involved in Xcode projects. I usually get asked questions like:

  • Can I have Carthage, CocoaPods, and my own dependencies?
  • I added my dependency, but the app crashes when it opens in the simulator.
  • Why do I have to embed the frameworks in some targets only?
  • Should my framework be static, or dynamic?

In this blog post, I’d like to guide you through the elements of Xcode projects and the principles for modularizing your setup by leveraging them. I hope that the next time you face any of those issues, you don’t need to spend a lot of time on Stack Overflow trying to find a non-random answer.

Elements ⚒

Target

Projects are made of smaller units called targets. Targets include the configuration necessary to build platform products such as frameworks, libraries, apps, testing bundles, and extensions. You can see all the available types of targets here. Targets can depend on each other; when a target depends on another, the dependency is built first so that its product can be used by the dependent target. The target configuration is defined in the following places:

  • Info.plist file: This file contains product-specific settings like the version, the name of the app, or the type of app. You can read more about this file here
  • Entitlements: They specify the application’s capabilities. If the capabilities in the entitlements file don’t match the ones in the developer portal, the signing process fails.
  • Build settings: As the name says, these are the settings necessary to build the target. They can be defined in the target itself or in an xcconfig file, and they are resolved by precedence: settings defined on the target override those from its xcconfig file (if any), which in turn override the project configuration.
  • Build phases: The build pipeline is defined using build phases. When a target gets created, it contains the default build phases (compiling source code, copying resources), but you can add as many as you want. As an example, there’s a shell script phase that lets you do some scripting as part of the build process. Those scripts have access to the build variables that Xcode exposes.

Due to the composability and reusability of .xcconfig files, it’s highly recommended to define your build settings in them. Changes in the target configuration, such as changes in the build settings or build phases, are reflected in the .pbxproj file, a custom plist representation of your project that is a common source of conflicts when working with Git. The easiest way to update the configuration in the pbxproj file is with Xcode, which knows how to read and write those files. If for any reason you want to update them without Xcode, you can use tools like Xcodeproj for Ruby or Swift
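As a sketch of that recommendation, build settings can be layered across xcconfig files that the target configurations then point to. The file names and values below are illustrative, not from a real project:

```
// Base.xcconfig: settings shared by every target
SWIFT_VERSION = 4.0
IPHONEOS_DEPLOYMENT_TARGET = 10.0

// App-Debug.xcconfig: assigned to the app target's Debug configuration
#include "Base.xcconfig"
OTHER_SWIFT_FLAGS = $(inherited) -DDEBUG
```

Any setting defined directly on the target in Xcode would still override the values coming from these files.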

I hope Apple makes the definition of projects more accessible, either with a different definition syntax or by opening up APIs. This lack of accessibility led me to write a tool like xcodeproj to read and update your Xcode projects from Swift.

The output of building a target is either a bundle, such as an app, extension, or test bundle, that is loaded on the platform it was built for, or an intermediate product, such as a library or framework, that encapsulates code and resources to be used by other targets. The products generated by building a target can be seen in the Products group of your project. A red color in these file references indicates that there’s no product, most likely because you haven’t built the target yet.

Scheme

Another element of Xcode projects is schemes. A project can have multiple schemes, and they can be shared and included as part of the project for everyone working on it. Schemes specify the configuration for each of the available actions in Xcode: run, test, profile, analyze, and archive. We can specify which targets are built, in which order, and for which actions. We can also define the tests that run when we test the scheme and the configuration used for each of the actions.

It’s worth mentioning a few things about a scheme’s build configuration. When we specify which targets are built for which action, we don’t need to include our target’s dependencies in the following two cases:

  • If the dependency is part of the same project and is already defined in the Target Dependencies build phase.
  • Find implicit dependencies is enabled.

By enabling the second flag, the build process identifies the dependencies of the targets you are building and builds them first. Moreover, if you enable Parallelize build, you’ll save some time, since targets that don’t depend on each other will be built in parallel.

A bad build configuration might eventually lead to errors building your targets, such as Framework XXX not found. If you ever encounter one of those, check whether all your target’s dependencies are built when you build the scheme.

The scheme definition is stored in an XML file under Project.xcodeproj/xcshareddata/xcschemes/. Since the format is plain XML, it can easily be modified with any XML editor.

Workspace

Multiple projects can be grouped in a workspace. When projects are added to a workspace:

  • Their schemes are listed in the workspace list of schemes.
  • Projects can depend on each other as we’ll see later.

As with schemes, workspaces are plain xml files that can be easily modified.

Dependencies 🌱

Targets can have dependencies. Dependencies are frameworks or libraries that our targets link against and that contain source code and resources to be shared with our target. Those dependencies can be linked statically or dynamically:

  • Static linking:
    • The linking happens when the app gets compiled.
    • The object code from the library gets included in the application binary (resulting in a larger application binary).
    • Libraries use the file extension “.a”, which comes from the (ar)chive file type.
    • If the same library gets linked more than once, the linker fails because of duplicated symbols.
  • Dynamic linking:
    • Modules are loaded at launch or runtime of an application.
    • Application and extension targets can share the same dynamic library (only copied once).

The difference between a framework and a library (linked statically or dynamically) is that a framework can contain multiple versions in the same bundle, as well as additional assets that can be used by the code.

A library is a .a file, which comes from the (ar)chive file type. A single archive file can only support a single architecture. If more than one architecture needs to be packaged, they can be bundled in a fat Mach-O binary, a simple container format that can house multiple files for different architectures. To generate a fat Mach-O binary, modify an existing one, or extract a library for a specific architecture, we can use the command line tool lipo.
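For example, these are some common lipo invocations (the file names are illustrative):

```
# Print the architectures contained in a binary
lipo -info MyLibrary.a

# Combine two single-architecture libraries into a fat binary
lipo -create MyLibrary-arm64.a MyLibrary-x86_64.a -output MyLibrary.a

# Extract one architecture from a fat binary
lipo MyLibrary.a -thin arm64 -output MyLibrary-arm64.a
```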

You can read more about frameworks/libraries and static/dynamic linking at the following link.

Applications can depend on precompiled and non-compiled dependencies.

Precompiled dependencies

Carthage is a good example of this kind of dependency. Some SDKs, like Firebase, are also distributed as precompiled dependencies. When precompiled dependencies are libraries, they include the .a library and the public headers that represent its public interface. When they are frameworks, they are distributed as a .framework that contains the library and its resources.


When our app depends on precompiled dependencies, it’s important that each dependency is built for the architectures we are building our app for. If any architecture is missing, we’ll get errors compiling our app. As we’ll see later, Carthage uses lipo to generate frameworks that contain the architectures necessary for the simulator and the device, stripping the ones that aren’t needed based on the build configuration.

Non-compiled dependencies

CocoaPods is a good example here. Dependencies are defined as targets that compile the frameworks/libraries we link against. There are multiple ways to specify in Xcode that our target depends on another target’s products:

  • If the targets are in the same project: You can define the dependency in the Target Dependencies build phase. Xcode will automatically build that dependency first so its products are available to the target we are building.
  • If the targets are in different projects: We can define the dependencies between the targets using schemes. In the scheme’s Build section, we can define which targets are built and in which order (based on the dependencies between them). Xcode can also guess the dependencies if you enable the flag Find implicit dependencies; it does so by understanding what the target you are building depends on and which target builds that product. If there is any misconfiguration in the scheme, you might get an error like xxxx.framework not found. You might also get that error if there are circular dependencies between the frameworks that cannot be resolved.

A note on dependencies and configurations: the configurations of all the dependencies should match. If you are building your app with an Alpha configuration and any of the dependencies doesn’t have that configuration, Xcode silently skips building that dependency (without throwing any error at that point), and the compilation later fails with a framework not found error.

Linking with Xcode

Targets can link against other targets’ outputs, and we can define the dependencies using Xcode tools like schemes or target dependencies, but… how do we glue the dependencies together, defining the links between them?

1. Linking (static or dynamic) libraries and frameworks

We can define the linking via:

  • A build phase: Among all the available build phases, there’s one for defining the linking, Link Binary With Libraries. There you can add your target’s dependencies, which can be part of the same project or come from another project in the same workspace. Xcode uses this build phase to figure out your target’s dependencies when the target is built.
  • Compiler build settings: A build phase turns that list into compiler flags underneath. That’s something we can also do ourselves by defining some build settings:
    • FRAMEWORK_SEARCH_PATHS: We define in this setting the paths where the compiler can find the frameworks that we are linking against.
    • LIBRARY_SEARCH_PATHS: Similarly, we specify in this setting the paths where the compiler can find the libraries that we are linking against.
    • OTHER_LDFLAGS (Other Linker Flags): We can specify the libraries we are linking against by using the -l argument: -l"1PasswordExtension" -l"Adjust". If we are linking against a framework, we use the -framework argument instead: -framework "GoogleSignIn" -framework "HockeySDK". If we try to link against a framework/library that cannot be found in the defined paths, the compilation will fail.
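For instance, the three settings above could look like this in an xcconfig file (the paths and framework names are illustrative, not taken from a real project):

```
FRAMEWORK_SEARCH_PATHS = $(inherited) "$(SRCROOT)/Carthage/Build/iOS"
LIBRARY_SEARCH_PATHS = $(inherited) "$(SRCROOT)/Vendor/Libraries"
OTHER_LDFLAGS = $(inherited) -l"Adjust" -framework "HockeySDK"
```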

2. Exposing library headers

A library’s headers need to be exposed to the target that depends on it. To do that, there’s a build setting, HEADER_SEARCH_PATHS, where we define the paths where the headers of the dependencies can be found. If we link a library but forget to expose its headers, the compilation will fail because the headers can’t be found.

3. Embedding frameworks into the products

App targets that link against dynamic frameworks need to copy those dependencies into the application bundle. This process is known as framework embedding. To do that, we can use an Xcode Copy Files phase, copying the frameworks into the Frameworks directory. Not only direct dependencies should be embedded, but also the dependencies of our direct dependencies. If we miss any framework, we’ll get an error in the simulator when we try to open the app.


Case studies 👨‍💻

In this section, we’ll analyze how tools like CocoaPods and Carthage leverage the concepts introduced above to manage your project dependencies.

CocoaPods

CocoaPods resolves your project’s dependencies and integrates them into your project. It’s been heavily criticized for modifying your project settings, but it has improved a lot since the early versions, and it now does this in a way that requires few changes in your project. What does it do under the hood?

  • It creates a project (Pods.xcodeproj) that contains all the dependencies as targets. Each of those targets compiles the dependency that needs to be linked from the app.
  • It adds an extra umbrella target that depends on the other targets and is used to trigger their compilation. This minimizes the changes required in your project: by linking your app against that single target, Xcode will compile all the dependencies first, and then your app.
  • It creates a workspace with your project and the Pods project.
  • Frameworks/libraries are linked using .xcconfig files that are added to your project group and set as the configurations of your project’s targets.
  • Embedding is done using a build phase script. Similarly, all the frameworks’ resources are copied using another build phase.

The image below illustrates what the setup looks like:


Carthage

Carthage’s approach is pretty different. Besides the resolution of dependencies, which in this case is decentralized, the tool generates precompiled frameworks that the developer needs to link and embed from the app:

  • Carthage resolves the dependencies and compiles them into dynamic frameworks that you can link from the app, along with their symbols for debugging purposes. These are fat frameworks, supporting both the simulator and device architectures.
  • The frameworks are manually linked by the user using the Link Binary With Libraries build phase.
  • The embedding is done using a script that Carthage provides. The script strips those architectures that are not necessary for the destination that we are building for.
  • The same script copies the symbols to the proper folder to make them debuggable.


I hope you found this post insightful and that it answered any doubts you might have had. I’m not an expert on Xcode, so you should expect some mistakes in the post; if you find any, please don’t hesitate to report them to me. Xcode has good and bad things, like any other IDE, but a better understanding of its elements will help you get the best out of it. Don’t be afraid of changing the setup and playing with schemes, targets, and configurations. If you find yourself lost, I left a bunch of references in the section below. I also recommend using CocoaPods and Carthage as references and learning from them, because they have spent a lot of time getting to know Xcode in order to provide you with excellent tools for your projects.

Please drop me a line at [email protected] with any question, concern, or doubt you have.

References

]]>
<![CDATA[This post presents some elementary concepts of how Xcode projects are structured, and introduces a structural approach to build modular Xcode apps.]]>
Providing mocks and testing data from your frameworks. https://pepicrft.me/blog/2017/09/13/frameworks-testing 2017-09-13T00:00:00+00:00 2017-09-13T00:00:00+00:00 <![CDATA[

If you build your apps in a modular manner using Swift, you have probably been in the situation where a ModuleX defines some mocks or testing data in its tests target, but they cannot be shared with other tests targets, playgrounds, or example apps (essentially because they cannot import a tests target). With Objective-C this wasn’t an issue at all, because we could mock the interface of our dependencies at runtime with just one line of code. In Swift, defining mocks or testing data requires some manual work (unless you have some code generation in place) that we don’t want to do more than once. Listed below are some scenarios where you face this issue:

  • Playground of ModuleX wants to access a Track.testData() defined in ModuleXTests.
  • Example app of ModuleX wants to access a Track.testData() defined in ModuleXTests.
  • Some tests in ModuleY, which depends on ModuleX, want to access MockClient defined in ModuleXTests.

Module stands for Library or Framework, depending on your project setup.

We can overcome this issue by adding another module, ModuleXTesting, that depends on ModuleX and exposes your module’s testing elements. Since we want to use these components from playgrounds and example apps, it’s important that ModuleXTesting doesn’t depend on the XCTest framework. The snippet below shows two examples: one uses ModuleXTesting to define a mock for a protocol, and the other extends an entity to provide testing data.

// ModuleX - MyProtocol.swift
public protocol MyProtocol {
  func sync() throws
}
// ModuleX - Entity.swift
public struct Entity {
    public let name: String
    public init(name: String) {
        self.name = name
    }
}

// ModuleXTesting - MockMyProtocol.swift
import ModuleX

// Public so it can be used from other modules' tests, playgrounds, and example apps.
public final class MockMyProtocol: MyProtocol {
    public var syncCount: UInt = 0
    public var syncStub: Error?
    public init() {}
    // Marked throws to match the protocol, since it rethrows the stubbed error.
    public func sync() throws {
        syncCount += 1
        if let syncStub = syncStub { throw syncStub }
    }
}
// ModuleXTesting - Entity+TestData.swift
import ModuleX

extension Entity {
    public static func testData() -> Entity {
        // String.random is assumed to be a helper defined in your testing code;
        // it's not part of the standard library.
        return Entity(name: String.random)
    }
}

By applying this little tip to your projects, you’ll have more reusable testing components, and you’ll save a lot of time when you need to write tests. The only thing you need to do is define your testing elements in a new module that playgrounds, example apps, and other modules have access to.

Note: Defining testing data or mocks in a new module applies only to elements with public access. Those are the ones that can be accessed from outside the framework that contains them.

I hope you find the tip useful. If you have gone through this challenge before, I’d love to know how you overcame it in your projects.

]]>
<![CDATA[This post introduces an approach to share testing data and mocks from your frameworks to other frameworks that might need them for testing purposes.]]>
Conditionally embed your dynamic frameworks https://pepicrft.me/blog/2017/09/13/xcodembed 2017-09-13T00:00:00+00:00 2017-09-13T00:00:00+00:00 <![CDATA[

As part of dynamically linking frameworks in your Xcode apps, frameworks need to be copied into your app’s Frameworks folder. There are multiple ways to do so:

  1. Add a new Copy Files Build Phase, selecting Frameworks as the directory where you want the frameworks to be copied.
  2. Running a script that automates the copy step for you.

The most popular dependency management tools in the community use the second one. CocoaPods, for example, creates a new build phase called [CP] Embed Pods Frameworks containing all the frameworks that need to be copied. If you dive into the script, you’ll see that CocoaPods doesn’t use the input and output files (although Xcode exposes them as environment variables). Declaring your frameworks as inputs and outputs allows Xcode to determine whether the copy script has to be executed based on changes in those files. Similarly, Carthage asks you to add an extra build phase that executes its script for copying the frameworks it builds under Carthage/Build.

What if you want to embed a framework only when a given condition is satisfied? For example, when you compile the project for Debug, you might want to link your app against a framework that is only used for debugging purposes. CocoaPods handles this, but if you are not using CocoaPods, it’s not possible without a bit of scripting.

I found myself in that situation, and I was torn between modifying the CocoaPods bash script and coming up with something in Swift that other developers could easily install and use in their projects. I opted for the latter, and I added a new command to xcode.

You can easily install the tool using Homebrew with the following command:

brew tap swift-xcode/xcodembed git@github.com:swift-xcode/xcode.git
brew install xcode

A side note on xcode

xcode is a command line tool built in Swift that offers tasks to facilitate working with your Xcode projects. The first task supported by the tool is embedding frameworks, but more are about to come, like cleaning build settings or linting your Xcode projects.

Embedding your frameworks

If you have used Carthage before in your projects, you might already be familiar with the process. You’ll need an Xcode build phase in your project that runs the command, passing the frameworks that will be copied. The example below shows what your build phase should look like:


An example of the Xcode build phase that uses the command to embed the frameworks

  1. We run the command xcode frameworks embed using bash.
  2. We specify as input files all the frameworks that will be copied. The paths can be absolute or relative to the project directory. Remember that you can use any of the available Xcode build variables, like $(SRCROOT).
  3. Similarly, we specify where those files will be copied. The path should be $(BUILT_PRODUCTS_DIR)/$(FRAMEWORKS_FOLDER_PATH)/MyFramework.framework, where MyFramework.framework is the name of the framework.

Note: Input and output files must be paired. In other words, output file x’s path determines where input file x will be copied.
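Putting the pieces together, the build phase could look like this (the framework name and paths are illustrative):

```
# Run Script build phase, Shell: /bin/sh
xcode frameworks embed

# Input Files:
#   $(SRCROOT)/Carthage/Build/iOS/MyFramework.framework
# Output Files:
#   $(BUILT_PRODUCTS_DIR)/$(FRAMEWORKS_FOLDER_PATH)/MyFramework.framework
```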

Besides copying the frameworks, the tool also copies your frameworks’ symbols and bcsymbolmap files, stripping the architectures that are not necessary. By default, the command embeds the frameworks for all configurations. If you’d like to do it only for some configurations, just pass a parameter to the command:

xcode frameworks embed -config Debug,Release
xcode frameworks embed --config Debug

Out of curiosity: Xcode uses the input and output files to determine whether the script needs to be executed. For example, if an output file is missing, or any of the input files changed, Xcode will run the script the next time you build your project.

Big thanks

To CocoaPods & Carthage for diving into this issue in the past and coming up with their own solutions to this problem. Both were a good reference for building this command for xcode.

References

I hope you find the tool useful. Don’t hesitate to open any issue or PR with fixes, ideas, comments!

]]>
<![CDATA[A command line tool written in Swift for copying the frameworks from your project to the output frameworks directory.]]>
Little tweak to be more productive writing XCTest tests https://pepicrft.me/blog/2017/08/21/tweak-xctest 2017-08-21T00:00:00+00:00 2017-08-21T00:00:00+00:00 <![CDATA[

Are you using Quick or Specta for your XCTest unit tests? They provide a nice DSL to define your tests in a more descriptive way using the keywords “describe”, “context”, and “it”. Although it makes your tests more readable, it breaks the integration of XCTest with Xcode. The example below shows a test written with Specta:

An example of a test using Specta

At SoundCloud, we found the process of executing a single unit test from Xcode a bit cumbersome when using one of those DSLs, especially if your tests target contains a lot of tests. With plain XCTest, Xcode automatically shows a play button next to your tests that you can click to execute a particular test. With a DSL, there’s no play button next to your tests. The only way to manually execute one is either using the DSL’s exclusive keywords or executing the tests once to see all the available tests in the Test Navigator.

The example below shows a plain XCTest and the button to run that individual test:

An example of a test using plain XCTest

Weighing readability against productivity, we decided to go for productivity, coming up with guidelines to make plain XCTest tests very readable and leveraging the IDE to be more productive when writing and executing unit tests.

Since then, running and debugging our tests has been much quicker and more pleasant. What about you? Do you use any DSL for your tests?

Happy testing! :tada:

]]>
<![CDATA[Because readability might compromise productivity.]]>
Introducing xcodeproj https://pepicrft.me/blog/2017/07/31/introducing-xcodeproj 2017-07-31T00:00:00+00:00 2017-07-31T00:00:00+00:00 <![CDATA[

Today I’m pleased to announce a new open source project I’ve been working on for the last few months: xcodeproj. xcodeproj is a library written in Swift that allows you to interact with your Xcode projects from Swift. It provides a foundational API to build your scripts and tools on top of. It’s entirely written in Swift, documented, and ready to be used. In this post, I’ll explain the motivation behind xcodeproj, show you the steps to get started with the library, and give you some hints about ideas I have for leveraging it.

Motivation

This year, I’ve been very passionate about leveraging modularization to overcome scalability issues in mobile projects and teams. For every new module I created, I always had to go through the same manual steps: create the project, set the config, link dependencies, update the schemes… It was a very repetitive process, and it was easy to forget one of the steps and end up with inconsistencies in the setup. I tried automating it with a script that cloned a template Xcode project and modified some values in it. Although that solved the problem, it did so only partially. It wasn’t flexible at all, since it was hard to extend the template (you had to update the original, or create copies of it according to your requirements).

At some point I thought, what if there was a way to specify the project that you want, and there was a tool that would generate it for you? Something like: “I want a project, with an iOS app, an iMessage extension, and a framework to share some code between them”.

The closest tool I found was xcake. It does exactly what I was thinking of. It’s written in Ruby, and it uses a gem from the CocoaPods team, xcodeproj, that allows you to modify your Xcode projects and workspaces. Although I’ve done some Ruby before and know the APIs a little, I’m not very familiar with them, and I wanted to experiment with Swift. I thought it would be a good idea to come up with something written in Swift that other Swift developers could use to build their own tools on top of a foundational API. So I started writing the Swift version of xcodeproj.

xcodeproj is a Swift library that provides components for reading, modifying, and writing Xcode projects and workspaces. In other words, it opens up a new API for interacting with your Xcode projects. If you wonder what you could use xcodeproj for, here are some ideas:

  • Finding duplicate files.
  • Checking if the groups hierarchy matches the system one.
  • Detecting which targets have to be run/tested as a result of some files being modified.
  • Generating projects from a specification file.

There are a bunch of opportunities, and I’m eager to see what developers build using this new API.

How to use it

If you are very excited and can’t wait to try it out, these are the steps that you can follow to start using xcodeproj within your projects.

First of all you have to add the dependency. You can do it by specifying the dependency in your Package.swift

gist:pepibumur/b0d53f59b471047abb7a4275008964d6

If you are using Marathon, you can update your Marathonfile to specify the dependency

gist:pepibumur/f069ea32f1685fa85680608670d9ed14

Once done, you can use it by just importing xcodeproj. Here you have some examples of things that you can do with xcodeproj:

gist:pepibumur/649fa8fe22f27d176ddf78c5e524e536

What’s next

While I was working on xcodeproj I got a few ideas about how to use xcodeproj. Here are some of them.

  • Generation of Xcode projects. That was my original motivation but @yonaskolb was faster than me and started building XcodeGen. Instead of starting another project I think it makes more sense to support the project contribute with it.
  • Project linting: Have you been in that situation when the compiler tells you about duplicated files, or when a file is missing but it’s still your project group (because those things happen when you screw it up solving your project conflicts). I’d like to provide linting for projects that identify those problems in your projects, alert you about them and offer the option to fix them automatically.
  • TDD: I was fascinated when I saw sbt, the build tool for Scala. When you use it in your projects, it has a mode that detects changes in files and runs only the tests impacted by those changes. I’d like to port that idea to the Xcode build system and do something similar. That would allow developers to iterate faster with confidence.

Thanks

This project wouldn’t have been possible without all the resources and open source projects that I’ve listed in the sections below. I’d like to thank @yonaskolb, the first official user of xcodeproj, who is leveraging it to build XcodeGen, a tool for generating Xcode projects from a specification file. He’s contributed a lot to the library. Also thanks to @saky and @sergigram for reviewing the post.


References

]]>
<![CDATA[Read, update and write your Xcode projects from Swift]]>
Moving back to Berlin https://pepicrft.me/blog/2017/07/18/moving-back-to-berlin 2017-07-18T00:00:00+00:00 2017-07-18T00:00:00+00:00 <![CDATA[

In January 2017 I moved to Budapest. My girlfriend was living there, and the distance was very hard. I had been in Berlin for approximately two years, and although I loved the city, I thought that moving to Budapest was the right call to make. I liked, and still like, my company, SoundCloud, so I wanted to continue working for it. I really appreciate the opportunity they gave me to work remotely, supporting the team and the project from there. I was very excited: new working setup, new city, new lifestyle.

Working remotely is great if you are a remote person. I’m not.

The first thing that I noticed during that period is that I’m not a remote person (thanks, Maria, for helping me notice it). I had read a lot about remote working and how cool it is, and I started the remote setup full of appealing expectations. Unfortunately, none of them turned out as expected, and I went through a very tough period. I missed hanging out with my colleagues, grabbing a coffee or a beer after work, brainstorming on the whiteboards at the office. I felt that I had lost what I liked the most about working from an office: the people. I tried to prove to myself that what wasn’t working wasn’t me but the setup, and that working from a coworking space would help with my bad feelings, but it turned out that it didn’t. At the same time, I was hiding all those feelings because I didn’t want Maria to see that moving to Budapest was affecting me so badly work-wise. But she is smart, and she noticed it from the first day. I still remember her describing me as a “non-remote person”. She knows me very well!

Budapest is a charming city, full of stunning views to enjoy and thermal baths to relax in after an intense day. However, I found it very difficult to make friends in the city. Compared to Berlin, where you find a lot of people relocating and needing to meet new people, Budapest was very hard in that sense. It makes total sense: the majority of the population is local, and they have their friend circles. The same thing would happen if you moved to Spain; people would have their friends and use Spanish all the time. This difficulty making friends, together with the work situation, wasn’t a good experience for me. My team at SoundCloud and Maria were very supportive all the time. They were both aware of my feelings and tried to help me as much as they could.

Maria, who had been searching for a job to move on from Budapest, got a job offer in Berlin starting in August. I couldn’t believe it! Both of us were very excited. I would come back to the office, and she’d start working in the field she had studied, translation and interpreting.

Sometimes in your life there are calls that you have to make, putting your emotions first. I don’t regret having moved there, even though the decision came with some uncertainty. Eventually, those decisions pay off, and your life finds its way. I learned not to keep those feelings to myself, as I did with Maria, because the people around you who know about your feelings are the ones who might do their best to help you.

On Sunday we are flying to Berlin! New adventures and experiences are ahead, and I can’t wait for them. Ping me if you are around or plan to visit the city; I’d love to meet up.

]]>
<![CDATA[A brief retrospective on what my life has been in the last few months and my thoughts on my move to Berlin.]]>
Composable UIs https://pepicrft.me/blog/2017/03/02/composable-uis 2017-03-02T00:00:00+00:00 2017-03-02T00:00:00+00:00 <![CDATA[

One important aspect that we, as developers, should keep in mind is trying to reuse the code that we write whenever possible. The main reason is saving time. For example, if you write a cell for your app, TrackCell, and you need the same (or a similar) cell in a different collection view, you should try to reuse the one that you already have. However, sometimes the specifications of that cell change a little bit, and we end up with a bunch of properties being passed around and a lot of if-elses in code updating the layout accordingly. Another approach would be using inheritance, but we’d easily get into a mess, breaking the SOLID principles and spreading the cell logic between the parent and the children. Have you ever found yourself in that situation?

Making matters worse, even if the UI is reusable, the data that feeds the UI comes from a data source that is most likely far from the component. With iOS collection views you’ll have a schema similar to the one shown below, where a model from the store is mapped into a plain entity, provided by a data source that is hooked up to the collection view, and passed to the cell by the collection view presenter.

An example of a typical collection view presenting data in cells

If you wanted to reuse any component from that cell, you’d be forced to update all the components in that hierarchy. And if you have unit tests for these components, you’d need to update them as well. It doesn’t scale well, does it?

Another common issue besides reusability is that whenever the data changes, we either reload the whole collection view or only the cells that changed. In both cases, we need to do it at the collection view level. If your app is heavy on background operations, that might eventually lead to crashes (if you don’t manage concurrency properly) or to performance degradation.

React and React Native have solved this problem nicely. UIs are defined in components; these self-contained components have a lifecycle and know how to update their state. Moreover, they can easily be composed into higher hierarchies. The benefit of that component-based approach is that you can easily drag-and-drop these components around. For example, if you come up with a LikeButton that you use in a TrackCell, you can use the same LikeButton in another cell by just inserting it and defining how it should be positioned. Another awesome benefit of using React is that it knows what needs to reload and only reloads that element in the DOM. That’s very powerful if you combine it with Relay and GraphQL.

An advantage of using components is that it’s easier to ensure consistency. The same component is used from different places, and when a change needs to be made, you make it in one place and get it in all the places where the component is being used.

Although you can use React natively with React Native, you don’t need a framework for that; you just need to change your mindset when defining your app’s UI-data hierarchies. Companies like Spotify have come up with a similar approach, the Hub Framework, which abstracts you from composition, action handling, and lifecycle management. I like how flexible the framework is, but I’m not a big fan of very opinionated frameworks, and Hub Framework is one. As soon as you start using it, it influences the architecture of your apps heavily. I recommend watching this talk from John Sundell, Backend-driven UIs. It looks like magic!

As I pointed out, with a mindset change, you can also have your own component-driven UI, with reusable and composable components:

An example of UI built with the component-based style

A component is a class that gives you the view and an interface to set it up. Whoever uses these components shouldn’t need to know anything beyond that interface. Internally the view can use programming patterns like MVP, MVC, MVVM… but these patterns are invisible from the outside.

class LikeComponent {
   typealias TrackId = String
   let view: UIView = UIView()
   func setup(for trackId: TrackId) {
     // Update the view with the state for the given track.
   }
}

Components can also respond to actions. For example, liking a track turns into a few background operations to persist the new state in the API and the local store. In case the action response is more complicated and involves some UI, you can delegate the action to the app using a delegate pattern. As an example, some actions might require a confirmation from the user; that confirmation can be handled from the outside.
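That delegation can be sketched like this. All the names here (LikeAction, LikeActionDelegate, confirmUnlike) are hypothetical, not part of any real framework; the point is that the component asks the app to confirm, and only mutates state if the app approves:

```swift
// Hedged sketch: a component-side action that delegates confirmation to the app.
protocol LikeActionDelegate: AnyObject {
    /// Gives the app a chance to confirm the action (e.g. by showing an alert).
    func confirmUnlike(trackId: String, completion: (Bool) -> Void)
}

final class LikeAction {
    weak var delegate: LikeActionDelegate?
    private(set) var likedTracks: Set<String> = []

    func like(trackId: String) {
        likedTracks.insert(trackId)
    }

    func unlike(trackId: String) {
        // No delegate: proceed directly.
        guard let delegate = delegate else {
            likedTracks.remove(trackId)
            return
        }
        // Delegate the confirmation; only mutate state if the app approves.
        delegate.confirmUnlike(trackId: trackId) { confirmed in
            if confirmed {
                self.likedTracks.remove(trackId)
            }
        }
    }
}
```

The component stays agnostic of how the confirmation is presented; the app decides whether that means an alert, a sheet, or nothing at all.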

With this approach, each component brings its state from the data source, so it’s important that access to the data is fast. Otherwise, the UI will flicker, and that’s terrible for the user experience. One idea to prevent that is having an in-memory data source where the states are indexed, for example using a Dictionary. This data source can be filled lazily, fetching the data the first time it’s needed and keeping it synchronized with the store underneath (Core Data, Realm, serialization to disk…).
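A minimal sketch of such a lazily-filled, Dictionary-backed data source (TrackState, InMemoryTrackStore and fetchFromDisk are made-up names, assuming a slow underlying store behind a closure):

```swift
// Hypothetical per-feature state indexed by track identifier.
struct TrackState {
    var isLiked: Bool
}

final class InMemoryTrackStore {
    private var states: [String: TrackState] = [:]
    private let fetchFromDisk: (String) -> TrackState

    init(fetchFromDisk: @escaping (String) -> TrackState) {
        self.fetchFromDisk = fetchFromDisk
    }

    /// Returns the cached state, loading it from the underlying store
    /// (Core Data, Realm, disk…) only on first access.
    func state(for trackId: String) -> TrackState {
        if let cached = states[trackId] { return cached }
        let loaded = fetchFromDisk(trackId)
        states[trackId] = loaded
        return loaded
    }

    /// Keeps the in-memory index synchronized when the store underneath changes.
    func update(_ state: TrackState, for trackId: String) {
        states[trackId] = state
    }
}
```

Components read synchronously from the dictionary, so the UI never waits on the slow store after the first fetch.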

Compose all the things!


Do you follow a similar approach in your projects? Are you considering moving towards that approach? I’d like to hear about your experience and the problems you found along the way. Reach out to [email protected] or leave a comment below.

]]>
<![CDATA[Build UIs based on reusable components that you can compose in more complex hierarchies.]]>
Divide and conquer https://pepicrft.me/blog/2017/02/16/divide-and-conquer 2017-02-16T00:00:00+00:00 2017-02-16T00:00:00+00:00 <![CDATA[

Divide and rule (or divide and conquer, from Latin divide et impera) in politics and sociology is gaining and maintaining power by breaking up larger concentrations of power into pieces that individually have less power than the one implementing the strategy. The concept refers to a strategy that breaks up existing power structures, and especially prevents smaller power groups from linking up, causing rivalries and fomenting discord among the people.

More and more we see companies running away from Xcode to try the dynamism of languages like JavaScript in their mobile apps. Just to mention some examples:

Amongst other reasons, teams decide to give React Native an opportunity because they want to ship features faster, eliminate the compile-install cycles, or overcome scalability issues that they are facing because of the team’s and project’s growth. It’s a reality that the tools and patterns that Apple gives us don’t scale. While everything works when the app and the team are small, it quickly becomes a nightmare:

  • Slow compilation times.
  • Slow testing cycles, which make TDD impossible.

If we counted the time we spend on these slow cycles, we’d notice how much time developers waste unnecessarily. Moreover, motivation in the team goes down. It takes a lot of time for them to build something, and the product managers are putting pressure on them to deliver features fast. The company thinks that it’s a matter of people in the team, and they hire more, but productivity stays the same.

Other companies, like Spotify, have preferred to move that dynamism to the server (with their Hub Framework) and have developers focus on building components, with some backend logic deciding how to render them. Companies are desperately looking for a dynamism in their projects that is not given by Apple and its tools.

I’m not against React Native; I like it, and the direction all these projects are taking makes total sense. But this whole movement that is becoming a standard makes me feel sort of sad. It’s become standard to open Medium and read an article about a new company trying React Native. Developers and companies need to find their workaround to a problem that I’d expect Apple to fix. It seems to me that they didn’t stop to think about how their tools allow projects to scale. How could Facebook build their apps using Xcode or the command line tools? How can developers do TDD in large projects while avoiding these slow build/run cycles?

At SoundCloud we were suffering similar pains: build and testing times were a nightmare, our Core Data model didn’t scale anymore, and the engineers in the team were looking forward to writing pure Swift in the apps. Although React Native has always been around, we’ve been figuring out, with the help of everyone in the team and other companies, how to tackle these issues with the tools our developers are familiar with: Swift, Objective-C and Xcode. It’s been very challenging, and thanks to this awesome team’s effort, we’re seeing some light, and the changes are starting to have a huge impact on the app’s and teams’ performance.

We’re making our Xcode project great again.

To add to the lack of dynamism, our extensive use of Core Data didn’t scale either. We shared a data model across the whole app. We were suffering performance issues, and our architecture was very Core Data-dependent.

Surprisingly the solution started with something very simple:

Divide and conquer

Our app was a monolith: one target with the source code, another one for the specs, and the rest from CocoaPods for the external dependencies. The bigger these targets are, the longer they take to compile. Although Xcode doesn’t recompile the targets that didn’t change, sometimes it has to, for example:

  • When it’s a clean build.
  • When Xcode messes it up (something that is very likely to happen).
  • After doing pod install.

Inspired by other companies and our microservices architecture, we took a similar approach. Splitting the monolith into smaller pieces made workflows faster; developers could modify a class, or a test, and build/run the tests in a matter of seconds. We started modularizing our iOS application.

From monolith to frameworks

It was just the first step toward the project environment we were aiming for. Soon we noticed that the fact that frameworks didn’t have to talk to the existing Objective-C code base allowed teams to write pure Swift (at least in private). They didn’t have to deal with bridging all the time, as was happening in the primary application target. Motivation went up; Swift was becoming real!

It was also a good opportunity to review how we were building the iOS app: the code architecture. @garriguv iterated with the team over what would be the Swift architecture for the project, defining things such as:

  • How and where we would fetch the data from.
  • How the architecture elements would fit in all the frameworks.
  • How teams could build components that could be reused by other teams.
  • How the interfaces should look to ensure compatibility with the existing code base.

Everything was moving at a good pace. We decided not to rely on external dependencies and to build everything ourselves, because it’d be easier to maintain. We focused on building just the things that we needed, keeping them simple and open to extension. Although we initially brought in Quick and Nimble as dependencies, we soon figured out that they were breaking the good integration that Xcode has with plain XCTestCases (allowing you to run tests directly from the IDE editor). We stepped back and rewrote the few unit tests we had as plain XCTest.

We also came up with testing guidelines. In Swift we didn’t have the Objective-C runtime or libraries like OCMock and OCMockito, and people followed different approaches for mocking and generating test data. The fact that we’re a very proactive team led Graeme, one of my colleagues, to come up with testing guidelines that the team could stick to. Furthermore, we added a Testing framework to hold all our custom expectations and testing helpers. Whenever a developer came up with a testing component that could be shared with the rest, they could add it to that framework.

As I mentioned earlier, Core Data was used extensively. We were hitting performance issues, and most of our features were very coupled to Core Data. Data that didn’t need persistence ended up persisted; data that was critical got removed because of migration issues in other models; our model concurrency was limited to prevent threading issues. We learned from companies like Facebook and LinkedIn, which moved away from shared models and embraced distributed stores with immutable models. If you haven’t watched these talks, I recommend them:

Each feature’s data will be persisted (if needed) in its own store, which decides on the invalidation policy and whether migration is supported, and which provides APIs in case other parts of the app need to access the data it stores. Developers won’t have to worry anymore about Core Data and the big shared model that we’re currently maintaining. The immutable nature of the models will prevent a lot of threading issues.
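As an illustration of why immutability helps, a feature-local model could look like this (Track and its fields are hypothetical names): value semantics plus let properties mean instances can safely cross thread boundaries, and “mutation” produces a new value instead of changing shared state:

```swift
// Hypothetical immutable, feature-local model.
struct Track {
    let id: String
    let title: String
    let likesCount: Int

    // Instead of mutating shared state, derive a new value.
    func incrementingLikes() -> Track {
        return Track(id: id, title: title, likesCount: likesCount + 1)
    }
}
```

Because nothing can mutate an existing Track, two threads holding the same value can never race on it; updates flow through the store as replacements.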

We were also inspired by the component-driven movement. By building your UI in components, you make those components more reusable. You can drag & drop them into different parts of your apps without caring much about what’s inside the component.

Let’s say you work on all the engagement features, for example likes. Instead of exposing access to the data, you could expose a like button (that can be customized). The like button responds to actions (triggering data operations) and updates its state accordingly (observing data changes).

In our previous setup, adding a like in a cell involved changes in the cell, the presenter and the data source. The only change needed with the components-based approach is in the UI layer. That’s it!

Divide and conquer


I’m very excited about the direction of the project and all the challenges to come. Although we are not that close to the dynamic experience JavaScript and React Native provide, we’re getting better compared to the experience we had with the monolith approach. There are things that we’d like to try, like Buck from Facebook, a build tool that speeds up builds with a powerful distributed cache (its support for Swift and dynamic frameworks is not that good yet).

Although it seems a very straightforward journey, it isn’t. Migration can be a nightmare because of all the dependencies that your components might have. Where should I start? What if I build this abstraction layer here that gives me some flexibility? Delegating to the app will be your best friend during the migration.

And closing with one good tip: control the excitement. An empty framework is a white canvas for developers; they can start adding code without many constraints. The guidelines that you have for the app don’t work anymore for the new architecture, and it’s crucial that you come up with new ones. Otherwise, you’ll easily find yourself with inconsistent APIs and patterns.

Don’t forget to document the architecture in your frameworks


If you want to join us in this exciting journey and help connect people through music, SoundCloud is looking for iOS engineers.

]]>
<![CDATA[How modularizing our app is helping us to scale our Xcode project.]]>
📦 Wrapping up 2016 https://pepicrft.me/blog/2016/12/18/wrapping-up-2016 2016-12-18T00:00:00+00:00 2016-12-18T00:00:00+00:00 <![CDATA[

The year is almost over. A friend of mine, Esteban, published a blog post, wrapping up his year and it inspired me to do the same.

Starting with my personal life, I unexpectedly met Maria José, my girlfriend. We were friends in high school, and we hadn’t talked for a long time. Nevertheless, the world is sometimes not that big, and you never know when and where you might meet a person again. Fate brought us together; we met in Budapest and magic came out of it. After some flights between Berlin and Budapest, I decided to move to Budapest, so I’ll be living there from January. I’ll keep working for SoundCloud, the company I joined in October 2015. So far, it’s the company I’ve worked at the longest. It has a very inspiring culture, and it’s a good place to grow as an engineer because of its values, people and product. In 2016 I moved to a new team, Core Clients, working this time on tools for other developers.

In 2016 I also attended many conferences, speaking about a project that we’re currently developing at SoundCloud, “Framework Oriented Programming” or “Apps Modularization”. All the slides are published on my SpeakerDeck account. I spoke at the following conferences:

I had the opportunity to meet world-class engineers and people, like Esteban Torres, who I have the pleasure to work with; Benjamin, who I met at iOSCon (I still remember the very inspiring talk that we had on our way to the airport); Roy, whose open source contributions were part of most of my projects; Boris and his #yatusabes🍷; and Marius.

I travelled around Europe more than I used to. Living in Berlin makes it easier, since you can be in another country in just a few hours. I took flights on Friday evening and flew back on Monday morning, going straight from the airport to the office. Budapest, Granada, Malaga, Sevilla, Amsterdam, Copenhagen, Scotland, London, Stockholm, San Sebastian, Logroño, Vienna, Prague, Greece, Slovakia, Krakow, Switzerland, Verona and Morocco. This is how my mother sees me after a 2016 full of trips.

I started a newsletter and quit it after almost ten issues. Thanks to it, I had interesting discussions with Raimon, since we had similar concerns about the topics I covered. I kept writing articles. One of them, In a world…, became very controversial in the iOS and Android communities and got a lot of retweets and comments on social networks. Another one, Micro Features Architecture for iOS, turned out to be attractive to the community and got almost 100 likes on Medium.

I worked on a new spare-time project with Isaac, GitDo, and stopped its development to focus on a new project with Sergi and Ana, Caramba, where we’d develop and design our apps as we did when we started as developers. So far we’ve developed multiple apps and open source projects, and written numerous articles that we’ve shared on GitHub and Medium respectively.

I’ve tried to read more this year. Among other books, I’ve read Peopleware, Tribe, Who Moved My Cheese?, and Drive: The Surprising Truth About What Motivates Us. I started using Goodreads to track them and get recommendations based on my reads and my friends’. You can find me as @pepibumur on the platform.

2017 - What’s coming?

Languages have been a challenge for me. I want to keep improving my English this year and try a new language, which will most likely be German (yes… now that I’ve moved to Budapest). I also want to learn a new programming skill. My focus, since I became a developer, has been iOS. I learnt Objective-C, and later Swift. However, when it comes to the web, I know just a little. While I know how to develop an app from scratch and ship it to production, I don’t know how to do the equivalent with a website. I’d like to be able to code a frontend (HTML and CSS) and include some client-side JavaScript logic. Moreover, Swift can be used in server environments and macOS applications. I’d like to explore Swift on the server side, implementing a backend in Swift, and learn AppKit and macOS app development.

I’ll keep writing and sharing my learnings in the open. I’ll be in Budapest for most of 2017, but I might consider moving somewhere else… Looking forward to starting 2017.

]]>
<![CDATA[The year is almost over. In this post I summarize everything that happened this year and my new year resolutions.]]>
Extensions, dependency injection and frameworks https://pepicrft.me/blog/2016/11/16/extensions-frameworks 2016-11-16T00:00:00+00:00 2016-11-16T00:00:00+00:00 <![CDATA[

I’d barely used extensions in my Swift code. When we started using Swift at SoundCloud, I noticed a common pattern that most people follow: they create extensions to organize the interface methods into different “namespaces”, as shown in the example below:

struct MyStruct {
  let name: String
}

extension MyStruct {
  func run() {
    print("\(name) runs")
  }
}

Extensions were used for pure style reasons, keeping the interface well organized. The interface could even be separated into different files following the Objective-C naming style, MyStruct+Runner.swift.

With the transition into frameworks, we found a couple of use cases for extensions that might help you if you plan to transition your monolithic app into frameworks. I’ll go through them and show you some examples.

Implicit dependency injection

When a framework provides a feature, it takes all its dependencies (aka services) from the app. Typically, the constructor of a feature defined in a framework looks like this:

// Feature.framework
public class Feature {
  private let client: Client
  public init(client: Client) {
    self.client = client
  }
  public var service: Service { return FeatureService(client: self.client) }
}

Every time we instantiate the feature from the app, we’ll end up writing the same initialization, passing the dependencies managed by the app:

// App
class Services {
  static var client: Client!
}

let feature = Feature(client: Services.client)
let anotherInstance = Feature(client: Services.client)

The more dependencies our feature has, the more code we’ll duplicate, since by default all the instances will take the same dependencies (only in rare cases will we inject a different dependency into a feature). Here is where extensions come in very handy. Since we want to prevent our developers from writing the same initialization logic all the time, we can extend the class from the app, adding a convenience initializer:

// App
extension Feature {
  convenience init() {
    self.init(client: Services.client)
  }
}
let feature = Feature()
let anotherInstance = Feature()

Behaviour conformance

Another very useful use case for extensions in a frameworks setup is conforming framework models to application protocols. In our transition into frameworks, we wanted to be able to reuse components from the app until we had time to migrate them into their own frameworks. To reuse these components (e.g. a TableViewCell presenter), the component (in the app) and the model (in the framework) had to speak the same language; in other words, they had to know about a shared interface. Since these interfaces are application-specific, and the framework shouldn’t be aware of them, it didn’t make sense to pull them out into the framework. It’ll be clearer with an example:

  1. The model SearchEntity is extracted into its own Search.framework.
  2. From the app, the user can select a search result and open the player. The player requires the entity to conform to a PlayQueueEntity protocol that is defined in the app.
  3. Since Search.framework could be used in a different app/target where the results are not opened in a player, conforming to the PlayQueueEntity protocol from inside Search.framework wouldn’t make sense; the framework would then know where it’s going to be used.

Thanks to extensions, we can solve this issue. By conforming to a protocol from the app, we give our framework models a behaviour that they didn’t originally have:

// Search.framework
struct SearchEntity {
  let title: String
  let identifier: Sring
}

// App
extension SearchEntity: PlayQueueEntity {
  let artworkUrl: URL {
    return "xxxx/\(self.identifier)"
  }
}

By doing that, Search.framework can stay very generic, and depending on where its models are used, we extend their interfaces.


These are two examples where extensions in Swift saved us a lot of time and a lot of duplicated and coupled code. If you are also using extensions in your projects and you’d like to share your use cases, do not hesitate to leave a comment below. I’m looking forward to hearing how you use them.

]]>
<![CDATA[Learn how handy protocol extensions can be, when used in a frameworks architecture.]]>
Developing tools for developers https://pepicrft.me/blog/2016/11/12/developing-tools-for-developers 2016-11-12T00:00:00+00:00 2016-11-12T00:00:00+00:00 <![CDATA[

It’s been a few months since I moved to the iOS Core team at SoundCloud. The team is responsible for providing other developers with the tools that they need to develop features for users. We could say that our users are the developers. We are responsible for designing, implementing and maintaining the tools that allow them to interact with core services like the storage and the API. Although what we do in the end is write code, a lot of things change in comparison with coding features for users. It took me time to adopt the new mindset, and I keep learning new things as I work with my colleagues, iterating over the processes and the solutions that we’re designing. In this short journey I’ve discovered a lot, working with other teams and with people inside the team. In this post, I’ll share some of these learnings.

Developers love to have fun. If you don’t provide your team with a challenging environment, they might lose all the motivation that they had for the project and start working on side projects. One of the reasons people start side projects is that their full-time job doesn’t allow them to experiment and try new things. Make sure that you as a team, or your company, provide the team with that space. Give them the playground they’re asking for and make them feel like children inside the project.

One of the projects that we’re currently working on at SoundCloud is the “app modularization”. Developers can enjoy writing pure Swift without dealing with Objective-C interoperability. Moreover, since these modules are reusable, they can try out the interfaces from Playgrounds, or experiment with other targets and platforms.

Don’t force teams to do things in a particular manner. For example, if you expose a reactive interface from your tools, you limit them to following the same paradigm. What if the developer is not familiar with it? What if that paradigm is not the appropriate one for the developer’s use case? Offer them foundational interfaces and let them decide on the patterns that make them most productive.

With the app modularization, we’re revisiting the interfaces of our networking and storage layers. The old ones exposed reactive observables, and reactive programming ended up everywhere. It also slowed down onboarding, since we had to onboard people into a new paradigm they weren’t familiar with.

But if developers have freedom, does that mean they could end up misusing the tools? It could happen, but thankfully you can prevent it by designing restrictive interfaces. You have the final modifier for classes and methods, and the access levels to decide whether something should be exposed or not. If a developer needs something that a tool doesn’t provide, they’ll open a feature-request-like discussion about how the new feature fits into the tool. GitHub issues are very handy for that: in the same way users request and propose features on social networks, developers can request and propose features and enhancements using the issues of the repository where the tools live. It’s very important that your team encourages communication. We are developers, and to achieve X we know we can take many paths, even if some of them imply working around the existing tools (for example, by doing some reverse engineering). If all developers start doing that, we’ll end up with solutions that are supposed to solve problems but are instead only worked around. In that regard, it’s very important that you actively review PRs and how they use core solutions.
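A restrictive interface can be sketched like this (APIClient and its methods are made-up names, not SoundCloud’s actual tooling): final prevents subclassing, and access levels hide the internals, so the supported usage is the only possible usage:

```swift
import Foundation

// Hypothetical core tool with a deliberately narrow surface.
public final class APIClient {
    // Internal state is invisible to feature teams; they can't reach in
    // and work around the tool.
    private var cache: [String: Data] = [:]

    public init() {}

    // The only supported entry points are the ones we expose.
    public func cachedData(for key: String) -> Data? {
        return cache[key]
    }

    public func store(_ data: Data, for key: String) {
        cache[key] = data
    }
}
```

If a team needs more than these two methods, the compiler forces them into the feature-request conversation instead of a workaround.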

Work following lean principles, as a startup would do with its product:

  • Identify the problem, collect as much info as possible, see if other teams are also affected and define your success criteria.
  • Come up with a first iteration of the solution, merge it to master and pair on the new solution with the teams.
  • Evaluate the success criteria. If it’s not satisfied, work on a new iteration of the solution until you get something that works for all the interested teams.

A solution that works today might not work in a few months. In such a dynamic ecosystem, many things can happen, so it’s crucial to define KPIs for your team’s work and revisit them from time to time to make sure you stay aligned with them. One important KPI could be the teams’ performance: How much time do they spend implementing a new request for the API? How many classes do they need for that? Do they spend a lot of time testing? Don’t expect these KPIs to be easy to quantify. If you work on a feature for users, you can report events to analytics providers and see the impact your new feature has. But if you work on a core solution that is supposed to improve team performance, how do you measure that? One thing that helps a lot is offering feedback channels: a pair-programming session, a meeting, or a backlog of issues in a repository. Through these channels you learn whether your solution achieves what it’s supposed to. Be very open to feedback; most of the time you’ll have to actively encourage transparency. When there’s visible developer effort behind a component, people shy away from giving feedback because they think it would imply you having to keep working on the solution. And that’s totally fine; it’s part of the iteration process. Listen actively; it’ll help you identify when something is not working. Developers will thank you when you make their daily work easier.

Working as a core engineer and being able to make decisions about the project architecture allows you to influence other areas of the team. One area we’ve been able to influence is the onboarding of newcomers. When a project gets bigger, the learning curve they have to climb before getting up to speed gets steeper, which is frustrating for them. They have to learn about a lot of components, paradigms, and tools, and it can take weeks or months. What if they were able to contribute to the code base from day one? Thanks to our modularization project we’ve been able to split the cake into small, manageable pieces that include their own onboarding (for example, using Playgrounds in Xcode or an example app). Developers can get onboarded in any particular area, and when they become interested in another, they can just go through that area’s onboarding to get familiar with it.

Do you work on tools for other teams and would like to share them here? I’d like to hear about them. Do not hesitate to share them using the comments below or reach out to me via Twitter, @pepibumur, or email, [email protected]

Would you like to work on the SoundCloud iOS app in a very challenging and exciting environment? SoundCloud is currently looking for iOS Engineers; check out the engineering positions page to learn about the open roles.

]]>
<![CDATA[Learnings from being a core developer and providing other teams with the tools that they need for their daily work.]]>
Stepping off the social world https://pepicrft.me/blog/2016/11/09/off-the-social 2016-11-09T00:00:00+00:00 2016-11-09T00:00:00+00:00 <![CDATA[

I became addicted to social networks. What does addicted mean for me? Opening apps like Facebook, Twitter and Instagram from time to time and scrolling their home pages up and down for minutes. I reached the point where I unconsciously ended up opening the apps. Do I have free time? Then let’s open Facebook. Is the project taking a long time to compile? Let’s check what’s going on on Instagram. A lot of context switching during the day…

On top of that, I went crazy using Snapchat and Instagram Stories, posting videos talking about my life: I’ve done this, I’ll do that, and this weekend I’ll be in this other place. Was I trying to feel socially active? Was I trying to be an influencer? I didn’t track it, but I’d say I spent at least an hour every day on social networks. Besides the context switching, staying in the loop boosted my ego and sometimes made me feel bad. I’d travel somewhere and feel the need to share the trip, because why not, everyone is doing it. I’d have a bad day, scroll my Facebook timeline all the way down, and everyone would seem happy; why should I feel so bad? These are not places where people usually talk about bad things; those are things we keep to ourselves. I cared more about my social Pedro than the one on earth.

Moreover, it brought me some anxiety, in particular with Twitter. So many things are going on there that I felt bad whenever I wasn’t in the loop. If I didn’t open Twitter for days, I felt I’d missed a lot, and then spent half an hour catching up on what I’d missed. I also became addicted to reading short pieces of text: statuses on Facebook, tweets… When I tried to relax and enjoy longer pieces of text, I struggled to stay concentrated and get the ideas out of the chapters. Most of the time I couldn’t focus on the content, and my brain kept thinking about what was going on on the Internet.

I then jumped back in time and thought about myself a few years ago, when I rarely used Twitter, didn’t open Instagram for weeks, and didn’t care about the updates on Facebook. I had no distraction; I enjoyed every single thing that I did more. I could go jogging, read a book, or enjoy a dinner with friends with my mind present, in only one context, because I didn’t give a shit about my social presence. Nowadays it happens that I try to keep a conversation with someone and it’s like talking to a wall, just throwing words out of my mouth that go nowhere. It has also happened that I tried to follow someone’s talk and couldn’t stay concentrated. I felt terrible. I think about all the tools coming out to make people more connected, and how they achieve the opposite. I used to think I was more connected by posting more on Facebook, or by tweeting about the latest tools I was using as a developer, when I wasn’t.

Why was I doing it? On the developer side, I was unconsciously trying to find some recognition in the community (person X has done app Y). That opens up opportunities but also brings pressure. People set expectations, and you put yourself under pressure to satisfy them. I tried to stop using Twitter, but after a few days I thought: all developers are using it; in a few years I’ll be unknown if I don’t use Twitter, where everything is taking place. Is there anything bad about that? Not really. I enjoyed building apps and tools while being unknown; I enjoyed doing stuff more as an unknown developer than as a known one. Now I care more about sharing than about doing. I felt I had become a sayer instead of a doer. Communities and companies are conflating the terms good developer and known developer nowadays. I’d like to be more unknown, use Twitter less, and if in a few years someone recognizes me, I’d like it to be because of the work I’ve done as a developer, not as a Twitter rockstar. Hopefully newspapers, books and technical blogs will stay with us for a long time.

I asked myself the same question about Facebook and Instagram. I mostly use them for personal stuff: I share achievements, trips, and the cool things that I do, mostly when I’m travelling. I share the things I do abroad, waiting for those likes (or loves) on Facebook. I like seeing people like the photo and comment on it. But these things end up feeding the ego: “Don’t forget to share the cool thing you’re doing right now, whether it’s your awesome trip or the food at the fancy restaurant.” Whenever I showed off like that, I ended up regretting it. What if I focused on the moment itself and forgot about sharing it? Sure, no one will know about your life unless they live in the same city, but trust me, catching up over a beer with your friends is worth it.

I uninstalled these three apps, so I no longer have an easy way to use them from the phone. Since I did, I’ve spent more time reading and focusing on things other than social activity. I also sleep better at night, since checking social networks is no longer the last thing I do before going to sleep. I’m not saying that you should do the same: the problems I mentioned come from the way I used these tools, not from the tools themselves.

]]>
<![CDATA[I became addicted to social networks. What’s addicted for me? Opening apps like Facebook, Twitter and ]]>
Be Reactive my friend https://pepicrft.me/blog/2016/07/12/be-reactive-my-friend 2016-07-12T00:00:00+00:00 2016-07-12T00:00:00+00:00 <![CDATA[

In a world where data comes from everywhere, being reactive when coding your apps can make a huge difference in the user experience. Even though reactive programming is becoming more popular across the community, its benefits are not clear enough. Entities can trigger events from many places around the app: a user action, an application lifecycle event, or a push notification. These triggers turn into state changes that our app has to reflect somehow. Some examples:

  • Opening a detail view when a push notification is received.
    • Push Notification into Navigation State Change
  • View reload due to a sync of data in background.
    • Data Refresh into a Collection View Reload
  • Events from a player, for example new track.
    • User action into a Player Controls Update

When our code is imperative, we notify the interested entities manually. How many of you have written pieces of code like these in your apps:

// Synchronizing data
apiClient.synchronizeTracks { [weak self] tracks in
  self?.tracks = tracks
  self?.tableView.reloadData()
}

// Player
func loadTrack(track: Track) {
  self.player.instance.loadTrack(track)
  self.playerControls.updateWithTrack(track)
}

Entities change the state in our sources of truth from many points (e.g. tracks in a database or the current track in the player). Think about a player that is playing tracks in the background. The player controls are floating in the UI and the user opens a track from a new view controller. What if the developer forgets to update the controls? Or think about predictive data sync, for example syncing the notifications when a push notification is received. How would you update the collection view if you don’t know about the notifications view controller from the AppDelegate? (And no, resorting to singletons or introspection is not a good solution.)

There’s a clear need to be reactive: to react to the things that are happening in your app, no matter who triggers them or from where. Being reactive doesn’t mean adding one of these reactive libraries to your project as a pod and using it, but designing your application to be reactive. These libraries just make things easier and nicer.

Being reactive is about designing your app to react to changes. It requires some effort to set your mindset away from the imperative world.

I’ll guide you in this post through some common scenarios and explain how to separate the action triggering from the reaction.

Databases and data synchronization

Reactive database

Scenario

Databases are sources of truth in our apps. They save the data, in most cases coming from an external API. Since apps want to offer the best user experience, they sync with the API as the user navigates through the app. viewDidLoad() is the place where most developers trigger the data sync. However, there are more places where the data can be synced: apps that sync data predictively do it from places such as background operations or push notifications.

In these cases it’s important that any entity in the app interested in that data, gets notified when these syncs take place.

Example

When the StreamViewController is loaded we ask for data synchronization. This event doesn’t report anything, although it could return an error if we’re interested in presenting an alert in that case. Since we’re observing the database, when the data is updated we reload the collection view to present the newly inserted data.

class StreamViewController: UIViewController {
  func viewDidLoad() {
    super.viewDidLoad()
    self.synchronizeStream()
    self.observeStreamTracks()
  }

  func synchronizeStream() {
   // Synchronization logic
  }

  func observeStreamTracks() {
    //TODO
  }
}
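The //TODO above is where the observation goes. To make the pattern concrete without committing to a particular library, here is a minimal hand-rolled sketch; TrackStore and Track are hypothetical stand-ins for the database layer (SugarRecord’s StorageObservable or RxRealm, mentioned below, provide the real machinery):

```swift
// Hypothetical model and store; a reactive library would replace this.
struct Track { let title: String }

final class TrackStore {
    private var observers: [([Track]) -> Void] = []
    private(set) var tracks: [Track] = [] {
        didSet { observers.forEach { $0(tracks) } }  // fan out on every write
    }

    // observeStreamTracks() would subscribe here and reload the collection view.
    func observe(_ handler: @escaping ([Track]) -> Void) {
        observers.append(handler)
        handler(tracks)  // emit the current value on subscription
    }

    // Called from viewDidLoad, a background sync, or a push notification.
    func insert(_ newTracks: [Track]) {
        tracks.append(contentsOf: newTracks)
    }
}
```

No matter which entity triggers insert, every subscriber reacts; that decoupling of trigger and reaction is exactly what the reactive libraries generalize.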

Implementation

  • If you’re using Realm/CoreData you can use SugarRecord that provides a StorageObservable object.
  • There’s a Realm reactive extension for RxSwift, RxRealm that supports notifications.

Keychain and user login/logout

Reactive keychain

Scenario

If we support authentication in our apps, we might be saving the user’s credentials in the Keychain. Login is done from the login view, and logout most likely from an account/settings view. When login/logout take place, some entities might be interested in the session changes in the Keychain, for example your app navigation and your HTTP client:

  • When the user signs in, you might want to reset the application rootViewController and create the new view hierarchy for authenticated users. The same applies when the user signs out.
  • If you have an HTTP client that holds the user session state, you might need to reset the token on that client instance; otherwise you’ll send requests with the wrong authentication state.

Example

We subscribe to session changes in the AppDelegate. Since we’re only interested in transitions between states, we use the distinctUntilChanged operator. When a new .Next(Session?) event is sent, we update the root view controller accordingly.

class AppDelegate {

  func observeUserSession() {
     let sessionObserver = SessionObserver(name: "my-service")
     sessionObserver
      .map { $0 != nil }
      .distinctUntilChanged()
      .subscribeNext { [weak self] authenticated in
        if authenticated {
          self?.showApp()
        } else {
          self?.showLogin()
        }
      }
  }
}

Implementation

The Keychain is not KVO compliant, so we cannot subscribe to changes at a given keypath. How can we observe it then? You have to proxy access to the Keychain, registering observers there. Whenever the Keychain is accessed through this proxy, all the observers concerned about the updated Keychain element are notified. It’s very important that access to the Keychain is always done via that proxy class; otherwise your observers won’t get notified when the state changes.

The interface would look like this one:

class KeychainRepository {
  static let instance: KeychainRepository = KeychainRepository()
  func save(session session: Session, name: String)
  func clear(name name: String)
  func fetch(name name: String) -> Session?
  func observe(name name: String) -> Observable<Session?>
}
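A sketch of how that proxy could keep observers in sync. This is in-memory only: the actual Keychain reads/writes are elided, the Session shape is assumed, and a closure-based observe stands in for the Observable<Session?> of the interface above.

```swift
// Hypothetical session value stored under a service name.
struct Session { let token: String }

final class KeychainRepository {
    static let instance = KeychainRepository()

    private var sessions: [String: Session] = [:]   // stands in for the Keychain
    private var observers: [String: [(Session?) -> Void]] = [:]

    func save(session: Session, name: String) {
        sessions[name] = session
        notify(name: name, session: session)        // every write notifies
    }

    func clear(name: String) {
        sessions[name] = nil
        notify(name: name, session: nil)
    }

    func fetch(name: String) -> Session? {
        return sessions[name]
    }

    func observe(name: String, handler: @escaping (Session?) -> Void) {
        observers[name, default: []].append(handler)
    }

    private func notify(name: String, session: Session?) {
        observers[name]?.forEach { $0(session) }
    }
}
```

Because every write funnels through save/clear, subscribers such as the AppDelegate or the HTTP client always see login and logout transitions.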

UserDefaults and local states

Scenario

There’s some data that is not persisted in databases, for example the user profile or local configuration that is not synced with the API and that is cleaned up once the user removes the app.

Example

func viewDidLoad() {
  super.viewDidLoad()
  self.userObserver = UserObserver()
  self.userObserver.rx().subscribeNext { [weak self] user in
    self?.avatarImageView.setImage(url: user.avatarUrl)
  }
}

Implementation

As with the Keychain, the easiest way to observe NSUserDefaults is through a proxy. We can generalize the repository idea behind a protocol, so that any local store (Keychain, NSUserDefaults…) exposes the same save/clear/fetch/observe interface:

protocol Repository {
  typealias T
  func save(entity entity: T, name: String)
  func clear(name name: String)
  func fetch(name name: String) -> T?
  func observe(name name: String) -> Observable<T?>
}

I wrote an implementation that is available in this gist. It defines a UserDefaultsObservable that you can subscribe to. To keep the subscription alive, you must keep a reference to the observable.

Conclusion

Our apps are full of these scenarios where triggers and observations are completely separated. Reacting to these triggers saves us from imperatively looking up the entities that might be interested in the events. We also decouple synchronization from fetching, so if we wanted to replace either of those elements, we could easily do it without affecting the other. Moreover, don’t think that being reactive is only about using the reactive libraries out there. You can also be reactive by subscribing to NSNotificationCenter or using an NSFetchedResultsController. Libraries such as RxSwift and ReactiveCocoa provide a typed, more stream-based solution, with operators that allow you to combine these events before they reach the subscribers.
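The NSNotificationCenter route, for instance, needs no dependency at all. A minimal sketch (the notification name and userInfo key are made up, and modern NotificationCenter naming is used):

```swift
import Foundation

// Reactive without a library: NotificationCenter does the fan-out.
let trackDidChange = Notification.Name("TrackDidChange")
var received: [String] = []

// Any interested entity subscribes; e.g. the floating player controls.
let token = NotificationCenter.default.addObserver(
    forName: trackDidChange, object: nil, queue: nil) { note in
    if let title = note.userInfo?["title"] as? String {
        received.append(title)  // update the controls here
    }
}

// Anywhere in the app: post the change and every observer reacts.
NotificationCenter.default.post(
    name: trackDidChange, object: nil, userInfo: ["title": "New Track"])

NotificationCenter.default.removeObserver(token)
```

The trade-off is that notifications are untyped; RxSwift or ReactiveCocoa give you the same fan-out with typed streams and operators.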

]]>
<![CDATA[Article that explains the benefits of reactive programming in the iOS world.]]>
Micro Features Architecture for iOS https://pepicrft.me/blog/2016/07/10/microfeatures 2016-07-10T00:00:00+00:00 2016-07-10T00:00:00+00:00 <![CDATA[

When teams grow, maintaining large codebases can be a big pain in the ass. You end up with a lot of conflicts, because when a feature is built it relies on horizontal layers that are shared with other features and across teams. One example of a horizontal layer is the database.

Have you ever seen yourself in a situation where one of your colleagues modified that layer and bugs surfaced in other features?

Companies such as Facebook, Uber or Spotify have their projects organised in small projects that are linked together, maintained by different teams that are responsible for their development, versioning, documentation, testing… Unless the architecture of your project is as atomic as your teams, you’ll end up with more conflicts than you had initially, since these teams will depend on other teams’ progress. As there isn’t a lot of information about how these companies manage it, I started thinking about it, and about what these atomic features would look like, answering questions such as: How would the navigation to these features work? Who would inject the dependencies between features? Would it be possible to share wrappers across these features?

The problem is similar to what happened with backend services. Codebases written with frameworks such as Ruby on Rails or Django didn’t scale as teams became bigger. They wound up moving into something you’ve probably heard about, “microservices” (Martin Fowler writes about them here). In that architecture, backend infrastructures are organised into multiple microservices responsible for different areas: one microservice just for payments, another for users, one for the search feature… They discover and interact with each other; for example, if the payments microservice needs something about the user whose information is owned by another microservice, it asks that microservice for the information. These backend microservices are completely atomic; they can use any programming language, internal architecture, dependencies… The only requirement is that they provide an accessible interface that other services can consume over the network.

Since these ideas tackled the same problems in backend services, I started thinking about how they would apply to an environment other than servers: iOS.

Would it be possible to build atomic features that could be hooked up within the app?

It is a big challenge, but what if we could? Features as packages that implement their own views, models, programming language, business logic… They’d offer a linkable interface, and the app’s responsibility would just be hooking all of them up and injecting the dependencies as needed.

In this post, I’ll go through some basic definitions and ideas that I came up with for this architectural challenge, establishing analogies with microservices in server environments.

Framework

framework

Features should be atomic and thus self-contained. They should have clearly defined responsibilities and boundaries. In microservices we have instances running on servers, either Ruby on Rails projects, Scala… where the boundaries are defined by the network layer. But what about mobile? Frameworks. Frameworks are a way to encapsulate your source code, deciding which elements should be accessible, and in essence defining the interface of your feature in code.

Frameworks should speak the same public language. Since we can code in Swift/Objective-C/Objective-C++/React Native for iOS, we have to ensure that no matter which language they use internally, there’s a contract for the public language. Otherwise the connection between them would be impossible.

It doesn’t necessarily mean every feature has to be one framework. In most cases it will be, but it can also be more than one. For example, a Player feature could be two frameworks: a PlayerCore with everything that has to do with the interaction with AVPlayer, and a PlayerUI that offers the View/ViewController and uses PlayerCore underneath.

If you want to know more about frameworks, I’ve written about them before, but there are also good articles out there that explain what a framework is in essence, the difference between a framework and a library, and the difference between static and dynamic ones. Here is a list of good references to check out:

Data Source framework

As I pointed out, frameworks should be atomic. However, some will access resources that are shared with other frameworks: the disk through NSFileManager, or the user preferences via NSUserDefaults… We should try to make that access atomic too. How? Organise the resource atomically, providing subspaces that each of your feature frameworks accesses. As an example, we could have different databases in different folders inside a Databases root folder:

Databases/
  Player/
    Database.sqlite
  Stream/
    Database.sqlite

Since frameworks shouldn’t know where others are saving their data, you might have conflicts accessing/writing that shared resource. You could end up with an inconsistent resource structure where each feature has taken its own “portion” of the pizza and used it. How can we prevent it? By providing access frameworks for these resources.

By providing wrappers for these resources, you ensure consistency when organising the shared resources. A CoreData framework, for example, could guarantee that the structure is the one shown above, or a Keychain framework could guarantee proper access to the Keychain from multiple frameworks.

These wrappers should never be tied to use cases. If we think about CoreData, they shouldn’t provide a data model; that should be defined and provided externally by the framework using the wrapper.
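As a tiny illustration of a use-case-agnostic wrapper, here is a hypothetical helper that only knows how to namespace database files per feature, following the folder layout above. The DatabasePath name is mine, not a real library:

```swift
import Foundation

// Guarantees the Databases/<Feature>/Database.sqlite layout for every
// feature framework, without knowing anything about their data models.
struct DatabasePath {
    static func url(forFeature feature: String, root: URL) -> URL {
        return root
            .appendingPathComponent("Databases")
            .appendingPathComponent(feature)
            .appendingPathComponent("Database.sqlite")
    }
}
```

Each feature asks the wrapper for its own subspace, so no framework can accidentally step on another’s files.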

Some examples of these frameworks could be:

  • CoreData framework
  • Network framework
  • Keychain framework
  • FileManager framework

These frameworks are also useful to avoid the boilerplate setup code that some persistence solutions require, for example CoreData.

Shared microfeatures

Backend framework

A backend framework is a framework that doesn’t provide a UI. In most cases it’ll fetch data from somewhere, apply some business logic and return data to be consumed. In other cases it might be an input-only framework; for example, we could have a framework responsible for downloading images and persisting them on disk. The API of that framework could look like this:

// Image Caching framework
class ImageCaching {
  func isCached(url: String) -> Bool
  func fetchCaching(url: String) throws -> UIImage?
}

UI framework

A UI framework represents a component that your application can navigate to. It could internally include backend components, or have them as a separate framework that acts as the backend of your feature. Think about any of your apps and the features users can see. If I take SoundCloud as an example, those features would most likely be:

  • Search
  • Stream
  • Player
  • UserProfile
  • Settings

The example below shows two different setups. The one on the left includes everything in the same framework whereas the one on the right separates the UI layer from the backend one:

UI microfeatures

UI frameworks must be navigable (i.e. the application should be able to navigate to them). We can achieve that by just exposing the ViewController, but there’s a more interesting approach that doesn’t expose any UIKit component: coordinators. Before diving into the idea, I’d like to share this NSSpain talk, Presenting Coordinators, which introduces the coordinator idea and inspired me to use it for this architecture.

The idea of coordinators is to extract the navigation from ViewControllers and move it to entities called Coordinators. Coordinators are responsible for instantiating your ViewControllers and setting up everything necessary to navigate to them. Coordinators build up a tree that you can navigate through, and the only thing they need to navigate is a navigation context, for example a ViewController. They can also carry some information, for example a track identifier. The example below shows how these components would work in practice.

// Player.swift
class Player {
  let storage: Storage
}

// Player+API
extension Player {
  func createQueue(tracks: [PlayerTrack]) -> String
}

// Player+Coordinators.swift
extension Player {
  func coordinator(fromViewController viewController: UIViewController, queueId: String) -> Coordinator
}

Then let’s say we launch the player from the search results (Search framework):

// Search.swift
class Search {
  let player: Player
  let storage: Storage

  init(player: Player) {
    self.player = player
    self.storage = Storage(model: "Search")
    self.storage.setup()
  }
}

// SearchResultsCoordinator
class SearchResultsCoordinator {
  weak var search: Search?
  weak var viewController: UIViewController?

  func userDidSelectSearchTrack(track: Track) {
    guard let search = self.search, viewController = self.viewController else { return }
    let player = search.player
    let queueId = self.createQueueFromSearchTrack(track)
    let coordinator = player.coordinator(fromViewController: viewController, queueId: queueId)
  }
}

Schema

This is an example of what the architecture could look like in an application such as SoundCloud. I haven’t drawn the dependencies between them, only the frameworks we’d have in each of these layers. The frameworks will vary depending on your application’s features, but you’ll probably need an API framework, or a Session framework responsible for providing the user session to the frameworks that need it, for example API:

Microfeatures schema

Dependencies

Some of the modules defined here require some setup and an instance to be created. Since they might be expensive, we cannot create module instances from the other modules; we pass the instance instead (we inject the module dependency). It’s exactly the same concept that we use in code, but at a higher level, with modules.

Remember something very important: your modules should be stateless. Only if needed, instantiate your modules with a setup configuration, and that’s all. They should be like REST APIs: they don’t hold any state but send you a representation of the data that comes from a data source.

Modules must be designed to be injectable. What does that mean? We have to define a module class that is the entry point of our module, and then think of our module API as a class that we instantiate.

// Offline.swift
class Offline {

  // MARK: - Internal

  internal var storage: Storage


  // MARK: - Init

  public init() {
    self.storage = Storage(model: "Offline")
    self.storage.setup()
  }
}

Since we have extensions, we’re not forced to implement the entire API in the same Swift file. We can separate it into multiple files and keep everything better organized:

// Offline+API.swift
public extension Offline {
  func isTrackOffline(track: Track) -> Bool
  func downloadTrackIfNeeded(track: Track, completion: (error: NSError?) -> Void)
}

Coupling

Compared to microservices, where communication happens over the network, in this case modules know about each other and communicate by directly calling the methods of their public APIs. As mentioned earlier, this requires some dependencies to be injected, which leads to coupling between these modules. Depending on how we handle that coupling, replacing a framework in the future might be a real pain in the ass. How can we prevent it?

  • If framework A depends on framework B, which exposes an API, B_API, then A can define an access protocol for any B-like framework, and B_API can be extended to conform to it. Since A then depends on the protocol, if the framework is replaced in the future, the new framework’s API just has to be extended to conform to the same protocol, as we did with B.
  • We could use the decorator pattern and wrap the B instance in a new instance that exposes another interface. That new interface is defined and known by A, and the decorator behaves as a proxy.

In either case, we might also want to avoid coupling to models, since other frameworks might expose their own. As we might not be interested in all the exposed properties, we can define a simplified version of the models to use in the depending framework. These models can expose a constructor that takes the model coming from the other framework.
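The first option could look like this; TrackProviding, B_API and the track shape are illustrative names, not from a real codebase:

```swift
// The access protocol lives in (and is owned by) framework A.
protocol TrackProviding {
    func trackTitles() -> [String]
}

// Framework B's public API, with its own model and method names.
final class B_API {
    func fetchTracks() -> [(id: Int, title: String)] {
        return [(id: 1, title: "First"), (id: 2, title: "Second")]
    }
}

// An extension conforms B to A's protocol. Framework A only ever sees
// TrackProviding, so replacing B later just means conforming the new
// framework to the same protocol.
extension B_API: TrackProviding {
    func trackTitles() -> [String] {
        return fetchTracks().map { $0.title }
    }
}
```

Framework A holds a TrackProviding, never a B_API, which is what keeps the coupling one-directional and replaceable.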

Versions and dependency graph

The more frameworks you have, the more complex the graph becomes. In that regard, keeping a good versioning scheme, for example semantic versioning, is very important. If you are not familiar with Semantic Versioning, this is what it states:

  • MAJOR version when you make incompatible API changes,
  • MINOR version when you add functionality in a backwards-compatible manner, and
  • PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

Used properly, versioning lets us notify the modules that rely on ours when important changes might break the communication through our public API.

Since multiple frameworks might depend on another one, it’s very important when we bump a major version that all of them align with that version bump. Otherwise a team might bump a dependency’s version number to benefit from the new improvements, but end up with a broken module because of the change. Big companies have specialized teams for this: they are responsible for ensuring a good connection between all the frameworks within the app and that all teams are aware of the state of their dependencies.

Summing up

Monolithic projects don’t scale for big teams or feature teams. The sooner you tackle these conflicts, the more straightforward the transition into modules will be.

This is one approach with some ideas, an example of how to do it, but it’s not the only one. Think about your app and the components it has. You could slice it into very small modules or just take your first steps with a few. There are great tools out there that help in this regard: thanks to CocoaPods you can define your modules as pods and integrate them that way, or you can define your projects in the same Xcode workspace and connect the dependencies manually.

A side advantage of this move, besides having atomic teams and features, is that your teams can define their own style guidelines and the private language they use: Swift, C++, Objective-C++, React Native, … As long as they offer a generic interface that everyone can access, it works. If your company can afford it, a good team organization could include a team that ensures consistency in the contracts between these public interfaces and the connections between them.

Are you a big company already doing, or trying to move into, modules? I'd love to hear about it. At SoundCloud we've already started this transition. Among all the benefits, we want to have modularized teams, be able to include features written in languages such as React Native, and experiment with build tools such as Buck from Facebook.

Thanks to the reviewers

I'd like to thank the people below who helped me review this article:

]]>
<![CDATA[Organize your app in small features that you ]]>
Network Testing - Say hello to Szimpla https://pepicrft.me/blog/2016/06/22/testing-networking 2016-06-22T00:00:00+00:00 2016-06-22T00:00:00+00:00 <![CDATA[

Where does Szimpla come from? For those who are curious about the naming, I got the name from a famous ruin pub in Budapest. I liked the name the first time I heard it, and I decided to use it for this testing library (later on I discovered that there's also a Szimpla in Berlin). Translated from Hungarian it means “single”, which doesn't match what the library actually does… 😖

What's Szimpla? It's a Swift framework that helps developers test networking in their applications.

But… what's the purpose? Although we might write unit and acceptance tests for our apps (for example, testing how request factories build requests with unit tests, or how the application navigates from one view to another with acceptance tests), there's some stuff that is hard to test with the existing testing approaches and frameworks. That explains why companies like Facebook came up with a library for testing layouts using snapshots, and why developers keep building tools to ensure we don't leave any area untested. Szimpla does something similar to what the Facebook library does, but instead of snapshotting views, it records requests. The library allows you to record all the requests sent during some code execution, or while navigating through the app, and use the recorded data as expectations for future executions.
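The record/validate idea can be sketched generically. This is not Szimpla's actual API, just the snapshot-testing concept reduced to a few lines of Ruby, with a Hash standing in for the snapshots directory:

```ruby
require "json"

# Generic sketch of snapshot-style request testing: the first run records
# the observed requests; later runs validate new requests against the
# recording.
class RequestSnapshotter
  def initialize(store) # store: a Hash standing in for a snapshots directory
    @store = store
  end

  # Serialize and save the requests under a snapshot name.
  def record(name, requests)
    @store[name] = JSON.generate(requests)
  end

  # Compare freshly observed requests against the saved snapshot.
  def validate(name, requests)
    @store.fetch(name) == JSON.generate(requests)
  end
end
```

In the real library the recording happens transparently while the test runs; the sketch only shows why a first "record" pass is needed before "validate" passes can assert anything.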

What is it useful for? It's useful for all the networking whose result has no direct effect on the UI, for example analytics. How many times have you forgotten to send an event, or sent it with the wrong parameters, and then the analytics team at your company complained about events not being sent or being sent with the wrong information? Probably a few times… One of the reasons these things happen is that we cannot test this with acceptance tests, since it's a “backend” functionality, and because these network calls are probably triggered from user actions or view life cycles, which is something we don't usually cover with unit tests. There's a clear need there, isn't there?

Inspired by a similar library that we use at SoundCloud for acceptance tests implemented in Frank, I came up with a more flexible solution that can be used directly from XCTest. Check it out at https://github.com/pepicrft/szimpla


Installing Szimpla

Thanks, CocoaPods! It's super easy if you're using CocoaPods in your project. Just add the line for the Szimpla dependency:

gist:pepibumur/056f9d27d90096f7084fa7f24bdbd3fc

Defining the Snapshots Directory

Part of the setup includes specifying the directory where the request snapshots should be saved. It's done via an environment variable that has to be defined in the application scheme, as shown in the screenshot below:

If for any reason you forget this step, the test will fail an assertion when trying to initialize Szimpla.

Using it with Acceptance Tests

The first time you define the test, you should run it recording the requests and saving them locally. Execute your test with record. Once the requests are recorded, you can update them according to your needs (you can even use regular expressions). Then replace the record call with validate. Future test executions will use the recorded data to match the requests.

gist:pepibumur/7bfe2c9bc71298cb2bb02989111b6afd

Using it with Unit Tests (Nimble Expectation)

Szimpla also provides expectations for Nimble. You can apply the same logic to your pieces of code and check whether, after a given closure is executed, a set of requests has been sent:

gist:pepibumur/09cb0838aef77fdd3726fa98271f7b16


Next steps

I have some ideas in mind to keep improving the library and adding more features. Once developers and companies start using it, I guess more will come up. Just to mention a few:

  • Allow custom validators: Users could provide their own validators, defining how to match the recorded requests against the saved ones.

  • More filters: The library only provides one filter, based on the base URL. New filters would allow the user to filter requests depending on parameters, headers, …

Feedback is welcome. I'm looking forward to hearing from you and improving the library with your help. Drop me a line at [email protected] if you're considering using it and have concerns or ideas.

]]>
<![CDATA[Introduction post for the last library that I've been working on, Szimpla.]]>
A journey into Frameworks - Le Testing Framework https://pepicrft.me/blog/2016/06/21/a-journey-into-frameworks 2016-06-21T00:00:00+00:00 2016-06-21T00:00:00+00:00 <![CDATA[

I've lately been focused on architecting apps in multiple platform-independent frameworks. I've also given [some talks] about it and applied these principles to a personal project, [GitDo]. However, the project that could benefit most from that architecture is SoundCloud. I proposed it internally and started taking the first steps in one direction. Being such a consolidated project, used by many users around the world, makes things more complicated compared to GitDo, but… let's accept the challenge, shall we?

The application was organised in one single application target whose external dependencies were brought in with CocoaPods as static libraries. We hadn't moved to Swift yet, so we kept the libraries static to avoid affecting the application launch time. Once we had an idea of what the framework stack would look like, we designed the iterations. We'd create these frameworks progressively, starting with those at the bottom (foundation frameworks) and going up the stack. The two big frameworks at the bottom would be Core and Testing. Core would be the framework that includes the foundation components such as ReactiveCocoa, CocoaLumberjack, Crashlytics, … These are frameworks needed from all the layers, since we log from all of them, or report errors whenever any is thrown. Its brother would be Testing. In the same manner, this one would include foundation testing components: testing libraries like Specta and Expecta, or mocking ones like OCMock and OCMockito. It would also include any helper classes that we created to automate testing tasks that we repeated over and over.

Of these two, Testing was the first one. I had to redo its setup multiple times until I found one that matched our needs. We wanted the frameworks not to affect the launch time (since having a lot of them does), to avoid conflicts when switching between branches and commits, and to have a very robust setup that didn't break with language or IDE updates. These are the setup configurations we tried, and why we decided to move away from them:

  • External dependencies manually fetched into the project and separated into frameworks: We fetched the external dependencies manually, put each into its own folder, and created a framework per dependency (plus the Testing one). We ended up with around 8 frameworks just for testing. That setup wouldn't affect the user launch time, since only the testing targets would be linked against them, but the same setup for other frameworks would make the launch time slower. Moreover, the manual fetching made one of our developers appear in the project as the contributor of these external dependencies' code (which is not true).
  • External dependencies manually fetched into the project and one framework: Dependencies would still be fetched manually, but instead of adding them as separate frameworks that Testing would link against, all of them became part of the same Testing framework. We set up the Testing framework to work with all these dependencies in the same place, and it worked pretty well. However, we still fetched the source code of these external dependencies manually. These dependencies would be part of our source code, and we didn't want that.
  • External dependencies with git submodules and one framework: This setup is similar to the previous one, but instead of adding the external dependencies as part of the project, we just fetched them with git submodules.

For those who might wonder why we didn't use CocoaPods for the whole setup, there are a couple of reasons. The first is that we're not going to version the frameworks during the first iterations of this setup. That makes it hard to work in teams, since CocoaPods would cache versions of the frameworks and wouldn't pick up the changes. The second is that if we used the use_frameworks! flag, it would convert all the dependencies into frameworks. That would mean more than 20 frameworks for the project (and [according to Apple] it shouldn't be more than 6 frameworks if you don't want your app's startup time affected).

Steps

It's not as hard as it seems, but we are so used to letting CocoaPods do the work that we forget about what's behind it.

  1. Create a framework. In our case we called it Testing.
  2. Set up the project to use a [multi-platform configuration file] for that framework. You should be able to compile the framework for more than one platform.
  3. Fetch the external dependencies that you need using git submodules. Each dependency should be in a different directory. Add the source code of these dependencies as source files of your framework.
  4. Check the .podspec of these dependencies. Some might need special flags or need to be linked against a system framework. In that case, make sure you link your framework against those frameworks and add those flags.
  5. You might need some macros or a custom setup if the code of these external dependencies isn't valid for all platforms. In that case, fork the dependency and modify whatever prevents you from compiling the framework for other platforms.

Note: You might find some external dependencies distributed as already compiled frameworks (e.g. Crashlytics). In that case, add the framework to the project and link your framework against it. They'll probably offer a binary per platform. In order to keep your framework multi-platform, you should play with the Framework Search Paths, pointing that setting to a different folder depending on the platform:

gist:pepibumur/79ce98a26949e20d526acb201359433b

Next steps

Once the Testing framework is defined, these testing dependencies can be removed from the Podfile and the framework used instead. It can easily be linked from the application target's Build Phases. You'll probably have to refactor some imports, because they were importing the external dependencies directly. Everything becomes a single import thanks to the framework:

@import Testing;

The next one on the list: Core

Enjoy frameworks!

]]>
<![CDATA[One of the posts about the migration from a monolithic, single-target architecture to multiple reusable frameworks.]]>
Being disconnected in a connected world https://pepicrft.me/blog/2016/06/20/being-disconnected-in-a-connected-world 2016-06-20T00:00:00+00:00 2016-06-20T00:00:00+00:00 <![CDATA[

We live in a connected world. We're all connected with our mobile phones, computers, and social accounts. We spend our time tweeting, posting photos of our last trip on Facebook, or recording Snapchats to let everyone know what we are doing right now. I watched this TED talk, Connected, but alone?, while I was having a coffee, and it made me think about the way people interact nowadays, how this has changed compared to a few years ago, and what it means for us and for future generations.


Social Networks

We don't know how to live without social networks. It's a reality. The younger generations started this movement, and later on parents, and even grandparents, joined it. When we first tried the social revolution it was odd. Why should we share what we are doing every time something happens? No one used it at first, but the more people started using it, the more addictive it became. People became interested: they could learn about other people's lives and also show off their own. We got to the point where we needed to share every single thing that happened to us to truly believe it was happening. Your last party, the trip with your friends, your first summer swim, your birthday gifts, … It was (and it is) the perfect place to show yourself off. It became a race that fed ego and envy, and I'm sure most of you have already felt it (I'm part of it too). Social networks also became the perfect place to procrastinate. They are the perfect showcase for people full of ego, for people who feel alone in the real world but surrounded by friends in the social space, and for gossipy people who love knowing about other people's lives. Interactions became asynchronous.

Moreover, everybody seems happy on the internet. We share happiness and turn loneliness and bad things into positivity, because people out there cannot see us sad, right? That makes social networks a fake environment. When you are surrounded by such positivity, any negative feeling in your life makes you feel bad. How is it possible that I'm in such a bad mood when everyone around me seems so happy? For that reason, I try to stay away from social networks as much as possible. It's hard, since they're the only way to reach most of your friends nowadays, and you want to keep in touch with friends and family who are far away. It's very important to use social networks consciously, but I'm the first to be unconscious about using them. I spend time scrolling down to see what's new around me, to see who posted what, or who went on a trip recently.

We stopped interacting face to face, your eyes in front of another person's eyes. Instead, we spend time thinking about how to formulate our thoughts, how to make the other person think of us in a way that's not what we're actually feeling.


Messaging

Social networks were only the first step. Messaging apps quickly joined the party: WhatsApp, Facebook Messenger, Hangouts, Snapchat. Each of them trying to offer more than its competitors: videos that can only be seen once, stickers, customized photos, … Social networks like Facebook noticed this was going to be the next big revolution in communications, and Facebook acquired WhatsApp, paying $22 billion. People started moving their conversations to these platforms, where you can think about what you are going to reply, where you can use emojis to express yourself and hide behind a screen. People feel more secure when they use these apps because they are not looking into the other person's eyes. You can be sharing something which is not what you're feeling. Only when you talk to a person face to face can you notice those feelings. And that's what makes communication so special. We're getting so used to messaging apps that when we have to speak in front of another person, we don't know how to express ourselves.

It has happened to me: trying to express myself in front of another person or a group of people and not being able to say anything, struggling to turn my ideas into words, to listen to the other person and keep up with the conversation. I had to make an effort to relearn what I had forgotten.

It's hard for me to see how the new generations are more and more into mobile phones and social apps; seeing groups of young people hanging out and looking at their phones instead of talking to each other, laughing, and having fun. It's also hard to see how addicted my family is. Talking to them means talking about what's going on in the social world: who married whom, who posted what, … People are forgetting what listening is, what paying attention to a person who is talking is, because when you're speaking, the other person is thinking about when they'll get a new WhatsApp message or a new like on their uploaded photo. Can we be connected in this social revolution?

  • Don't use your mobile phone when you're with your friends. Enjoy the beer! And make sure you all have fun.
  • If you're on a date, remember: mobile at home!
  • If your children spend a lot of time on their mobile phones, then spend some time teaching them how fulfilling real relationships are.
  • If you feel like you are scrolling too much on Facebook, think about your finger getting damaged and the other cool things you could be doing instead.
  • If another person's life is more interesting than yours, then move the focus back to your own, because your life is passing by.

Have social networks affected you or anyone around you negatively? Feel free to write a comment explaining how you addressed the problem. It's up to us to stop this.

]]>
<![CDATA[In a world where social networks are moving relationships to the Internet, people are becoming more disconnected.]]>
Boy Scouts rule with Danger https://pepicrft.me/blog/2016/05/23/danger-and-boyscout-rule 2016-05-23T00:00:00+00:00 2016-05-23T00:00:00+00:00 <![CDATA[

Boy Scouts

Some months ago I introduced Danger in the SoundCloud project. I first read about it on the Artsy Engineering Blog, where Orta explained how they used it at Artsy. For those who don't know what Danger is: it's a Ruby tool designed to run on CI that allows you to execute checks on your PRs and report the results back by adding a comment to the PR. It's like a linting step, but one that has access to the PR information and sends the report back to the PR.

When we started using it, there was no support for plugins (now there is), and we came up with a solution for creating our own reusable checks, as explained in this post. Over time we've been adding new checks to Danger, and so far we have around 12 that run every time someone pushes something to GitHub. What initially was a tool for ensuring good code quality and preventing some mistakes from being merged into master has turned into a tool that enforces the famous Boy Scouts Rule for programmers.

If you don't know what the Boy Scouts Rule is about, I'll copy & paste it here:

The Boy Scouts have a rule: “Always leave the campground cleaner than you found it.” If you find a mess on the ground, you clean it up regardless of who might have made the mess. You intentionally improve the environment for the next group of campers. Actually the original form of that rule, written by Robert Stephenson Smyth Baden-Powell, the father of scouting, was “Try and leave this world a little better than you found it.” What if we followed a similar rule in our code: “Always check a module in cleaner than when you checked it out.” No matter who the original author was, what if we always made some effort, no matter how small, to improve the module. What would be the result? I think if we all followed that simple rule, we’d see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We’d also see teams caring for the system as a whole, rather than just individuals caring for their own small little part. I don’t think this rule is too much to ask. You don’t have to make every module perfect before you check it in. You simply have to make it a little bit better than when you checked it out. Of course, this means that any code you add to a module must be clean. It also means that you clean up at least one other thing before you check the module back in. You might simply improve the name of one variable, or split one long function into two smaller functions. You might break a circular dependency, or add an interface to decouple policy from detail. Frankly, this just sounds like common decency to me — like washing your hands after you use the restroom, or putting your trash in the bin instead of dropping it on the floor. Indeed the act of leaving a mess in the code should be as socially unacceptable as littering. It should be something that just isn’t done. But it’s more than that. Caring for our own code is one thing. Caring for the team’s code is quite another. 
Teams help each other, and clean up after each other. They follow the Boy Scout rule because it’s good for everyone, not just good for themselves. by Uncle Bob.

That boils down to this statement: Always leave the campground cleaner than you found it. The problem is that, without something enforcing it, none of the developers in the team would apply it. That is where Danger played an important role for us.

Danger enforcing Boy Scouts rule

Danger provides you with the list of files that have been modified in the PR. We started running checks against these files and reporting warnings on the project. Some of the checks that we run against new PRs are:

  • Check that there are no TODO comments.
  • Check that header comments are present.
  • Check that tests are implemented.
  • Check that nullability macros are present.

All these checks run against modified files, and as a result, you get warned to improve those files (even if you are not the original author). For example:

  1. I modify a class method because I have to pass an extra parameter.
  2. The file doesn't include the Objective-C nullability macros such as NS_ASSUME_NONNULL_BEGIN (which means worse compatibility with Swift).
  3. I create a PR, and Danger warns me that the modified file doesn't include the macros.
  4. I take advantage of my changes to add the macros to that file.
  5. Voilà 🎉: thanks to Danger I improved the project a little bit.
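As an illustration, the core of a check like the nullability one can boil down to a few lines of Ruby. The helper names here are hypothetical (our real checks run inside Danger against the PR's modified files), but the logic is the same:

```ruby
# Hypothetical sketch of the nullability check: flag modified Objective-C
# headers that declare an interface without wrapping it in the macros.
def missing_nullability?(source)
  source.include?("@interface") && !source.include?("NS_ASSUME_NONNULL_BEGIN")
end

# modified_files: a Hash of path => file contents, standing in for the
# list of changed files that Danger exposes.
def files_to_warn_about(modified_files)
  modified_files.select do |path, source|
    path.end_with?(".h") && missing_nullability?(source)
  end.keys
end
```

Inside a Dangerfile, each returned path would become a warn() comment on the PR.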

Since we started using it at SoundCloud, the project has been improving progressively, and we're enforcing new good practices every day with new rules. Whenever we feel something can be “Danger-checkable”, we implement the check and add it to the list of existing checks.

How do you ensure good practices in your projects?

]]>
<![CDATA[Post where I explain how Danger helped us at SoundCloud to apply the programming Boy Scouts rule to our workflow]]>
Automating iOS review tasks with Danger https://pepicrft.me/blog/2016/03/23/automating-review-tasks 2016-03-23T00:00:00+00:00 2016-03-23T00:00:00+00:00 <![CDATA[

This week I've been working on automating some review tasks at SoundCloud with a tool called Danger, from @orta and @krausefx. We had some linting tasks in CI that analyzed the code and stopped the whole build process, notifying the affected developers about something not matching the project specs. Developers had to go into Jenkins (in our case), check the build log, fix what was failing, then commit and push the changes, restarting the pipeline execution. What if we could report all that handy information and those check results directly to GitHub? That's exactly what Danger does. I first heard about it reading this very interesting article from Orta titled “Being a Better Programmer When You're Actually Lazy”. Summarizing what Danger does:

  1. You include an extra step in your CI build process that executes Danger: bundle exec danger.
  2. Danger reads a Dangerfile that contains the checks (Ruby code).
  3. It exposes a set of useful environment variables, like the PR title or the files that changed, … It also exposes methods to report the results of these checks: warn(), fail(), message().
  4. Once Danger completes, it sends a comment to the open PR with the results (as you can see in the screenshot below, taken from the article mentioned).

The tool authenticates as the user you specify through a DANGER_GITHUB_API_TOKEN environment variable.

Danger

Creating “dangers” in multiple Ruby files

When I tried the tool, I felt that adding all the Ruby logic to a single Dangerfile was going to turn the file into a big mess. What about having a danger folder with all the tasks? Then we could require these tasks from the Dangerfile and execute them one after another.

The steps below show how I ended up doing it. It doesn't mean it's the only way; there are probably other alternatives. This is the one I tried and that worked with our project structure, keeping all the Danger checks in their own folder.

  • First, create a danger folder where all the tasks/checks will live.
  • Each of these checks is a Ruby file. Its structure would be like this one:
require 'danger'

module Danger
  module Checks

    # This Danger step checks if the number of lines is over a maximum value.
    # In that case it warns the developer.
    class PRSize < Base

      def initialize(dangerfile, max_lines)
        @max_lines = max_lines
        @dangerfile = dangerfile
      end

      def execute
        @dangerfile.warn("This PR is over #{@max_lines} lines of code. Make it smaller or create multiple PRs.") if @dangerfile.lines_of_code > @max_lines
      end

    end

  end
end

Every check inherits from Danger::Checks::Base. That base class defines a constructor taking a Danger::Dangerfile instance, which contains all the environment variables exposed by Danger, such as the number of lines in your PR or the new files added:

require 'danger'

module Danger
  module Checks
    class Base

      attr_accessor :dangerfile

      def initialize(dangerfile)
        @dangerfile = dangerfile
      end

      def execute
        raise "-execute method must be overridden"
      end

    end

  end
end

If you want to use the warn, message, and fail methods or the environment variables, you can access them through the @dangerfile attribute.

Then the structure of your Dangerfile would look like this one:



Dir["./danger/*.rb"].each {|file| require file }

## Constants
MAX_PR_LINES = 500
PINGEABLE_RESOURCES = [
  { regex: /SoundCloud\/Classes\/Player/, username: "pepibumur", name: "Player"}
]
# Checks
Danger::Checks::PRSize.new(self, MAX_PR_LINES).execute()
Danger::Checks::IncludeSpecs.new(self).execute()
Danger::Checks::Todo.new(self).execute()
Danger::Checks::Ping.new(self, PINGEABLE_RESOURCES).execute()

These are just some examples of checks that we’re using but the options are infinite:

  • PRSize: Checks if the number of lines in the PR is over a given value.
  • IncludeSpecs: Checks if any new .m file includes unit tests.
  • Todo: Checks if the developer forgot any // TODO somewhere in the code.
  • Ping: Analyzes modified files and notifies developers that might be directly concerned about these changes because they, for example, own the feature whose file has been modified.

Conclusion

There are manual processes that are unavoidable; even so, tools like Fastlane and Danger are helping to automate the majority of them. When we're so focused on our projects, we don't worry that much about the time we spend on repetitive tasks (since we only think about developing). The time spent on these tasks can be huge, and it's worth spending some time setting up tools like Danger and Fastlane to automate as many processes as you can.

]]>
<![CDATA[Post that explains how to automate review tasks with the help of the tool Danger]]>
Marcheta en la vida https://pepicrft.me/blog/2016/02/28/marcheta-en-la-vida 2016-02-28T00:00:00+00:00 2016-02-28T00:00:00+00:00 <![CDATA[

Flying back to Berlin. It has been a comforting weekend with family and friends, the kind that recharges your batteries. I had been wanting to write for a while about the experience of leaving my country, and about what the almost year I've been living in Berlin has meant to me. It was hard to choose a title for this post, in which I don't even know beforehand what I'll end up telling, but in the end marcheta came to mind. Those who know me know very well what marcheta means to me. Unfortunately it's not in the dictionary, and the closest word is marchar, “to march” (never mind trying to explain outside Spain what marcheta is). Marchar means to keep moving forward: you can march in a race, and you can also do it in life. You can also leave home, or go out partying. From marcha to marcheta, and with all the leaving of this last year: a year of marchetas.

Almost a year ago I left Spain to live abroad. My life changed from one day to the next, and I decided that the best decision at that moment was to move to another country. The company I was working for at the time was relocating its offices to Berlin, and my girlfriend had decided to put an end to our relationship, so why not? I thought. I've never been afraid of adventures; crudely put, I like to say they really fire me up, adventures and challenges alike, and this change was a true adventure for me.

When someone asks me what living in Berlin is like, or what living away from home is like, I can't help recommending that, if they have the opportunity, they shouldn't hesitate to do it. It's exciting, but also hard; you miss many things, but it's compensated by the many others you learn: other cultures, ways of thinking, languages, views of the world and of life. Get out of Spain! Now, almost a year later, I look back and think about everything I've learned, and about all the adventures I'd like to live and countries I'd like to travel to.

“This is your life, and it's ending one minute at a time.” Fight Club

Life is short; every opportunity not taken is an opportunity that may never come back. I learned not to question whether to do things more than twice: as soon as my body asks me for marcheta, I give it marcha. We get used to living surrounded by insecurity, and also to our comfort zone, and we're afraid of leaving it. Why learn a language if the one I use every day is enough for me? Why learn to draw if I'm an engineer? How scary! You travel to those places alone? And what if something happens to you?

Those are the questions I hear constantly when I go back to Spain, the same ones I asked myself a few years ago. Before, when I felt like doing something, I first had to answer a few questions and then look for approval from family and friends. Now everything is different: if I feel like fulfilling a dream, I align my life to achieve it, even if I don't have that approval. This part is especially hard on parents, who see you as a madman. Parents aren't exactly thrilled to see a child move to another country, nor are friends to lose one of the gang for long stretches; however, it's what makes you happy, it's your fuel, so why not do it?

Life and its paths

I had the opportunity to meet one of the greats of sport, whom I admire a lot, Valentí Sanjuán. His life took a radical turn, even bigger than mine if that's possible, and he found in sport the engine of his life. In one of his documentaries, which tells the feat of a cycling race in Cuba, he showed a quote that made me think quite a bit. The quote is from Henry Charles Bukowski and goes as follows:

If you're going to try, go all the way; otherwise, don't even start. It could mean losing girlfriends, wives, family, jobs, and maybe your mind. It could mean not eating for three or four days. It could mean freezing on a park bench. It could mean jail. It could mean humiliation. It could mean disdain, isolation… isolation is the gift. All the rest is a test of your endurance, of how much you really want to do it. And you'll do it, despite rejection and the worst odds. And it will be better than anything you could imagine. If you're going to try, go all the way. There is no other feeling like that. You will be alone with the gods, and the nights will flame with fire. You will ride life straight to perfect laughter. It's the only good fight there is.

I love thinking of life and adventures as paths, as challenges that let you test yourself, help you become a better person, and show you your limits. "Commit to your dreams," or, as my parents would put it, being stubborn: this one doesn't stop until he gets it. I've suffered insecurity before, sought validation to pursue my dreams, waited for many factors to align before going after them, and left things behind along the way because of that insecurity. Wrong! When you see the entrance to a path, walk it all the way to the exit; be brave, be committed, and let your motivation carry you.

Learn to walk those paths alone. You may find companions along the adventure, and others will be there cheering you on, but in the end it's you and your goals. Compare it with life: in the end it's just another long path, a few decades long, but a path. People will motivate you in life, especially your family; your friends will be there to tell you how crazy you are, and you, if anything, get even more fired up thinking about it. Some friends stay behind along the way, which is normal, and others aren't there when you need them, but you learn to do it alone and you don't break stride. Learning to walk your adventures alone is very hard, but remember: they are your adventures. The people who love you and want the best for you will be there to give you the energy you need. Don't forget to thank them!

  • Don't wait for someone to invite you out for a run before you start exercising. Put on your shoes and get out the door.
  • Want to meet more people but your friends aren't up for it? Look for activities and events, go there, and meet people.
  • Want to visit a country and none of your friends feels like it? Go to Skyscanner, find a good price, and buy the ticket. You won't regret it.

Language

This year living in Berlin has been a year of challenges on this path called life. You arrive in another country, full of excitement and unconsciously accustomed to your own: to a language, and also to a culture. At first it surprises you; then you end up adapting, even drawing contrasts. What's your English level? B2, I used to answer in Spain, because that's what a piece of paper from an exam called First said. Supposedly you're able to hold a fluent conversation with any native speaker: fluent by Spanish standards, that is. When I arrived in Germany I felt completely incompetent; when I tried to use English to express my feelings, my ideas, or my opinion, I felt I didn't know how to use the language. The first few times are hard: you're unable to string two sentences together, and you suffer, a lot. You gradually learn to truly use the language and to organize your ideas better so you can express them more clearly, thinking in the language you're using. And finally, why not try another one? (For me, still a pending challenge: German awaits.)

Language, and communication in general, is a very powerful tool. When you don't master the language, you feel very defenseless, especially in situations where you end up having to communicate with gestures.

Food

You learn to value your grandmother's cocido, or the weekend paellas. You look back and remember grumbling at your mother when she showed up with a plate of cocido. You won't do that anymore! Jamón serrano becomes a delicacy; espetec is found only in the most international supermarkets; and fresh fish is no longer so fresh. Tapas? They don't know them. Beer? The best. So if you live close to home and still get to enjoy those dishes, savor every spoonful as if it were the last.

Friends

Get-togethers with your friends become full-blown reunions. When you live in the same city and see them every day, your relationships end up monotonous. But when you spend a long time away and come back for special occasions, you learn what real parties are. Every time I return to Spain, even for just a few days, I come back tired, tired from not sleeping enough, but happy about the great nights spent with friends. Or, as I like to call them, "marcheta" nights.

Relationships

You get to know another culture and realize how Spaniards are seen (and you partly understand it). The warmth, joy, and openness of Spaniards are very particular. You go from that to a colder, more rational and formal way of being. That's not a bad thing; in fact it enriches your character. You also meet very interesting people; as I said, along the path people come and go: people you meet through work, on the street, or in situations you'd never expect. People you share experiences with, from whom you end up learning, and whom you teach in turn. You learn to be humbler, and your flaws as a person become more visible. We're not perfect, and there's always time to improve. You even run into people again in other parts of the world and confirm that yes, it's a small world after all; hence the importance of taking care of your relationships. You never know how, or in what way, a person will reappear in your life.

The real problems

While traveling, I had the chance to run into a high-school classmate I hadn't seen in many years. Talking about the experience of living abroad and everything we had learned over those years, I was surprised by how much we had in common. In particular, this:

When I go back home and hear my friends' problems, I laugh.

Because we aren't aware of the silly worries we have when we're close to home until we're truly away. That's when real problems appear, and we feel completely defenseless: without our language, without our family, just you and the problem. You stop having silly worries like your boyfriend not texting you for three hours, or not knowing what to wear that night.

Fill your life with experiences

For me, without a doubt, one of the biggest lessons has been learning the importance of filling your life with experiences. I've been getting rid of every material thing I felt tied to, and I've avoided getting tied to new ones:

  • Buy a car? Not for now. If I can get by with public transport and/or a bike, I'll skip the insurance, inspections, breakdowns, …
  • A house? No way; mortgages are for the banks. I'll pass on the dream of Spanish families: marry as soon as possible, buy a house as soon as you have a job, because of course, you have to leave something to the kids in the future, right?
  • Three wardrobes of clothes? Sure, to have thousands of possible outfits. I donated the clothes I didn't need, and now I'm happy with less than one wardrobe. I can travel packing everything I need into a backpack.

Where do I invest part of my income? In those experiences. Among the many experiences you can have in life, there is one that has fulfilled me like no other: traveling. Hong Kong, Bali, Beijing, Cambodia, Thailand, Budapest, … No matter how many years pass, I'll never forget the moments lived on those trips. I wish I had much more free time to travel, but since that's not possible, I use weekends to hop over to any European country and visit other cities. Backpack on, full of excitement.

Now, every time I have to spend money on something, I stop to think about my relationship with that something. If it's material, I ask whether I really need it or can do without the whim. If I can, I skip it; it's money thrown away. A few months later that something will deteriorate and fade into nothing. But if it's money spent on living a new experience and enjoying the moment, go ahead:

  • A trip.
  • A few beers with friends.
  • A visit to someone you haven't seen in a while.
  • A trip back home.
  • A marathon entry.
  • Dinners with a loved one.

“What we do in life echoes in eternity.” (Gladiator)

Life goes on. I don't know where I'll be a year from now, what I'll be working on, or even what my future goals will be. But one thing is very clear: wherever I am, I'll be enjoying myself like a kid. As Gladiator put it, what we do in life echoes in eternity. Enjoy it, relish it, and get genuinely excited about everything you do in it. Don't let your dreams be cut short because you're tied to something or someone. There's only one life:

  • Like a girl? Tell her.
  • What if I'm embarrassed? Swallow it.
  • Feel like traveling? Travel.
  • Want to live in another country? What's stopping you?
  • Don't like your job? Find another one.
  • Miss someone? Go and see them again.
  • Have a dream? Go for it.
  • What if you have several? Don't let them fade.
  • They don't support you? You don't need it.
  • They kick you in the backside? Use the momentum and put on a helmet.
  • What if you think you can't? Believe me, you can.
  • People talk about you? Even better!
  • They don't? Make them.
  • And above all, be happy and enjoy what surrounds you.
]]>
<![CDATA[My experience of moving abroad from Spain and the lessons I learned]]>
Xcode scripts to rule them all https://pepicrft.me/blog/2016/02/07/xcode-scripts-that-rule-them-all 2016-02-07T00:00:00+00:00 2016-02-07T00:00:00+00:00 <![CDATA[

Scripts

I have recently been working on SugarRecord 2.0, and one of my goals for that version was making it easier for contributors to clone the project and start contributing to it. A few months ago I noticed that Carthage and ReactiveCocoa had a folder called script with a set of normalized scripts. I cloned those projects, executed the bootstrap script, and the project was ready to contribute to. Wow, that's awesome!

The idea was great, so I copied the scripts from this repository by @jspahrsummers. He implemented a set of highly reusable scripts for any Xcode project, and I've been using them ever since. Why?

  • Developers don't need to install dependencies like Fastlane or Bundler gems; the scripts are implemented in plain Bash.
  • The setup task becomes a single command in your console.
  • You can build all your project's shared schemes and reuse that build script for CI.

If you're working on Xcode projects, whether it's a company project, an open-source library, or anything else, add them to your project and leave a note in the README.md explaining how to use them (it's very straightforward).

Where does the idea of normalized scripts come from?

The idea originally comes from GitHub. A few weeks ago, while googling, I found this blog post from the GitHub engineering team. They realized that there is a set of repetitive tasks in any project that could be normalized into scripts, so they decided to standardize them and call the result Scripts to Rule Them All. What a brilliant idea.

Thanks to GitHub for the idea and to @jspahrsummers for creating the equivalent version for Xcode projects.

]]>
<![CDATA[A set of normalized scripts that are very useful for Xcode projects. Contributors will be familiar with them as soon as they clone the project.]]>
States - The source of truth https://pepicrft.me/blog/2016/01/14/states-the-source-of-truth 2016-01-14T00:00:00+00:00 2016-01-14T00:00:00+00:00 <![CDATA[

These days I have been thinking about how we manage state in our iOS apps. State is a source of information but also a source of bugs. Why? Because we spread state across multiple components, duplicate it, and forget to consider derived state. Our app shows unexpected behaviors, and we struggle to find the reason. The user credentials are persisted in the Keychain, so we know whether the user is logged in, but our ApiClient also contains that information. Which one should I trust? Are you sure both are synchronized when either changes? I'm sure you miss some. State is a common source of bugs in our apps. There is no silver bullet for this problem, but there are multiple approaches out there: programming paradigms, patterns, and tricks that can help you simplify state.

Singletons

An attempt to centralize state

Have you ever wondered why we use singletons for some components in our apps? There are performance reasons; a singleton also provides an instance that can be accessed from any point of your app and, regarding state, it holds a reference to it. The famous ApiClient.sharedInstance() that knows about the access token is a great example. Singletons are great for keeping state, since we can access it from anywhere. Nonetheless, we tend to modify their state imperatively without considering that all consumers of the singleton instance might end up in an inconsistent state, since they are not subscribed to the singleton's state changes.

Since we don’t subscribe to singleton state changes, we might reach inconsistent states in entities that depend on the singleton state.
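The failure mode can be sketched in a few lines of Swift. The type names here (ApiClient, SessionViewModel) are illustrative only, not from a real codebase:

```swift
// A sketch of the inconsistency described above; all names are made up.
final class ApiClient {
    static let sharedInstance = ApiClient()
    var token: String?          // mutable state, changed imperatively
}

// A consumer copies the singleton's state once, at init time…
final class SessionViewModel {
    let isLoggedIn: Bool
    init() {
        isLoggedIn = ApiClient.sharedInstance.token != nil
    }
}

ApiClient.sharedInstance.token = "abc123"
let viewModel = SessionViewModel()     // captures isLoggedIn == true

// …so when the singleton changes, the copy silently goes stale:
ApiClient.sharedInstance.token = nil   // the user logs out
// viewModel.isLoggedIn is still true: the two states have diverged.
```

Nothing notified the view model, because it never subscribed; it only copied.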

States

Flux

Centralized state with propagation

If you haven't seen them yet, I recommend watching these two talks:

  • Unidirectional Data Flow in Swift: An Alternative to Massive View Controllers: Link
  • Flux — Application Architecture for Building User Interfaces: Link

I heard about Flux a few days ago when I watched the talk Realm published on their website. Flux is an architecture originally proposed by Facebook that aims for unidirectional data flow when building user interfaces. What's the core idea of Flux?

States

  • States are persisted in stores. You can have multiple stores in your app depending on how many states you want to keep. For example, one state can reflect the app's navigation and another the user session; they would be persisted in two respective stores, NavigationStore and UserSessionStore.
  • Actions fire state changes: Actions are the source of state changes, as side effects. Actions can be view lifecycle events, user interactions, … Whenever something can change the store's state, that's an action. Actions don't contain information about how the state will change.
  • State changes are driven by reducers: Whenever an action is received, it's passed to all the reducers registered in the store. Given the current state and the action that took place, a reducer decides the new state of the store. A store can have multiple reducers.
  • State changes are forwarded to subscribers: When the state of the store changes, the store notifies the subscribers interested in those changes.

There are some frameworks that implement the core concepts of Flux in Swift; the most popular one is ReduxKit, which also offers reactive wrappers.
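To make the four concepts concrete, here is a minimal framework-free sketch in Swift. All names (CounterAction, CounterStore) are invented for illustration; this is not ReduxKit's actual API:

```swift
enum CounterAction {              // actions describe *what* happened,
    case increment                // not how the state changes
    case decrement
}

final class CounterStore {
    private(set) var state = 0
    private var subscribers: [(Int) -> Void] = []

    // The reducer: given the current state and an action, return the new state.
    private func reduce(state: Int, action: CounterAction) -> Int {
        switch action {
        case .increment: return state + 1
        case .decrement: return state - 1
        }
    }

    // Actions are dispatched to the store…
    func dispatch(_ action: CounterAction) {
        state = reduce(state: state, action: action)
        subscribers.forEach { $0(state) } // …and changes are forwarded to subscribers
    }

    func subscribe(_ subscriber: @escaping (Int) -> Void) {
        subscribers.append(subscriber)
        subscriber(state)                 // replay the current state on subscription
    }
}
```

A view controller would subscribe when it appears and render from the closure; it never mutates the state directly, it only dispatches actions.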

States

Reactive Programming

Aiming unidirectional data flow

In this playground of states moving around, paradigms like Reactive Programming help create harmony. The core idea of Reactive Programming is that information flows through streams from data sources. Stream events can be combined and manipulated, but side effects should never be introduced into the equation. How does this relate to state propagation? The sources of truth, where states live, are the sources of these streams. Every time the state changes, the source sends the change through the stream. Interested entities can subscribe to these streams and decide what to do when the state changes.

How do we move from the imperative world to a reactive one?

Some frameworks provide components that notify you when state changes take place:

  • CoreData: NSFetchedResultsController notifies you when there are changes in the database. We react to those changes by updating our collection and inserting, updating, or deleting items in a collection/table view.
  • NSUserDefaults: We can use NSNotifications to detect when something has changed in the user defaults and then propagate the change to the subscribers.
  • Realm: It also provides notifications, but they are very generic and might not be enough for your use cases (the Realm team is working on improving them). Fortunately, there are libraries like RealmResultsController that implement the idea of NSFetchedResultsController for Realm.

These components can easily be wrapped into Observables, or into Signals/SignalProducers if you prefer those Reactive concepts. Once wrapped, you can map, filter, combine, observe on a given scheduler, … The magic of Reactive.
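As an illustration of the wrapping idea, here is a toy Observable in Swift. A real project would reach for RxSwift's Observable or ReactiveCocoa's Signal/SignalProducer instead, and the names below are invented for the example:

```swift
// Toy Observable: a source of truth that pushes its changes through a stream.
final class Observable<Value> {
    private(set) var value: Value
    private var observers: [(Value) -> Void] = []

    init(_ value: Value) { self.value = value }

    func subscribe(_ observer: @escaping (Value) -> Void) {
        observers.append(observer)
        observer(value)            // replay the current state on subscription
    }

    func send(_ newValue: Value) { // the source of truth pushes changes here
        value = newValue
        observers.forEach { $0(newValue) }
    }
}

// e.g. a user-defaults-backed username the UI can react to
// (the property name is hypothetical):
let username = Observable<String?>(nil)
username.subscribe { name in
    print("Render username: \(name ?? "anonymous")")
}
username.send("pepibumur")
```

The UI only ever renders from the stream, so it cannot drift away from the source of truth.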

Reactive is not a requirement for unidirectional data flow design but makes it easier

Data source and states

Performance implications

There are some scenarios where it's impossible to have a single source of truth for performance reasons. Accessing data sources like databases or APIs is expensive in terms of resource consumption. We cannot execute an API request for every row in a table view, or run a query against the database every time we render a cell. Those are typical examples where we are forced to duplicate our source of truth and keep a cached version that can be accessed quickly (typically in memory).

Whenever you think about caching the source of truth, ask yourself the following question:

Is it expensive in terms of performance?

Is it expensive to access NSUserDefaults? And the Keychain? If it isn't, why do we create cached copies and add the complexity of keeping states synchronized? Can't the ApiClient read the token through an accessor to the Keychain instead of keeping a copy that is restored when the app is opened?
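A sketch of that accessor idea in Swift. The readToken closure is a made-up stand-in for your Keychain wrapper of choice:

```swift
// The Keychain remains the single source of truth; the client reads the
// token on demand instead of caching a copy. `readToken` is hypothetical.
final class ApiClient {
    private let readToken: () -> String?

    init(readToken: @escaping () -> String?) {
        self.readToken = readToken
    }

    // No cached copy to keep in sync when the user logs in or out.
    func authorizationHeader() -> String? {
        return readToken().map { "Bearer \($0)" }
    }
}

let client = ApiClient(readToken: { "abc123" }) // stand-in for a Keychain read
```

If the Keychain read really did turn out to be a bottleneck, that would be the moment to introduce a cache, and not before.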

Examples of cached states

We are forced to cache state in our apps to offer a good experience to our users. Two common scenarios are persisting data from an API in our database, so that the state can be accessed quickly, and caching a database query's results in memory to present them in a collection view.

Local database caching an API state

The reason for persisting the API state in our local database or storage is making navigation in the app faster. If we download the user's GitHub repositories and persist them in our database, when the user opens the repositories view we don't have to show a spinner while the data downloads. The schema of states would be the following:

States

Deciding when the state is synchronized is crucial for a good user experience. Synchronizing too often can be bad for performance, but synchronizing too rarely leads to a bad experience. Once you decide which states are going to be persisted from the API and you have a schema of your app's structure, design when those states will be synchronized.

Designing the data model is as important as deciding when it is synchronized with the source of truth, which lives in the API. Be predictive and make sure the states reflect the real values when the user accesses them.

Memory collection caching a local database collection

As I mentioned earlier, for performance reasons, when data from a database is accessed we create a cached version in memory. That adds a new state to the game: we then have the API state, the database state, and our in-memory copy of the database state. Three states that have to be synchronized! The schema looks like this:

States

The complexity increases, since we need to synchronize two pairs of states: the API with the database, and the database with the in-memory copy. Plus, the view has to react to changes in the memory copy. Since the state is not automatically propagated to the presentation layer, we have to subscribe to these states, update the view accordingly, and trigger the synchronization action.

Recommendations

We cannot avoid state in our apps, but we can simplify it and avoid derived states. There isn't a perfect solution; synchronization is complex. You cannot ensure that states are perfectly synchronized, since synchronization between local and remote states involves network requests (and they can fail), but at least follow safe principles that get us close to the ideal synchronization state. These are some principles that I personally try to follow in my apps:

  • Design where the sources of truth will be: where your session will be persisted, where the collections of data will live. Whenever possible, avoid duplicating state. Don't copy the Keychain user session into an in-memory instance, or the user profile persisted in NSUserDefaults into an in-memory copy. If there are no performance implications, don't copy the data.
  • Designing when states synchronize is as important as designing the states: You can have a good state design, but if the states are not properly synchronized, the user experience of your app suffers. State synchronization is usually driven by your application's lifecycle and navigation. Be predictive!
  • Design unidirectional flows for state propagation: Don't move state in multiple directions. Derived states should be generated automatically when the source states change.
  • Keep state generation independent from state subscription: Keep the Flux schema in mind. States change through actions and nothing else. Any side effect is the result of a subscription to those states.

I hope you liked the article. The best solution for your app depends on the app itself, its data model, its persistence solution, … Find the architecture that best fits your problem, keep the principles above in mind, and simplify your states!

]]>
<![CDATA[An overview of state in iOS apps: how we typically handle it, the current challenges, and how to overcome them]]>
Install the last Carthage version on CI services (Travis, Circle, ...) https://pepicrft.me/blog/2015/12/29/install-last-carthage-ci 2015-12-29T00:00:00+00:00 2015-12-29T00:00:00+00:00 <![CDATA[

I've lately been working with multiple libraries and integrating them with CI, in particular Travis CI, since these libraries are open source. They have dependencies that are resolved and built using Carthage, which is distributed through GitHub Releases and Homebrew. However, the version in Homebrew does not always match the latest version available on GitHub Releases, and CI providers don't offer the latest version either. What can you do then? Get the latest version from GitHub and install it with a very simple script. How?

gist:pepibumur/3e088a936b9b03359af1

Use that bash script, passing as an argument the version of Carthage that you want to install. It will download the .pkg for that release and install it. For example, if we wanted to use it in our .travis.yml script:

language: objective-c
notifications:
  email: false
xcode_project: SugarRecord.xcodeproj
osx_image: xcode7.2
before_install:
  - bash update_carthage.sh 0.11

Short but useful! Enjoy coding.

]]>
<![CDATA[Very simple script to keep your Carthage version updated without depending on Brew.]]>
Rewriting SugarRecord, 2.0 https://pepicrft.me/blog/2015/12/21/rewriting-sugar-record 2015-12-21T00:00:00+00:00 2015-12-21T00:00:00+00:00 <![CDATA[

SugarRecord is one of the libraries I'm most proud of. It currently has 1,155 stars and 98 forks on GitHub, and a couple of open issues. I wrote the library when the first version of Swift was released: I wanted to learn the language, and I thought that writing a library would be a great way to do it.

SugarRecord 1.x, suffering Swift evolution

Swift changed very fast with every new version; moreover, we didn't have CocoaPods support yet, so all the integration steps were manual. Realm was still taking its first steps, and my knowledge of CoreData was quite limited. I had used MagicalRecord and took it as inspiration to implement the Swift equivalent. The structure was quite similar, but took advantage of Swift features like generics. One of the most difficult things at the time was designing an abstraction layer that could wrap both Realm and CoreData simultaneously. Realm's interface was much fresher than CoreData's, and I was trying to achieve a similar approach with SugarRecord.

The power of CoreData and Realm, with a nice-to-use interface.

I released the first version of SugarRecord and kept updating it and adding new features according to developers' requests. However, a few months later I had to leave the project because I didn't have enough time to invest: I spent most of my time on my full-time job, and I was simultaneously working on another project in my free time. I tweeted asking whether anyone might be interested in continuing the project and making it a great reference in the Swift community. With some help, a few versions were launched after I stepped away, but there was a lack of motivation in the team that took over the project, and it remained outdated for some months. Developers started forking it and fixing bugs in their own repositories, and the number of open issues in the repository increased.

Swift 2.0, time to update

I still had the SugarRecord account active, and I saw people tweeting about SugarRecord and reporting issues on GitHub. Swift 2.0 had been launched by then, and people were asking for support for that new version. SugarRecord was completely broken on it, and developers needed a new version of SugarRecord for their Swift 2.0 projects.

"I gave birth to that project and helped it grow; I couldn't leave it abandoned," I thought, and then I decided to start working on SugarRecord 2.0. Here's the tweet I published telling developers that the next version was in the oven:

I decided not to continue with the same codebase, which was strongly inspired by MagicalRecord, but to start from scratch: take all the good practices, the new Swift 2.0 features, and the problems that developers reported with the previous version, and try to build something better: more robust, safer, and actively supported. It took a few months of development, since I didn't have much time, but I was able to finish it and publish the first version of SugarRecord 2.0, whose features are listed below:

Features

  • Carthage & CocoaPods: With the first version, I had to explain to developers how to install it manually. All of them were used to CocoaPods, and doing it manually felt difficult (even though it really wasn't). When CocoaPods finally launched support for dynamic frameworks (yes, the popular use_frameworks!), I couldn't update because Realm didn't support it yet, so I had to keep the manual process, at least for Realm. With version 2.0, I could finally support CocoaPods, adding the external dependencies like Realm, ReactiveCocoa, and RxSwift as pod dependencies of the library. I also added support for the recently popular dependency manager Carthage. Now it's up to you to choose the solution that fits your project; SugarRecord supports it.

  • Realm inspired: These months I've been using Realm more and more, and I wanted to base SugarRecord's interface on Realm's. I built a fluent interface for building fetch requests, as well as operation methods that you can use to perform save/update/delete tasks. Mapping that API onto Realm was relatively easy, but it was a bit complicated for CoreData. I managed to solve it by introducing the concepts of Context and Storage, which behave as proxy classes for accessing the database.

  • Reactive interface: I've also been playing with Reactive Programming recently, and I thought it would be a great idea to expose SugarRecord's methods as Observable entities. If your app is based on Reactive, you can keep the same paradigm in your data layer: fetch requests and operations become Observable entities that are executed once you subscribe to them. I also added background fetching, with a fetch method that takes a map function, fetches your entities with a request in the background, and returns thread-safe entities to be used from whichever thread you need (e.g. the main thread for presenting in the UI).

  • Storage protocol: Although SugarRecord offers two predefined storages, CoreDataDefaultStorage and RealmDefaultStorage, you can add your own. The only requirement is that they conform to the Storage protocol. The default storages will be enough for 90% of the cases, but you might need something extra that they don't provide; you can extend them and add the missing functionality. If you come up with a new storage, propose a PR so that we can include it as a Storage of the library.

  • Fully tested: I wanted the library to be robust, and the best way to ensure that is through unit tests. I made sure that every provided method behaves as expected and that the designed storages don't throw unexpected errors that might cause instability in developers' apps. Thanks to Quick and Nimble for their great Swift testing and mocking frameworks.

Things learned with SugarRecord

  • Developers want an example project: Even if you fill the README with a lot of examples, they want to see a working project that shows everything they can do with your library.
  • Developers don't read: Use italics, bold, or quotes. They won't read; they want to install the library with CocoaPods and start using it. You'll even find developers who don't know anything about CoreData and think the magic of your library is the only thing they have to learn.
  • Developers prefer requesting over proposing: It's hard to find developers who want to contribute to your library. If they find a bug, they'll report it; if they need a feature, they'll ask you for it. They want to be consumers of your product.

With SugarRecord, it was the first time I saw someone creating an issue that starts with "I need an example project…". It made me feel like a servant.

  • Building a team around an open-source project is not easy: It's hard to find developers who want to contribute actively, and if you find them, it's difficult to keep them as motivated as you are. I couldn't find people to actively continue version 1.0, and I still haven't found the team for version 2.0.

  • Developers will ask you for a lot of features, but you decide: They want to do everything with your library, but you, as the designer of the code, are responsible for deciding whether a requested feature is in the scope of your library. You don't always have to say yes if it doesn't make sense for your library.

  • Make it easy: Try to make everything easy, from the setup to the use of your library. If developers find your library too complex, they won't use it.

    • Integrate it with the most popular dependency managers.
    • Design a friendly API. Avoid complexity.
    • If they have to ask you how to use your library, your design can be improved.

    “SugarRecord is listed in the RxSwiftCommunity webpage after our last update. https://t.co/YCNoeQkdb8” (Sugar Record, @sugar_record, December 20, 2015)

SugarRecord is available for your Swift projects. If you want to save time setting up your CoreData/Realm stack, you can use it. I'm also looking for contributors who want to help with the project: fixing bugs, adding new features, and making SugarRecord better every day. If you're interested, drop me a line: [email protected]

]]>
<![CDATA[In this post I explain the process of rewriting SugarRecord, a CoreData/Realm wrapper for Swift.]]>
Functional is about functions (Swift) https://pepicrft.me/blog/2015/10/27/functional-is-about-functions 2015-10-27T00:00:00+00:00 2015-10-27T00:00:00+00:00 <![CDATA[

Since Swift 2.0 was launched, this term has become very popular. You attend conferences, and it's usually the topic most people talk about. You even see people struggling to use it in their apps, really overwhelmed by it. Why? It's something that can easily be done with Swift, but it's not something new (a lot of languages were already offering fully functional paradigms before).

Hey Pedro! Do you use functional programming in your apps? I want to start using it; I’m watching a lot of talks and reading a couple of books. What do you think? Should I use it?

Maths

Functions are nothing new. Back in school we were taught that a function takes some input arguments or variables and, after some operations, returns a value. From the engineering perspective, I was taught that these were systems, or black boxes, whose output at any instant depends only on the input stream (systems without feedback loops).

f(x, y) = x + y

Notice that when we create a function we’re actually defining a scope of operations that doesn’t reach outside itself, so the logical surface is constrained. Thought of that way, the concept isn’t complex at all, but… we were given more flexibility when we were told that we could save state in something called classes. Voilà: OOP.
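In code (modern Swift syntax), that black box is just a function whose output depends only on its inputs:

```swift
// f(x, y) = x + y as a pure function: the result depends only on
// the inputs; nothing outside the function is read or mutated.
func f(_ x: Int, _ y: Int) -> Int {
    return x + y
}
```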

Black box

Object oriented programming

We started grouping these functions into something called classes and giving them state, which is created when an instance of the class is created. We still have functions, but now they seem to belong to something, and in languages like Objective-C we cannot extract them from their scope (ask a JavaScript developer about functions and contexts and you’ll be surprised by what they can do). We stuck to object-oriented principles and moved away from the basic function concept we saw above. Object-oriented programming introduces a degree of flexibility and mutability when working with objects and their functions. I’m sure most of you have written something like this:

class ApiClient {

  // MARK: - Attributes

  var token: String?

  // MARK: - Init

  init(token: String) {
    self.token = token
  }

  // MARK: - Public

  func reset() {
    token = nil
    cancelRequests()
  }

  func execute(request: Request, completion: (Result<AnyObject, Error>) -> Void) {
    let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
    dispatch_async(dispatch_get_global_queue(priority, 0)) {
        let authRequest: Request = self.authenticatedRequest(request)
        // Execute the request and get the response: let result
        dispatch_async(dispatch_get_main_queue()) {
          completion(result)
        }
    }
  }

  // MARK: - Private

  func authenticatedRequest(request: Request) -> Request {
    // Authenticating
  }
}

Let’s analyse the problems the implementation above presents. It’s a typical pattern in mobile apps, and I’ve seen lots of workarounds for unexpected states that weren’t taken into account when it was implemented. (Note: this implementation could be made safer with Swift’s immutability concepts, but I simply reproduced the Objective-C style.)

  • State mutability: we have a function that depends on an input value, Request (great!), but also on two other variables: the token and time (yes, we added asynchrony). We have three points in time (before the asynchronous block, inside it, and in the main-thread block) and two token states (valid/invalid), which gives two states at every point in time. Do we usually cover all of them when we implement a function like this? Probably not. If execution reaches an unhandled state, our app won’t know what to do, or will treat it as another, already contemplated case.

Note: adding threading logic inside functions adds extra complexity, because by the time these closures execute, the state might have changed.

  • External control: related to the previous point, this mutability is mostly controlled externally; we end up with a Singleton used all around the app. When we log in we set the token; when we log out, we clear it. This adds uncertainty to the internal implementation. What happens if I try to execute a request and someone has removed the token, for example by resetting the client?

  • Retain cycles: I described classes as scopes for functions that share state. When we define functions that use the object’s state, we’re indirectly retaining that scope: if the function is in memory, the object that contains it will be too. In the example above, the GCD closure retains self to generate the authenticated request. Until that closure executes, the ApiClient is retained by two entities. If for any reason the closure itself were retained in memory, you’d be retaining those objects as well.

This is a common mistake in Objective-C, and many developers don’t even know what’s going on when they use reference objects inside blocks. Fortunately, Swift handles it well: closures can declare in a capture list exactly how they retain each reference.
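A minimal sketch of the capture-list idea (the Box type is invented for illustration): with [weak self], a closure no longer keeps its owner alive.

```swift
// A closure created with [weak self] does not retain the instance it
// belongs to: once the owner is deallocated, self is nil inside it.
final class Box {
    var value = 0

    func makeIncrementer() -> () -> Void {
        return { [weak self] in
            self?.value += 1
        }
    }
}

var box: Box? = Box()
let increment = box!.makeIncrementer()
increment()   // box!.value is now 1
box = nil     // the closure is still alive, but it does not keep Box alive
increment()   // safe no-op: self is nil inside the closure
```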

I’ve seen the problems above manifest when the user logs out and you reset the state of your static/singleton API client. Things start to behave quite randomly.

Functional

Let’s bring back our function concept and make things simpler. Remember:

func execute(request: Request, session: Session, completion: (Result<AnyObject, Error>) -> Void) {
  let authenticatedRequest: (Request, Session) -> Request = { request, session in
    // Authenticates the request and returns it
  }
  let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
  dispatch_async(dispatch_get_global_queue(priority, 0)) {
    let authRequest: Request = authenticatedRequest(request, session)
    // Execute the request and get the response: let result
    dispatch_async(dispatch_get_main_queue()) {
      completion(result)
    }
  }
}

The example above has in essence two functions:

g(request, session) = auth_request

f(request, session, completion) = execute(g(request, session), completion)

We’re not accessing any external scope that retains state; instead we pass all the data the operation needs as input parameters, and even the internal helper is a function that, in the same way, takes input parameters and returns data.

As you can see, functional programming isn’t magic; it has always been with us, but as Objective-C developers we forgot about it, and now we have the opportunity to use it again with a better, more readable syntax.

Namespaces

Where should I place my functions?

With OOP it seems easier to organise our code: we have models, controllers, presenters, factories, … Each of these entities has its own file, and we group them by type or by feature inside the app. But what about functions? Where should they live? You could declare everything at the top level, but you’d end up polluting the top-level namespace with tons of functions.

Building namespaces with structs

My suggestion is to use structs to create namespaces in Swift, grouping functions that belong to the same business logic. For example, if you have a set of functions related to networking, group them under Network as shown below:

struct Network {
  static func execute(request: Request, completion: (Result<AnyObject, Error>) -> Void)
  static func authenticate(username: String, password: String, completion: (Result<Session, Error>) -> Void)
}

Recommendations

If you’re thinking about using functional approaches in your Swift code but don’t know how or where to start, don’t worry: it’s not something you must do to get your app working, but it will definitely help your code become more reusable, robust, and stable. From my experience, try to keep the following points in mind:

  • Think of problems as functions. If a problem is too big, think of it as a combination of smaller problems.
  • Swift offers immutability: use let and avoid unpredictable states. Force yourself to check the state of your variables, or copy values instead of modifying existing ones.
  • Prefer value types to reference types: when you have to model an entity, try a struct first. Use let attributes, and when you need a version with one attribute changed, create a new struct with that attribute changed. Structs allow mutation, but again, force yourself not to mutate state.
  • Be hybrid: there’s no need to make everything functional. With a bit of experience you’ll understand what your code asks of you.
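The “prefer value types” point can be sketched like this (a minimal example; the User type and its with(email:) helper are invented for illustration):

```swift
struct User {
    let name: String
    let email: String

    // Instead of mutating, produce a new value with one field changed.
    func with(email newEmail: String) -> User {
        return User(name: name, email: newEmail)
    }
}

let user = User(name: "Ana", email: "ana@example.com")
let updated = user.with(email: "ana@work.example.com")
// `user` keeps its original email; `updated` is an independent value.
```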

And above all, don’t be overwhelmed by these concepts. You can also combine object-oriented programming with functional safety thanks to Swift’s immutability concepts.

References

If you’re also interested in the Reactive paradigm, you can subscribe at https://leanpub.com/functionalreactiveprogrammingswift to a book I’m writing about Functional Reactive Programming with Swift.

]]>
<![CDATA[Quick introduction to what Functional Programming in Swift is from the simple perspective of functions]]>
Implementing a Mutable Collection Property for ReactiveCocoa https://pepicrft.me/blog/2015/10/14/implementing-a-reactive-collection-property 2015-10-14T00:00:00+00:00 2015-10-14T00:00:00+00:00 <![CDATA[

I’ve been playing a lot with Reactive lately, especially ReactiveCocoa. Since they launched the Swift version, I’ve been using it like a kid with a new toy in my project. There’s one thing in particular I use a lot in the MVVM pattern: properties.

What’s a property?

For those who don’t know, a Property in ReactiveCocoa 3/4 is a custom generic type that encapsulates a variable. Why? Because it exposes a SignalProducer that reports changes to this variable as events. That way you can know when the value changes and subscribe to those changes using ReactiveCocoa concepts.

Here’s an example of a property:

import ReactiveCocoa

let myProperty: MutableProperty<String> = MutableProperty("")
myProperty.producer.startWithNext {(newValue) in print("This is my new value \(newValue)")}
myProperty.value = "yai!"

As you can see in the example above, the property has a producer we can subscribe to, which sends new values when the property’s value changes. Here we update the value to yai! and the subscriber prints it.

Types of properties

ReactiveCocoa currently offers three types of properties that cover most of the cases where we’ll need this pattern, and all of them conform to the same protocol:

public protocol PropertyType {
    typealias Value
    var value: Value { get }
    var producer: SignalProducer<Value, NoError> { get }
}
  • ConstantProperty: a property that doesn’t mutate its value once it’s initialized. The main advantage (in my opinion) of using this kind of property is that you can still connect it with other ReactiveCocoa components.

  • PropertyOf: a property that, once created, doesn’t allow external modification of its value. It only reflects changes sent from another Signal/SignalProducer, or even another property. Consequently, this kind of property can be initialized from any of these three components:

public init<P: PropertyType where P.Value == T>(_ property: P)
public init(initialValue: T, producer: SignalProducer<T, NoError>)
public init(initialValue: T, signal: Signal<T, NoError>)
  • MutableProperty: unlike the previous one, you can update the value after initialization, and in the same way all changes are propagated to the producer’s subscribers.
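To make the mechanism concrete, here is a toy version of a mutable property in plain Swift. It only illustrates the observer idea; it is not ReactiveCocoa’s implementation:

```swift
// Toy property (NOT ReactiveCocoa's code): subscribers receive the
// current value on subscription and every new value afterwards.
final class ToyMutableProperty<Value> {
    private var observers: [(Value) -> Void] = []

    var value: Value {
        didSet { observers.forEach { $0(value) } }
    }

    init(_ initial: Value) {
        self.value = initial
    }

    // Roughly analogous to producer.startWithNext in ReactiveCocoa.
    func startWithNext(_ observer: @escaping (Value) -> Void) {
        observers.append(observer)
        observer(value)
    }
}

let myProperty = ToyMutableProperty("")
myProperty.startWithNext { print("This is my new value \($0)") }
myProperty.value = "yai!"
```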

Properties in MVVM pattern

Properties are very useful in the MVVM pattern because they allow us to detect changes in these property values and update the view accordingly. For example, imagine the following situation:

class ProfileView: UIView {
    // MARK: - Attributes
    let avatarView: UIImageView = UIImageView()
    let viewModel: ProfileViewModel

    // MARK: - Constructors
    init(viewModel: ProfileViewModel) {
        self.viewModel = viewModel
        super.init(frame: CGRectZero)
        setupObservers()
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    // MARK: - Setup
    private func setupObservers() {
        self.viewModel.avatarImage.producer.startWithNext { [weak self] (avatarImage) in
            self?.avatarView.image = avatarImage
        }
    }
}

class ProfileViewModel {
    // MARK: - Attributes
    let avatarImage: MutableProperty<UIImage> = MutableProperty(UIImage())
}

We want to update the avatar image in the ProfileView when we get the image from somewhere, no matter the source. So we define the profile view’s ViewModel, which includes a MutableProperty of type UIImage. From the view we subscribe to that property, and when there’s a new image we simply set it on the UIImageView. The data source and its use in the layout are fully decoupled: the view doesn’t know where the data comes from; the image might come from a local cache, a web request, the camera… It just has to know how to set that image in the view. Great, right? You can extend this to more views around the app, and to more property types, and keep your views “synchronized” with the data source.

There’s also a great advantage to working with ReactiveCocoa properties: you can use Reactive concepts, for example applying functional operators and combining multiple properties into a single one.

In the example above, we could for example define a map function with the following format:

func addBadge(badgeConfig: BadgeConfig)(image: UIImage) -> UIImage {
    // Add badge logic
}

Then have a new property in the view model:

lazy var avatarImageWithBadge: PropertyOf<UIImage> = {
    PropertyOf(initialValue: self.avatarImage.value, producer: self.avatarImage.producer |> map(addBadge(myBadgeConfig)))
}()

Collection property

When you work with collections in your views, it’s very complicated to get granularity out of these properties, detecting what really changed in the collection. I noticed I ended up calling reloadData() on the table/collection view, forcing a relayout of every element. Not good for performance, right? Components like NSFetchedResultsController were designed to avoid this, but that component is extremely coupled to CoreData. If you want something similar for your custom collections, you have to look for a custom implementation (I don’t have any in mind right now) that proxies collection operations and notifies observers about insertions, deletions, and updates, passing back the index where each operation was executed.

What if we had this approach in ReactiveCocoa, using properties? Let’s try to develop a MutableCollectionProperty.

Note: I’ve created a repository where this new component is implemented: https://github.com/gitdoapp/RAC-MutableCollectionProperty. You can clone the repository and try it in your environment.

RAC-MutableCollectionProperty

MutableCollectionProperty is a ReactiveCocoa property that notifies about changes produced in an internal collection. It exposes Swift array methods to modify the collection and forwards these changes to the attached subscribers, as shown in the example below:

let property: MutableCollectionProperty<String> = MutableCollectionProperty(["test1", "test2"])
property.changesProducer.startWithNext { [weak self] next in
  switch next {
  case .StartChange:
    self?.tableView.beginUpdates()
  case .Insertion(let index, _):
    self?.tableView.insertRowsAtIndexPaths([NSIndexPath(forRow: index, inSection: 0)], withRowAnimation: .Automatic)
  case .EndChange:
    self?.tableView.endUpdates()
  default: break
  }
}
property.append("test3")
property.append("test4")

Every sequence of changes is preceded by a StartChange event and ends with an EndChange. This allows grouping multiple changes together and putting the view that reflects them into an “update” state. The methods exposed by the property are:

public func removeFirst()
public func removeLast()
public func removeAll()
public func removeAtIndex(index: Int)
public func append(element: T)
public func appendContentsOf(elements: [T])
public func insert(newElement: T, atIndex index: Int)
public func replace(subRange: Range<Int>, with elements: [T])
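The event flow the property sends can be modeled in plain Swift like this (a toy sketch of the idea, not the repository’s actual implementation; only append is shown):

```swift
// Toy model of the change events: every mutation is wrapped between
// startChange and endChange so a view can batch its updates.
enum CollectionChange<T> {
    case startChange
    case insertion(index: Int, element: T)
    case endChange
}

final class ToyCollectionProperty<T> {
    private(set) var elements: [T]
    private var observers: [(CollectionChange<T>) -> Void] = []

    init(_ elements: [T]) {
        self.elements = elements
    }

    func observe(_ observer: @escaping (CollectionChange<T>) -> Void) {
        observers.append(observer)
    }

    func append(_ element: T) {
        send(.startChange)
        elements.append(element)
        send(.insertion(index: elements.count - 1, element: element))
        send(.endChange)
    }

    private func send(_ change: CollectionChange<T>) {
        observers.forEach { $0(change) }
    }
}
```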

You can get this component here and use it with ReactiveCocoa in your projects. I’ve already proposed the feature to the ReactiveCocoa team in a PR and I’m still waiting for a response :).

If you’re interested in Reactive paradigms and want to keep learning, I’m currently writing a book about the use of Reactive in Swift apps with ReactiveCocoa. You can follow its status here.

If you find any bug, or you’d like to discuss Reactive or this component in particular, feel free to drop me a line: [email protected]. We’re using this and other Reactive concepts in GitDo.

]]>
<![CDATA[These are the steps I followed to create a Mutable Collection Property for ReactiveCocoa. Very useful if you want to get events about changes produced in a collection]]>
Programación Reactiva en Swift: Parte 1, Introducción https://pepicrft.me/blog/2015/08/09/programacion-reactiva-swift-parte-1 2015-08-09T00:00:00+00:00 2015-08-09T00:00:00+00:00 <![CDATA[

With the arrival of Swift and the introduction of interesting operators, functional concepts, and type safety, the reactive programming paradigm has gained special importance in app development. Compared with the imperative programming most developers (me included) are used to, programming reactively consists of modeling what happens in the system as a set of events sent through a stream of data. The concept is quite simple, and although not everything is naturally a stream, it can end up being modeled as one: from the actions the user performs on the UI to the information coming from the location framework.

All the libraries you find today on GitHub try to model reactive concepts through a set of components and operators for manipulating and processing events. The difference between them is mainly the syntax and the names they use for the components. Some of them even add operations such as retry. There are libraries like ReactiveCocoa, which recently updated its entire API to embrace the advantages Swift brings as a language, or RxSwift, based on ReactiveX, which is available for other languages like Java and JavaScript.

If you’re curious, in the RxSwift repository you’ll find a table comparing RxSwift with the other Swift alternatives.

Reactive stream

Observation patterns

When I started getting into reactive concepts, one of my first concerns was understanding which similar patterns I had been using until then, what problems they had, and how reactive programming helped with or simplified them. You use most of them daily:

KVO

Used extensively in Cocoa. It lets you observe the properties of a given object and react to their changes. KVO’s biggest problem is that it isn’t easy to use: its API is too heavyweight, and it still doesn’t offer a block-based interface (closures in Swift).

objectToObserve.addObserver(self, forKeyPath: "myDate", options: .New, context: &myContext)

Delegates

One of the first patterns you learn when taking your first steps in iOS/OSX development, since most components in Apple’s frameworks implement it. UITableViewDelegate, UITableViewDataSource, … are some examples. The main problem with this pattern is that only one delegate can be registered. In more complex scenarios where a single subscribed entity isn’t enough, the pattern needs some modifications to support multiple delegates.

func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        return UITableViewCell()
}

Notifications

When it’s hard to reach the component that produces the event in order to subscribe to it, the pattern used is sending notifications. Do you know NSNotificationCenter? CoreData uses it, for example, to notify when a context is about to perform a save operation. The problem with this pattern is that all the information is sent in a dictionary, userInfo, and the observer has to know the structure of that dictionary in advance in order to interpret it. There is therefore no safety regarding either the structure or the types sent.

NSNotificationCenter.defaultCenter().addObserver(self, selector: "contextWillSave:", name: NSManagedObjectContextWillSaveNotification, object: self)

The reactive libraries available today offer extensions to bring those patterns into the reactive world: from generating signals for notifications sent to NSNotificationCenter, to detecting the taps of a UIButton.

Advantages of programming reactively

Reactive programming has great advantages in areas where applying the idea of a stream is fairly direct. As I mentioned at the beginning, everything can be modeled as a stream, and you could in fact have a completely reactive project, but in my view you’d end up with complex stream-generation logic that makes the code harder to read.

Something similar happens with Reactive programming as with Functional programming. It’s a paradigm that has gained great momentum in iOS/OSX development with the arrival of Swift, but there’s no need to stress out and feel immense pressure to migrate projects to these paradigms. Use them in your projects as you become comfortable and notice that parts of your project call for them. You were happy without them! Now you can be even happier, but take it easy…

After a few months using ReactiveCocoa in my projects, especially in the data-source layer (local & remote), I’ve noticed a series of advantages:

  • Type safety: thanks to generics we get type validation at the compiler level and avoid working with generic types like AnyObject or NSObject.
  • Easier data manipulation: events received through streams can be mapped, filtered, and reduced. Thanks to user-defined functions we can apply countless operations to events.
  • Subscription on threads: regardless of the internal thread management a stream’s execution may involve (for example, a web request), we can specify the thread on which to subscribe and listen for responses. Specifying the subscription thread comes down to a single line of code.
  • Easy composition and reusability: streams can be combined in endless ways (thanks to the operators the frameworks themselves provide). We can also build our own, obtaining event streams from the combination of many others.
  • Data binding: we can connect the events arriving through a stream to, for example, a collection, so that new collections arriving as events update the “bound” collection. In the same way, we can update a property of a UI element with the events received.
  • Error handling: by default, reactive frameworks offer the option of retrying the stream’s source operation on failure. For example, if a stream receives the response of a web request and we want it to be retried on failure, we can use the retry operator and the request will be executed again:
NSURLSession.sharedSession().rac_dataWithRequest(URLRequest)
            |> retry(2)
            |> catch { error in
                println("Network error occurred: \(error)")
                return SignalProducer.empty
            }
  • Simplified state: because information is modeled as a unidirectional stream, the number of possible states is reduced, simplifying the logic of our code.
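Several of these advantages (type safety, data manipulation) have a miniature counterpart in plain Swift: the same map/filter shape that reactive frameworks apply to event streams, applied here to an array (a trivial illustrative sketch):

```swift
// The operators reactive frameworks apply to events, in miniature:
// typed values transformed with filter and map, checked by the compiler.
let statusCodes = [200, 404, 200, 500]
let errorMessages = statusCodes
    .filter { $0 >= 400 }            // keep only failures
    .map { "HTTP error \($0)" }      // turn codes into messages
// errorMessages == ["HTTP error 404", "HTTP error 500"]
```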

ReactiveCocoa

Although for Swift we can find several frameworks that try to model the whole reactive paradigm through a set of components and operators, the following posts will be based on ReactiveCocoa. The main difference between ReactiveCocoa and the rest is that it is not a port of Microsoft’s original library (Rx); instead it implements its own API, simpler and closer to Cocoa conventions.

This is the first of a series of introductory posts on Reactive Programming, where we’ll also detail its use in modeling data sources (API and local). All of them will be collected in this book, https://leanpub.com/programacionreactivaenswift, which I’m currently writing and will publish in a few months.

In the next post I’ll introduce the components of ReactiveCocoa.

Recommendation

At the following link you’ll find an interesting explanation of Reactive programming with examples in JavaScript.

]]>
<![CDATA[With the arrival of Swift and the introduction of interesting operators, functional concepts, and type safety, the reactive programming paradigm has gained special importance in app devel... ]]>
Paginated API requests using Functional Reactive in Swift https://pepicrft.me/blog/2015/06/18/paginated-api-requests-using-functional-reactive-in-swift 2015-06-18T00:00:00+00:00 2015-06-18T00:00:00+00:00 <![CDATA[

I’ve been playing these days with ReactiveCocoa, and I fell in love with that programming paradigm. I had heard about it before but hadn’t stopped to play with it. It might be scary at first, and most of the concepts are difficult to understand when you first look at them, but the more familiar you get with it, the more you think in terms of streams.

To practice reactive programming, I implemented an API client with a public reactive interface. Instead of using blocks to notify the completion of an API request, its methods return a signal that executes when someone subscribes to it. That client pointed at an API with paginated responses, i.e. you have to execute several requests to get all the resources when the number of results is higher than the page limit.

Taking advantage of the client’s reactive approach, I implemented that paginated method and made it reusable for any client, independent of the HTTP framework you’re using. Let’s see how I did it:

typealias PaginatedRequest = (page: Int, pageLimit: Int) -> RACSignal

internal func rac_paginatedSignal(initialPage: Int, pageLimit: Int, requestSignal: PaginatedRequest) -> RACSignal {
    var currentPage = initialPage
    let nextSignal = { () -> RACSignal in
        let signal = requestSignal(page: currentPage, pageLimit: pageLimit)
        currentPage = currentPage + 1
        return signal
    }
    var subscribeNext: ((RACSubscriber!) -> Void)?
    subscribeNext = { (s: RACSubscriber!) -> Void in
        nextSignal().subscribeNext({ (response) -> Void in
            if let items = response as? [AnyObject] {
                for item in items {
                   s.sendNext(item)
                }
                if items.count == pageLimit {
                    subscribeNext!(s)
                }
            }
            else {
                s.sendError(NSError(domain: "invalid.response", code: -1, userInfo: nil))
            }
        }, error: { (error) -> Void in
            s.sendError(error)
        }, completed: { () -> Void in
            s.sendCompleted()
        })
    }
    return RACSignal.createSignal({ (subscriber) -> RACDisposable! in
        subscribeNext!(subscriber)
        return nil
    })
}

Breaking down

  • PaginatedRequest: we define this typealias, which represents a function that takes the page number and the page limit and returns a signal. When someone subscribes to that signal, it executes the request and delivers the results or an error.
typealias PaginatedRequest = (page: Int, pageLimit: Int) -> RACSignal
  • Paginated request signal generator: the paginated signal generator takes three parameters, the initialPage, the pageLimit, and the PaginatedRequest, and returns a signal. That signal encapsulates the iteration through all the pages and sends the collection results as stream items.

  • Next signal generator: this closure is responsible for returning the signal associated with the next page. The function’s context keeps a reference to the current page, and every time the closure is called that counter is increased by 1. It uses the PaginatedRequest closure.

var currentPage = initialPage
let nextSignal = { () -> RACSignal in
    let signal = requestSignal(page: currentPage, pageLimit: pageLimit)
    currentPage = currentPage + 1
    return signal
}
  • Subscribe next: the approach subscribes recursively to the next signal, passing the subscriber through. The subscribeNext closure takes the source subscriber and, depending on the next signal’s results, it:
    • Closes the stream, sending a completion or a failure message
    • Sends the results through the stream
    • Subscribes to the next signal when the results count equals the page limit
var subscribeNext: ((RACSubscriber!) -> Void)?
subscribeNext = { (s: RACSubscriber!) -> Void in
    nextSignal().subscribeNext({ (response) -> Void in
        if let items = response as? [AnyObject] {
            for item in items {
               s.sendNext(item)
            }
            if items.count == pageLimit {
                subscribeNext!(s)
            }
        }
        else {
            s.sendError(NSError(domain: "invalid.response", code: -1, userInfo: nil))
        }
    }, error: { (error) -> Void in
        s.sendError(error)
    }, completed: { () -> Void in
        s.sendCompleted()
    })
}
  • Entry signal: the source signal that fires the recursive subscribing. It just calls subscribeNext, passing the subscriber.
RACSignal.createSignal({ (subscriber) -> RACDisposable! in
  subscribeNext!(subscriber)
  return nil
})

Important notes

  • Collection results are sent one by one through the stream, so you can update your collection as the items arrive. If you want all the results in an array, you can use RACSignal’s toArray() method. Be careful with it: it blocks the thread until the stream finishes with either a completion or a failure message.
  • If any page fails, the recursive algorithm stops and sends an error message to the subscriber. The remaining pages won’t be fetched.
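The recursive idea doesn’t depend on ReactiveCocoa. Here is a rough callback-based sketch in plain Swift (all names are invented for illustration; this is not the RACSignal implementation above):

```swift
// Fetch pages recursively: a full page means there may be more results,
// a short page means we reached the end and can deliver everything.
func fetchAllPages<T>(
    startingAt page: Int,
    pageLimit: Int,
    accumulated: [T] = [],
    request: @escaping (Int, ([T]) -> Void) -> Void,
    completion: @escaping ([T]) -> Void
) {
    request(page) { items in
        let all = accumulated + items
        if items.count == pageLimit {
            fetchAllPages(startingAt: page + 1,
                          pageLimit: pageLimit,
                          accumulated: all,
                          request: request,
                          completion: completion)
        } else {
            completion(all)
        }
    }
}

// Usage with a fake in-memory "API" of 5 items and a page limit of 2:
let allItems = Array(1...5)
func fakeRequest(page: Int, completion: @escaping ([Int]) -> Void) {
    let start = (page - 1) * 2
    completion(Array(allItems[start..<min(start + 2, allItems.count)]))
}
fetchAllPages(startingAt: 1, pageLimit: 2, request: fakeRequest) { all in
    print(all) // [1, 2, 3, 4, 5]
}
```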

ReactiveCocoa is very useful when you’re dealing with asynchronous events, because you can manipulate and combine them easily as you receive them. In this case we have different streams that we combine into a single stream through which we receive the collection items.

If you want to use Reactive programming in your projects and don’t know how, or you want to talk about anything related to it, drop me a line: [email protected]

]]>
<![CDATA[Reactive is magic: transform your API responses into streams of data and you'll see how easy it is to build, for example, paginated API requests]]>
Why Open Source helps you to become a better developer https://pepicrft.me/blog/2015/06/11/you-should-try-open-sourcing 2015-06-11T00:00:00+00:00 2015-06-11T00:00:00+00:00 <![CDATA[

For those who aren’t aware, Open Source is the reason many development communities exist. Can you imagine iOS/OSX development today without CocoaPods? And Ruby without its gems? A lot of developers around the world put in the effort to simplify your daily work, publishing it as Open Source through these dependency managers (thankfully, because otherwise it would be impossible to resolve many of our projects’ dependency conflicts). Open sourcing a project doesn’t just mean running git push origin and making it public. It has extra implications that will help you as a developer and with your future projects. Have you ever done it before? Would you like to know how it helped me? Let’s see.

Why did I start open sourcing some libraries?

When I started developing apps some years ago, I didn’t know what the term Open Source meant. I started with iOS; at that time there wasn’t a dependency manager like CocoaPods, and I integrated libraries manually. There were, though, reference websites like Cocoa Controls where I could check out the latest libraries that other developers had published and were offering to the community.

I’ve always been a curious person, and I wonder every day how things work. I like to say that I can’t be happy without knowing what’s inside the magic boxes around us. I wanted to know how these libraries worked, and why developers were spending their time developing libraries instead of working on their own projects.

I found the answer, and I liked the philosophy behind it. What makes a language important is not only how good it is and what it offers that others don’t, but the community supporting it with new tools, libraries, tutorials, and more. And Objective-C was building a great community thanks to mobile app development, tools like CocoaPods, and libraries like AFNetworking that you surely know about. People packaging code logic to simplify others’ work and support the community with great tools. I liked it! I wanted to support the community as well, so I took my first steps sharing some code.

I remember having friends who started developing iOS apps as well and who saw (and still see) dependency managers as “tools for developers who don’t know how to integrate libraries manually”, or who think that “those who use libraries are not good developers; they have to use code from others”. Sorry, but I totally disagree. The point is that someone has bundled something you would otherwise have to code yourself, which saves you a lot of time. Why not use it, and extend it if you need extra features it doesn’t support? Those developers tend to see their code as Gollum saw the ring:

  • This code is mine and only mine.
  • Sharing means others take advantage of the time I spent coding.
  • But yes… I take code from others and take part passively in the community. Resources rain over me.

It’s something really worth trying if you’re a developer and you haven’t done it before.

Why does contributing to Open Source projects help you as a developer?

I said that it helps developers, but some of you might be wondering why. These are some points I figured out after creating some Open Source libraries:

  • Think about code structure: Most of the time we’re constrained by patterns that come from the project itself, or maybe from the language. We code for a product and forget about the code structure; the important thing is shipping a feature, or adding a fix that solves some other bug. If you work in a team this might be a bit different, but if you work as a freelancer it’s quite common. When you work on a library, other developers are going to use it. The interface has to be clear and the code well structured; otherwise they won’t be able to use “your product”. Before you start coding you’ll analyze your library’s requirements, think about how you would use the library if you were its consumer, and then design the core structure. I feel tempted to skip these steps when I work on a project which is not a library. In my latest project I’ve bundled the core business logic in modules; for 8fit, for example, I called it EFKit, or EFWatchKit for the Apple Watch code. It helps me to think of this code as a module that my main project depends on.
  • Tests are important: When you have an open source library published on GitHub, developers will start contributing to it. They might not know the whole library and just want to add an extra feature. Do you know how easy it is to introduce a regression there? Having everything tested ensures that nothing breaks and the project stays stable.
  • Versioning is also very important: When other developers depend on your library, you have to reflect the library’s changes through versioning. I recommend following Semantic Versioning 2.0: breaking changes to the library interface bump the major version, backwards-compatible features bump the minor version, and bug fixes bump the patch version.
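As a minimal sketch of what that convention buys your users (plain illustrative Python, not tied to any dependency manager), a consumer can tell from the version numbers alone whether an update is safe to take automatically:

```python
def parse(version):
    """Split a 'MAJOR.MINOR.PATCH' string into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def is_breaking(current, new):
    """Under SemVer, only a major-version bump signals breaking API changes."""
    return parse(new)[0] > parse(current)[0]

print(is_breaking("2.0.2", "2.1.0"))  # → False (new features, still compatible)
print(is_breaking("2.0.2", "3.0.0"))  # → True  (major bump: breaking changes)
```

This is exactly why dependency managers let you pin “any 2.x” safely while stopping you from silently jumping to 3.0.0.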

Open Source opens your mind and helps you design cleaner and better code.

Things you have to keep in mind

When you work on an Open Source project that you’re going to open up to more people, there are some points you don’t usually take care of, but should. These points help others get closer to the project and understand how it works. You know how it works, and what its components are, because you built it, but what about everyone else? Developers are your users, and if you forget about these points, they might end up not using your library. What are these points?

  • Documentation: Code documentation is very important for understanding the behaviour of classes, methods, and other library components without digging into their implementation. Think about a method with multiple parameters in a language that isn’t too strict with types: how does the user know which parameters can be passed to that method? If you’re using a strongly typed language, great! Otherwise, undocumented code can lead to big headaches when working with the library.
  • Tests: Do you love testing? You’ll end up loving it. Testing is very important in Open Source projects because the more developers working on the library, the easier it is to introduce a regression that nobody detects. It’s very important to have your project integrated with a continuous integration service (Travis, Jenkins, …). Most of them are free for Open Source projects.
  • Wiki page: I hadn’t used wiki pages until I created my first project on GitHub. There were a lot of things I had to explain, and the README file was getting larger and larger. My recommendation: if your project is small enough to explain on a single README page, do it there. If the README grows too large, move the content to the wiki, organize the wiki into different pages, and add a home page that links to them.
  • Issues: Enable your repo’s issues. Some developers disable them because they don’t want to hear other developers complaining about things not working, or asking how to use the library. After a few libraries you’ll figure out that issues are the communication channel between those developers and you. Something else to keep in mind: not all issues are valuable, so don’t stress yourself trying to give a valid answer to all of them. You might find people asking for features that are impossible to support in the library you designed. Think about whether the feature makes sense and whether it fits into the core code; after that, design the feature and implement it. Always reference issues in your pull requests, because it’s a good way to let users know that you’re working on them.
  • Contribution guidelines: With luck, your library will become popular and more developers will contribute to the project. In that case, having contribution guidelines will help you avoid messy code in the future. Explain the code style you’re using, how the project is structured and how contributions should fit that structure, and what new proposals must include; for example, every PR must include documented code, tests for the new feature, a wiki page update, and so on. It’ll save you time explaining to developers what they must fix in every PR.
  • Code structure: Design first and then code. Think about how you would use the library and the public API you’d like to find in it, and then start designing the core components behind that API. Your code must be scalable and reusable: think about where the project might grow new features in the future, and design it to support them. When we develop a product we tend not to think about code reusability and scalability (and often we shouldn’t). But if we don’t think that way for a library, it will be usable now but probably won’t be after a few months (and you won’t be willing to keep supporting it, conscious that you didn’t do things properly from the start).

Last but not least, every Open Source project requires one thing: commitment. You’re building a product, a piece of code other projects depend on. I don’t know of any project that maintains itself. The project will have dependencies on system frameworks, or even on other libraries, and having dependencies means that when they change, the project will need changes as well. Without commitment, the project will be unusable a few months later (unless you have a community that maintains it). An Open Source project requires invested time. The best-known Open Source projects were built with a big community behind them: they started with one person having an idea, which was later supported by more and more developers who spent time making it better and bigger.

If you’re thinking about building something Open Source, let me give you a recommendation: try to build something that your own project depends on. It’s a good way to stay committed to the library.

Recent Open Source project

  • GDAnalytics: I started this library for Gitdo, a side project I’m working on. Its purpose is to centralize analytics (Flurry, Google Analytics, Fabric, …) behind a single API. It’s highly inspired by SegmentIO and ARAnalytics.

I hope you enjoyed the article. If you’re another Open Source geek and you’d like to comment on any other point you consider important when working on these projects, feel free to add a comment or contact me: [email protected].

]]>
<![CDATA[Most developers haven't tried creating an Open Source component before. Since I started applying it to every one of my projects, I feel the results and the development process have improved a lot. In this article I describe why it's so important]]>
My first Apple Watch impressions https://pepicrft.me/blog/2015/06/07/my-first-apple-watch-impressions 2015-06-07T00:00:00+00:00 2015-06-07T00:00:00+00:00 <![CDATA[

We’ve finally received the 8fit Apple Watch. We built the 8fit Apple Watch app a month ago and since then we hadn’t had access to a real device; only a few users had shown us our app working there. The first thing I did today after the unboxing was install 8fit. I was a bit worried because we hadn’t released any update since the first app version. Was it working? Was something broken? Could I send messages to the coach?

After installing and testing it, I was able to complete a workout, send some messages to the coach, and see the next meal. Amazing! I don’t know when we’ll work on future improvements, but at least I can say we did a great job with the current version, and the next step will probably be workouts independent from the mobile app. We’ll see.

As a gadget geek I’ve been playing with the toy all day. I wanted to check out how the user interactions felt: opening notifications, dismissing them, receiving messages from messaging apps, checking productivity apps like calendars, Slack, … Although the overall impression was good, there were things I didn’t like at all. These are my impressions:

Things I liked

  • Activity alerts: The Apple Watch comes with a default Activity app that helps you keep active. It sends you an alert when you haven’t moved enough or when you need to walk a bit because you’ve been sitting for a long time. It’s especially useful if you spend all day in front of a computer and forget to disconnect periodically. I’m not a sedentary person and I achieve the activity goals easily, but for sedentary people it can be a handy tool to become healthier.
  • Notifications: If I counted how many times I pick up my phone every day just to dismiss notifications, I would be frightened. With the watch you can reply more quickly and only grab your phone for the important alerts. Be careful, though: it can also be annoying if you hate receiving notifications constantly, because the watch vibrates every time a new one arrives.
  • Glances: At least for me, it’s the handiest feature of the Apple Watch. You get the right, refreshed information just when you need it. If it’s time for lunch, it shows your next meal; if you have four tasks to accomplish today, it gives you a quick summary of your day’s productivity. I use Sunrise and Todoist to organize my days, and I forget to add reminders to the tasks and events I create. Thanks to the watch I can periodically check the glance and see which tasks are left or when my next calendar event is.

Things I didn’t like

  • Loading… When I tried to use some apps I got a bit frustrated because they took more than 5 seconds to load. Most of them don’t optimize their resources, and the watch can’t process them as fast as the phone can. When I look at the watch I basically want two things: fresh information and a quick presentation. I’m not going to wait more than 5 seconds for an app to load when I can pick up my phone, unlock it, and open the app in 4 seconds. Wasn’t the watch supposed to be a tool to avoid exactly that? It’s something developers have to improve: optimize resources, especially images, and refresh the information only when it has changed.

  • Running: I’m a runner, and yes, one of the first things I did was try a running workout with the watch. I expected that while running, the workout information would be the first thing presented when I glanced at my wrist. Unfortunately, that didn’t happen. After a few minutes of running I looked at my watch and the first view was the default dashboard; I had to swipe to find the running glance. If you’re running or practising sport, it doesn’t free you from using your (sweaty) finger mid-run. EDITED: There’s a settings option that lets you show the last opened app when the watch screen activates again.

  • Open the iPhone app to use this Watch app: I’ve read this message in one out of every three apps. We know the Apple Watch SDK is quite limited, and that forced some developers to rethink their apps for the platform. Some of them didn’t, though, and tried to implement the same core functionality on the watch as well. I opened Shazam and the first message I received was: please open your iPhone app to start detecting the song. Really? Do I really need a watch app for that? Hardware and SDK limitations are a good reason to rethink your app’s format for the Apple Watch.

  • Messy OS interactions: I haven’t gotten used to the Watch OS navigation yet. I still have problems showing the notifications or opening the glances list; I think the OS has to improve that part. Moreover, when I turn my wrist to activate the screen expecting a particular view, the watch resets the view hierarchy and shows the dashboard again. Sometimes it freezes with a black screen, blinking, as if it thinks you turned your wrist again.

Useful apps for Watch

  • Sunrise Calendar: It offers an app and a glance that show information about the next events in your calendar. I use the glance a lot; I’m a very forgetful person and don’t check the calendar often when it’s only on my phone.
  • Todoist: It’s my best friend on the iPhone and OS X. I use it to organize my days and know at every moment what I have to do. Having it on the watch makes it even easier: turn your wrist, swipe through some glances, and there’s your next task. Great format and a well-designed app.
  • Nike+: It’s, unfortunately, the best-designed running app for the watch (in my opinion). I tried Runkeeper, the one I always use, but its design is not as good. As has happened many times before, new Apple products come with a smooth Nike integration (best friends forever); I still remember using the iPod Nano with the small transmitter you had to put in your shoe. The app shows the running information in a pretty clear way and also offers two more screens to see the map and control your music. The Apple Watch also offers a Workouts app where you can start a running session and get more information, like your BPM. The worst part is that it reports your data to HealthKit, so you end up with data in two different services if you came from another running app.
  • Passbook: I remember standing in the queue for the train with my bag, my luggage, and a lot of things in my hands, struggling to pick up my phone and show the Passbook QR code with my ticket. It’s easier now: you open the watch’s Passbook app, open your next ticket, and voilà! The QR code is there.

Will I buy a Watch?

Honestly, I won’t. I wouldn’t use it as much as I expected. I’ve mostly used it to check notifications, and I can do that with the iPhone instead of spending 400 euros. Maybe if it allowed answering Telegram messages with your voice, for example (i.e. a more open SDK, so developers could offer richer apps), I would think about it. I’m not sure whether the SDK is limited by the hardware or by Apple. We know there’s a product strategy at play: they can’t offer everything at the beginning, even if they know it’s possible, because they need a good reason to sell you Apple Watch 2.0. I’m looking forward to seeing how the SDK improves in future versions and what we’ll be able to do in a few months. I expected apps to rethink their formats for the Apple Watch so they’d be independent from the phone, but most of them didn’t. I really don’t want watch apps that tell me: open your iPhone app to keep using this one! The SDK’s limitations shouldn’t lead us, as developers, to limit the usability of our apps. Lastly, as a runner, I expected a seamless experience with running apps and didn’t get it. Only the Apple Workouts app has that integration (because Apple implemented it).

Have you tried an Apple Watch? Would you like to share your thoughts?

]]>
<![CDATA[After a day using the Apple Watch, I'd like to share my impressions of Apple's new toy and why I wouldn't buy this first version]]>
Full control in your hybrid mobile apps with a local server, 8fit https://pepicrft.me/blog/2015/05/25/full-control-in-your-hybrid-mobile-apps-with-a-local-server 2015-05-25T00:00:00+00:00 2015-05-25T00:00:00+00:00 <![CDATA[

Since 8fit was first developed we’ve been using the webview’s application cache to ship updates to users. It allowed us to update without releasing new app versions (something slow if you think about Apple’s review process). Although that’s good, it also has some cons: it’s a bit difficult to set everything up properly, especially the server config needed to ensure the webview doesn’t cache files that shouldn’t be cached. Moreover, the control over the update process is very low, and it increases boot times because sometimes it fetches resources remotely when it doesn’t actually need to.

Is there any other way to do it if your app is a hybrid app? Yes, there is. This solution is closer to the bundling format of native apps, where you package everything inside the app and it’s “served” to the webview. Frameworks like Ionic do it, but if you bundle the web resources you have to follow the native release cycles, and again, that’s slow in Apple’s case (slow iteration speed).

We wondered if there was a solution that kept the advantages of having everything locally (to serve it quickly) while retaining the flexibility to update remotely instead of bundling everything with the app, and we started working on something we internally called “Frontend Manager”.

That approach gives us a lot of flexibility, especially when working quickly on new features, designs, and integrations. The app can still decide when to download the frontend web app, when to inject it into the webview, and even force high-priority updates if needed.

HTML5 Application Cache

As I mentioned, we’d been relying on the Application Cache, but its problems have been documented in plenty of places (http://alistapart.com/article/application-cache-is-a-douchebag). The main problem is that even after dealing with all the edge cases, it’s possible for users to get stuck on old versions or to experience very long flashes of blank white screen. It’s also impossible to force an update when necessary.

A lot of users had been reporting white screens and frozen apps to us, and we could only tell them to clear the app data in order to clear the Application Cache and start the app clean. It was a really frustrating user experience.

Native Frontend manager

Why not have a kind of native controller that decides when and how to inject the content into the app? That’s what we did, and we’re very happy with it. It didn’t require a lot of changes in the frontend, which is great because we didn’t have to couple the frontend implementation to this particular problem. What did we need?

  • Manifest file: The controller has to know which files to download in order to have the whole frontend locally. We did it through a manifest file in the frontend, generated when a new build is deployed. That file contains information about the commit, the deploy date, and an array with all the files required for that frontend version and their routes. We’re thinking about having more control here in the future, using a custom endpoint that lets us decide which version every user gets, but it’s still just an idea and we first want to test this deeply, step by step.

  • Native Frontend Manager: It’s the core native component, responsible for synchronizing the local folder with the remote one. It downloads the manifest file and checks whether the local copy is in sync with the frontend specified there. It does this every time the app is opened, and depending on the app’s state the update is forced or not. The first time the app is opened after being installed, a loading screen triggers the frontend download; once downloaded, the frontend is automatically loaded into the webview and the app is launched. If the frontend already exists locally, and the app can therefore be launched, it just downloads the latest frontend version but doesn’t force the update, because that would cause the white screen we wanted to avoid. When is it loaded then? The next time the app is opened, it detects if there’s an “enqueued” frontend and loads it before launching the app.

    • Loading screen: As I mentioned, the first time the app is launched there isn’t anything to load, so we implemented a loading screen with a progress bar synchronized with the download. If there’s no internet connection, the user is notified and we offer a contact button to report a bug in the frontend manager, automatically attaching an error log. The preload screen has a design similar to the frontend’s, which makes the transition very smooth.

We also have another way to reload the frontend, using a bridge call through JavaScript, but we only use it for testing and for pointing the app to different environments without having to rebuild it every time.

  • Local server: Our first iteration of this feature loaded the frontend directly from disk, but then we lost something our frontend depends on: cookies. If you load the frontend from a local file, forget about cookies and about persisting the user’s session with them; working around that would have meant a lot of frontend changes just to solve this particular problem (and what if we then wanted to load 8fit on desktop? We tried some changes and it worked, but we didn’t really want that). Moreover, before bringing this feature on board we had added WKWebView to our app for iOS 8 devices, and loading local files into that webview wasn’t supported, so we needed a different solution. Looking around the internet, we saw we weren’t the only team searching for one: PhoneGap was affected in the same way when it wanted to use WKWebView for its automatically generated mobile projects. After thinking about it and analyzing the existing frameworks and libraries, we decided to launch a local web server in the app, stopped whenever the app moves to the background. It might sound scary, but it isn’t: some other apps do the same without you knowing, like apps that let you control them remotely or share content to a television. After reviewing different libraries we integrated GCDWebServer, which offered what we needed: serving static content from a local path and proxying API calls the way our NGINX reverse proxy does (on Android we ended up using NanoHTTPD as the HTTP server).
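As an illustration, the manifest file described earlier might look something like this (the field names are hypothetical, not 8fit’s actual format): a commit identifier, a deploy date, and the list of files with their routes.

```json
{
  "commit": "a1b2c3d",
  "deployed_at": "2015-05-20T10:00:00Z",
  "files": [
    { "path": "index.html",  "route": "/a/index.html" },
    { "path": "js/app.js",   "route": "/a/js/app.js" },
    { "path": "css/app.css", "route": "/a/css/app.css" }
  ]
}
```

Comparing the local commit against the manifest’s commit is enough to know whether a sync is needed.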

The example below shows a server config that serves files and proxies some calls:

- (void)setupServerHandlers:(NSString*)frontendPath
{
    [self addGETHandlerForBasePath:@"/a/"
                     directoryPath:frontendPath
                     indexFilename:@"index.html"
                          cacheAge:3600
                allowRangeRequests:YES];
    [self addAPIHandlerForRequestMethod:@"GET"];
    [self addAPIHandlerForRequestMethod:@"POST"];
    [self addAPIHandlerForRequestMethod:@"PUT"];
    [self addAPIHandlerForRequestMethod:@"PATCH"];
    [self addAPIHandlerForRequestMethod:@"DELETE"];
}

#pragma mark - Custom Setters

- (void)addAPIHandlerForRequestMethod:(NSString *)requestMethod
{
    typeof (self) __weak welf = self;
    [self addHandlerForMethod:requestMethod
                    pathRegex:API_PATH_REGEX
                 requestClass:[GCDWebServerDataRequest class]
            asyncProcessBlock:^(GCDWebServerRequest *request, GCDWebServerCompletionBlock completionBlock) {
                // Proxying the source request
                NSString *urlString = [welf.remoteRootUrl stringByAppendingPathComponent:request.path];
                NSMutableURLRequest *proxyRequest = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:urlString]];
                [proxyRequest setHTTPMethod:requestMethod];
                proxyRequest.allHTTPHeaderFields = request.headers;
                if ([request isKindOfClass:[GCDWebServerDataRequest class]]) {
                    proxyRequest.HTTPBody = [(GCDWebServerDataRequest*)request data];
                }
                NSURLSessionDataTask *dataTask = [welf.manager dataTaskWithRequest:proxyRequest completionHandler:^(NSURLResponse *response, id responseObject, NSError *error) {
                    if (error) {
                        // Fall back to 502 Bad Gateway when the failure produced no response
                        NSInteger statusCode = response ? [(NSHTTPURLResponse *)response statusCode] : 502;
                        completionBlock([GCDWebServerResponse responseWithStatusCode:statusCode]);
                    }
                    else {
                        NSString *contentType = [(NSHTTPURLResponse*)response allHeaderFields][@"Content-Type"];
                        completionBlock([GCDWebServerDataResponse responseWithData:responseObject contentType:contentType]);
                    }
                }];
                [dataTask resume];
            }];
}

As you can see, for static files we can directly specify the local path the files are read from (we just have to ensure the folder tree is correct), and for the API we proxy the requests that match a regular expression and handle them with an HTTP client, in this case AFNetworking. And the magic works!

Hotfixes & pushing updates

Finally, taking advantage of the silent push notifications we have on iOS (a kind of “content notification”) and the full control we have on Android, we thought: why not connect them to our frontend manager to sync the frontend under certain conditions? Yay! We did it. When the app receives the push notification it starts the synchronization, and it loads the frontend only if the app is in the background. That way we don’t reload the app while the user is doing a workout. Thinking about the options this feature gives us now:

  • We can force an update that fixes an important bug introduced in a previous version.
  • We can force a version that includes assets for a campaign we have to launch.
  • We can force an update for a particular user who might be having problems with the app.
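The decision logic described above (sync on silent push, but only swap the frontend in when it’s safe) can be sketched like this; this is illustrative Python with hypothetical callback names, not our actual native implementation:

```python
def handle_silent_push(app_state, download_frontend, load_into_webview, enqueue):
    """On a silent push: always sync, but only reload when it's safe."""
    bundle = download_frontend()      # fetch manifest + changed files
    if app_state == "background":
        load_into_webview(bundle)     # safe: the user isn't mid-workout
        return "loaded"
    enqueue(bundle)                   # otherwise apply it on the next launch
    return "enqueued"
```

The key point is that the download and the swap are decoupled: the download always happens, while the reload waits for a safe moment.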

Some gotchas

  • Implementing the Android version took us a lot of time. We had to implement the API proxy on our own, do some research on proxying HTTP calls, and do a lot of testing to ensure the API communication didn’t break.

Next steps

The frontend manager was the next big feature we had been waiting on for months. Now we want to test it deeply, include it in our release QA cycles, and cover all possible edge cases, because it’s a very critical feature. The Frontend Manager has given us breathing room to keep thinking about the product and building more fluid experiences. We’ll start migrating features to native; the company is getting bigger and we have more resources to think about a future 8fit written 100% in Java, Objective-C, and Swift.

Resources

Note: If you’re also working on a hybrid app and looking for a similar approach, you can reach me at [email protected]. I’ll be pleased to help you.

Thanks to Pedro Sola for reviewing the article and for his help during the feature development.

]]>
<![CDATA[A custom solution to get full control over your hybrid apps by bundling the content locally and controlling updates]]>
Modularize your mobile projects https://pepicrft.me/blog/2015/01/28/modularize-your-mobile-apps 2015-01-28T00:00:00+00:00 2015-01-28T00:00:00+00:00 <![CDATA[

When we start building mobile apps we tend to put everything in the same bundle. If you’re an iOS developer, that means a single target with one scheme and its build config, and the project has some dependencies that might be attached as frameworks, static libraries, or through a dependency manager like CocoaPods. If you’re familiar with Android, you probably use a single module with some dependencies managed by Gradle.

If you think a bit about that setup, it mixes in the same bundle things that are strictly related to the device and the presentation layer, the application’s core logic, and the interaction with system and external frameworks. What happens if Apple or Google change any of the existing frameworks? You’ll have to analyze your app, find all the framework dependencies, and replace them. And what about using your application’s core logic on a different device with a different interface? You’ll probably end up adding a bunch of if/else statements to your code. You know that’s not a clean way to do things…

Both the iOS and Android development ecosystems offer great tools to deal with this. However, we usually only use them to link our big app bundle with external dependencies (aka libraries). But what if we structured the app as small components, each grouping pieces with the same responsibility, instead of packaging the whole app logic together? We could then interchange them easily without having to refactor the whole app.

If you’re wondering how to apply this to existing architectures like MVC, MVP, or VIPER: an app structured in bundles separates those architecture components into different modules.

Advantages of working with bundles

  • Have your team work on small projects: How many times have you had to deal with Git conflicts because two of you were working on the same file? This way you can have some developers working on the interface, especially those who are experts at building layouts, animations, and user interactions; another group working on the interaction with the data, translating user interactions into events and applying business logic; and finally the data experts dealing with API requests and database persistence. You can even split data into two bundles, LocalData and RemoteData.
  • Single Responsibility Principle and decoupled components: If you develop everything in a single bundle you tend to forget that principle and implement strongly coupled components. Working with bundles helps you implement decoupled components that don't know who's going to use them.
  • Easy to test, with fast test execution: Normally, every time you want to run a test suite you have to build the entire project just to test a few pieces of your app. If you split your project into “small projects” you'll be able to test them individually and mock their dependencies.
  • Easy to recover from regressions thanks to split versioning: Argh! Juan introduced a regression in version 2.0.2 of the Data project. Let's keep using the previous version until the Data team fixes it. Much better, isn't it? That will save you some headaches.

The image below shows the difference between working with a single big app bundle and splitting it into small bundles.

Large project

Components

Thinking about the components of our apps, most of them can be grouped as follows:

  • App: App groups everything related to the view. It's tied to the device, and it's the bundle that is going to be compiled. For example, we might have an App for iPad and another one for iPhone, which will result in two builds, one for each device. App shouldn't include any business logic; it just presents the information it receives from a core component and notifies those core components about events happening there, generally user interactions. Navigation logic must be included in this bundle.

    • Layouts
    • Views
    • Navigation
    • Animations
  • Core: Your application logic should be in this bundle. Core is the link between the data source and the interface, and it includes the business and presentation logic. It must use the data bundle to fetch data, apply the required logic, and then return it to the view to be presented.

    • Presentation logic (e.g. Presenters)
    • Business logic (e.g. Interactors)
  • Data: Data packages the controllers that interact with the system frameworks: databases, APIs, sensors…

    • Database controller
    • API controllers
    • Device controllers

    You might wonder how to implement this in a real project, how to connect those dependencies and get everything working. Let's see how to do it on iOS and Android.

Modules on iOS

Although you can create your own library projects with Xcode and wire up the dependencies manually, there's a great tool you probably already know: CocoaPods. We usually use it to connect our project with remote dependencies, but it also lets you specify a dependency locally. Let's see how.

You have different approaches depending on your needs. The first one consists of managing those Core/Data bundles as libraries and connecting your app bundle to their remote repositories. That's great if you have different teams each working on a different “library”, because they can keep their own versioning and build/deploy processes. If you have a small team and don't have enough resources for separate build/deploy processes per bundle, you can keep those bundles locally (using Git submodules) but still integrate them through CocoaPods. That way you have the flexibility to modify them and push changes directly.

  • Create three Xcode projects in different folders: ExampleApp, ExampleCore, ExampleData.
  • ExampleApp doesn’t need a podspec file. ExampleCore and ExampleData need a podspec file with information about the bundle. The structure should be similar to the following:
Pod::Spec.new do |spec|
  spec.name         = 'ExampleCore'
  spec.version      = '0.0.1'
  spec.homepage     = 'https://github.com/Example/ExampleCore'
  spec.authors      = { 'pepi' => '[email protected]' }
  spec.summary      = 'Core logic of example'
  spec.source       = { :git => 'https://github.com/Example/ExampleCore.git', :tag => '0.0.1' }
  spec.source_files = './**/*.{h,m}'
  spec.framework    = ''
end
*Note: In the case of ExampleData you might have dependencies on system frameworks or external libraries. You can specify them in the podspec as well.*
  • Upload those projects into their respective repositories (e.g. github.com/Example/ExampleData, github.com/Example/ExampleCore, github.com/Example/ExampleApp)
  • Once the bundles are in remote repositories, you can bring them into the ExampleApp bundle as submodules using:
git submodule add https://github.com/Example/ExampleCore dependencies/core
git submodule add https://github.com/Example/ExampleData dependencies/data
  • In ExampleApp, specify your CocoaPods dependencies locally:
source 'https://github.com/CocoaPods/Specs.git'
inhibit_all_warnings!
pod 'ExampleData', :path => './dependencies/data'
pod 'ExampleCore', :path => './dependencies/core'
  • Open your app project from the .xcworkspace file. You’ll have your project linked to the ExampleData and ExampleCore bundles.

Keep in mind

If you have never developed a library before, there are some points you should keep in mind when working with your bundles:

  • Expose only what is going to be used externally: The internal logic and the communication with system frameworks should stay private. Define a public communication layer and expose it through public headers, which will be used by the bundles that depend on this one. Read more about public headers with CocoaPods here

  • Work on the bundle without thinking about who is going to use it: The idea behind this structure is also to split responsibilities, so the Data bundle, for example, shouldn't know anything about who is going to use it, nor the Core bundle about which view is going to present its data. Working with bundles makes that easier, but it doesn't prevent coupling if you still think of all the bundles as a single entity.

  • Keep a versioning process for each bundle: Each version should be documented with its fixes and new features. If you have enough resources, document it so that teammates working with the bundle know how to communicate with it. GitHub releases/milestones are very useful for that purpose.

CocoaPods is just a simple way to manage dependencies, which in my opinion makes things easier and cleaner. If you have enough experience working with libraries/frameworks and connecting dependencies in a single project, feel free to do it your own way; any dependency solution works.

Modules on Android

In the case of Android, we'll use Gradle to define our modules. Gradle allows you to specify, in your app's build file, the dependencies the project has on other library projects. We usually use that feature to link our project with third-party libraries, but we can do the same with modules we create ourselves. Let's see how.

  • Let’s create three modules: a main Android app module and two Android libraries. For example: ExampleApp, ExampleCore, ExampleData.

Library Projects list

  • Inside File > Project Structure, define the core and data modules as dependencies of the app module, as shown below.

Dependencies

  • This will automatically create the dependencies in your app's build.gradle file, as shown below:
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:21.0.3'
    compile project(':core')
    compile project(':data')
}
  • I recommend keeping those library modules in their own Git repositories and managing them as Git submodules inside the app project. That way they can be maintained without any dependency on the app. You can read more about Git submodules here

Keep in mind

Before working with Android modules in your projects there are some points I would like to highlight:

  • Data and Core modules don’t have activities: Remember they have no relation with the presentation layer, so they shouldn't include any kind of Activity/Fragment/custom view.

  • The manifest file of library projects will be empty: The manifest is the file where we register the activities, permissions, broadcast receivers, and services used by the app. Consequently, the app's manifest registers those components even when they live in library modules (e.g. registering in the app's manifest a BroadcastReceiver that handles push notifications, where that receiver is defined in the core library module).

  • Libraries DON’T have the main app as a dependency: They must be agnostic to the app using them. You might find cases where you need to notify the main app of something. In that case you can register broadcast receivers in your app that handle broadcasts coming from the library modules.

Git Submodules and versions

We’ve seen how to use modules on both iOS and Android and how to use Git submodules to keep a local Git copy of those “library” modules, but what if we want a specific branch of the core/data package? Submodules support that. If you edit your .gitmodules you'll see a structure similar to this one, where you can specify the branch:

[submodule "ExampleCore"]
    path = core
    url = https://github.com/Example/ExampleCore.git
    branch = new-feature

Git submodules have no support for specifying a tag instead of a branch. However, since each submodule is a regular Git repository, you can manually check out any tag inside it (e.g. running git checkout 0.0.1 within the submodule directory) and commit the updated submodule pointer from the parent repository.

Documentation

  • Git submodules - Link
  • XCode targets by Apple - Link
  • Targets for free/paid apps - Link
  • How to modularize your XCode apps - Link
  • CocoaPods - Link

Thoughts

Feel free to contact me at [email protected]. I'll be pleased to discuss this project organization with you.

]]>
<![CDATA[Learn how to split your app components into different bundles instead of dealing with a single bundle that packages the whole app]]>
Boosting your mobile app with Javascript and some mobile knowledge https://pepicrft.me/blog/2014/12/10/boosting-your-mobile-app-with-javascript-and-some-mobile-knowledge 2014-12-10T00:00:00+00:00 2014-12-10T00:00:00+00:00 <![CDATA[

When the 8fit team took its first steps, they decided that the product was going to be a web app with some native integrations. That made a lot of sense considering that the founders of the company feel very comfortable with JavaScript and with the web in general. But could we deliver a great mobile experience using web technologies?

No web solution equals the native mobile experience. That's the short answer, but the truth is that the web is getting closer to native, and it's possible (depending on your web stack) to move to native only those components that you need, with a communication interface between the web and native worlds implemented on your own, without PhoneGap or other frameworks that try to abstract the mobile layer away from the web developer.

We didn't use any bootstrap library, CSS package, or communication framework, which allowed us to include only what we needed. We tried to use as few JavaScript frameworks as possible, because the JavaScript engine of mobile web views is limited (compared with a desktop computer), and we didn't want to load .css files we didn't actually need.

The mobile web rendering engine is limited, so get rid of fully featured web packages with tons of styles and JavaScript helpers that you are never going to use. If you feel comfortable enough with the language not to end up with a shitty, non-scalable stack, do it yourself. Otherwise, turn to a framework like Ionic (based on AngularJS), which offers a mobile-oriented stack and components to work with.

We only used Backbone, Underscore, and jQuery on the JavaScript side, which simplified our stack a lot and helped us avoid repetitive code (and we haven't hit a bottleneck with that so far). No Ionic, Cordova, PhoneGap, or similar. How do we get native components then? With the help of a mobile developer, in the 8fit case me. We built a communication layer between web and native: on Android using a native WebView feature that simplifies things a lot, and on iOS using a trick that we'll talk about later.

Advantages

After working with this solution for months, we have found that it offers some advantages when you are building an early product. Some of them are:

  • Automatic updates without going through a release review: The resources are cached by the local WebView, and every time we ship a new frontend version we don't have to generate a new build, update the assets, add a changelog, wait for Apple's review, and then have another build ready to send to the App Store again. Forget about that: you can just use a Gulp task that randomizes your resource names. That way the caching engine of the mobile browser detects that those files have changed and reloads them. Over-the-air updates!

  • One frontend version, customized for each system: The application logic is the same. What changes between Android and iOS, then? Basically the navigation (Android users are used to patterns that iOS users are not, and vice versa) and the design. If you organize your frontend following the MVC pattern, you can share the same model and controller between iOS and Android and change only the view (layout) and navigation. In the end, if you work natively and talk to your Android/iOS friends, you'll realize they are implementing the same thing, following a similar structure, and in some cases even using the same naming!

  • Centralized point of bugs: This advantage is a consequence of the previous one. The more application logic you move to the frontend, the more your bugs will be centralized and easy to detect, in this case from JavaScript. Does that mean there are no native bugs? No! There are, and you should keep a reporting tool like Crashlytics or HockeyApp around, but the number of native bugs will be much smaller than the number of frontend bugs.

Building the web stack

Our web stack was based on a SPA, a Single Page Application. For those who don't know, a single page application is basically a site consisting of a single HTML file that imports a bunch of JavaScript responsible for different tasks like routing, controlling views, and binding data. JavaScript becomes the main actor of the movie.

If you look for frameworks that help you implement a SPA you'll find a lot of them, like BackboneJS, AngularJS (by Google), or even some focused on mobile, like Ionic (based on Angular). They have some points in common and differ in others. Choosing one solution over another depends on your familiarity with each framework's concepts. If you look for comparisons between the most popular ones (Angular and Backbone), as I did a while ago, you'll see that Angular offers a more structured solution, close to what you'd find in Ruby frameworks, while Backbone might be more powerful if you ask other developers (even if its architecture is less structured). My recommendation: if you are just starting with this kind of web application, go with Angular, because Backbone requires having fought with JavaScript long enough beforehand.
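To make the SPA routing idea concrete, here is a tiny framework-free sketch of what routers like Backbone's do under the hood: match a URL fragment against a route pattern and extract its parameters. The function name is ours, not from any framework:

```javascript
// Match a hash fragment like "users/42" against a pattern like "users/:id".
// Returns the extracted parameters, or null if the route doesn't match.
function matchRoute(pattern, fragment) {
  const patternParts = pattern.split('/');
  const fragmentParts = fragment.split('/');
  if (patternParts.length !== fragmentParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // ":id" captures the corresponding fragment segment under "id"
      params[patternParts[i].slice(1)] = fragmentParts[i];
    } else if (patternParts[i] !== fragmentParts[i]) {
      return null;
    }
  }
  return params;
}
```

A SPA router is essentially a table of such patterns, each mapped to a view controller, evaluated whenever the URL fragment changes.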

We used CoffeeScript instead of JavaScript and Sass instead of CSS, which Gulp then compiles into the corresponding JavaScript and CSS. Choose the tools that fit best with the way you work and your feeling for the language.

There's nothing special here, except that everything has to be responsive and you have to check the JavaScript support of different devices, because the web engine on mobile devices is more limited than on a desktop. You can check feature support on the website caniuse.

Here’s a summary of things we’re using for the frontend:

  • Backbone, Marionette, Underscore & JQuery
  • Coffeescript
  • Sass
  • Jade for templates
  • Gulp for build tasks
  • Capistrano for deploys

Note: For those who might be interested, we're using Rails for the backend.

Mobile

We rejected mobile wrappers like PhoneGap, Sencha, and similar because we wanted custom communication plugins and we had a mobile developer to take care of that side. If you think about it, to have your web solution running on a mobile device (besides having made it responsive), you just need a simple app project for Android and iOS (which you can create from the IDE assistant), add a web view, and load the URL into it. Simple, right?

That would be the simplest integration, but we weren't just a website; we were a platform whose purpose was to end up running on mobile devices, taking advantage of mobile features. How many times have you complained about having to type your credit card number for a payment? In-app purchases are a great mobile solution there. And why should you have to type your Facebook credentials to log in if there's a Facebook app installed for that? Those are just some of the integrations we thought about and currently have.

If you just load a web page into a mobile app, the app behaves as a simple window showing your website in a browser. If you want to take advantage of the real benefits of mobile, you need some kind of mobile-web interaction.

That interaction is what we call the native bridge, and we built it from scratch. No framework, no abstraction: we just analyzed the features we had on Android and iOS and implemented it.

Native bridge

Bridging native and web depends on the platform, because Apple and Google had different ideas about supporting the web when they developed their mobile web engines.

Android

Fortunately, Android did it best. The way you bridge native code with the web is by exposing a Java interface to JavaScript. After exposing it, the object is visible from JavaScript, and calls to its methods are translated into calls to the original Java interface. The communication looks something like this:

// Communication Javascript -> Java
webView.getSettings().setJavaScriptEnabled(true);
webView.addJavascriptInterface(new JavascriptInteractor(), "NativeBridge");
class JavascriptInteractor {
    @JavascriptInterface
    public void buyIAPProduct(String productId)
    {
        // MyPaymentsController.buy...
    }
}

// Communication Java -> Javascript
public void loadJS(String js) {
   webView.loadUrl("javascript:"+js);
}

Here we expose a method called buyIAPProduct that executes something in Java. Notice that @JavascriptInterface is an annotation that tells the compiler the method below it should be exposed to JavaScript; without it, it isn't.

Communication in the other direction works by evaluating JavaScript sentences directly, loading URLs with the format javascript:sentence.

If we followed the payment flow to completion, the exchange of calls would look like:

NativeBridge.buyIAPProduct('pro_subscription_1mo');
webview.loadJS("Ef.vent.trigger('payment:completed', "+payment.toString()+")")

iOS

Apple made it difficult here. There's no native component to expose an interface to JavaScript as Android does. What can we do then? Well, if we take a look at the UIWebViewDelegate methods, there's one called:

- (BOOL)webView:(UIWebView *)webView
shouldStartLoadWithRequest:(NSURLRequest *)request
 navigationType:(UIWebViewNavigationType)navigationType;

which is the method that asks whether the web view should load a given NSURLRequest. Although the purpose of this method is not to bridge JavaScript with native code, we took advantage of it for exactly that. How? By building a custom URL scheme.

Let's say we built a custom communication API using the scheme eightfit://; any intercepted URL with that scheme would be passed to the native bridge. The previous buyIAPProduct call would turn into eightfit://buyiapproduct?product_id=pro_subscription_1mo.

- (BOOL)webView:(UIWebView *)webView
shouldStartLoadWithRequest:(NSURLRequest *)request
 navigationType:(UIWebViewNavigationType)navigationType
 {
    // eightfit://buyiapproduct?product_id=pro_subscription_1mo
    BOOL isNativeBridgeURL = [URLMatcher isForBridge:request.URL.absoluteURL];
    if (isNativeBridgeURL){
        [JLRoutes routeURL:request.URL.absoluteURL];
        return NO;
    }
    return YES;
 }

The URL parsing could be done with regexes, but fortunately other developers thought about this before us, and there are libraries like JLRoutes that help with it. What JLRoutes actually does is build a local API out of the endpoints and the actions registered for them:

[JLRoutes addRoute:@"/buyiapproduct/:product_id" handler:^BOOL(NSDictionary *parameters) {

    NSString *productId = parameters[@"product_id"]; // defined in the route by specifying ":product_id"

    [PaymentsController buyProduct:product_id withCompletion:^void (NSError *error) {
        // Notify JS about the result
    }];

    return YES; // return YES to say we have handled the route
}];
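On the web side of such a bridge, calls are typically serialized into scheme URLs before the web view navigates to them, so the delegate above can intercept them. This is a hypothetical sketch of that serialization step; the scheme and command mirror the post's example, but the helper function is ours:

```javascript
// Build an eightfit:// URL for a native-bridge call. The native side
// intercepts navigations to this scheme and routes them (e.g. via JLRoutes).
function bridgeURL(command, params) {
  const query = Object.keys(params)
    .map(k => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('&');
  return 'eightfit://' + command + (query ? '?' + query : '');
}
```

The page would then trigger the navigation, for example by assigning the built URL to window.location or to a hidden iframe's src.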

Some points about the bridge

As you might have noticed, there are some interesting points to make about the bridge. The first one is that the communication is bidirectional: I ask for something, you answer with something else. There's no way (right now) to get the return value of a sentence evaluation, and where there is one, it only works with strings (what if I want to return a more complex object?).

Another interesting point is that there's no way to expose JavaScript to native code (neither on iOS nor on Android). The native side doesn't know anything about JavaScript; as a native component holding a web view, you can only communicate with JavaScript by evaluating sentences in the browser or by resorting to tricky workarounds.

Most communication calls will be asynchronous, so you need some kind of event handling in JavaScript (jQuery, Backbone, etc. offer components for that). What you actually do is call the bridge and then register a listener that fires when the native controller has finished its work.
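That call-then-listen pattern can be sketched with a tiny framework-free event registry. The names here are illustrative, not 8fit's actual bridge:

```javascript
// Minimal event registry for asynchronous bridge replies: the web side
// registers a listener before calling the bridge, and the native side later
// answers by evaluating Bridge.trigger(...) inside the web view.
const Bridge = {
  listeners: {},
  on(event, callback) {
    (this.listeners[event] = this.listeners[event] || []).push(callback);
  },
  trigger(event, payload) {
    (this.listeners[event] || []).forEach(cb => cb(payload));
  }
};
```

For example, the page registers Bridge.on('payment:completed', …), invokes the native payment call, and native code eventually evaluates Bridge.trigger('payment:completed', result) in the web view.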

And finally, as you might have noticed, there's no type validation, something we can't avoid because the bridge inherits it from JavaScript. Be careful here!
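Since the bridge itself won't catch type mistakes, it's worth validating arguments on the JavaScript side before they cross over; throwing early beats a silent failure on the native side. A hypothetical guard might look like this:

```javascript
// Check the types of bridge-call arguments against an expected shape
// before handing them to native code; throws a TypeError on mismatch.
function assertBridgeArgs(command, params, expectedTypes) {
  Object.keys(expectedTypes).forEach(key => {
    if (typeof params[key] !== expectedTypes[key]) {
      throw new TypeError(
        command + ': expected "' + key + '" to be a ' + expectedTypes[key]
      );
    }
  });
}
```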

Exposing features: If you want to let your frontend know about the available mobile features, the device, or the app version, you can use the User-Agent and do something like 8fit-iOS-8.1/iPhone-6/1.2.3 (iap, push, conn, res). That way your frontend knows what is available and what is not.
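A small parser for a User-Agent in that shape might look like the following; the format string is taken from the example above, while the parsing code itself is a sketch:

```javascript
// Parse "8fit-iOS-8.1/iPhone-6/1.2.3 (iap, push, conn, res)" into its parts:
// app identifier, device, app version, and the list of available features.
function parseAppUserAgent(ua) {
  const match = ua.match(/^(.+?)\/(.+?)\/(.+?) \((.*)\)$/);
  if (!match) return null;
  return {
    app: match[1],        // e.g. "8fit-iOS-8.1"
    device: match[2],     // e.g. "iPhone-6"
    version: match[3],    // e.g. "1.2.3"
    features: match[4].split(',').map(f => f.trim())
  };
}
```

The frontend can then gate behavior, for example enabling the in-app-purchase flow only when 'iap' appears in the features list.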

Examples of controllers

Since we started building the bridge, we've implemented more than 10 kinds of native integrations. Some of them: payments (in-app purchases), video/sound players, push notifications, login with Facebook, resource downloading, social sharing…

And some other interesting ones are coming. We're working on a frontend controller that will manage the frontend locally and inject it into the web view. We're finishing the integration with HealthKit and Google Fit, and planning to make the workouts more interactive by implementing them natively.

Pitfalls and recommendations

Not everything can be magic. There are some pitfalls we found during the development of 8fit that I would like to share with you, because you'll have to face them sooner or later:

  1. Native doesn’t know about JavaScript: You can expose, for example, Java to JavaScript (not possible with Objective-C), but there's no way to know what's happening in the JavaScript context (variables, objects, …). So the recommendation here is to document the bridge in terms of methods, objects, variables, and types, and if possible have it maintained by the same developer(s). We suffered a lot of headaches here.
  2. Need a bit of mobile knowledge: If you come from the web you can learn it; the complexity will depend on the number of native integrations you have. If there aren't too many, take your first steps on mobile; otherwise PhoneGap is not a bad solution, though it was too inflexible for us.
  3. Experience close to native, but not the same: We could start a big discussion here, but my opinion is that there's no way (right now) to get the same experience as a native app using web technologies, due in part to the companies behind the OSes (Google and Apple) not putting their efforts into better web rendering engines. Fortunately, I have to say things are changing, and we can see great news in iOS 8 and Lollipop.
  4. Test the mobile experience: This point isn't considered much for a desktop solution, but it should be for mobile. Think about what should happen if your app loses its connection: are the models properly persisted under poor connectivity? Check whether the browser handles cached resources properly.
  5. Be careful with the cache: We had some trouble with the browser not reloading cached resources. To avoid that, we started renaming the frontend resources on every build; that way the browser detected them as updated and re-downloaded them. Note: we're now building a frontend controller that will be responsible for downloading the frontend and injecting it into the WebView. Yay!

Conclusions

After this (big) summary of the technologies we're using at 8fit and the topics around them, I would like to finish the post by sharing some conclusions with you, along with the slides and the video of the talk given at @Geekshubs:

  • JavaScript is not a bad solution for the beginning. It allowed us to have a “ready-to-use” version of 8fit in less time than if we had decided to develop natively for iOS and Android.
  • It might be restrictive in the future in terms of interactivity, animations, etc. If you start noticing that, you can remove those bottlenecks with a native solution for them. We're doing so with the interactive workouts.
  • Analyze your resources and your product. If you have enough resources for a native mobile solution, do it! But if not, make the web dance on mobile, as we did!
]]>
<![CDATA[Learn how useful it can be to take some steps on mobile (Android/iOS), launching mobile solutions built with web knowledge and offering a native mobile experience like any other app]]>
Swift and Objective-C playing together https://pepicrft.me/blog/2014/12/08/swift-and-objc-playing-together 2014-12-08T00:00:00+00:00 2014-12-08T00:00:00+00:00 <![CDATA[

Since Swift was released, a lot of developers have been wondering about integrating Swift into their projects. If we take a look at the Apple documentation, the integration seems possible; the language was designed with that kind of compatibility in mind. However, most developers haven't taken the decision to start using it, sometimes for fear of breaking something in the current code base, or probably for not having enough time to learn it.

After the release I started working on SugarRecord, an open-source library that acts as a wrapper for working with databases (CoreData and Realm). The reason was basically that I wanted to learn the language deeply, and playing with Playgrounds and short examples wasn't going to help much. You end up forgetting those small examples, and you don't face the real problems you would face with a big project or library.

The experience has been amazing because I've learnt how powerful Swift can be compared with Objective-C, but also a bit painful after having to deal with several updates to the language, some unstable versions of Xcode (unstable already), and above all figuring out what the language has in common with Objective-C and how to build a communication layer between the two.

It's possible to have Swift playing with Objective-C in the same project; however, there are some points it's important to keep in mind if you want to avoid a big headache. I'll try to summarize most of them here with short examples.

Objective-C projects

Structure

If we analyzed the structure of our Objective-C projects, it would be something like what you can see in the figure above: some libraries integrated (or not) through an external dependency manager like CocoaPods into our Objective-C code base. Everything works great; both sides use the same language and the same language features are available everywhere. What happens when Swift appears on the scene? We have features available in one language (Swift) that aren't available in the other, and that introduces extra communication problems we have to face. We'll see that some keywords and Swift types are automatically translated into their Objective-C equivalents, but in other cases we might end up needing a wrapper component to establish the communication with those Swift components.

Pure Swift

First we have to identify which features are “pure” Swift features. What does pure mean here? Features that can only be used from Swift; Objective-C won't be able to work with them. Yes, those features make the language much better and more powerful and let you simplify your implementations, but if what you actually want is compatibility with your Objective-C code, it's better to wait until you have more Swift than Objective-C before enjoying them. Those features are:

  • Generics
  • Tuples
  • Swift enums
  • Structures defined in Swift
  • Top-level functions defined in Swift
  • Global variables defined in Swift
  • Typealiases defined in Swift
  • Swift-style variadics
  • Nested types and curried functions

If you don't know any of those features, I recommend taking a look at the documentation, where you'll learn more about them.

Remember: Objective-C doesn't know about them, and you must avoid them in the public interface of your Swift components. Otherwise you won't be able to use those components from your Objective-C code.

Bridging

The tool Apple released to bridge Swift and Objective-C is called the bridging header (what an original name). We have two bridging directions, Swift into Objective-C and Objective-C into Swift, and consequently two bridging mechanisms:

  • Product-Bridging-Header.h: This file makes your target's Objective-C files visible to Swift; otherwise they won't be, and you won't be able to use them. It's a simple Objective-C header containing a list of imports of other headers. Note: if you want to use Objective-C libraries integrated through CocoaPods from Swift, you have to import them in this header.

Product-Bridging-Header.h

//
//  Use this file to import your target's public headers that you would
//  like to expose to Swift.
//

// I can import CocoaPods Libraries here!
#import <AFNetworking/AFNetworking.h>

// And my Objective-C classes
#import "CocaColaAlgorithm.h"

Swift-Class.swift

// Use here your Objective-C exposed classes
let cola = CocaColaAlgorithm.prepareCola()
  • ProductName-Swift.h: This file is automatically generated by Xcode. When you compile the project, Xcode generates a header file “translating” the Swift code into Objective-C. That way you can use Swift classes and components from Objective-C. There are some restrictions I'll tell you about, because not everything will be available to Objective-C.

Swift.swift

class NSObjectSwiftClass: NSObject { }

ProductName-Swift.h

SWIFT_CLASS("_TtC9SwiftObjc18NSObjectSwiftClass")
@interface NSObjectSwiftClass : NSObject
- (instancetype)init OBJC_DESIGNATED_INITIALIZER;
@end

Alert

What’s really exposed?

As I mentioned, not all your Swift code is exposed. The compiler follows some rules to generate the header, and not all of those rules are reflected in the Apple documentation. You'll figure out some of them working on this type of integration; others you'll learn reading about other developers dealing with similar problems. Summarizing the most important ones, only the following are exposed:

  • Classes, attributes and methods marked with the keyword @objc
  • Classes descending from NSObject
  • Public elements
  • Private elements are exposed if marked with @IBAction, @IBOutlet, and @objc
  • Internal elements are exposed if the project has an Objective-C bridging header
  • Only Objective-C compatible features.
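To make these rules concrete, here's a sketch with a hypothetical class (the member names are mine, not from any real project). At the time of writing, internal members of an NSObject descendant are exposed implicitly; the tuple-returning method is left out of the generated header because tuples aren't Objective-C compatible:

```swift
import Foundation

// Hypothetical NSObject descendant illustrating the exposure rules.
class ExposureExample: NSObject {
    var title = "visible"              // bridgeable String property: exposed

    func rename(newTitle: String) {    // Objective-C compatible method: exposed
        title = newTitle
    }

    private func helper() {}           // private and unmarked: not exposed

    func swiftOnly() -> (Int, Int) {   // tuples aren't representable in
        return (1, 2)                  // Objective-C: omitted from the header
    }
}
```

You can still call swiftOnly() freely from other Swift code; the restriction only affects what ends up in ProductName-Swift.h.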

Circular dependencies

When you import the generated Swift classes into your existing Objective-C classes, do it using a forward declaration (@class) in the header. Otherwise you might run into trouble with circular dependencies. Import the generated header only in the implementation (.m) file of your classes.

Product package naming

Xcode uses your product package name for the xxxxx-Swift.h file, replacing some non-alphanumeric characters with an underscore. To avoid problems, use an alphanumeric package name that doesn't start with a number (a leading digit is replaced by an underscore too).

What can I do once the bridge is defined?

Subclassing

You can subclass Objective-C classes in Swift; remember to use the override keyword wherever you override a parent class implementation. Swift classes cannot be subclassed in Objective-C (even if they descend from NSObject or are labeled with the @objc keyword).
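For instance, a sketch with hypothetical classes: a Swift class can subclass Foundation's NSObject and override an inherited member, and a further Swift subclass can override it again; override is required at every level:

```swift
import Foundation

// Swift subclass of the Objective-C class NSObject, overriding `description`.
class Animal: NSObject {
    override var description: String { return "an animal" }
}

// Subclassing again in Swift works too; `override` is still required.
class Dog: Animal {
    override var description: String { return "a dog" }
}
```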

Subclass

#if !defined(SWIFT_CLASS)
# if defined(__has_attribute) && ...
#  define SWIFT_CLASS(SWIFT_NAME)...
# else
#  define SWIFT_CLASS(SWIFT_NAME) SWIFT_RUNTIME_NAME...
# endif
#endif

SWIFT_CLASS("_TtC9SwiftObjc9ObjcClass")
@interface ObjcClass
- (instancetype)init OBJC_DESIGNATED_INITIALIZER;
@end

AnyObject

AnyObject is the Swift equivalent of Objective-C's id. However, in contrast with id, AnyObject is not a class type but a protocol, and the concrete type behind it is not known until runtime. This means the compiler will let the code pass if you call a method on an AnyObject that the underlying object doesn't actually implement, but if your program executes that line, your app is going to crash. Be careful!:

if let fifthCharacter = myObject.characterAtIndex?(5) {
    println("Found \(fifthCharacter) at index 5")
}

Nils

As you probably know, Swift introduced a new kind of type: optionals. They allow nil, and the real content of the type (if there is any) is wrapped inside the optional. Objective-C is more flexible in this respect and lets you call methods on nil objects without throwing exceptions or crashing your app. The compiler translates variables and return values that might be nil using implicitly unwrapped optionals (var!). This means that if you plan to use one of the implicitly unwrapped optionals the compiler generated from your Objective-C code, do it carefully, checking first whether the value is nil. Otherwise, accessing it while nil will cause a runtime error and your app will crash.

- (NSDate *)dueDateForProject:(Project *)project;
func dueDateForProject(project: Project!) -> NSDate!
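A sketch of handling such an implicitly unwrapped value defensively (the function below is a hypothetical stand-in for a bridged Objective-C API that may return nil):

```swift
import Foundation

// Hypothetical stand-in for a bridged Objective-C method: the compiler would
// translate its NSDate * return type as NSDate! (implicitly unwrapped).
func dueDateForProject(named name: String) -> NSDate! {
    return name.isEmpty ? nil : NSDate()
}

let dueDate = dueDateForProject(named: "")
if dueDate != nil {
    // Safe to unwrap here: we checked for nil first.
    print("Due on \(dueDate!)")
} else {
    print("No due date set")
}
```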

Extensions and categories

Extensions are the Swift equivalent of categories. The main difference is that in Swift we can use extensions to make classes conform to protocols they originally didn't. For example, we can make our class MyClass conform to the StringLiteralConvertible protocol and initialize it using a string:

extension MyClass: StringLiteralConvertible
{
    typealias ExtendedGraphemeClusterLiteralType = StringLiteralType
    init(unicodeScalarLiteral value: UnicodeScalarLiteralType) {
        self.pattern = "\(value)"
    }

    init(extendedGraphemeClusterLiteral value: StringLiteralType) {
        self.pattern = value
    }

    init(stringLiteral value: StringLiteralType) {
        self.pattern = value
    }

}

I recommend Mattt's interesting post http://nshipster.com/swift-default-protocol-implementations/ where he explains different uses of default system protocols to do something like what I've shown you above.

Closures and Blocks

Blocks are automatically converted by the compiler too. There's only one difference: in Swift, if you use an external variable in a closure, it's automatically mutable (no copy of the var is made). Do you remember having to mark the variable with the __block keyword in Objective-C? No longer required!

Example in Objective-C

__block CustomObject  *myObject = [CustomObject new];
void (^myBlock)() = ^void() {
  NSLog(@"%@", myObject);
};

Example in Swift

let myObject = CustomObject()
let myBlock: () -> () = {
  println("\(myObject)")
}
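To see the capture semantics in action, here's a minimal sketch: the closure mutates an outer variable directly, with no extra annotation:

```swift
// Closures capture surrounding variables by reference, so they can be
// mutated inside the closure without any __block-style keyword.
var counter = 0
let increment: () -> () = {
    counter += 1
}
increment()
increment()
// counter is now 2
```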

And yes, we have the FuckingBlockSyntax.com equivalent for closures: FuckingClosureSyntax.com

@objc Keyword

When you want to tell the compiler that a Swift class, property, or method must be visible from Objective-C after compilation, use the @objc keyword. Take a look at the example below, where we say that SwiftCat is going to be visible in Objective-C under the name ObjcCat:

@objc(ObjcCat)
class SwiftCat {
    @objc(initWithName:)
    init (name: String) { /*...*/ }
}

Protocols

There are some exceptions when crossing protocols between Objective-C and Swift. While Swift can adopt any Objective-C protocol, Objective-C can only adopt Swift protocols that conform to NSObjectProtocol; otherwise it won't be able to.

Moreover, if you are using protocols for a delegate pattern, you should declare your protocols as class-only. Why? Because in Swift not only classes can conform to protocols: structs can too. Structs are passed by copy instead of by reference, and we don't want a copied object that conforms to a protocol acting as a delegate, because it's not actually the real delegate object. When you constrain a protocol to class, only classes can conform to it.

/** MyProtocol.swift */
@objc protocol MyProtocol: NSObjectProtocol {
    // Protocol stuff
}
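Here's a sketch of the class-only delegate protocol described above (the names are hypothetical), with the delegate held weakly to avoid retain cycles:

```swift
// Class-constrained protocol: the `class` keyword means only reference
// types can conform, which lets the delegate property be weak.
protocol DownloaderDelegate: class {
    func downloadDidFinish()
}

class Downloader {
    weak var delegate: DownloaderDelegate?
    func finish() {
        delegate?.downloadDidFinish()
    }
}

// A class conforming to the protocol can act as the delegate.
class Logger: DownloaderDelegate {
    var finished = false
    func downloadDidFinish() { finished = true }
}
```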

Cocoa Data Types

Most of the Foundation data types can be used interchangeably with Swift types (remember to import Foundation). For example, you can initialize an NSString object in Swift using a Swift string:

let myString: NSString = "123"

Int, UInt, Float, Double and Bool have their equivalent in Objective-C: NSNumber

let n = 42
let m: NSNumber = n

Regarding collection types, we have equivalents there too. A Swift [AnyObject] array is automatically converted into an NSArray (if the elements are AnyObject compatible). For example, an array of Int, [Int], will be converted into an array of NSNumbers.

Any NSArray will be converted into a Swift [AnyObject] array. We can even downcast it into the real type:

for myItem in foundationArray as [UIView] {
    // Do whatever you want
}

Something similar happens with NSDictionary: it is converted into [NSObject: AnyObject], and we get an NSDictionary back from a [NSObject: AnyObject] if the keys and values are instances of a class or are bridgeable.
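A quick sketch of that dictionary bridging (the key/value choices are mine):

```swift
import Foundation

// A Swift dictionary with bridgeable keys and values converts to NSDictionary,
// and values read back from it can be downcast to their Swift types.
let swiftDict = ["answer": 42]                 // [String: Int]
let cocoaDict = swiftDict as NSDictionary      // bridged to NSDictionary
let answer = cocoaDict["answer"] as? Int       // back to a Swift Int
```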

CocoaPods

You might wonder whether we can use CocoaPods in projects where we've started using Swift. The answer is YES, but for now only with Objective-C pods. Add their headers to your project's Bridging-Header.h file and they will be visible from Swift.

The CocoaPods team is working on supporting Swift libraries too, https://github.com/CocoaPods/CocoaPods/pull/2835, and they are pretty close to having it. Soon you'll be able to use not only Objective-C but Swift libraries as well.

Advice for moving to Swift

Finally, as a conclusion to this summary, I'd like to give you some advice for your Swift integrations. I figured some of these out while dealing with problems myself, and I'd like to save you from having to deal with the same ones.

  1. Don’t touch your Objective-C code base: If you have a clean, useful code base on Swift don’t speedup. There’s no limit date to have everything on Swift. It’s just something that Apple expects to be a gradual process. Move your components to Swift only if they require a refactor or they have been deprecated and a new one (in Swift) will replace them.

  2. Most libraries are in Objective-C: And that's great, because both Objective-C and Swift can communicate with them. Libraries like AFNetworking, MagicalRecord, and ProgressHUD are still actively developed in Objective-C, but others are appearing in Swift to replace them, like Alamofire.

  3. Implement Swift in isolated features: Avoid crossed dependencies between Swift and Objective-C, and try to use pure Swift only internally in those components. Expose what you need to Objective-C using @objc-compatible features.

  4. Swift libraries only when there's no Objective-C option: If you've found a Swift library with no Objective-C equivalent, use it, but you'll probably have to implement a wrapper to use it from Objective-C if the library uses pure Swift features. Alamofire, for example, uses top-level functions, structs, and so on, so we can add it to our project and use it from Swift, but it's impossible to use it from Objective-C.

  5. Swift is a language STILL IN PROGRESS: Don't get frustrated if you hit the famous SourceKitService crash every 10 minutes. The compiler, and the language in general, have a lot of room for improvement. It seems more stable now than some months ago, but not stable enough (in my opinion). Moreover, the language is still changing, so you may well compile your project tomorrow and find an optional somewhere that wasn't there yesterday.

Resources

//MARK: - You should read
let swiftTypes = "https://github.com/jbrennan/swift-tips/blob/master/swift_tips.md"
let realmAndObjc = "http://realm.io/news/swift-objc-best-friends-forever"
let swiftReady = "http://www.toptal.com/swift/swift-is-it-ready-for-prime-time"
let swiftImprovesObjc = "http://spin.atomicobject.com/2014/06/13/swift-improves-objective-c/"

<script async class="speakerdeck-embed" data-id="797cb47061d3013267d84a36ee36a741" data-ratio="1.77777777777778" src="//speakerdeck.com/assets/embed.js"></script>

]]>
<![CDATA[Start using Swift in your Objective-C projects. Avoid some headaches with these useful tips and advice for the communication layer between your Objective-C code base and your future Swift implementations]]>
Codemotion experience https://pepicrft.me/blog/2014/11/23/codemotion-experience 2014-11-23T00:00:00+00:00 2014-11-23T00:00:00+00:00 <![CDATA[

It was the first time I'd attended a developer event of that magnitude, and I did it as a speaker. It was a challenge for me because I had given the same talk at my previous job's office, but never at a national event like Codemotion. More than 1500 developers sharing their knowledge, skills, and tools, and I was there to talk about VIPER, the architecture we had been working with at Redbooth during my last months before leaving (I keep using it in my new job).

The organization of the event was perfect: the schedule, the room allocation according to each talk's expected attendance, food, drinks, internet connection. One thing I think could have been better, though it's not related to the organization, is the naming of the talks. I went to some talks after reading the title and the description and found a different talk than I expected.

I’ve to say too that the level and the topic of talks there was amazing. Talks related with architecture, some others about testing, talks about tools, or workshops explaining you how to develop apps for Android wearable devices. I attended some iOS talks related with architectures where I was able to learn how other companies organize their code (especially those with a big projects that pay a lot of attention to their code base structure). I met people I only met from Twitter like @nsstudent or @alvaro_fr and talk a bit in person about our lifes and our current projects.

Regarding the talk I gave, I wasn't as nervous as on some previous occasions. I was really relaxed because I enjoyed talking about how to make your projects cleaner and better organized. However, I spoke a bit too fast, and I had some problems with the projector resolution and contrast that kept some attendees from reading the code examples on the slides clearly. Afterwards I talked with some Tuenti engineers about different architecture topics and which one they're applying in their app. I'm glad I was able to convince some developers of the importance of taking care of your code and not doing everything in a ViewController. Some of them contacted me later on Twitter and have started reading and analyzing the example project.

It was a tiring experience, because I didn't sleep much and had to take the train with my Android Redbooth friends, but I learned a lot and came back with plenty of ideas to apply in my daily work. There's no doubt I'll try to be there next year.

See you next year!

]]>
<![CDATA[After two days of Codemotion I would like to share my experience in my first time in a developers event like that one]]>
VIPER, Looking for the perfect architecture https://pepicrft.me/blog/2014/11/16/viper-looking-for-the-perfect-architecture 2014-11-16T00:00:00+00:00 2014-11-16T00:00:00+00:00 <![CDATA[

Last Thursday I gave a talk at the Redbooth office with the iOS team about an architecture we had been working with during the past months. We hadn't used any architecture until then, and the code was so coupled and messy that it was hard to review, debug, and find bugs in.

We read about VIPER for the first time here and loved the idea of splitting responsibilities across all those components. We took a look at the example project, analyzed it, and finally applied it to some of our ViewControllers. It was a heavy task, because we had to refactor not only those components but their children too, but in the end we liked the result and how easy it became to review the code and understand the implementation.

Moreover, I implemented a Ruby gem to generate those templates automatically in Swift or Objective-C. I called it viper-module-generator and you can easily install it with sudo gem install vipergen. We had spent a lot of time implementing the same components over and over, with similar naming and similar connections between them, so why not make it faster?

The talk was recorded and the slides are available on Speaker Deck, so I'd like to share them with you. If you have any questions or would like to contribute in any way, we'd be pleased to hear from you.

]]>
<![CDATA[Talk I gave in the Redbooth HQ office for NSBarcelona with the iOS team talking about the VIPER architecture]]>
Github as your project management tool https://pepicrft.me/blog/2014/11/04/github-as-your-project-management-tool 2014-11-04T00:00:00+00:00 2014-11-04T00:00:00+00:00 <![CDATA[

Github is a well-known Git solution among developers around the world. It helps you manage your Git repositories remotely and offers extra features that complement the Git core and make you more productive: Issues, Pull Requests, Labels, Milestones, Releases, and so on. We know about them because we use Github daily, but are we really conscious of how productive we can be if we use these components properly?

Github + External Project Management Tool = Synchronization

I’ve been working during the past months in a project management tool as a developer. We used Github and tried to translate tasks from that platform to Github in order to work in them. Although we used milestones, and other internal methods to connect these developers items with the platform, all the management core was contained in that platform. This is something good but if you ensure that the synchronization between these two components is taken into account by all the team. Synchronization is something difficult to mantain, sometimes you feel a sync man! and you are focused on having a healthy communication between Github and your project management platform (PMP), but some others you feel so tired that you forget to report the status of your Github items to your PMP. When it happens the communication is broken and it implies extra work for your workmates like where did you work on this?, why didn’t you resolve the task?, why didn’t you apply the proper label? In these situations you feel like you are doing repeated stuff that you could do just once. Sometimes you make use of scrips that you develop on your own but if you can’t spend a lot of time developing scripts for each communication flow.

Github, but what if I’m not a developer?

Although it may sound strange, Github is not only for developers. The whole Git concept can be applied to design, to company-related material, and more. Once all the team members know the flow, it's pretty easy to stay connected and synchronized.

Think about Markdown. On the developer side we use it a lot: every new repo needs its README.md file (I remember the first time I learned about it, when I created my first repo). Markdown is becoming popular and is now used in blogging platforms (I'm in fact writing this article in Markdown). We could use Markdown in our website/landing page repository; I've seen companies decide to stick to Markdown instead of overloaded editors like the one Wordpress offers. If your project has its own content, your company's content editors could write directly in Markdown and integrate that content into, for example, a landing page, a web app, or a mobile application just by committing their changes.

For designers it's more complicated because they mostly work with graphics instead of text, but if you externalize design-related assets like CSS files, stylesheets, and snapshot tests, they can be more involved. This is something everybody dreams of: designers able to go into your web project and tell you off about the styles you applied, or a developer getting styles and sizes directly from the designer's mockups. It depends a lot on the kind of product and on how integrated design and development are.

Socialite

Management tips

Components

Issues

When we think about an issue we tend to think of something negative (as developers, we think of bugs), but it doesn't have to be. Think of an issue as a task: it can be, for example, an idea you've had that would be cool to add to the project, or a new feature you have to implement in the current sprint. Use representative names for your issues, because the issues page is going to be your daily companion, the place where you constantly check what to work on next. Take a look at the examples below.

Good naming

  • Steps counter feature
  • Crash caused by an empty response in the workout detail

Bad naming

  • steps
  • crash with response

Useful note: In most cases we end up creating a PR to address an issue. Github thought about this and made an easy way to close issues by merging an open PR: add a note to your PR saying that it resolves, fixes, or closes the open issue (e.g. fixes #31).

Github Issues Screenshot

Labels

Labels are a way to classify not only your issues but your pull requests too. A label has a name and a color, and Github lets you use them to filter your issues/PRs. Although Github offers some labels by default, I recommend analyzing them and adapting them to your needs; labels for a landing page can't be the same as for a backend project. I googled around for ideas about the labels other companies use, and I liked the first answer to this Stack Exchange question: http://programmers.stackexchange.com/questions/129714/how-to-manage-github-issues-for-priority-etc. The labels that answer uses are:

Github Labels Screenshot

Milestones

Issues we’ll be the core of hour project management, they will be our tasks and it’s important to keep them organized. We’ve seen that labels allows us to tag them in terms of status, priority, type,… We can easily have either an idea of the most priority issues to work on, or the recent bugs related with UI but it doesn’t allow to group different issues because they have something in common. The grouping reason might not only be related with the versioning but might be related with a refactor too, or even with a redesign. Think about Milestones like a bag with a well defined purpose that you are going to cover ussing the issues you are going to put inside.

Github Labels Screenshot

The example above shows several milestones for different versions of a library, plus one for issues/PRs planned for future versions. Notice the bar indicating the percentage of finished issues/PRs; it gives you a global idea of the status of each milestone.

Assignees

Tasks (issues) grouped (in milestones), labeled, and executed, developed, and reviewed by your team: here's where people come into play. Thanks to Github's assignee feature you can assign an issue or a PR to someone. This person is responsible for moving the item forward: working on it and keeping its status up to date (with comments, label changes, or mentions of workmates when something is blocking). Moreover, you can filter by your own issues/PRs, so it's easy to know what involves you. Imagine turning on your computer every morning and wondering what to work on. Easy: take a look at your issues on Github.

Github Labels Screenshot

Flow

With these components understood, you might wonder how to apply them to a project sprint. At 8fit we're still working towards a fixed flow that matches our needs; after using different project management tools we ended up with Github, and we've based our flow on others' flows, with our own little details:

  1. Apply a methodology: We strongly recommend an Agile methodology like Scrum. We've used it before and we're very productive with it. Teach the methodology to your entire team and make sure everybody knows it. Although everybody should feel responsible for the current sprint, when the project is big enough a manager role should cover more than one sprint; this person is responsible for moving issues between sprints depending on the project schedule and priorities. Your team should meet, for example, at the beginning of the week to organize the product and sprint backlogs, and then during the week everybody works from the sprint backlog and reports the daily scrum status through meetings or a real-time communication tool (e.g. Slack). There are some tools that put a layer over Github and add these extra management features; in our case we're using one called Zenhub, which you can install as a Chrome extension directly integrated with Github. Other solutions work as an external component.

  2. Keep tasks updated: It's a common mistake not to update your project issues after they are created. You know their status, but what happens if a workmate wants to know about a task and you haven't reported anything since you started working on it? Labels and assignees are not only for when an issue/PR is created: re-assign it if another person has to work on it, change the label if the priority has increased, connect it with related PRs. It's very important to keep this engagement with the issues board.

  3. Group your issues (and make them small): Think about a refactor: split it into small refactors and group them into the same refactor milestone. If you need to locate that refactor in the future, it will be easier to find the milestone than the individual small issues. If the milestone groups a version, make sure that once the milestone is closed you create a new release for that version, with a changelog of the points the version covered, and generate a tag to mark that point on the timeline.

  4. Review: Before closing any issue/PR, if you have workmates who can review your work, ask them to do it, and don't merge your changes until you have their confirmation. It's better to have more than two eyes on the code; we are not magicians, and we'll probably forget some edge case.

Keep the previous points in mind and try to apply them in your projects; be consistent with them, and encourage your team to be too. Improve your management tools and flow every day according to your needs, and boost your team's productivity. You only need Github!

Zenhub board in Github

]]>
<![CDATA[Github is a powerful Git platform commonly used by the developer community. It offers features like issues, labels, milestones, and releases that, used properly, might help you manage not only your technical repos but different aspects of your project like design, ideas, ...]]>
Leaving Redbooth https://pepicrft.me/blog/2014/10/29/leaving-redbooth 2014-10-29T00:00:00+00:00 2014-10-29T00:00:00+00:00 <![CDATA[

It’s difficult for me to talk about this and it was a few days ago when I had to tell my workmates about my decission, leaving Redbooth. I started with that startup from Barcelona when I haven’t finished my bachelor yet. They gave me the opportunity to work remotely and I did my best. I help firstly the iOS team when the company’s name was aready Teambox. Then I learnt a bit of Android when thought that I was able to help them and I thought why not?

It was difficult at first because I was used to Objective-C patterns and tools like Xcode and CocoaPods. Moreover, the project was a bit messy, but I did my best and improved the Java I had learnt at university. It was a great opportunity and I took advantage of it. Since then I've done different mini projects/libraries in Android and I want to keep learning it (although I maintain my strong opinion that the tools Google offers are not as good as they should be if we compare them with Xcode, for example).

I missed iOS. Since I had moved to Android, the iOS team had built the iPad version, new components, and some other interesting features. I was no more than a junior developer on Android, and I told the company I wanted to rejoin iOS as soon as they had a senior Android developer to replace me. Months later I ended up moving back to iOS, where I've been until now.

Things I learnt from Redbooth

I’ve learnt a lot of since during these months in Redbooth that I will never forget and that have made me a better developer. Some of them has been:

  • Git: I remember the first meeting we had with the lead and the team to talk about our experience and how Teambox iOS Universal was organized. I didn't know much about Git; what little I had used had been through graphical tools, and my first task that day was to learn Git. I now use it almost every day for all my projects, and only from the console. Once you get used to the Git commands, I think it's easier, and you can get rid of the graphical tools. We used Github and its different features: Issues, Milestones, Releases, and so on. We even had a bot to let you know about cowboys in the project!

  • Working in a team: I had never worked in a team before. My previous projects involved two people at most (Dropbox was our Git and Trello was our collaboration tool). When you join a team you learn how important it is to stay coordinated with your teammates and with the company's track: this feature has to be implemented, these bugs have to be fixed because QA reported them, you should review that PR to see if everything is OK. Moreover, you can talk with your workmates about implementations, architectures, and code structure. I enjoyed using Redbooth as a collaboration tool and learnt how powerful it can be when used well.

  • Guidelines: This point is related to the previous one. When you work alone there are things you don't pay attention to, especially code style. Just as you need a language to communicate with others and have to use it properly, when you work in a team your shared language is the code, along with rules that tell you how to use it. Without these rules, everybody would do whatever they wanted. We had Objective-C style guidelines (forked from The New York Times'), and part of every PR review was checking that the style rules were respected.

  • Architecture: When you work on a project that keeps growing, with new features arriving every week, you have to pay attention to its architecture. At Redbooth we stopped to think about this some months ago, when we read about VIPER and saw that we were not respecting SOLID and Clean Architecture principles; that made the code difficult to debug, test, read, and understand. Our experience refactoring those components with VIPER has been very satisfying. We refactored the biggest components and improved the team's grasp of VIPER day by day. I even developed a generator in Ruby, https://github.com/pepicrft/viper-module-generator, and I'm going to give a talk at the Codemotion event.

  • Cowboy, but only sometimes: Working in a team where everything is planned, where you have a schedule of tasks to follow during the week, makes you lose your cowboy side. That's not bad once the company reaches a certain size (you can't just do whatever you want, because there are priorities). However, you shouldn't lose your cowboy side entirely, because it keeps your motivation alive. I've sometimes been told off by my workmates for being more of a cowboy than I should.

Change of pace

It was difficult to make the decision, especially because of the previous points, but I was very motivated by the new project and the idea behind it. The company is taking its first steps as a startup, and it's a chance for me to learn how to build a company from scratch and work close to the product (at least closer than I did at Redbooth). I'm going to have more flexibility now because I don't have a fixed workplace or office: I can work from home, from a coworking space, from a café, or even from anywhere in the world. I'm fully aware that this is an adventure I cannot miss. I have a lot of hope for the project and I'm going to do my best to make it a successful product and company that helps a lot of users get in shape and stay healthy.

]]>
<![CDATA[I took the decision to leave Redbooth and join a new adventure. I explain here the reasons, everything I learned there, and my expectations for 8fit]]>
Setup your iOS Projects for testing https://pepicrft.me/blog/2014/10/13/setup-your-ios-projects-for-testing 2014-10-13T00:00:00+00:00 2014-10-13T00:00:00+00:00 <![CDATA[

In other programming communities, like Ruby's, developers are more aware of testing new components. Ensuring every component is tested is not yet common within the mobile app community.

Fortunately, iOS developers have been working over the years to bring that culture to iOS too. The community is building new libraries on top of the native ones that let you write your tests with a fresh, more readable syntax. This has provided a big impulse, and every day more developers make sure the expected behaviour of their apps is tested.

At Redbooth, the iOS team has been strongly influenced by the Ruby backend team, and we decided to introduce testing into our development flow. Thanks to components like CocoaPods, schemes and targets, and some other tools, testing has become an essential part of our development cycle.

Testing flow

Currently, depending on the type of test, the steps we follow are different:

  • Unit testing: We write unit tests when developing new features or controls. Our goal is to ensure the components behave according to expectations. Although we don't follow TDD at all times, we're trying to do so more and more. Unit testing allows us to detect regressions and reduce QA costs.
    1. Tests are executed using a CI environment.
    2. The CI integration reports the results to the Github PR.
    3. The PR will be merged only when:
    • The implementation includes tests and they pass
    • It has more than one :+1: from peer reviewers.
  • Acceptance tests: In this case, tests are defined by the QA team. They use appium.io, which allows them to define the same suite of tests for Android and iOS in Ruby. It's important here to highlight the need for accessibility tags, because these tests use them to reference the different components in the app's interface. When we have a new alpha version, after some features have been introduced or bugs fixed:
    1. We generate an alpha version and send it to the QA team using distribution tools like HockeyApp.
    2. QA runs Smoke and Acceptance tests in their environments.
    3. When they pass, we promote the alpha to a beta version. Otherwise, we fix whatever made the tests fail and repeat the process.
  • Snapshot testing: Regressions in design are not detectable using unit tests, even if you add unit tests for UI properties (which we don't recommend). We have recently started using a library from Facebook, iOS Snapshot Test Case, and introduced it into the flow following the steps below:
    1. The design team sends us the designs of the new features.
    2. We implement them with their respective unit tests and snapshot tests, generating the snapshots.
    3. We send the snapshots to the design team and wait for their confirmation.
    4. Once confirmed, the implementation is ready, and the generated snapshot(s) remain valid as long as the designs don't change.
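The accessibility tags mentioned in the acceptance-test step are what let a single Appium suite drive both platforms. The sketch below illustrates that idea in Ruby; the element names and the stub driver are hypothetical, not Redbooth's actual suite, and a real suite would talk to a running Appium server instead of a stub:

```ruby
# Sketch: one locator table keyed by accessibility id serves both
# platforms, so the same spec can drive the iOS and Android builds.
ELEMENTS = {
  login_button: 'login_button',
  email_field:  'email_field',
  task_list:    'task_list',
}.freeze

# Look an element up by its logical name through any Appium-like driver
# that responds to find_element(strategy, id).
def find_element(driver, name)
  driver.find_element(:accessibility_id, ELEMENTS.fetch(name))
end

# Tiny stand-in for an Appium driver, just to show the flow.
class StubDriver
  def find_element(strategy, id)
    raise ArgumentError unless strategy == :accessibility_id
    "element(#{id})"
  end
end

driver = StubDriver.new
puts find_element(driver, :login_button) # prints element(login_button)
```

Because every lookup goes through the logical name, renaming a control in one platform's UI only requires updating its accessibility tag, not every spec that touches it.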

Specta

Our first tests were written using Kiwi. We found it a bit outdated, and it introduces a lot of coupling with its matchers and mocks. With the introduction of iOS 8 and the improvements in the XCTest framework, we've seen the Specta framework become more and more active. After it released its first beta with support for iOS 8, and after a lot of investigation, we decided to move our tests to this library. We complemented it with the Expecta matchers and the OCMock mocking library. I recommend reading this article about different alternatives for testing, where Matt compares all the alternatives and discusses their advantages and disadvantages.

The main advantage of using Expecta over other matcher frameworks is that you do not have to specify the data types. The syntax of Expecta matchers is also much more readable and does not suffer from parenthesitis.

Syntax in Specta + Expecta is more readable, friendly, and easy to remember. The example below shows tests using OCHamcrest:

assertThat(@"foo", is(equalTo(@"foo")));
assertThatUnsignedInteger(foo, isNot(equalToUnsignedInteger(1)));
assertThatBool([bar isBar], is(equalToBool(YES)));
assertThatDouble(baz, is(equalToDouble(3.14159)));

Using Kiwi:

[[@"foo" should] equal:@"foo"];
[[foo shouldNot] equal:theValue(1)];
[[[bar isBar] should] equal:theValue(YES)];
[[baz should] equal:theValue(3.14159)];

And finally Expecta:

expect(@"foo").to.equal(@"foo"); // `to` is syntactic sugar and can be safely omitted.
expect(foo).notTo.equal(1);
expect([bar isBar]).to.equal(YES);
expect(baz).to.equal(3.14159);

Setup

Setup the project (schemes and targets)

A scheme represents a collection of targets that you work with together. It defines which targets are used when you choose various actions in Xcode (Run, Test, Profile, etc.)

In our case we use a separate scheme only for testing. We decided to leave the main scheme only for builds and archives, integrating only the pod libraries that our project uses, and to keep the pods required for testing, like Specta or OCMock, in the testing scheme. The result is the following:

It's important to mark the scheme as Shared if you want to have it attached to your git repository.

With the schemes set up, the next step is to define what targets we need.

A target is an end product created by running “build” in Xcode. It might be an app, or a framework, or static library, or a unit test bundle. Whatever it is, it generally corresponds to a single item in the “built products” folder.

In the Redbooth app, apart from the main app target, we use one target for unit testing and another one for snapshot testing, as you can see in the screenshot below.

Notice in the screenshot that the project has Debug and Release configurations, where we set the configuration for each target. By default, CocoaPods should do this automatically for you, but in some cases it doesn't work properly. Make sure that each configuration of each Xcode target has a corresponding generated pod config file.

Finally, we have to select which targets are going to be built in our testing scheme. As we are going to use it only for testing, we only have to enable that option for these targets. Moreover, the order of the targets in that list should respect the dependencies between them: the first targets to be built should be the pod ones, then the application whose components are going to be tested, and finally our testing targets. Remember:

  1. Pod targets
  2. Application targets
  3. Testing targets

Note: CocoaPods targets are not the same as Xcode targets. CocoaPods targets are useful for grouping pods with a specific configuration. Their definitions look similar, but confusing the two is a common misunderstanding when you are integrating your project with CocoaPods.

Connect CocoaPods

With the project set up, the next step is to prepare the Podfile to integrate the testing libraries with our project targets. If you haven't worked with CocoaPods before, I recommend reading the guides at http://guides.cocoapods.org/. Our Podfile has the following format:

source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '7.0'

inhibit_all_warnings!

target :app do
  link_with 'Redbooth'
  # Project pods
end

target :test do
  link_with 'UnitTests'
  pod 'OCMock', '~> 3.1'
  pod 'Specta', '~> 0.2'
  pod 'Expecta', '~> 0.3'

  target :snapshottest do
    link_with 'SnapshotTests'
    pod 'FBSnapshotTestCase', '~> 1.2'
    pod 'Expecta+Snapshots', '~> 1.2'
  end
end

In order to keep our pods organized, we use CocoaPods targets. They are strongly related to our project targets, but remember that they are not the same. To tell CocoaPods which Xcode target a pods target should be integrated with, we use the link_with property. The table below summarizes which pods target is integrated with each project target.

Pods targets \ Project targets    Redbooth    UnitTests    SnapshotTests
app                               x           x            x
test                                          x            x
snapshottest                                               x

Execute pod install and wait until it integrates the pods into the different targets.

Note: We've noticed that in some cases, especially if you have been changing your Podfile a lot, the integration might not be right. If you run into problems when compiling the project after doing so:

  1. Check that the Link Binary With Libraries section in the Build Phases of each target contains only the libPods-xxx.a file of the CocoaPods target that you chose to integrate there.
  2. Check in the project configurations that each target has the proper pod config linked.
  3. Finally, ensure that in the scheme settings, under the build section, the targets are listed in the proper order (mentioned previously).

If everything is right, you should be able to Run your application using the main scheme on any device and execute your tests from the tests scheme.

A bit about snapshot tests


Snapshot tests are not very common in the testing world; however, they are becoming more popular, largely thanks to that Facebook library. Since we started using it, the number of design regressions introduced has decreased, and now the designers can check that the results match their expectations and designs.

Snapshot tests basically work in two phases. First you run the tests in recording mode to generate reference snapshots; once you've checked that those snapshots are correct, they are stored in your project folder and used as the reference for future runs. If the tests are executed and there's no mismatch between these images and the tested views, the tests pass; if a difference is detected, the test fails and gives you a command to inspect it with the software Kaleidoscope. Take a look at the example below, where we define a test for a header view, and at an example of a failed test shown in Kaleidoscope. The failed example shows an animation with the introduced UI bug (someone changed the left margin and it was detected).

#import "TBHeaderView.h"

SpecBegin(TBHeaderView)

describe(@"header view", ^{
    it(@"matches view", ^{
        TBHeaderView *view = [[TBHeaderView alloc] initWithFrame:CGRectMake(0, 0, 320, 44)];
        [view setSectionName:@"DuckTest" sectionCount:60];
        if (SNAPSHOT_RECORDING) {
            expect(view).to.recordSnapshotNamed(@"TBHeaderView");
        }
        else {
            expect(view).to.haveValidSnapshotNamed(@"TBHeaderView");
        }
    });
});
SpecEnd

Next steps

  • Continuous integration: Tests are great, but how do you know whether someone's changes have broken them? This is where continuous integration plays an important role. Solutions like Xcode Server, Travis CI or Jenkins should be taken into account to ensure nothing is broken.
  • Specta templates: I recommend installing this Xcode template https://github.com/luiza-cicone/Specta-Templates-Xcode to create tests using its predefined structure. You can install it using Alcatraz.
  • Snapshots for Xcode: There's another plugin to check the results of the snapshot tests, and you can install it using Alcatraz too: https://github.com/orta/Snapshots
  • Test, test, test: Don't think testing is useless. When you have a big app with a lot of components interacting with each other, regressions can appear easily, and with tests they will be detected before the app reaches the QA team.
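To get started with the continuous integration point above, a minimal Travis CI configuration could look like the sketch below. The workspace and scheme names are assumptions for illustration; adjust them to your own project:

```yaml
# .travis.yml — minimal sketch; workspace/scheme names are placeholders.
language: objective-c
before_install:
  - gem install cocoapods
  - pod install
script:
  - xctool -workspace Redbooth.xcworkspace -scheme UnitTests -sdk iphonesimulator test
```

Hooked up to GitHub, a setup like this runs the test scheme on every push, so a pull request's status reflects whether the suite still passes.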

]]>
<![CDATA[Learn how to set up your iOS projects for testing using the most popular testing libraries and how to integrate your project tests into the development flow.]]>