I shared this over the Open Source Design forum last week, but I also wanted to share it here. There’s something cool happening at Kaleidos, makers of Penpot and Taiga, the week of the 17th of April: we will have an off-season PIWEEK. We have been running these PIWEEKs for 12 years in a row, every six months. Taiga and Penpot were born in such PIWEEKs!
From an email I wrote to the team last week:
Of the many uses of Generative AI, one field has enormous potential: the “productivity boost”. Perhaps you’re in need of inspiration, or you want to accelerate menial or repetitive tasks. Sometimes you just need to overcome information overload. Whatever your need, you will probably want to stay in control while “outsourcing” some of the work to a tireless algorithm.
As an open source product company that builds tools for product teams, we can’t simply wait and see what happens; we need to make sure we own this new tech track. We need to uplift ourselves to have the right perspective and rigour. This of course also includes ethical considerations: not all Generative AI approaches will be positive for society, the same way not all technology is.
So, this is happening in 7 weeks and we will be sharing whatever we come up with. Typically a PIWEEK hosts 10-12 different projects, but this one could be a bit different.
We’re not short of ideas, but I’m posting this message here in case someone wants to suggest a particular approach or technology. It doesn’t need to be related to Taiga or Penpot, of course.
At Kaleidos we have the feeling that generative AI is moving superfast and it’s kind of noisy. We do have hands-on experience with machine-learning/deep-learning projects, and yet we want to pause a bit and focus on leveling up the whole company. Everything we do is open source, so anything that comes out of this effort will be released publicly, whether it’s a white paper or a tech POC.
Unlike our normal PIWEEKs, this one is not really meant for other people to join, simply because we will try to streamline the whole brainstorming+marketplace+piweek+demo cycle.
Now, after this AI PIWEEK or AIWEEK or whatever we call it, we’d love to share our findings with the broader Penpot community.
So yeah, feel free to share any challenges or themes you believe could be a fantastic way to get deep into (Generative) AI while addressing the future of collaboration between designers and developers.
The new AI era that is starting opens up a ton of ways AI can help your performance (in terms of time, but also in terms of options to choose from). Most popular tools are integrating AI, for example Figma with Kernel or Adobe with Firefly, so the first (and easiest) approach could be to just copy those features. If they are providing those features, it is because they know there are known “problems” they want to help with.
Depending on the type of AI model, you can use AI to do several tasks better. Here are some examples of model types and use cases:
Audio to action: the best use case is accessibility. Taiga has recently improved in that regard, but the goal would be to fully control navigation in Taiga (it makes less sense in Penpot) using just your voice (“go back”, “open this issue”, “assign to this person”, “close this issue with the following content”…). Technically, Firefox Voice was shut down, and a browser-agnostic implementation would be better anyway, but AI can definitely improve accessibility and let more users use the tool. Text-to-action models are good, but audio to action is much better.
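To make this concrete, here is a minimal Python sketch of just the last step of such a pipeline: mapping an already-transcribed utterance to a structured action. The transcription itself would come from a separate speech-to-text model, and the action names and shapes below are hypothetical; a real integration would call the Taiga API instead of returning dictionaries.

```python
import re

# Hypothetical command patterns mapping transcribed speech to actions.
COMMANDS = [
    (re.compile(r"go back"), lambda m: {"action": "navigate", "target": "back"}),
    (re.compile(r"open issue #?(\d+)"), lambda m: {"action": "open_issue", "id": int(m.group(1))}),
    (re.compile(r"assign (?:this )?to (\w+)"), lambda m: {"action": "assign", "user": m.group(1)}),
    (re.compile(r"close (?:this )?issue"), lambda m: {"action": "close_issue"}),
]

def parse_command(transcript: str):
    """Return the first matching action for a transcribed utterance, or None."""
    text = transcript.lower().strip()
    for pattern, build in COMMANDS:
        m = pattern.search(text)
        if m:
            return build(m)
    return None

print(parse_command("Open issue #42"))       # {'action': 'open_issue', 'id': 42}
print(parse_command("assign this to maria"))  # {'action': 'assign', 'user': 'maria'}
```

In practice the pattern list would be replaced by an intent classifier, but the transcript-to-action boundary stays the same.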
Text to image: here the possibilities are endless. The main idea would be to generate the things you need rather than search the Internet for them. A couple of weeks ago, playing with GIMP, I spent almost half an hour importing “arrows” just to add a simple arrow to an image. With AI, I would like to say “please generate an arrow for me” and choose between several options. I suppose you have several ideas like this one; it is by far the most widely used type of model that could fit in a design tool.
Feature detection/extraction: this type of model could be useful for quick clones. When you are able to detect the things you want (by training the model), you can combine them into more complex compositions, creating something from scratch out of the parts you want. I would recommend taking a look at Facebook’s segment-anything model, which can detect the things you want in pictures. And not only in images: for example, you could segment the parts of a webpage and then generate a webpage with your desired toolbar, slider, search bar, main content, sidebar, etc. Or even further, we will be able to say: “please create a clone of this webpage, changing the colours to these, the images to these and the content to this”.
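As a toy illustration of the segment-then-recombine idea (not segment-anything itself), suppose a segmentation step has already produced the parts of a page as roles plus bounding boxes; recombining them with swapped colours is then just data manipulation. The roles, boxes and colours below are invented for the example.

```python
# Hypothetical output of a segmentation step: each detected part of a
# page as a role plus bounding box. A model like segment-anything would
# give the masks; the role labels would need an extra classification step.
segments = [
    {"role": "toolbar", "box": (0, 0, 800, 60), "color": "#222222"},
    {"role": "sidebar", "box": (0, 60, 200, 600), "color": "#eeeeee"},
    {"role": "content", "box": (200, 60, 800, 600), "color": "#ffffff"},
]

def clone_with_changes(segments, color_map):
    """Recombine detected segments into a new layout, swapping the
    colours of any role present in color_map."""
    return [
        {**seg, "color": color_map.get(seg["role"], seg["color"])}
        for seg in segments
    ]

clone = clone_with_changes(segments, {"toolbar": "#0055ff"})
print(clone[0]["color"])  # "#0055ff"
```

The same recombination step would work for swapping images or content once each segment carries those fields.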
Image to text: I would describe this as an improved OCR with intelligent features that helps you achieve human-like recognition. I am going to use this kind of model for a personal project related to invoices. Before AI, the flow would be to upload a picture of the invoice and have the user type in the data it contains. With AI, the flow is to upload the picture and just ask the user to confirm what the AI recognizes.
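A sketch of that confirm-only-what's-uncertain flow, with a stub standing in for the real OCR/vision model (the field names and confidence values are made up):

```python
def recognize_invoice(image_path):
    """Stub for a real OCR/vision model: returns recognized fields
    with per-field confidence scores (values invented for illustration)."""
    return {
        "total":  {"value": "149.90", "confidence": 0.97},
        "date":   {"value": "2023-03-01", "confidence": 0.92},
        "vat_id": {"value": "ESB1234567", "confidence": 0.55},
    }

def fields_needing_confirmation(fields, threshold=0.8):
    """Only low-confidence fields go back to the user for confirmation;
    the rest are accepted automatically."""
    return [name for name, f in fields.items() if f["confidence"] < threshold]

fields = recognize_invoice("invoice.png")
print(fields_needing_confirmation(fields))  # ['vat_id']
```

The threshold is the interesting design knob: lower it and the user confirms almost nothing, raise it and the flow degrades gracefully back to manual entry.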
Object detection: the ability to recognize and manipulate objects could be useful. And obviously all the image-classification work applies too.
Image to image: being able to generate variations of something could be pretty useful in design.
Text to text: conventions can sometimes overload your mind, so an AI that generates content following those conventions could be pretty useful (here is an example).
Finally, let me drop a couple of ideas here that may be interesting for you.
Text/image/URL to colour palette: it could be interesting to have a place where you can submit a text, an image or a URL and it generates a colour palette from that input.
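One way to sketch the image half of this idea is a tiny k-means over the image's pixels; a real tool would first rasterize the URL, or generate an image from the text, before extracting the palette. Pure Python and deliberately naive:

```python
def palette_from_pixels(pixels, k=3, iters=20):
    """Tiny k-means over (r, g, b) pixels to pull out a k-colour palette.
    Deliberately naive: seeds with the first k distinct colours found
    (assumes the image has at least k of them)."""
    centers = list(dict.fromkeys(pixels))[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # Assign each pixel to the nearest centre (squared RGB distance).
            nearest = min(range(k), key=lambda j: sum((a - b) ** 2 for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        # Recompute each centre as the mean of its cluster.
        centers = [
            tuple(sum(c[d] for c in cl) // len(cl) for d in range(3)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

# Mostly-red and mostly-blue synthetic "pixels" -> a red-ish and a blue-ish swatch.
pixels = [(250, 10, 10)] * 10 + [(10, 10, 240)] * 10 + [(245, 20, 5)] * 5
print(palette_from_pixels(pixels, k=2))  # [(248, 13, 8), (10, 10, 240)]
```

For real images you would subsample the pixels first; k-means over every pixel of a large image is needlessly slow.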
Variation loop with multiple selections: when you are generating variations of something, it could be interesting to loop over the outputs (using them as the input of the next iteration) until only one remains. Say you want variations of a house with the sun, clouds and a car. To let the user choose the best one, you could generate, for example, 6 variations. If the user picks only one, that is the variation to be used; but if the user selects more than one, those selections become the input of the next iteration, and so on until only one is selected. So, following the example: I ask for a picture of a house with the sun, clouds and a car; I get 6 variations; I select the 3 I like the most; based on those 3, the AI generates another 6 options; I again select my favourites, and the loop continues until I select a single option, which is the final one. With this kind of selection, the user can either settle on an option quickly or keep looping over their favourites to get ever more refined variations.
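The loop above can be sketched end to end with a stub in place of the generative model: each "design" is just a number, the stub perturbs the selected seeds, and a simulated user keeps picking the three options closest to their taste until one is close enough. Everything here (the perturbation, the taste value) is invented to show the mechanics.

```python
import random

def make_variations(seeds, n=6, rng=None):
    """Stand-in for a generative model: produce n variations by lightly
    perturbing the currently selected seeds (a 'design' is just a float)."""
    rng = rng or random.Random(0)
    return [rng.choice(seeds) + rng.uniform(-1, 1) for _ in range(n)]

def refine(initial_seed, pick, rounds=10):
    """Generate 6 variations and let the user pick; if more than one is
    picked, the picks seed the next round, until a single option remains
    (or the round limit is hit)."""
    seeds = [initial_seed]
    rng = random.Random(0)
    for _ in range(rounds):
        options = make_variations(seeds, rng=rng)
        picked = pick(options)
        if len(picked) == 1:
            return picked[0]
        seeds = picked
    return seeds[0]

def simulated_pick(options, taste=7.0):
    """A pretend user: settles as soon as an option is within 0.3 of
    their taste, otherwise keeps the 3 closest options."""
    close = [o for o in options if abs(o - taste) < 0.3]
    if close:
        return close[:1]
    return sorted(options, key=lambda o: abs(o - taste))[:3]

final = refine(5.0, simulated_pick)
print(round(final, 2))  # drifts from 5.0 toward the simulated taste of 7
```

The point of the toy is that the loop's contract is tiny: the model only needs "seeds in, variations out", and the user only needs "options in, picks out".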
This new AI era is pretty amazing and there are a lot of ways to use it. Too much AI could be negative, but it is definitely the way to improve productivity (in a lot of ways, not just time).
Good luck with your internal hackathon; I am looking forward to the output of this week.
Sorry @Pablohn, Discourse flagged this post of yours as possible spam; nothing could be further from the truth! Thanks! The hackathon is happening right now, this week! We will share what comes out of it very soon!