Additive GUIs


Specify the attributes of the GUI and let the computer generate the GUI


I have an idea whereby you specify the relationships between widgets on the screen and the computer generates the layout.

Rather than positioning widgets manually, the computer generates the layout. Essentially the widgets form a system of inequalities where each widget's x and y are set relative to the others.

We say that one widget is left of another, or that a widget is below another. This is how you might describe practically any GUI.

The idea is that the computer generates variations of the layout and the human reviews them.

You also define the data flow between widgets. BackedBy sets the data source for a widget. MappedTo references a template that defines the GUI for an item in a collection; it works like a functional map.

The system is configured in triples.
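Each triple is a subject, a predicate, and an object. A minimal parser sketch (the function name `parseTriple` is my own, not from the source) only needs to split twice, since the object may itself contain spaces, as in the JSON literal used by `0.pushes` below:

```javascript
// Split a configuration triple into subject / predicate / object.
// The object keeps any remaining spaces (e.g. an embedded JSON literal).
function parseTriple(line) {
  const [subject, predicate, ...rest] = line.split(" ");
  return { subject, predicate, object: rest.join(" ") };
}

parseTriple("Todos backedBy todos");
// → { subject: "Todos", predicate: "backedBy", object: "todos" }
```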

Have you ever heard of Todo MVC?

It's a simple problem implemented in many frameworks. The problem is a to-do list.

This is a to-do app written in additive GUIs.

You should notice that it is extremely compact.


"predicates": [

    "NewTodo leftOf insertButton",

    "Todos below insertButton",

    "Todos backedBy todos",

    "Todos mappedTo todos",

    "Todos key .description",

    "Todos editable $item.description",

    "insertButton on:click insert-new-item",

    "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}",

    "insert-new-item 0.pushTo $item.todos",

    "NewTodo backedBy NewTodo",

    "NewTodo mappedTo editBox",

    "NewTodo editable $item.description",

    "NewTodo key .description"

"widgets": {

    "todos": {

        "predicates": [

            "label hasContent .description"

    "editBox": {

        "predicates": [

            "NewItemField hasContent .description"

"data": {

    "NewTodo": {

        "description": "Hello world"

    "todos": [

            "description": "todo one"


            "description": "todo two"


            "description": "todo three"



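One possible reading of the `insert-new-item` action in the spec above (this interpretation is my own; the source does not define `0.pushes`/`0.pushTo` semantics precisely): `0.pushes` builds a new item from a template whose `$item.*` paths are resolved against the data section, and `0.pushTo` names the target collection.

```javascript
// Hypothetical semantics for:
//   "insert-new-item 0.pushes {\"description\": \"$item.NewTodo.description\"}"
//   "insert-new-item 0.pushTo $item.todos"
const data = {
  NewTodo: { description: "Hello world" },
  todos: [{ description: "todo one" }],
};

function runInsert(data) {
  // Resolve "$item.NewTodo.description" against the data section,
  // then push the built item onto the "todos" collection.
  data.todos.push({ description: data.NewTodo.description });
}

runInsert(data);
// data.todos now ends with { description: "Hello world" }
```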

I wonder: does OpenAI Codex build an internal representation similar to your Additive GUIs formalism when it is instructed informally, like in this video?

I think it would be a great simplification for defining UIs more rigorously. We could provide such compact UI specifications as an API response, and if all browsers had certain libraries preloaded (e.g., via someone making an npm-preloading browser extension), the UI could be rendered very fast, without extra web requests, basically making front-end development unnecessary and replacing it with standardized API views of such declarative statements.


Yes, Mindey. Additive GUIs is based on the concept that a GUI is a query that splices multiple dimensions into a multidimensional plane, where each dimension is a widget and the points are states of that widget. There is a function that defines the relation between the points of each dimension and another set of points in each dimension, driven perhaps by human interaction or server interaction.

If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.

Most interactions with data-oriented GUIs, like Infinity, are just verbs against items in lists: they react to data collections, or add items to collections.

For drawing GUIs, such as diagram tools like PowerPoint or Paint, I think you need a different model.

    :  -- 
    : Mindey
    :  -- 



// If APIs can return a highly dense definition of how the GUI should work and be rendered, then we can remove a lot of custom code.

I see. To simplify matters, the problem then is fully defining such a declarative language, and then constructing the mapping between a specification in that language and its implementation via (HTML, JS, CSS) triplets as components, which is what defines reactive UI elements, regardless of whether it's in a pure or virtual DOM. The state space of each such triplet-as-component then corresponds to what you define as a dimension, and the state space of the entire UI is the Cartesian product of the codomains of those components, with each particular state of the entire UI being a "multidimensional plane".

I can see this concept going far and being important, but to get it actually working, I see a lot of work required to define an exhaustive set of terms, and then the browser-bound interpreter for this to run.
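The Cartesian-product framing can be made concrete with a small sketch (the widgets and state names here are invented for illustration): each widget contributes one dimension of possible states, and the combined UI state space is the product of those dimensions.

```javascript
// Enumerate the Cartesian product of per-widget state sets.
// Each element of the result is one complete state of the whole UI.
function cartesian(dims) {
  return dims.reduce(
    (acc, dim) => acc.flatMap((tuple) => dim.map((s) => [...tuple, s])),
    [[]]
  );
}

const checkbox = ["checked", "unchecked"]; // hypothetical widget dimension
const button = ["idle", "pressed"];        // hypothetical widget dimension
const uiStates = cartesian([checkbox, button]);
// 2 × 2 = 4 combined UI states, e.g. ["checked", "pressed"]
```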


I've written a very simple interpreter that works with the example. It uses React. The rendered HTML is ugly, but adding todos works.

The hard part is, as you say, providing a language flexible enough to support most GUIs.

My goal was for IDEs to be representable in this GUI format.

I dread to look at the code for IntelliJ; I bet it's very, very complicated.
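My interpreter uses React, but the core of `mappedTo` plus `hasContent` can be sketched in plain JS (this is a simplification, not the actual interpreter): a collection widget maps each data item through its template, exactly like a functional map.

```javascript
// "Todos mappedTo todos" as a functional map: each item in the backing
// collection is rendered through a widget template. "label hasContent
// .description" becomes the template reading item.description.
function renderCollection(items, template) {
  return items.map(template).join("\n");
}

// Simplified stand-in for the "todos" widget template from the spec.
const todoTemplate = (item) => `<li><label>${item.description}</label></li>`;

const html = renderCollection(
  [{ description: "todo one" }, { description: "todo two" }],
  todoTemplate
);
```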

    : Mindey
    :  -- 
    :  --