Automated API traversal


Armed with a thesaurus and an almanac of system functionality, we can write robots that program themselves


In RESTful HATEOAS design, a web application's responses include links to web resources related to the current request, so the API can be introspected and traversed from any starting point.

A restaurant resource has links or URLs to a booking resource because you can book a restaurant.
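As a sketch, a HATEOAS response for such a restaurant resource might look like the following (the resource fields, link relation names, and URLs are all invented for illustration):

```python
# Hypothetical HATEOAS-style response: the "_links" section advertises
# related resources (here, booking) by relation name, so a client can
# follow them without hard-coding any URL layout.
restaurant = {
    "id": 42,
    "name": "Chez Example",
    "_links": {
        "self":    {"href": "/restaurants/42"},
        "booking": {"href": "/restaurants/42/bookings", "method": "POST"},
    },
}

# A generic client discovers the booking URL by relation name alone.
booking_url = restaurant["_links"]["booking"]["href"]
print(booking_url)  # /restaurants/42/bookings
```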

A system should publish an endpoint that serves as an almanac of system functionality: every endpoint it has, a thesaurus of keywords used to access each endpoint, and a thesaurus of the operations it supports.
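One possible shape for such an almanac endpoint, sketched below with invented field names and vocabulary, could map keywords and operation synonyms onto concrete endpoints:

```python
# Hypothetical almanac document: every endpoint, plus a thesaurus of
# keywords and operations that resolve to it. All names are invented.
almanac = {
    "endpoints": [
        {
            "path": "/tweets",
            "keywords": ["tweet", "post", "status", "message"],
            "operations": {
                "list": ["list", "listAll", "getAll", "all"],
                "create": ["create", "add", "new"],
            },
        }
    ]
}

def find_endpoint(almanac, keyword):
    """Resolve a natural-language keyword to an endpoint via the thesaurus."""
    for ep in almanac["endpoints"]:
        if keyword in ep["keywords"]:
            return ep["path"]
    return None

print(find_endpoint(almanac, "status"))  # /tweets
```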

A system should also publish a series of workflows that it expects people to use.

This way we can drive a system with fuzzy logic from a rough description of what to do, based on the system's thesaurus and almanac.

"Export all my tweets to file"

"All" has a thesaurus entry mapping to "list", "listAll", "getAll".

So the service knows it has to loop over this collection and save all fields to a file.
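A toy resolver for that request might work as follows. All vocabularies here are invented for illustration: "all" resolves to a list operation via the thesaurus, "tweets" resolves to a resource, and the service can then loop over the collection and save every field to a file.

```python
import json

# Invented thesaurus: many surface words map onto one canonical operation.
OPERATION_THESAURUS = {"all": "list", "list": "list", "listAll": "list",
                       "getAll": "list", "every": "list"}
# Invented keyword-to-resource map.
RESOURCE_KEYWORDS = {"tweets": "/tweets", "posts": "/tweets"}

def plan(request: str):
    """Turn a rough description into (operation, resource endpoint)."""
    words = request.lower().replace(",", "").split()
    op = next((OPERATION_THESAURUS[w] for w in words
               if w in OPERATION_THESAURUS), None)
    resource = next((RESOURCE_KEYWORDS[w] for w in words
                     if w in RESOURCE_KEYWORDS), None)
    return op, resource

def export(collection, path):
    """Loop over the collection and save all fields to a file."""
    with open(path, "w") as f:
        json.dump(collection, f)

op, resource = plan("Export all my tweets to file")
print(op, resource)  # list /tweets
```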










But this is how APIs work already: that almanac is called "documentation", and it is often already machine-readable (see the core protocols and OpenAPI).

Well, they have a limitation: they do not return vocabularies associated with their object types, and MIME types (Content-Type headers) are not sufficiently informative. They could, if those APIs returned JSON-LD responses; alternatively, simply decorating their responses with the metaformat's polycontext metasymbol would be enough to bind them to schemas defined in concepts that are also linked via the polycontext metasymbol, and everyone could reason about everything while retrieving data about everything, traversing APIs.
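To make the JSON-LD point concrete, here is a sketch of a tweet-like resource whose "@context" binds plain field names to shared vocabulary terms (the schema.org terms are real; the resource itself is invented):

```python
# JSON-LD-style response: "@context" maps short field names to globally
# unique vocabulary IRIs, so a generic client can interpret the data
# without bespoke parsing for this particular API.
tweet = {
    "@context": {
        "text": "https://schema.org/text",
        "dateCreated": "https://schema.org/dateCreated",
    },
    "@type": "https://schema.org/SocialMediaPosting",
    "text": "hello world",
    "dateCreated": "2020-01-01",
}

# Expand a short field name to its vocabulary term.
iri = tweet["@context"]["text"]
print(iri)  # https://schema.org/text
```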

There are existing solutions that require human work and already work (they are called expert systems), but for this wishful thinking to be realized much more is needed, because there exists a lot of poorly documented software in which the software types are weakly linked to ontological (linguistic) types.

Creating that linguistic connection, so that anyone could auto-generate queries against numerous databases just by expressing what they want to know, will require training an AI system to learn the mapping from human examples. There are a couple of layers:

  • synonyms that are part of human language
  • concept IDs (that you can find in Wikidata)
  • class names (that you can find in OOP software)
  • table names (that you can find in databases)
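A toy illustration of those four layers being joined through one shared concept ID (the Wikidata ID Q5 for "human" is real; the class and table names are invented):

```python
# Each layer keys off the same concept ID, so a human word can be
# resolved all the way down to an OOP class and a database table.
LAYERS = {
    "synonyms":   {"human": "Q5", "person": "Q5", "people": "Q5"},
    "class_name": {"Q5": "Person"},   # invented OOP class name
    "table_name": {"Q5": "persons"},  # invented database table name
}

def resolve(word):
    """Map a human-language word through concept ID to class and table."""
    concept = LAYERS["synonyms"].get(word)
    if concept is None:
        return None
    return {
        "concept_id": concept,
        "class": LAYERS["class_name"][concept],
        "table": LAYERS["table_name"][concept],
    }

print(resolve("people"))
```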

All of that can be beautifully linked up by defining a single good polycontext metasymbol (I think we should come up with and evolve the polycontext metasymbol for humanity, similarly to how we evolve protocols through RFCs), and that would eventually give us the desired property of being able to reason this way, truly transcendentally, across all protocols and all information systems.



I'm familiar with expert systems such as Drools, which uses the Rete algorithm, which is really clever. And I know about OpenAPI.

But they still have to be explicitly coded.





// But they still have to be explicitly coded.

But what you propose (the thesaurus and an almanac of system functionality) would also have to be explicitly coded, wouldn't it?

You say: "A system should publish an endpoint that is an almanac of system functionality, that is, every endpoint it has, a thesaurus of keywords used to access that endpoint and a thesaurus of operations that it supports."

You need to code those systems to publish their functionality, so you'd have to modify every system that has functionality, to be able to publish it. How would your approach avoid this need for explicit coding to modify those systems to publish descriptions of themselves to the "thesaurus and an almanac"?


I think there should be a version of nnn for REST APIs: expose the REST API as a filesystem, and then extending the nnn util to handle .json files would do it. However, I found that FUSE is not very performant, and Linus Torvalds famously said that FUSE filesystems are little more than toys...
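Setting the FUSE layer aside, the core mapping such a tool would need can be sketched without any filesystem driver at all (the endpoints and bodies below are invented): API paths become directories, and a GET body becomes the contents of a .json file.

```python
import json

def to_tree(endpoints):
    """Map REST paths like /users/1 to a nested dict acting as a file tree.

    Each path segment becomes a directory; the leaf becomes a .json file
    whose contents are the serialized response body.
    """
    tree = {}
    for path, body in endpoints.items():
        node = tree
        parts = path.strip("/").split("/")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1] + ".json"] = json.dumps(body)
    return tree

tree = to_tree({"/users/1": {"name": "alice"}, "/users/2": {"name": "bob"}})
print(sorted(tree["users"]))  # ['1.json', '2.json']
```

A real implementation would plug this mapping into a FUSE handler (e.g. via fusepy) and fetch bodies lazily on read, but the directory-building logic stays the same.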