Taking the transdisciplinary research study “Green fingers for a climate resilient city”, funded by the German Federal Ministry of Education and Research (BMBF), as an example, this paper follows the hypothesis that processes of landscape planning and designing multifunctional green spaces must be combined with processes of co-creation to stimulate climate-resilient city transformation. The findings show that efforts to combine these processes benefit from making complex climate-resilient city planning accessible to people of different professional backgrounds. The paper showcases how storytelling (Schmidt 2019), mapping (Langner 2009) and guided walks (Schultz 2019) serve as means to mutually engage with, perceive and understand multifunctional green spaces, inspire ownership, and build capacity for the city’s climate-resilient transformation.
Artificial intelligence (AI) and human-machine interaction (HMI) are two keywords that rarely appear together in embedded applications. Among the steps required before applying AI to a specific task, HMI is usually absent from AI architecture design and the training of an AI model. The human-in-the-loop concept is prevalent in all other steps of AI development, from data analysis via data selection and cleaning to performance evaluation. During AI architecture design, HMI can immediately highlight unproductive layers of the architecture, so that lightweight network architectures for embedded applications can be created easily. We show that with this HMI, users can instantly identify which AI architecture should be trained and evaluated first, since high accuracy on the task can be expected. This approach reduces the resources needed for AI development by avoiding the training and evaluation of AI architectures with unproductive layers, and it yields lightweight AI architectures. These lightweight architectures in turn enable HMI while the AI runs on an edge device. By enabling HMI during AI inference, we introduce the AI-in-the-loop concept, which combines the strengths of AI and humans. In our AI-in-the-loop approach, the AI remains the workhorse and primarily solves the task; if the AI is unsure whether its inference solves the task correctly, it asks the user through an appropriate HMI. Consequently, AI will soon become available in many more applications, since HMI makes AI more reliable and explainable.
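The deferral logic at the core of the AI-in-the-loop concept can be sketched as confidence-gated inference: the model answers on its own when it is confident, and hands off to the user through an HMI callback when it is not. This is a minimal illustration, not the authors' implementation; the confidence threshold, the softmax-based confidence measure, and the `ask_user` callback are assumptions introduced here for clarity.

```python
import numpy as np

# Assumed confidence cutoff -- the abstract does not specify how
# "unsure" is measured, so a softmax-probability threshold is used here.
CONFIDENCE_THRESHOLD = 0.8

def softmax(logits):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def ai_in_the_loop(logits, ask_user):
    """AI solves the task; if unsure, it defers to the human via HMI.

    `ask_user` is a hypothetical HMI callback: it receives the class
    probabilities and returns the label chosen by the user.
    Returns (label, source) where source is "ai" or "human".
    """
    probs = softmax(np.asarray(logits, dtype=float))
    pred = int(np.argmax(probs))
    if probs[pred] >= CONFIDENCE_THRESHOLD:
        return pred, "ai"              # confident: AI answers alone
    return ask_user(probs), "human"    # unsure: hand off through the HMI

# Usage with a mock HMI callback that a real user interface would replace.
label, source = ai_in_the_loop([4.0, 0.5, 0.2],
                               ask_user=lambda p: int(np.argmax(p)))
```

With the sharply peaked logits above, the AI answers alone; with near-uniform logits, the same call routes the decision to the user, which is how the approach keeps the human available exactly when the model's confidence is low.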