Climate Change Combat and Disaster Management with re.photos, the Web Portal for Rephotography
(2022)
Comparing two or more images taken on different dates from the same vantage point helps rescuers, researchers, and politicians assess natural disasters and climate change. Rephotography, i.e., shooting and comparing two or more such images, can reveal fast changes in surroundings, e.g., before and after a tsunami, earthquake, or other environmental disaster, as well as slow changes like glacial movements. Retrieving these rephotographs is difficult since images from different shooting dates are usually not found in a single source, lack georeferences and metadata such as precise recording times, and carry different or no licensing information. Rephotography is therefore time intensive, costly, and not easy for rescuers to use. To overcome these drawbacks and provide rephotography after disasters, our web portal re.photos addresses the difficulties of automatic image registration under large scene changes, as occur after, e.g., earthquakes. Once the images are registered, georeferenced, and stored in a database, our web portal presents them in an easy-to-use interface. This database of compilations can be queried via metadata search. Rephotographs of two or more images are visualized as a table or on an interactive map. We provide custom interactive registration methods to register complex compilations with only a few fixed corresponding landmarks in the before and after images. With these interaction methods, rephotography becomes valuable for disaster management, e.g., by registering images of flooded or destroyed areas within minutes. re.photos allows its users to retrieve existing compilations, create template images that colleagues or citizen scientists can rephotograph, and register, georeference, and persistently publish their rephotographic compilations.
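As a rough illustration of the landmark-based registration described above, the following Python sketch estimates a homography from a handful of user-picked corresponding points and warps the contemporary image onto the historical one. It uses OpenCV and is not the portal's actual implementation; the file names and coordinates are hypothetical.

```python
# Minimal sketch of landmark-based image registration: a few user-picked
# corresponding points define a homography that warps the contemporary
# image into the historical image's coordinate frame.
# Illustration only; file names and coordinates are placeholders.
import cv2
import numpy as np

historical = cv2.imread("historical.jpg")
contemporary = cv2.imread("contemporary.jpg")

# Four or more corresponding landmarks (x, y), e.g. building corners that
# survived the scene change. The coordinates here are hypothetical.
pts_contemporary = np.array([[120, 85], [940, 60], [900, 700], [150, 720]], dtype=np.float32)
pts_historical = np.array([[100, 90], [920, 75], [880, 690], [130, 700]], dtype=np.float32)

# Estimate the homography from the fixed correspondences and warp the
# contemporary image onto the historical one.
H, _ = cv2.findHomography(pts_contemporary, pts_historical)
h, w = historical.shape[:2]
registered = cv2.warpPerspective(contemporary, H, (w, h))
cv2.imwrite("registered.jpg", registered)
```

A single homography is only a reasonable model when the scene is roughly planar or the two vantage points nearly coincide, which matches the rephotography setting of reshooting from the same spot.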
This dataset provides over 200 georeferenced, registered rephotographic compilations of the Faroe Islands. The position of each compilation is georeferenced and thus locatable on a map. Each compilation consists of a historical image and a corresponding contemporary image showing the same scene. Using stable object features, the two images of each geolocation are aligned with pixel accuracy. All contemporary images were photographed by A. Schaffland in the summer of 2022, while the historical images were retrieved from the collections of the National Museum of Denmark.
The images show Faroese landscape and cultural heritage sites, focusing on areas that were relevant when the historical images were taken, e.g., Kirkjubøur, Tórshavn, and Saksun. The historical images date from the end of the 19th century to the middle of the 20th century and were taken by scientists, surveyors, archaeologists, and painters.
All historical images are in the public domain, have no known rights, or are shared under a CC license. The contemporary images by A. Schaffland are released under CC BY-NC-SA 4.0.
The dataset is organized as a GIS project. Historical images that were not already georeferenced were georeferenced using street-view services. All historical images were added to the GIS database, which stores camera position, viewing direction, etc. Each compilation can be displayed on a map as an arrow from the camera position along the viewing direction. Contemporary images were registered to the historical images using a specialized tool. For some historical images, no rephotograph, or only a suboptimal one, could be taken; these historical images were nevertheless added to the database together with all other original images, providing additional data for improvements in rephotography methods in the coming years.
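The arrow display mentioned above can be derived from the stored camera position and viewing direction. Below is a minimal Python sketch using the standard spherical destination-point formula; the 50 m arrow length and the example coordinates are assumptions for illustration, not values from the dataset.

```python
# Minimal sketch: compute the endpoint of a map arrow that starts at the
# camera position and points along the viewing direction (bearing from
# north). Uses the standard destination-point formula on a spherical Earth.
import math

EARTH_RADIUS_M = 6_371_000.0

def arrow_endpoint(lat_deg, lon_deg, bearing_deg, length_m=50.0):
    """Return the (lat, lon) reached by travelling length_m from the
    camera position along the viewing direction."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = length_m / EARTH_RADIUS_M  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# Hypothetical example: a camera near Kirkjubøur looking roughly north-west.
print(arrow_endpoint(61.955, -6.795, bearing_deg=315.0))
```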
The resulting image pairs can be used in research on image registration, landscape change, urban development, and cultural heritage. Further, the database can serve public engagement with heritage and act as a benchmark for future rephotography and time-series projects.
To turn city roads into enjoyable and relaxing places with parks, trees, and seating, a paradigm change in everyone's commuting behavior is needed. Still, individual transport by car is increasing, and thus the space required for parking and driving these cars shapes our cities, not the people. Beyond the space needed, vehicles pollute the environment with CO2 and diesel particulates, and even electric cars contribute tire abrasion. Alternative modes of locomotion, like public transportation and shared mobility, are still not attractive to many people. Intelligent intermodal mobility networks can help address these challenges by allowing efficient switching between various transportation modalities. Such mobility networks require good databases and simulation, combined into digital twins. This paper presents how such a digital twin can be created in the Simulation of Urban Mobility (SUMO) software using data from available and future city sensors. The digital twin aims to simulate, analyze, and evaluate the behaviors of and interactions between traffic participants when commuting incentives change. Using the city of Osnabrück and its different available sensor types, data availability is compared with other towns to discuss how data density can be improved. Creating a static network from OpenStreetMap data and intersection side maps provided by the city of Osnabrück shows how these data can be integrated into SUMO for generating traffic flows and routes based on a database of historical and live data. In the conclusion, the paper discusses how the development of a digital twin in SUMO from static and dynamic data can be improved in the future and what common misconceptions need to be overcome.
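For a concrete sense of how such a digital twin is driven, the following minimal sketch controls a SUMO run from Python via TraCI, SUMO's official control API. It assumes a network has already been built from OpenStreetMap data (e.g., with netconvert) and wrapped in a .sumocfg configuration; the file names are hypothetical, and hooking in live sensor data would replace the simple read-out shown.

```python
# Minimal sketch of stepping a SUMO simulation via TraCI and reading back
# simulated state, e.g. to compare against real sensor measurements.
# Assumes "osnabrueck.sumocfg" references a network built beforehand, e.g.:
#   netconvert --osm-files osnabrueck.osm.xml -o osnabrueck.net.xml
import traci

traci.start(["sumo", "-c", "osnabrueck.sumocfg"])  # use "sumo-gui" for a GUI run
step = 0
while traci.simulation.getMinExpectedNumber() > 0 and step < 3600:
    traci.simulationStep()  # advance the simulation by one step (1 s by default)
    for veh_id in traci.vehicle.getIDList():
        speed = traci.vehicle.getSpeed(veh_id)  # m/s, per simulated vehicle
    step += 1
traci.close()
```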
Förderung der KI-Kompetenz bei Studierenden ohne Vorkenntnisse in Informatik und Programmierung
(2024)
The Mechanical Neural Network (MNN) is a physical model of an Artificial Neural Network (ANN) that makes the components and functions of an ANN physically tangible. With the MNN, school and university students can easily grasp one of the foundations of Artificial Intelligence (AI) without depending on programming skills or computers. Studies found that both objective and subjective learning success, as well as satisfaction, are higher with the MNN than with conventional methods. The MNN follows a bottom-up approach in which the basic building blocks, such as the neurons, their connections, and their assembly into a complete network, are explained. Building on this, logical problems can be learned and solved with the MNN. This work shows how the bridge to real ANNs can be built by solving a classification problem, the recognition of dog breeds, first with the MNN and then with an ANN. While the MNN operates on features such as head shape and size, the ANN uses images. This enables school and university students to bridge the gap between the basic building blocks of AI, which are explained with the MNN, and real problems that are solved with AI today, such as image classification.
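To make the described bridge concrete, the following minimal sketch expresses the building blocks the MNN makes tangible (weighted connections, a neuron, an activation) as a single trainable neuron operating on two features such as head shape and size. The breeds, feature encoding, and training setup are invented for illustration and are neither the MNN nor the paper's ANN.

```python
# Minimal sketch: one neuron (weighted sum + sigmoid activation) trained by
# gradient descent to separate two hypothetical dog breeds from two features.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Features per dog: [head_shape (0 = round, 1 = elongated), shoulder height in m]
X = np.array([[0.0, 0.25], [0.1, 0.30], [1.0, 0.80], [0.9, 0.75]])
y = np.array([0, 0, 1, 1])  # 0 = hypothetical small breed, 1 = hypothetical large breed

rng = np.random.default_rng(0)
w = rng.normal(size=2)  # one weight per input connection
b = 0.0                 # bias of the single output neuron

# Train with plain batch gradient descent on squared error.
for _ in range(2000):
    out = sigmoid(X @ w + b)              # neuron: weighted sum + activation
    grad = (out - y) * out * (1.0 - out)  # error signal per sample
    w -= 0.5 * X.T @ grad / len(X)
    b -= 0.5 * grad.mean()

print(np.round(sigmoid(X @ w + b)))  # should recover the labels [0 0 1 1]
```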
Embarking on the journey of rephotography, capturing a contemporary image from the vantage point of a historical counterpart and registering the two, is a formidable challenge. Traditional automated registration methods stumble in the face of this task, while manual methods, reliant on painstakingly identified corresponding points, demand an investment of time, precision, and expertise. Often, only image fragments can be seamlessly registered due to changes in the scene, like new and removed buildings. Determining the areas of interest (AOI) for registration becomes a critical decision, placing users in the role of curators of the process. This work proposes a new method combining state-of-the-art automatic deep learning-based registration methods with user-provided masks. Users draw masks around the AOI they want to register and thereby exclude unintended AOI from registration. Using AOI masks reduces the time, the painstaking identification of corresponding points, and the knowledge needed for manual registration, while giving users control over the registration process through an intuitive way to indicate which AOI are vital to register. This interactive method achieves excellent registration quality and positive user feedback compared to regular automated image registration methods. It cannot replace manual registration completely; however, for many rephotography tasks, it significantly reduces the required effort. The deep learning-based automatic method already achieves a high acceptance rate (i.e., a score of at least 4 out of 5) of 55%, a considerable improvement over the standard automatic registration method's acceptance rate of 12%. With the interactive AOI-mask method, which combines user-drawn masks with the automatic deep learning-based method, the acceptance rate increases to 60%, almost as good as manual registration at 65%.
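A minimal sketch of the mask idea follows: restrict feature detection to the user-drawn AOI before fitting a RANSAC homography. Classical ORB features stand in for the paper's deep learning-based matcher so the sketch stays self-contained; the file names are placeholders.

```python
# Minimal sketch of AOI-masked registration: keypoints are detected only
# inside the user-drawn mask, then a RANSAC homography is fitted on the
# surviving matches. ORB replaces the paper's deep learning-based matcher.
import cv2
import numpy as np

historical = cv2.imread("historical.jpg", cv2.IMREAD_GRAYSCALE)
contemporary = cv2.imread("contemporary.jpg", cv2.IMREAD_GRAYSCALE)
# Binary mask: 255 inside the user-drawn AOI, 0 elsewhere (e.g. painted in a UI).
mask = cv2.imread("aoi_mask.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
# Passing the mask restricts keypoints to the AOI of the historical image.
kp_hist, des_hist = orb.detectAndCompute(historical, mask)
kp_cont, des_cont = orb.detectAndCompute(contemporary, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_hist, des_cont), key=lambda m: m.distance)

src = np.float32([kp_cont[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_hist[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

h, w = historical.shape[:2]
registered = cv2.warpPerspective(cv2.imread("contemporary.jpg"), H, (w, h))
cv2.imwrite("registered_aoi.jpg", registered)
```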