Having used MidJourney since V3, I've gained experience with AI image generators. Selecting Stable Diffusion for this work might not have been the best choice; MidJourney excels at architectural rendering. Are there plans for a MidJourney renderer?
While I understand the application is experimental, my initial impression is that it was released prematurely. Internal control over the model geometry input is crucial for usefulness, and it is currently missing.
1. The engine should process each Material assigned to elements in the model. This would help avoid random, unwanted architectural features outside the design scope.
2. For this application to be worthwhile, there should be a mode in which the ArchiCAD model is preserved and only the materials and the empty space around the model are processed by the AI.
3. Per-material locks are needed, so that specific materials that look odd can be refined or regenerated individually.
4. The AI should use the assigned texture maps, analyzing their scale, location, UV mapping, and reflectance values for better results.
5. The AI should receive the project's location so that sun orientation and time of day are accurate.
6. It would be nice to use a 3D envelope made from a Mesh or a Morph to constrain features from being generated outside the zoning envelope: a sort of 3D mask.
7. A Project Gallery. Images should be saved in the project along with their links and prompts, so that old images can be recalled without manual reloading. Generating a gallery website for sharing with clients would also be welcome.
8. Compatibility with other LoRAs and third-party models, such as Freedom.Redmond, should be allowed.
9. Perhaps the way forward is to embed features like those of ControlNet: a node-based editor giving individual control over elements within the model and the environment is essential.
10. Graphisoft may want to rethink the GDL world in favor of an AI-generated-objects paradigm, where 3D objects are created from a prompt containing a product model name/number. All properties could be extracted from the product literature online and embedded in the selected object, making schedule creation easier.
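To illustrate what I mean by per-material locks (points 2 and 3): if the renderer could export a material-ID pass, where every surface is drawn in a unique flat color, building a regeneration mask per material is trivial. A rough Python/NumPy sketch; the ID colors and function name are my own invention, not anything ArchiCAD currently exposes:

```python
import numpy as np

def material_mask(id_pass: np.ndarray, material_rgb: tuple) -> np.ndarray:
    """Boolean mask selecting the pixels of one material in a material-ID pass.

    id_pass: (H, W, 3) uint8 image where each material is rendered in a
    unique flat color (a hypothetical export, not a shipping feature).
    """
    return np.all(id_pass == np.asarray(material_rgb, dtype=np.uint8), axis=-1)

# Toy 2x2 ID pass: brick on the left column, glass on the right.
BRICK, GLASS = (200, 60, 40), (40, 120, 200)  # hypothetical ID colors
ids = np.array([[BRICK, GLASS],
                [BRICK, GLASS]], dtype=np.uint8)

brick = material_mask(ids, BRICK)  # True exactly where brick pixels are
```

Such a mask could then be handed to a Stable Diffusion inpainting pipeline as the inpaint mask, regenerating only the selected material (or, inverted, locking it while everything else is repainted).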
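Point 5 is cheap to implement once latitude, date, and local solar time are known. A rough sketch using the standard declination/hour-angle approximation (good to about a degree; it ignores the equation of time and refraction, and the function name is mine):

```python
import math

def sun_elevation(latitude_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar elevation angle in degrees.

    Uses the common cosine approximation for solar declination and the
    hour angle measured from solar noon; ignores the equation of time.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees from solar noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec) +
        math.cos(lat) * math.cos(dec) * math.cos(ha)))

# Sanity check: at the equator on the March equinox (~day 81),
# the noon sun should be near the zenith (~90 degrees).
print(sun_elevation(0.0, 81, 12.0))
```

With azimuth derived the same way, the plugin could light the model consistently with the project's real site and time instead of an arbitrary AI-invented sun.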
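For the 3D mask in point 6, a simple case is a zoning envelope modeled as an extruded footprint. A minimal sketch of the containment test that would flag generated geometry outside the envelope (simple polygon, flat base at z = 0, function names my own):

```python
def inside_zoning_envelope(pt, footprint, max_height):
    """Is a 3D point inside an envelope built by extruding a footprint?

    pt: (x, y, z) point to test.
    footprint: list of (x, y) vertices of a simple polygon.
    Uses the standard ray-casting point-in-polygon test for the plan check.
    """
    x, y, z = pt
    if not (0.0 <= z <= max_height):
        return False
    inside = False
    n = len(footprint)
    for i in range(n):
        x1, y1 = footprint[i]
        x2, y2 = footprint[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point to +x.
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# 10 m x 10 m footprint, 30 m height limit.
SITE = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
ok = inside_zoning_envelope((5.0, 5.0, 10.0), SITE, 30.0)
```

A real zoning envelope would need setback planes and stepped height limits, but even this crude test could reject AI-added towers and wings that fall outside the buildable volume.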
Despite the current limitations and the sizable 18 GB disk space requirement, I appreciate the effort and look forward to future updates. I hope for a MidJourney-compatible product soon. In the near future, perhaps within five years, we might be able to extract floor plans from these images, apply code analysis, and develop Construction Documents from conceptual images, which is an exciting prospect.
It would be worth carefully studying how PromeAI uses Stable Diffusion in a logical, robust manner, including how it manages images. Would it be possible to work out a partnership with PromeAI and port their interface into ArchiCAD's AI interface?
Going forward, I will use that website instead of the built-in AI until the GS plugin is more mature. I still have high hopes.