IMO the dashboards you have shown provide some insight into the file itself, but I don't see how they can give insight into model and file performance. There are a lot of other file checks/audit components that need to be undertaken to fix performance issues. Attributes are only a small piece of the puzzle (at least they are for me, because of how strict I am with them).
The areas that need checking are unused section/elevation/detail markers, unplaced views, number of modules, library management, elements still in the file that are no longer needed (old options), etc. These items have a bigger impact on file performance than attributes and properties in most cases. I believe some of these can already be accessed through Python?
What part of the audit process are you looking to automate? The modelling and information validation component or the performance part?
Both as we need files to be of high standard for IFC coordination and embodied carbon analysis and files need to perform well.
It is not a lack of knowledge either. At least I think it's not. We have a team of 4 BIM managers, 3 of whom previously worked with Central Innovation (GS distributor here in Australia).
I understand being strict helps, but we can't be beating staff with a stick for every mistake. Realistically it is not possible to have every team member highly skilled in Archicad; there will always be new starters and graduates who make mistakes, as we all do. But it should not be possible for someone to inadvertently introduce a problem into the file that is very difficult to detect and costs the entire team in lost productivity.
With that extra detail I can narrow it down to only a couple of practices you could be working with. For the IFC checking, I would use Solibri with custom rulesets to make sure the file aligns with your client's deliverables.
Sadly you need flexibility in Archicad for most users. With the project in Teamwork you could set up roles with restrictions that block some of the tasks you don't want people to perform. That isn't my preferred approach, and I don't use a stick approach either; I would invest in more training on the areas where you are getting the main issues, so people understand the problems they are creating.
In my office we audit models based on our standards and best practice as advised by the Archicad staff. We check things like:
- Layer numbering conventions (visual check)
- Correct use of pen sets (visual check)
- Elements on correct layer (schedule check)
- Elements on correct renovation filter (schedule check)
- Correct use of layer combinations (visual check)
- Correct use of model view combinations (visual check)
- Correct use of Graphic Overrides (visual check)
- Renovation filter management (visual check), to ensure renovation filters have been shared to other disciplines
- All elements have been classified (schedule check)
- All drawing sheet criteria have been met (north arrow, key plan, legend etc.)
- Project north has been set (visual check)
- Project information has been set (visual check)
- Standard library parts in use (visual check)
We also run BIM specific audits too that check for specific requirements of projects.
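A manual checklist like this also lends itself to simple tracking. Below is a minimal, self-contained sketch in plain Python (no Archicad API involved): the check names mirror a few items from the list above, and the pass/fail results are made-up sample data, just to show the shape of an audit summary.

```python
# Sketch of a manual-audit tracker. Check names mirror the list above;
# the results dictionary is hypothetical sample data, not read from Archicad.

CHECKS = [
    ("Layer numbering conventions", "visual"),
    ("Elements on correct layer", "schedule"),
    ("Elements on correct renovation filter", "schedule"),
    ("All elements classified", "schedule"),
    ("Project north set", "visual"),
]

def summarise(results: dict) -> tuple:
    """Return (passed_count, failed_count, names_of_failed_checks)."""
    failed = [name for (name, _kind) in CHECKS if not results.get(name, False)]
    return len(CHECKS) - len(failed), len(failed), failed

# Example run with made-up results:
sample = {name: True for (name, _kind) in CHECKS}
sample["Project north set"] = False
passed, failed, names = summarise(sample)
print(f"{passed} passed, {failed} failed: {names}")
```

Keeping the visual/schedule tag on each check makes it easy to split the report into "needs a person to look" vs "could be automated later".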
One thing that I feel is the main cause of projects running or behaving badly in my office is the use of the embedded library. As a whole we make sure that our libraries are linked in through BIMcloud, but things like custom objects, foreign objects & even IFC data can clutter the embedded library. I make sure that this folder is as small as possible! Graphisoft have recommended keeping it under 100 MB at all times, but we keep it under 10 MB by using project-specific yellow folders; these are also uploaded to BIMcloud and referenced in models that way.
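For libraries that live on disk as folders (the project-specific "yellow folders" mentioned above), the size targets are easy to check automatically. This is only a sketch using the standard library: the folder path is whatever you point it at, and the 100 MB / 10 MB thresholds come from the post. The embedded library itself sits inside the project file, so this does not inspect it directly.

```python
from pathlib import Path

def folder_size_mb(folder: Path) -> float:
    """Total size of all files under a folder, in MB."""
    return sum(f.stat().st_size for f in folder.rglob("*") if f.is_file()) / 1_000_000

def check_library(folder: Path, target_mb: float = 10, limit_mb: float = 100) -> str:
    """Compare a linked library folder against the in-house target (10 MB)
    and Graphisoft's recommended ceiling (100 MB)."""
    size = folder_size_mb(folder)
    if size > limit_mb:
        return f"{size:.1f} MB - over the recommended {limit_mb:.0f} MB limit"
    if size > target_mb:
        return f"{size:.1f} MB - over the in-house {target_mb:.0f} MB target"
    return f"{size:.1f} MB - OK"
```

Run it against each linked library folder before upload, e.g. `print(check_library(Path("Project Library")))` (folder name hypothetical).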
I also recommend running an audit & fix on your models, or saving a local copy & re-uploading it to BIMcloud; this dramatically reduces the file size.
Drawing Manager is another good place to check your drawings & speed up your models; my understanding is that anything in red is still being looked for by the software, which means slower load-up times etc.
In Library Manager you can see missing or duplicate library parts; these also clutter models and increase processing & load-up times.
Finally, in Options I believe the default autosave is after "each step", which to me is unnecessary and slows working; I changed it to autosave every 5 minutes.
We use a schedule to check proper classification, which in turn also tackles any layer issues: the layer should match the first 2 digits of our 4-digit classification system.
I find that applying some sort of filter to such a schedule, for example per layer, speeds things up a lot. Generating one big schedule takes a long time and it is easy to lose track of how far you have got.
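The classification-to-layer rule above can be sketched as a small script. The element records here are made-up sample data with hypothetical field names, standing in for whatever a schedule export would give you; the only logic is the rule itself, that the layer name starts with the first two digits of the 4-digit classification code.

```python
# Sketch of the schedule check described above. Element records and their
# field names ("guid", "layer", "classification") are hypothetical sample data.

def layer_matches_classification(layer: str, classification: str) -> bool:
    """True if the layer's leading two digits match the classification's."""
    return layer[:2] == classification[:2]

def find_mismatches(elements: list) -> list:
    return [e for e in elements
            if not layer_matches_classification(e["layer"], e["classification"])]

elements = [
    {"guid": "A1", "layer": "21 Walls", "classification": "2110"},
    {"guid": "B2", "layer": "23 Slabs", "classification": "2310"},
    {"guid": "C3", "layer": "21 Walls", "classification": "2410"},  # wrong layer
]
print(find_mismatches(elements))
```

Filtering the export per layer first, as suggested above, keeps each run small and makes it obvious where you got up to.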
I model pretty much every project (we are a small firm), so I'm mostly correcting my own mistakes. I'm pretty obsessed with neat modelling, so I generally don't check for modelling errors, since I would have noticed and corrected them long before.