Microsoft Sketch2Code AI turns sketches into HTML pages
Posted on Thursday, Aug 30 2018 @ 10:17 CEST by Thomas De Maesschalck
Microsoft teases Sketch2Code, a new artificial intelligence-driven tool that makes it easier to design websites. Basically, you draw the design of the site on a whiteboard and capture a photograph, and the tool's deep-learning algorithm, which runs in the Azure cloud, turns your sketch into a working HTML wireframe. Interesting stuff.
The Solution: Within Microsoft Cognitive Services we host the Computer Vision Service. The model behind this service has been trained with millions of images and enables object detection for a wide range of object types. In this case, we need to build a custom model and train it with images of hand-drawn design elements like a textbox, button, or combobox. The Custom Vision Service gives us the capability to train custom models and perform object detection with them. Once we can identify HTML objects, we use the text recognition functionality in the Computer Vision Service to extract the handwritten text present in the design. By combining these two pieces of information, we can generate the HTML snippets for the different elements in the design. We can then infer the layout of the design from the positions of the identified elements and generate the final HTML code accordingly.
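The final stage of the pipeline described in the solution above, combining detected elements with recognized text and inferring layout from element positions, can be sketched roughly as follows. This is a minimal illustration, not Microsoft's actual code: the `Box` structure, the element labels, and the row-grouping tolerance are all assumptions standing in for the output of the Custom Vision and Computer Vision services.

```python
# Hypothetical sketch of Sketch2Code's final stage: turn detected design
# elements (bounding boxes + labels from a custom object-detection model,
# with handwritten text already matched to each box) into an HTML wireframe.
from dataclasses import dataclass

@dataclass
class Box:
    label: str      # e.g. "textbox", "button", "label" (assumed label set)
    x: float        # left edge of the detected bounding box
    y: float        # top edge
    w: float
    h: float
    text: str = ""  # handwritten text matched to this element

def to_html(el: Box) -> str:
    """Map a detected element type to an HTML snippet."""
    if el.label == "button":
        return f"<button>{el.text}</button>"
    if el.label == "textbox":
        return f'<input type="text" placeholder="{el.text}">'
    return f"<span>{el.text}</span>"

def layout(elements: list[Box], row_tolerance: float = 20.0) -> str:
    """Infer layout: group elements whose top edges are within
    row_tolerance pixels into one row, then order rows top-to-bottom
    and elements within a row left-to-right."""
    rows: list[list[Box]] = []
    for el in sorted(elements, key=lambda e: e.y):
        if rows and abs(el.y - rows[-1][0].y) <= row_tolerance:
            rows[-1].append(el)
        else:
            rows.append([el])
    html = []
    for row in rows:
        cells = "".join(to_html(e) for e in sorted(row, key=lambda e: e.x))
        html.append(f"<div>{cells}</div>")
    return "\n".join(html)

# Example detections from a hand-drawn sign-up form (illustrative values).
detected = [
    Box("button", x=200, y=105, w=80, h=30, text="Send"),
    Box("textbox", x=10, y=100, w=150, h=30, text="Email"),
    Box("label", x=10, y=10, w=100, h=20, text="Sign up"),
]
print(layout(detected))
```

The row-grouping heuristic here is the simplest plausible choice; a production system would likely use a more robust grid-inference step, but the principle is the same: element positions drive the generated markup.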