Removing the resolution limit when importing images from the media library

Hi, I have a custom build working and am trying to tweak the resolution limit for importing images into the sketch (More Options... -> Labs -> Media Library -> Local Images). As a test I tried NOPing ValidateDimensions in ImageUtils.cs and then doing a clean build, but that doesn't seem to have any effect; I still can't load a higher-resolution image. Any suggestions?
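For reference, the kind of guard I mean when I say "NOPing" looks roughly like this. This is a sketch from memory, not the real code; the actual ValidateDimensions in ImageUtils.cs may have a different signature and limits.

```csharp
// Sketch from memory -- the real ValidateDimensions in ImageUtils.cs may have a
// different signature and different limits.
public static class ImageUtilsSketch
{
    public static bool ValidateDimensions(int width, int height, int maxDimension)
    {
        // Original intent (roughly): reject images larger than a per-platform limit.
        // return width <= maxDimension && height <= maxDimension;

        // "NOPed" version under test: accept everything.
        return true;
    }
}
```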
andybak · 2y ago
I'm not sure ValidateDimensions is even on the execution path for loading local images. What image dimensions are you trying to load?
CodeJingle (OP) · 2y ago
This is a programming exercise. It will defeat the purpose to give you exact values. It would be equally valid to ask where this code is located for the purpose of further restricting the dimensions. I'm trying to make a change to the code and see it reflected properly in the updated build. If you want to help, please don't lead the conversation down a path that ends with me making no changes. I've already installed Unity, pulled the git repo, and I'm building the app, loading it onto the headset, and running it there. Clearly I've put significant effort into this, and I have a clear vision. If you assume I know what I'm doing, I just need some guidance on where the code is that actually executes when loading a local image.
andybak · 2y ago
It will defeat the purpose to give you exact values.
Yes, but I'm trying to understand your goal. There's always a chance the answer is "don't do that, do something entirely different". I never like to answer a question without knowing the context. The other issue is that I don't know the answer without tracing through the code myself, which is why I want a bit more info up front.
CodeJingle (OP) · 2y ago
My two goals are tweaking the resolution limit and mitigating the issue of billboarded images not being visible from the other side. I'm doing some engineering work in VR and need images with slightly higher resolution; the images would be, for example, a circuit layout. If you don't know the execution path, then of course I don't want you to debug and figure it out on my behalf; that would be unfair.
andybak · 2y ago
In that case, it's not a part of the codebase I'm particularly familiar with from memory, and I'm not sure who else would be. I can suggest how I'd go about finding out, if that's any help.
CodeJingle (OP) · 2y ago
I'm assuming the Quest Pro would handle higher-res images better, so it makes sense to at least try.
andybak · 2y ago
If you suspect it's a Quest-specific limit, then you could try searching the code for the conditionals that determine desktop vs mobile functionality. There are some compiler flags, but I think also some runtime flags.
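Something like this is the pattern I'd grep for. This is a sketch assuming standard Unity conventions; the actual flags and values used in the codebase may differ.

```csharp
// Two patterns to search for, assuming standard Unity conventions; the exact
// flags and values used by the Open Brush codebase may differ.
using UnityEngine;

public static class PlatformLimitExample
{
    public static int MaxImageDimension()
    {
#if UNITY_ANDROID
        // Compile-time branch: baked in when the Quest APK is built.
        return 2048;   // illustrative value only
#else
        return 8192;   // illustrative value only
#endif
    }

    public static bool IsMobileAtRuntime()
    {
        // Runtime branch: decided on the device rather than at build time.
        return Application.isMobilePlatform;
    }
}
```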
CodeJingle (OP) · 2y ago
This is what I'm currently limited to. Not bad, but it can do better. I'm also thinking there is a size limit on local image files, not just a resolution limit.
CodeJingle (OP) · 2y ago
YouTube: "Reverse Engineering in VR" (TeamRabidDog)
After already 'extracting' the layers of a previously unknown PCB, I connect VIAs between the 4 PCB layers (in VR - Meta Quest Pro). Luckily there are no hidden or buried VIAs. The layers don't quite line up but you get the idea. This is practice. OpenBrush beta 1.0.206. I built the OpenBrush APK from source with Unity.
dwillington · 2y ago
This is dope! Are those "2D" layers actually images? With transparency? I've been playing with the API. One of my thoughts was to programmatically reconstruct circuit boards, TRON-like. You would get more linear results than the "drawn" effect shown in the videos. https://docs.openbrush.app/user-guide/open-brush-api#how-do-i-configure-it Also, was this video created from within the Meta Quest Pro? When I create a video from Quest 2, it does not capture my surroundings in Passthrough. I think I read that it has to do with the OS not making that available to the app layer.
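For the programmatic side, it boils down to sending HTTP requests to the listener the app exposes. A rough sketch follows; the port (40074), the "draw.paths" command, and the payload format are from memory of the linked docs and may be wrong, so verify against docs.openbrush.app before relying on them.

```csharp
// Rough sketch of driving the Open Brush API over HTTP. The port (40074) and the
// "draw.paths" command are assumptions from memory of the linked docs -- verify
// against docs.openbrush.app before relying on them.
using System.Net.Http;
using System.Threading.Tasks;

public static class OpenBrushApiSketch
{
    static readonly HttpClient Client = new HttpClient();

    public static async Task DrawSquareAsync()
    {
        // One closed path of points; the exact payload format is documented
        // on the page linked above.
        string path = "[[[0,0,2],[1,0,2],[1,1,2],[0,1,2],[0,0,2]]]";
        string url = "http://localhost:40074/api/v1?draw.paths=" + path;
        await Client.GetStringAsync(url);
    }
}
```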
CodeJingle (OP) · 2y ago
I drew the traces by hand ('traced' the traces lol) in Concepts on iPad and exported those layers as PNG. Luckily both Concepts and OpenBrush support PNG transparency. https://concepts.app/en/ I sort of had to, since I sanded down the board by hand to expose the traces of the various [board] layers. The schematic and layout aren't publicly available.

I recorded the video from within the headset, from the OS. I don't think it has anything to do with the app layer. I was using a Meta Quest Pro but it also works on Quest 2. Check out the OpenBrush YouTube channel for examples. https://www.meta.com/help/quest/articles/in-vr-experiences/social-features-and-sharing/record-video-oculus/ https://youtu.be/Gluzf8cVYDI https://youtu.be/0Z_hLo0hi1w

You are right that from a developer perspective in-app access to the 'pixels' of the passthrough is limited. https://developer.oculus.com/blog/mixed-reality-with-passthrough/ "We built Passthrough API with privacy in mind. Apps that use Passthrough API cannot access, view, or store images or videos of your physical environment from the Oculus Quest 2 sensors. This means raw images from device sensors are processed on-device."
dwillington · 2y ago
It's a super creative use of OB, although I don't think I'd have the patience to manually draw for over 30 minutes in the headset. My exploration has been about programmatic drawing. If it's all right, I'll share some videos here; they may give you a taste of what's possible. Sounds like you have the programming background too... https://youtu.be/xDwgJhi0OIc https://youtu.be/p-jq2CGKlVk https://youtu.be/oz3E9NxLUo0 Unfortunately, I can't get OB working on the Meta Quest Pro. I have it on my Quest 2, but Passthrough is in black and white, meh. I've tried everything that was mentioned above; it seemed pretty straightforward...
CodeJingle (OP) · 2y ago
Nice videos. Yes, my day job is software engineering. The original Tilt Brush code that OpenBrush is based on was made by Google employees, some of the world's smartest software engineers. Every brush in OpenBrush is the developer equivalent of an automation or programmatic tool; you want to extend it a little further and add another layer of complexity. Perhaps think of it that way. I think your exploration is great, just try to keep in mind that the tools you make will most likely be used in combination with tedious manual work (aka 'art'). It took me 8 hours just to sand the circuit board down after removing all the components. With sandpaper. For 8 hours. And this was a small board. Those 8 hours were just step 1. Spending 30 minutes in VR to draw wires between VIAs is like blinking.
andybak · 2y ago
How did you get on with raising the size limit on imported images?
CodeJingle (OP) · 2y ago
Well, I realized all of the images were PNG instead of a mix of PNG and JPG; for certain images this caused artificial bloat of the file size. I'm going to re-export and try again. I still want to confirm the execution path that enforces constraints on locally loaded images, but that's a lower priority now. The next priority feature is 'double-sided' images: after importing an image, looking 'behind' it should continue showing the image, rather than culling the geometry and showing nothing. The image doesn't follow your gaze like old-school billboarding; it should feel like you're looking at the image from two different sides. I'm thinking a poor man's solution would load the same image twice, facing away from each other, with the two images linked/grouped so that if you move either one they stay together (one of the images would have its pixels flipped).
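Roughly what I have in mind, as a sketch using only stock Unity calls; nothing here is Open Brush-specific and all the names are mine.

```csharp
// Sketch of the "poor man's" double-sided image: duplicate the imported quad,
// turn the copy around, and group both under one parent so they move together.
// Plain Unity API only; no Open Brush classes assumed.
using UnityEngine;

public static class DoubleSidedImage
{
    public static GameObject MakeDoubleSided(GameObject imageQuad)
    {
        var group = new GameObject(imageQuad.name + "_DoubleSided");
        group.transform.SetPositionAndRotation(
            imageQuad.transform.position, imageQuad.transform.rotation);

        // Back copy: rotated 180 degrees so its front face points the other way.
        // With back-face culling still on, this is the face you see from behind.
        var back = Object.Instantiate(imageQuad, group.transform);
        back.transform.localPosition = Vector3.zero;
        back.transform.localRotation = Quaternion.Euler(0f, 180f, 0f);

        // The "pixels flipped" part: mirror the copy's texture via UV tiling so the
        // back reads like the reverse side of one sheet, not a second copy.
        var rend = back.GetComponent<Renderer>();
        if (rend != null)
        {
            rend.material.mainTextureScale = new Vector2(-1f, 1f);
            rend.material.mainTextureOffset = new Vector2(1f, 0f);
        }

        // Re-parent the original so moving the group moves both faces together.
        imageQuad.transform.SetParent(group.transform, worldPositionStays: true);
        return group;
    }
}
```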
andybak · 2y ago
The next priority feature is 'double-sided' images: after importing an image, looking 'behind' it should continue showing the image, rather than culling the geometry and showing nothing.
"Cull Off" in Assets/Shaders/ReferenceImage.shader ?
CodeJingle (OP) · 2y ago
That's just what it's called in generic computer graphics terms. You're "culling" geometry when you decide not to draw it because it's outside the view frustum, or within view but facing the wrong way. In this case specifically it's "back-face culling". If you're finding 'cull' anywhere in the graphics code, that's a great place for me to start looking.
andybak · 2y ago
Exactly where I suggested! Add "Cull Off" at line 27, after the Lighting directive.
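For orientation, this is roughly where a Cull Off directive sits in a ShaderLab file. It's an illustrative skeleton only, not the actual contents of ReferenceImage.shader, and line 27 in the real file may look different.

```
// Illustrative ShaderLab skeleton only -- NOT the actual contents of
// ReferenceImage.shader.
Shader "Hypothetical/DoubleSidedUnlit"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Lighting Off
        Cull Off            // draw both faces instead of back-face culling the quad
        Blend SrcAlpha OneMinusSrcAlpha
        Pass
        {
            SetTexture [_MainTex] { combine texture }
        }
    }
}
```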
CodeJingle (OP) · 2y ago
Sure I'll give it a try this weekend.
andybak · 2y ago
@CodeJingle I've just stumbled across the location of the image size setting, if you're still looking. It's a ScriptableObject asset: PlatformConfigMobile and PlatformConfigPC in the root Assets directory. Going to mark this as "solved" as it's not a bug. Feel free to post in #feature-suggestions, although we are aware of the general need to update limits originally set for Quest 1.
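Roughly speaking, those assets are instances of a ScriptableObject that carries per-platform limits. A hypothetical sketch of the shape; the real class behind PlatformConfigMobile / PlatformConfigPC and its field names may differ.

```csharp
// Hypothetical sketch of the shape of such a config asset -- the real class and
// field names behind PlatformConfigMobile / PlatformConfigPC may differ.
using UnityEngine;

[CreateAssetMenu(fileName = "PlatformConfig", menuName = "Config/PlatformConfig")]
public class ExamplePlatformConfig : ScriptableObject
{
    // Raising a value like this in the Mobile asset (and rebuilding the APK) would
    // be the intended way to lift a per-platform limit, rather than patching code.
    public int MaxImageDimension = 4096;     // hypothetical field name and value
    public int MaxImageFileSizeMB = 10;      // hypothetical field name and value
}
```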