therealgooey
C#
Created by therealgooey on 3/1/2023 in #help
✅ Array Assignment Error
I'm iterating over a list of classes, and each class has an array property from which I want to remove matching elements. Originally I used .Where() for the comparison, called .ToList(), iterated over that, and called Array.Clear on the original array, but that just leaves null entries behind in the array. So I changed my approach: convert the array to a list and call RemoveAll. Now I want to set the original array equal to this List<T>.ToArray(), but when I try that, I get the error "The left-hand side of an assignment must be a variable, property or indexer".
foreach (Foo foo in fooBar ?? Enumerable.Empty<Foo>())
{
    List<Bar>? barList = foo?.bar.ToList();
    barList?.RemoveAll(bar => bar.Name == "");
    foo?.bar = barList?.ToArray(); // error here
}
9 replies
C#
Created by therealgooey on 2/16/2023 in #help
❔ Large XML Documents Performance Issues
Running into performance issues trying to parse large XML documents into memory. By large, I mean 200+ MB XML files containing hundreds of thousands of lines of text. They contain snapshots of telemetry data for machines the client owns and operates. One of their purposes is to help upper-tier support resolve tickets by making this data available to them; currently the data is not available unless support is on premise. The problem is that most of the documents with the more useful information are so large they can't be opened on the machines the support teams use. Heck, some of them I can't even open on my developer machine.

Since we can't go back and redesign the XML schemas to produce smaller documents, my idea is to develop a service that parses the documents and lets the user navigate the XML through a web page, letting cloud resources do the heavy lifting. The primary reason I've landed on this is that it falls under the umbrella of what we are authorized to do (gov client). Initially I was going to use Kubernetes, but again, due to authorization reasons, the client has asked that we stick to Azure Functions.

Currently the data is stored in Azure Files; however, for other reasons, some of it has been stored in SQL with each file in an XML column (they refuse to split the data up relationally because, and I quote, "it would add too many tables to the db"). So I've been testing parsing the data into an XPathDocument using the EF context's GetDbConnection reader. It can mostly parse files up to ~70 MB, and while a bit slow (8100 ms response time with the largest successfully parsed file), it gets the job done. This is using Azure Functions Premium on EP1. EP2 saw faster response times (8100 ms was the fastest), but still timed out on anything larger than 50-70 MB. The response time seems to grow exponentially with document size, with anything over 70 MB hitting the default 5-minute timeout (which already seems really long).

As a proof of concept, the gov side of the team originally developed a similar solution using their front-end framework (Adobe ColdFusion) and letting the web server handle loading the documents, and they found major performance gains pulling the data directly from Azure Files where the original files are hosted (although they still had to upgrade the web server from AP2 v2 to D8 v3).

Wondering if someone can lend me some insight on why our response times grow exponentially, why pulling the file from Azure Files instead of Azure SQL seems to be faster (testing this with my function today to confirm), and what other possible solutions there are for this using Azure Functions.

Note: if this weren't timing out and were a bit more responsive, my next step would've been to return the requested XPath data to the user while also caching that data, plus a number of nodes at x depth from it, into Redis, so that when the user stepped up or down in the document the data was immediately available, with more nodes cached as the user reached the bounds of the original cache.
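For context, a sketch of the usual suspect here: XPathDocument materializes the entire document as an in-memory tree before any query runs, so allocation and GC pressure grow much faster than the file size, while a forward-only XmlReader over the stream keeps memory roughly flat. FindNodes and elementName below are hypothetical names, not from the original code:
using System.Collections.Generic;
using System.IO;
using System.Xml;

// Streams matching elements out of an arbitrarily large document
// without ever holding the whole file in memory.
static IEnumerable<string> FindNodes(Stream xmlStream, string elementName)
{
    var settings = new XmlReaderSettings { IgnoreWhitespace = true };
    using var reader = XmlReader.Create(xmlStream, settings);

    // ReadToFollowing advances without buffering the document,
    // so memory stays flat even for multi-hundred-MB files.
    while (reader.ReadToFollowing(elementName))
    {
        yield return reader.ReadOuterXml(); // serializes just this subtree
    }
}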
5 replies
C#
Created by therealgooey on 2/1/2023 in #help
❔ FileNotFoundException for NuGet Pkg in Az Function App
Hi, I'm receiving the following error while trying to locally test an Azure Function App:
System.IO.FileNotFoundException: 'Could not load file or assembly 'Microsoft.Extensions.Configuration.Abstractions, Version=7.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The system cannot find the file specified.'
This happens during WebJobsBuilderExtensions ConfigureStartup. I verified that the package is in my .nuget folder and at the path found in project.nuget.cache, and I also tried explicitly installing the package through the package manager. It happens whether I target .NET 6 or .NET 7, and on both v3 and v4 of the Functions runtime. I found mention of similar errors for version 5.0.0.0 of the Microsoft.Extensions packages, but nothing for 7.0.0.0. Not sure where to look; I'm not familiar with debugging NuGet package errors like this.
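One hedged guess rather than a confirmed fix: the in-process Azure Functions host runs on .NET 6 and loads the 6.x Microsoft.Extensions.* assemblies itself, so a project (or a transitive dependency) that pulls in the 7.0.0 package can fail to resolve it at runtime. A hypothetical sketch of pinning the package back to the 6.x line in the .csproj:
<ItemGroup>
  <!-- Assumption: downgrade to the major version the in-process host binds against -->
  <PackageReference Include="Microsoft.Extensions.Configuration.Abstractions" Version="6.0.0" />
</ItemGroup>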
6 replies