❔ Is there any way I can run my C# console app (.NET Framework) executable on my GPU?
Hi, I want to know if there is any way I can run my C# console app (.NET Framework) executable file on my GPU instead of my CPU, because I think my GPU would run the program I wrote faster than my CPU.
why do you think so
because it's a program that does other stuff, but stuff almost like what a cryptominer does, and a cryptominer works best on the GPU
GitHub - Sergio0694/ComputeSharp: A .NET library to run C# code in parallel on the GPU through DX12, D2D1, and dynamically generated HLSL compute shaders, with the goal of making GPU computing easy to use for all .NET developers! 🚀
write a compute shader
what is that
you do work on the gpu via shaders
that abstraction they sent above generates said shaders from .NET IL code dynamically
I'm guessing
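For reference, this is roughly what a ComputeSharp compute shader looks like: a minimal sketch based on the library's documented pattern; the struct name, buffer contents, and multiplier here are made up, and the exact API can differ between library versions.
using System.Linq;
using ComputeSharp;

// upload data, run one GPU thread per element, copy the results back
float[] data = Enumerable.Range(1, 1000).Select(i => (float)i).ToArray();
using ReadWriteBuffer<float> gpuBuffer = GraphicsDevice.GetDefault().AllocateReadWriteBuffer(data);
GraphicsDevice.GetDefault().For(gpuBuffer.Length, new MultiplyByTwo(gpuBuffer));
gpuBuffer.CopyTo(data);

// each GPU thread runs Execute() for one index; the library generates the HLSL shader from this C# code
[AutoConstructor]
public readonly partial struct MultiplyByTwo : IComputeShader
{
    public readonly ReadWriteBuffer<float> buffer;

    public void Execute()
    {
        buffer[ThreadIds.X] *= 2;
    }
}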
can I not just put all my C# code into some kind of converter and then use the converted code so it will be run on my GPU
that's what they sent above
but it's not as simple as that
gpus are highly multithreaded
which means the program has to be written with that in mind
converting an arbitrary program doesn't give you any benefit
shaders are programs that run e.g. per pixel on the screen, each on a separate thread, in a mini context
ok but this is the program that i want to convert
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Globalization;
using System.Runtime.CompilerServices;

namespace ConsoleApp1
{
    internal class Program
    {
        static void Main(string[] args)
        {
            string filepath = @"D:\text1.txt";
            string filepath2 = @"D:\text2.txt";
            List<string> lines = File.ReadAllLines(filepath).ToList();
            List<string> lines2 = File.ReadAllLines(filepath2).ToList();
            for (int i = 0; i < lines.Count; i++)
            {
                for (int j = 0; j < lines2.Count; j++)
                {
                    if (lines[i] == lines2[j].Split(':')[0])
                    {
                        Console.WriteLine(lines2[j]);
                    }
                }
            }
            Console.WriteLine("\nDone");
            Console.ReadLine();
        }
    }
}
this works with the CPU because there aren't many rows in text1 and text2
uhhh
I don't think this is exactly what GPUs are built to do
there are other ways to optimize this though
but I have another "text1" document with 8k rows and the "text2" document has 8,000,000 rows
with my 8k text1 and 8,000,000-row text2 document this takes too long on the CPU
you could get rid of the outer loop over lines, for example
and use a HashSet with HashSet.Contains instead
what do you mean?
where in my code?
you have
for (int i = 0; i < lines.Count; i++)
you could get rid of that loop
ok
have
HashSet<string> lines = new HashSet<string>(File.ReadAllLines("D:\text1.txt"));
ok
then do
lines.Contains(lines2[j].Split(':')[0])
in the if
ok
and that should be significantly faster
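Putting those two pieces together, a sketch using the same file paths and variable names as above:
// build the lookup set once; Contains is a hash lookup instead of a scan of text1
HashSet<string> lines = new HashSet<string>(File.ReadAllLines(filepath));
List<string> lines2 = File.ReadAllLines(filepath2).ToList();

for (int j = 0; j < lines2.Count; j++)
{
    // print the line from text2 if its key (the part before the ':') exists in text1
    if (lines.Contains(lines2[j].Split(':')[0]))
    {
        Console.WriteLine(lines2[j]);
    }
}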
you can also multithread this on your CPU instead of trying to make it run on a GPU
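If it is still too slow, a rough sketch of the multithreading idea with PLINQ, assuming the same lines and lines2 variables as above (reads of a HashSet are safe across threads as long as nothing writes to it):
// filter across CPU cores; AsOrdered keeps the original line order, and ToList
// gathers results first so the output isn't interleaved between threads
List<string> matches = lines2
    .AsParallel()
    .AsOrdered()
    .Where(line => lines.Contains(line.Split(':')[0]))
    .ToList();

foreach (string match in matches)
{
    Console.WriteLine(match);
}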
i will try it and write back here if I don't get it to work
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Globalization;
using System.Runtime.CompilerServices;

namespace ConsoleApp1
{
    internal class Program
    {
        static void Main(string[] args)
        {
            string filepath = @"D:\text1.txt";
            string filepath2 = @"D:\text2.txt";
            List<string> lines2 = File.ReadAllLines(filepath2).ToList();
            HashSet<string> lines = new HashSet<string>(File.ReadAllLines("D:\text1.txt"));
            for (int j = 0; j < lines2.Count; j++)
            {
                if (lines.Contains(lines2[j].Split(':')[0]))
                {
                    Console.WriteLine(lines2[j]);
                }
            }
            Console.WriteLine("\nDone");
            Console.ReadLine();
        }
    }
}
that does not work at all
it's doing almost the same thing as the code from before
though it doesn't print multiple times for one row if the same line exists in text1 more than once
there are no duplicates, so I don't need to worry about that with my bigger txt documents
what doesn't work about it
i can't even run it, can you just please rewrite the code and send it here? i have never even used HashSet before
why can't you run it
it said that variable lines is already used
what
System.ArgumentException: 'Illegal characters in path.'
oh because the thing in the ReadAllLines should've just been
filepath
Gonna need to escape the backslash I think \\
instead of the string
ok
currently there's a tab in the string lmao
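A quick illustration of the difference (the first variant is the broken one from the snippet above):
string bad = "D:\text1.txt";   // \t is an escape sequence, so this path contains a TAB character
string ok1 = "D:\\text1.txt";  // escaped backslash
string ok2 = @"D:\text1.txt";  // verbatim string, backslashes are taken literally
// simplest fix here: pass the existing filepath variable, which is already a verbatim string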
this is the code now
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.IO;
using System.Globalization;
using System.Runtime.CompilerServices;

namespace ConsoleApp1
{
    internal class Program
    {
        static void Main(string[] args)
        {
            string filepath = @"D:\text1.txt";
            string filepath2 = @"D:\text2.txt";
            List<string> lines2 = File.ReadAllLines(filepath2).ToList();
            HashSet<string> lines = new HashSet<string>(File.ReadAllLines("D:\text1.txt"));
            for (int j = 0; j < lines2.Count; j++)
            {
                if (lines.Contains(lines2[j].Split(':')[0]))
                {
                    Console.WriteLine(lines2[j]);
                }
            }
            Console.WriteLine("\nDone");
            Console.ReadLine();
        }
    }
}
what should i change
did you fix the ArgumentException
thanks it worked now but i have no idea how it worked
and why is it so much faster?
it had quadratic complexity, but you made it linear
or is it n log n
HashSets can check whether they contain something in O(1) time
ok
meaning no matter how many lines there are in text1, it will take the same amount of time to do .Contains
that takes the number of iterations you do from 8k * 8000000 to just 8000000
which is much smaller!
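(Roughly: 8,000 × 8,000,000 = 64,000,000,000 string comparisons before, versus about 8,000,000 hash lookups after, an ~8,000× reduction in work.)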
you were right, it's linear based on the line count of text2
understanding time complexity, and how to reduce it, is one of the best ways to become better at optimizing code
yeah a search tree would've been n log n, constructing hash sets is linear
not sure why you'd use a binary tree here
you wouldn't, I was just reminding myself of its complexity
Was this issue resolved? If so, run
/close
- otherwise I will mark this as stale and this post will be archived until there is new activity.