Maybe. I’ve finally hit a point where I want to get into video editing (DaVinci Resolve, maybe Premiere Pro), and I’d like to be able to do some gaming at high settings. I’ll be using my TV as a monitor.
I’ve never built a computer, but I have replaced every part (aside from the processor), so it’s not foreign to me either. I’m watching some build videos, reading, etc., but wanted to throw what I’ve got up here too as another resource. If nothing else, it gives Winnow something to call me an idiot about.
https://pcpartpicker.com/list/ZGvckJ
I know it’s overkill, but I’m trying to future-proof as well as I can. The thing I’m most iffy on right now is whether my cooling choices are sufficient.
Finally building a PC
- Funkmasterr
- Super Poster!
- Posts: 8993
- Joined: July 7, 2002, 9:12 pm
- Gender: Male
- XBL Gamertag: Dandelo19
- PSN ID: ToPsHoTTa471
Re: Finally building a PC
I’ve shifted the build to use DDR5 instead, but the real problem I’m having is deciding which GPU to buy. I really want to be able to play at 4K with reasonable FPS, but there’s no reality in which I’m paying for a 4090.
Re: Finally building a PC
Winnow will tell you to go AMD, but I haven’t researched their stuff enough to recommend it. At the moment they’re probably the better price/performance choice.
If you go DDR5 you’ll need a different motherboard. Also, that one comes with built-in networking, so you shouldn’t need the separate network adapter, unless you need additional networking for some reason.
Depending on the price difference, I’d look into a 13th-gen Intel CPU rather than 12th-gen, as there are some performance gains there, though unfortunately increased power consumption and heat as well. Definitely ignore 14th-gen Intel though.
Even if you’re going with a discrete graphics card, I’d recommend the K over the KF, as the price difference is pretty minimal and the integrated graphics are helpful for troubleshooting if you run into GPU issues. I can’t help when it comes to buying a GPU though; nearly all the options are bad (expensive) these days.
My only other advice is to wait two weeks and see if some of those components go on sale over Black Friday. Buy any SSD stuff soon though, as Samsung reportedly said they are raising NAND prices 20% per quarter over the next 2-3 quarters. At least I believe that’s what I saw reported somewhere.
Have You Hugged An Iksar Today?
Re: Finally building a PC
I can't really help with the latest gen CPUs.
What I can say is that you would only need a 4090 if you play around with Stable Diffusion/large language models; those are what take advantage of its 24GB of VRAM. It doesn't matter as much for games, and definitely doesn't matter if you're using your TV as a monitor.
I can say that my 3080 Ti, which only has 12GB of VRAM, is serviceable but really starts to slow down with larger LLMs. I can fit a 7B LLM entirely into VRAM and use 800 output tokens plus an 8192-token context memory. My next GPU will almost certainly be a 5090, and I pray it has 48GB of VRAM.
As for what that means: the token limit determines how much text you can submit as your request/inquiry and how much text may be output (800 total tokens combined). Context memory is how far back the LLM remembers what you told it and what it output. So, for example, if you ask the LLM to write a long story about Aunt Jemima coming back to life and mass murdering the woke people that got her banned from syrup bottles, you'll get about 800 words back with 800 tokens (or something like that). You can then choose "continue" and the AI will write 800 more words of the story, remembering what was written going back 8192 tokens, so you can get a pretty long, coherent story before it starts to forget the first part.
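If it helps make that concrete, here's a minimal sketch of how those two numbers map onto settings, assuming you use the llama-cpp-python library with a local GGUF model (the model path here is hypothetical):
Code: Select all
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-7b.Q4_K_M.gguf",  # hypothetical 7B file
    n_ctx=8192,       # context window: how far back the model "remembers"
    n_gpu_layers=-1,  # -1 = offload every layer to VRAM (a 7B quant fits in 12GB)
)

out = llm(
    "Write a long story about ...",  # your request counts against the budget
    max_tokens=800,                  # cap on generated tokens per "continue"
)
print(out["choices"][0]["text"])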
Now, if you move to a 13B model, that doesn't quite fit into VRAM, so the larger model is a lot slower, either swapping memory or using standard RAM. You'll also only be able to get 400 tokens, maybe 800 but slowly, and context will be limited to 4096 because there isn't enough VRAM for more.
So, if you have a 12GB VRAM card, you can use a 13B model, but it's really slow compared to the fast responses you get from a 7B model. Don't get too excited, though, because 70B models seem to be the big threshold, and those require two 4090s to even work (and slowly at that), but they give incredible output.
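For the "doesn't quite fit" case, the usual trick (again assuming llama-cpp-python; the layer count is something you tune by trial and error) is to offload only part of the model to VRAM and let the rest run from system RAM:
Code: Select all
from llama_cpp import Llama

llm = Llama(
    model_path="./models/example-13b.Q4_K_M.gguf",  # hypothetical 13B file
    n_ctx=4096,       # smaller context, since the KV cache also eats VRAM
    n_gpu_layers=30,  # layers that fit in VRAM; lower this if you hit OOM
)
# Everything not offloaded runs on the CPU, which is why 13B feels so slow.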
I also wouldn't worry about it that much, because even the 5090s will probably only have 32GB of VRAM.
BUT there are some outstanding 7B LLM models, so bottom line: 12GB of VRAM is OK if you don't want to spend a ton on your graphics card.
As for Stable Diffusion, 12GB of VRAM is barely enough for decent performance. SD 1.5 images will generate pretty fast, and SDXL models will be decent at 1024x1024-resolution variants.
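On the Stable Diffusion side, squeezing SDXL onto a 12GB card usually looks something like this (a sketch assuming the Hugging Face diffusers library; fp16 plus CPU offload is what keeps it under the limit):
Code: Select all
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # half precision: roughly halves VRAM use
)
pipe.enable_model_cpu_offload()  # parks idle submodules in RAM to fit in 12GB

image = pipe("a mountain lake at dawn", height=1024, width=1024).images[0]
image.save("out.png")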
---------------------
Regarding CPUs, my 16-core AMD 3950X is several years old but still works outstandingly well... AND while some smirked at my 16 cores as overkill, LLMs take advantage of those cores. I can say that I've definitely gotten my money's worth out of both my 16-core CPU and my (at the time) overpriced 3080 Ti.
I'm not up to date on this year's AMD/Intel CPUs. I can say, again, that my AMD build has been the most stable, least troublesome build I've ever had. I can't remember the last time it crashed. I leave it on without rebooting for months at a time, 24/7. Unless some software requires a reboot, I always push my Windows updates back as far as it lets me, so maybe every couple of months I have to reboot for those.
Don't even consider an AMD video card though. Nvidia absolutely dominates, and you have to have one of their cards for AI stuff like Stable Diffusion/LLMs (tensor cores).
----------
If you skipped everything else, here's the short version:
As for the build, make sure everything is compatible. As in, read reviews etc., because some RAM/motherboard combos work better than others, so seeing that someone else had success with the combo you chose is a good sign. I only have 32GB of RAM but wish I had 64GB or 128GB, because LLMs will use as much as you have if you try to run the larger models.
VRAM > everything else in your system, but 12GB of VRAM will work.
The CPU is less important than the GPU. EVERYTHING with AI uses VRAM, not RAM or CPU, so get the best GPU (most VRAM) you can afford; there's a quick way to check what a given card has below. Nvidia is a monster, untouchable in the AI/graphics arena. RAM is cheap though, so get 64GB if you can.
The problem with Nvidia for gamers is that they don't really have to care that much about gamers anymore. Companies are buying tens of thousands of their high-end industrial GPUs (the NVIDIA A800 40GB, ~$7K each), so they won't be hurting for business for a long time. That many are needed for training LLMs; crypto mining was replaced by AI GPU training farms. Nvidia has a trillion-plus market cap, so I guess that's not breaking news.
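And that quick VRAM check mentioned above, assuming PyTorch with CUDA is installed:
Code: Select all
import torch

# Prints the GPU name and total VRAM, e.g. "... RTX 3080 Ti: 12.0 GB"
props = torch.cuda.get_device_properties(0)
print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB")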
-------
For reference, regarding the Aunt Jemima story example mentioned above: it took the AI about 30 seconds to create the story using a 7B LLM.
Code: Select all
Write a story about Aunt Jemima returning from the dead and mass murdering all the woke people
that got her banned from being on Syrup Bottles.
The story should be around 10,000 words long and include detailed descriptions of the killing acts performed,
as well as the emotional toll it takes on the characters involved.
- Funkmasterr
- Super Poster!
- Posts: 8993
- Joined: July 7, 2002, 9:12 pm
- Gender: Male
- XBL Gamertag: Dandelo19
- PSN ID: ToPsHoTTa471
Re: Finally building a PC
Thanks for the advice. I didn’t consider the integrated graphics for troubleshooting, but that makes sense.
I am edging towards getting a 4090. Again, I want to be able to play at 4K with ultra settings/full ray tracing, etc., for as long as possible. I also don’t intend to upgrade much for 3-5 years if possible.
https://pcpartpicker.com/list/F7gs89
If I go with this build, there are two concerns I’m still thinking about. Should I go higher than a 1000W PSU? And am I going to be okay with the fans I have for cooling? I really don’t want to do any AIO/water cooling unless I really have to.
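My rough back-of-envelope on the PSU (assuming ~450W for the 4090’s rated board power, ~250W peak for a 13th-gen K-series CPU, and ~150W for the motherboard, drives, and fans): 450 + 250 + 150 = 850W, which leaves about 15% headroom on a 1000W unit. Though I’ve also read the 4090 can have big transient spikes, which might be an argument for going bigger.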
If I do go with the 4090, I’ll probably grab it next week when they pay out my bonus. I’ve read that there won’t be any more stock shipped until March-April, so I don’t want to risk it since stock is already low. For everything else, I’ll probably hold out for Black Friday sales.