If you’re interested in starting with Stable Diffusion, Whisper, or open language models, there are a few options to consider. Some people have built businesses on Stable Diffusion, so that is one path worth exploring, and anyone who enjoys Stable Diffusion generations or Whisper transcription will find plenty of room to experiment. Open language models, however, are not yet as capable as GPT-4, so it’s recommended to stick with GPT-3.5 or GPT-4 unless you have a specific reason not to. Different GPUs (H100, RTX 6000, A6000) are suggested for different use cases. Runpod is an easy GPU cloud to start with and offers templates you can customize; the guide linked below also covers other GPU clouds and their specific features. Overall, there is plenty of room for exploration and improvement around open models and GPU usage.
https://gpus.llm-utils.org/the-gpu-guide/
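
As a concrete taste of the kind of experimentation described above, here is a minimal sketch, not taken from the guide, of generating an image with Stable Diffusion through Hugging Face’s diffusers library on a rented CUDA GPU; the checkpoint ID, prompt, and output filename are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint from the Hugging Face Hub
# (the model ID here is an example; any compatible checkpoint works).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision fits comfortably on most cloud GPUs
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU, e.g. an A6000 or H100 on a cloud host

# Generate a single image from a text prompt and save it to disk.
image = pipe("a watercolor painting of a mountain lake at dawn").images[0]
image.save("output.png")
```

Whisper transcription is similarly compact: the openai-whisper package loads a model and transcribes an audio file in a handful of lines, so either workload is easy to try on a Runpod template or any other GPU cloud from the guide.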