CubitOom

  • 29 Posts
  • 618 Comments
Joined 2 years ago
Cake day: June 8th, 2023





  • The globe is heating, and the data is not lying. But I wish people would not treat this unique El Niño year’s weather as a preview of all future non-El Niño years. At least this article mentions that to an extent, but it is definitely trying to capitalize on the fear the headline implies.

    Since 2024 was an El Niño year, the weather was much less predictable. Weather patterns across the entire globe were slightly off, and El Niño years are traditionally warmer and wetter than average globally.

    Basically, I think next year will be much less hot, and the climate deniers are going to look at headlines like this one from last year, hold up snowballs, and say how crazy we are for thinking this is real. That will convince some voters, and more anti-climate policies will be put in place.







  • CubitOom to linuxmemes@lemmy.world · Don’t get me wrong… · 2 days ago

    I used to think this way, until I found that with Emacs you can remotely edit any file on an SSH-enabled computer. Not only are you no longer constrained by what that computer has installed, you also get to use your personally configured editor while editing the file. It’s called TRAMP.

    BTW, with Emacs you can use vim key bindings via evil-mode, so don’t stress about that.
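
    As a minimal sketch (the hostname and file path here are just placeholders): TRAMP paths use the /ssh: prefix, and you can pass one straight to Emacs from the shell.

    ```sh
    # Open a remote file over SSH via TRAMP (hostname/path are placeholders)
    emacs /ssh:user@example.com:/etc/nginx/nginx.conf

    # Or from inside a running Emacs: C-x C-f /ssh:user@example.com:/etc/nginx/nginx.conf
    ```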









  • CubitOom (OP) to Fuck AI@lemmy.world · Google Will Kill YouTube With AI Nonsense · 4 days ago

    Text to speech (TTS) has come a long way, even in open source projects.

    I’ve recently been using TTS to read PDFs to me while I do yard work, and it’s not super easy to tell just from the voice that it’s not a human. The biggest issue seems to be with math formulas.

    In a year it will be hard to tell the difference unless the voice reads something that a human would have skipped.
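
    As a rough sketch of that kind of pipeline (assuming Piper TTS and poppler’s pdftotext are installed; the model filename is just an example):

    ```sh
    # Extract the text layer from a PDF and synthesize it to a WAV file
    pdftotext paper.pdf - \
      | piper --model en_US-lessac-medium.onnx --output_file paper.wav
    ```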





  • Sorry about that bad link. Here it is.

    Install ollama

    ```sh
    pacman -S ollama
    ```
    

    Download any uncensored llm

    From ollama’s library

    Serve, Pull, and Run

    1. In terminal A execute
      ollama serve
      
    2. In terminal B execute
      ollama pull wizard-vicuna-uncensored:7B
      ollama run wizard-vicuna-uncensored:7B
      

    From huggingface

    Download any GGUF model you want with uncensored in the name. I like GGUFs from TheBloke.

    • Example using SOLAR-10.7B-Instruct-v1.0-uncensored-GGUF
      • Click on Files and Versions and download solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf
      • Change directory to where the downloaded GGUF is and write a modelfile with just a FROM line
        echo "FROM ~/Documents/ollama/models/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf" >| ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
        
      • Serve, Create, and Run
        1. In terminal A execute
          ollama serve
          
        2. In terminal B execute
          ollama create solar-10:7b -f ~/Documents/ollama/modelfiles/solar-10.7b-instruct-v1.0-uncensored.Q5_K_S.gguf.modelfile
          ollama run solar-10:7b
          

    Create a GGUF file from a non-GGUF LLM for ollama

    Set up a Python environment

    Install pyenv and then follow instructions to update .bashrc

    curl https://pyenv.run/ | bash
    

    Update pyenv and install a version of python you need

    source "${HOME}"/.bashrc
    pyenv update
    pyenv install 3.9
    

    Create a virtual environment

    pyenv virtualenv 3.9 ggufc
    

    Use the virtual environment and download the pre-reqs

    pyenv activate ggufc
    pip install --upgrade pip
    pip install huggingface_hub
    mkdir -p ~/Documents/ollama/python
    cd ~/Documents/ollama/python
    git clone https://github.com/ggerganov/llama.cpp.git
    cd llama.cpp
    pip install -r requirements.txt
    

    Download the model from huggingface.

    For this example, I’m going to pull llama3.2_1b_2025_uncensored. Note that this LLM is only 1B parameters, so it can be run on a low-spec device.

    mkdir -p ~/Documents/ollama/python
    mkdir -p ~/Documents/ollama/models
    model_repo_slug='carsenk'
    model_repo_name='llama3.2_1b_2025_uncensored'
    model_id="$model_repo_slug/$model_repo_name"
    cat << EOF >| ~/Documents/ollama/python/fetch.py
    from huggingface_hub import snapshot_download
    
    model_id="$model_id"
    snapshot_download(repo_id=model_id, local_dir="$model_id",
                      local_dir_use_symlinks=False, revision="main")
    EOF
    
    cd ~/Documents/ollama/models
    python ~/Documents/ollama/python/fetch.py
    

    Convert HF to GGUF

    python ~/Documents/ollama/python/llama.cpp/convert.py "$model_id" \
      --outfile "$model_repo_name".gguf \
      --outtype q8_0
    

    Serve, Organize, Create, and Run

    1. In terminal A execute
      ollama serve
      
    2. Open a new terminal while ollama is being served.
      mkdir -p ~/Documents/ollama/modelfiles
      echo "FROM ~/Documents/ollama/models/llama3.2_1b_2025_uncensored.gguf" >| ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
      ollama create llama3.2:1b -f ~/Documents/ollama/modelfiles/llama3.2_1b_2025_uncensored.modelfile
      ollama run llama3.2:1b
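
    Once the model is created and ollama is serving, you can also query it through ollama’s local HTTP API (default port 11434) instead of the interactive prompt. A quick sketch:

    ```sh
    # Ask the newly created model a question via ollama's REST API
    curl http://localhost:11434/api/generate -d '{
      "model": "llama3.2:1b",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'
    ```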