fool

joined 1 year ago
[–] fool@programming.dev 1 points 3 days ago

obligatory navier-stokes equation

[image: the Navier-Stokes equations]
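Presumably the incompressible momentum form -- in LaTeX terms, roughly:

\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0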

109
kdesu (programming.dev)
submitted 4 days ago* (last edited 3 days ago) by fool@programming.dev to c/linuxmemes@lemmy.world
 
 

This site is so cool!

             />  フ
            |  _ _| 
          /` ミ_xノ 
         /     |
        /  ヽ   ノ
        │  | | |
   / ̄|   | | |
   ( ̄  ヽ__ヽ_)__)
    \二)

But how do people make these? I searched online and the best I could find were small Japanese communities still using MS Gothic (which is metrically incompatible with Arial and other more widely used fonts) and halfhearted JPG-to-ASCII-bitmap converters.

Further, how do people manage these? I'd imagine something like an emoji search, but these millionfold emoticons don't have names; the alternatives seem to be the "I've got a meme for that" infinite-camera-roll scroll, or searching them up every time.

⠀/\_/\
(˶ᵔ ᵕ ᵔ˶) thanks lol
/ >🌷<~⁠♡

[–] fool@programming.dev 10 points 4 days ago* (last edited 4 days ago)

Grandiloquent/sesquipedalian. It's what you get when you use everything in this thread ₍^ >ヮ<^₎ .ᐟ.ᐟ

~/s~

[–] fool@programming.dev 15 points 4 days ago

Specifically, it refers to a deep understanding.

[A critic] notes that [the coiner's] first intensional definition is simply "to drink", but that this is only a metaphor "much as English 'I see' often means the same as 'I understand'". (from Wikipedia)

When you claim to "grok" some knowledge or technique, you are asserting that you have not merely learned it in a detached instrumental way but that it has become part of you, part of your identity. For example, to say that you "know" Lisp is simply to assert that you can code in it if necessary – but to say you "grok" Lisp is to claim that you have deeply entered the world-view and spirit of the language, with the implication that it has transformed your view of programming. Contrast zen, which is a similar supernatural understanding experienced as a single brief flash. (The Jargon File; also quoted on Wikipedia)

[–] fool@programming.dev 2 points 4 days ago

In 2003, Bill Burr wrote “NIST Special Publication 800-63. Appendix A” -- a security document that recommended passwords be changed every 90 days and contain irregular capitalization and special characters. When asked about it, and about the resultant trend of people tacking !@#$%^&*() onto the end of their passwords, Burr said something enlightening:

"Much of what I did I now regret."

Lmao

so yeah I hit the Bitwarden generate button and forget
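The CLI flavor of that button, if memory of the flags serves:

$ bw generate -ulns --length 24
# upper + lower + numbers + specials; paste into the vault, then forget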

[–] fool@programming.dev 2 points 4 days ago* (last edited 4 days ago) (1 children)

Whoa, I didn't know about this! My trustworthy beloved orange apps were sold to ZipoApps, a company that flips apps into ad revenue.

But has anything changed for the worse yet? I don't see any odd commits in the history (e.g. Draw). I'll probably just lock the F-Droid versions of the Simple gear I can't switch away from.

[–] fool@programming.dev 40 points 5 days ago (1 children)

-1 accuracy point ( ◞ ﹏ ◟)

Linux 4.5-rc5 fixed efivarfs to prevent "rm -rf /" from bricking UEFI motherboards -- so maybe someone can try it out? :]
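The fix works by mounting the EFI variables with the immutable attribute set, so a stray rm just bounces off now. Roughly what that looks like (from memory, variable names will differ):

$ lsattr /sys/firmware/efi/efivars/ | head -n 2
----i---------------- /sys/firmware/efi/efivars/Boot0000-8be4df61-93ca-11d2-aa0d-00e098032b8c
----i---------------- /sys/firmware/efi/efivars/BootOrder-8be4df61-93ca-11d2-aa0d-00e098032b8c
# 'i' = immutable, so rm gets "Operation not permitted" instead of a brick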

 
[–] fool@programming.dev 1 points 5 days ago* (last edited 5 days ago)

Speaking of fearmongering, you note that:

an artist getting their style copied

So if I go to an art gallery for inspiration I must declare this in a contract too? This is absurd. But to be fair I’m not surprised. Intellectual property is altogether an absurd notion in the digital age, and insanity like “copyrighting styles” is just the sharpest most obvious edge of it.

I think also the fearmongering about artists is overplayed by people who are not artists.

Ignoring the false equivalency between getting inspiration at an art gallery and feeding millions of artworks into a non-human AI for automated, high-speed, legally dubious replication and derivation: copyright is how creative workers keep their careers and stay incentivized. Your Twitter experiences are anecdotal; in more generalized reality:

  1. Chinese illustrator jobs purportedly dropped by 70% in part due to image generators
  2. Lesser-known artists are being hindered from making themselves known, as visual art venues restrict submissions to already-established artists in order to keep out AI-generated work -- the opposite of democratizing art
  3. Artists have reported using image generators to avoid losing their jobs
  4. Artists' works, such as those by Hollie Mengert and Karen Hallion among others, have been used in training data without compensation, attribution, or consent -- the resulting style mimicries have been described as "invasive" (someone can steal your mode of self-expression) and reputationally damaging, even when they are only "surface-level"

The above four points were taken from the Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (Jiang et al., 2023, sections 4.1 and 4.2).

Help me understand your viewpoint. Is copyright nonsensical? Are we hypocrites for worrying about the ways our hosts are using our produced goods? There is a lot of liability and a lot of worry here, and I'm having trouble reconciling your position: you seem to imply that this liability and worry are unfounded, but the evidence seems to point elsewhere.

Thanks for talking with me! ^ᴗ^

(Comment 2/2)

[–] fool@programming.dev 0 points 5 days ago* (last edited 5 days ago) (1 children)

Thanks for the detailed reply! :P

I'd like to converse with every part of what you pointed out -- real discussions are always exciting!

...they pay the journals, not the other way around...

Yes of course. It’s not at all relevant?

It's arguably relevant. Researchers pay journals to publish their years of work, and those journals then resell that work to AI companies, which in turn puts indirect pressure on researchers to produce more. It's a form of labor where the pay direction is reversed. Yes, researchers are aware that their papers can be used for profit (like medical tech), but they didn't anticipate them being sold en masse to ethically dubious, historically copyright-violating, pollution-heavy server farms. Now, I see that you don't agree with this, since you say:

...not only is it very literally transparent and most models open-weight, and most libraries open-source, but it’s making knowledge massively more accessible.

but I can't help but feel obliged to share the following evidence.

  1. Though a Stanford report notes that most new models are open-source (Lynch, 2024), the models with the most market share (see this Forbes list) are not. Of those fifty companies, only Cleanlab, Cohere, Hugging Face (duh), LangChain (among other Python stuff like scikit-learn or TensorFlow), Weaviate, TogetherAI, and notably Mistral are open-source. Among the giants, OpenAI's GPT-4 et al., Claude, and Gemini are closed-source, though Meta's Llama is open-weight.
  2. Transparency is... I'll cede that it is improving! But it's also lacking. According to the Stanford 2024 Foundation Model Transparency Index, which uses 100 indicators such as data filtration transparency, copyright transparency, and pollution transparency (Bommasani et al., 2024, p. 27 fig. 8), developers were largely opaque, including open-source ones. The pertinent summary notes that the mean FMTI company score improved from 37 to 58 over the past year, but information about copyright data, licenses, and guardrails has remained opaque.

I see you also argue that:

With [the decline of effort in average people's fact-finding] in mind I see no reason not to feed [AI] products of the scientific method, [which is] the most rigorous and highest solution to the problems of epistemology we’ve come up with thus far.

And... I partly agree with you on this. As another commenter said, "[AI] is not going back in the bottle", so might as well make it not totally hallucinatory. Of course, this should be done in an ethical way, one that respects the rights to the data of all involved.

But about your next point regarding data usage:

...if you actually read the terms and conditions when you signed up to Facebook... and if you listened to the experts then you and these artists would not feel like you were being treated unfairly, because not only did you allow it to happen, you all encouraged it. Now that it might actually be used for good, you are upset. It’s disheartening. I’m sorry, most of you signed it all away by 2006. Data is forever.

That's a mischaracterization of a lot of views. Yes, a lot of people willfully ignored surveillance capitalism, but we never encouraged it, nor did we change our stance from affirmative to negative just because the data we intentionally or inadvertently produced began to be "used for good". One of the earliest investigators of surveillance capitalism, Harvard Business School professor Shoshana Zuboff, confirms that we were simply scared of and uneducated about these things outside our control.

"Every single piece of research, going all the way back to the early 2000s, shows that whenever you expose people to what’s really going on behind the scenes with surveillance capitalism, they don’t want anything to do [with] it. The only reason we keep engaging with it is because we feel like we have no choice. ...[it] is a colossal market failure. Because it is not giving people what people want. ...everything that's inside that choice [i.e. the choice of picking between convenience and privacy] has been designed to keep us in ignorance." (Kulwin, 2019)

This kind of thing -- corporate giants handing over thousands of papers to AI -- is another instance of people being scared. But it's not fearmongering. Fearmongering implies that we're inventing fright where none really exists; here, there is an awful, fear-inducing precedent being set. Researchers now have to live with the idea that corporations, these vast economic superpowers, can suddenly and easily pivot into using all of their content to fuel AI and make millions -- the same content they spent years on, intended for open, humanity-supporting use by their peers, and had few options for publishing or hosting outside of said publishers. Yes, they signed the ToS and now they're eating it. We're hurtling toward the future at breakneck pace -- what's next? they worry, what's next?

(Comment 1/2)

[–] fool@programming.dev 1 points 5 days ago

Lots of good answers here but I'll toss in my own "figure out what you need" experience from my first firewall funtime. (Disclaimer: I used nftables -- it should be similar to ufw in terms of defaults though).

  • Right off the bat, everything unneeded was blocked. I "needed" no configuration, except for maybe...
  • Whatever CUPS runs on (when I use it)
  • Sometimes I ran python -m http.server -- I unblocked port 8000 for personal use.
  • I chose to unblock port 53 (DNS). I wanted to connect to another computer via hostname IIRC (e.g. connecting to raspberry-pi.local. I might be misremembering this though).
  • At one point I played with NGINX -- that's port 80 (HTTP) and port 443 (HTTPS).
  • SSH was already permitted (port 22 -- you need root access to listen on ports below 1024 anyway, so this wasn't an issue for typical apps)
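Altogether it boiled down to something like this nftables sketch (from memory, not my exact config; the usual default table/chain names):

table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept           # replies to my own outbound traffic
    iif "lo" accept                               # localhost
    tcp dport { 22, 80, 443, 631, 8000 } accept   # ssh, nginx, CUPS, python http.server
    udp dport 5353 accept                         # mDNS -- the .local names are really this, not 53
  }
}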

I didn't really use Wireshark back then. I think I just ran something like

sudo lsof -nP -iTCP -sTCP:LISTEN

which showed me everything listening on a port (mostly just harmless language servers).

You don't have to dive too deep into all the "egress" and "ingress" and whatnot unless you're doing something special, or your software uses a weird port. (LocalSend lol)

[–] fool@programming.dev 2 points 6 days ago (1 children)

Hmm, that makes sense. The toothpaste can't go back into the tube, so they're going a bit deeper to get a bit higher.

That does shift my opinion a bit -- something bad is at least being made better -- although the "let's use more content-that-wants-to-be-open in our closed content" approach still gives me consternation.

[–] fool@programming.dev 2 points 6 days ago* (last edited 6 days ago)

Obligatory Linux comment (Lemmy moment):

Windows is often used for its compatibility and defaultness, but Linux is interesting in that everything is patchable, tinkerable, and configurable. That low resistance to tinkering turns lots of Linux users into tinkerers -- including tinkering via code.

I'm not saying wipe your hard drive or even dual-boot -- maybe an older computer or a VM could help, depending on what you have. But just in the past week I've screwed around with low-to-medium-difficulty Linux projects: configuring my lock screen in C, implementing mildly usable desktop GUIs in TypeScript, among others -- just not-too-committal stuff with a return value I literally see every time I lock my computer.

Equivalent Windows projects can be harsher on the beginner-to-intermediate curve (back when I first tried out Linux Mint, I'd been struggling to make a bookmark inspector in Visual Studio -- I ended up Pythoning it instead) -- not to say that Windows fun is by any means out of reach.

[–] fool@programming.dev 4 points 6 days ago (1 children)

My friends Leetcoded and Codeforced quite a lot. Advent of Code is up there too, with the interesting caveat that Advent of Code also teaches you refactoring (due to the two-part nature of every problem).

When I was younger, though, I had contempt for the whiteboard-problem-esque appearance of these -- but everyone is different.

If you look hard enough there is always a project at medium difficulty -- not way too hard, like a huge project you feel won't give you returns, and not way too easy, like some cowsay clone. Ever tried making a blog? You can host one for free on most Git pages implementations (codeberg, github, gitlab...).

As for programming books, consider trying security books like Hacking: The Art of Exploitation -- in the same vein, CTFs can involve a decent amount of code, and they're fun in terms of raw problem-solving. I started with the Bandit wargame, which does Linux problem-solving from any machine that has SSH.
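Getting started with Bandit is literally one command -- last I checked, level 0 is:

$ ssh bandit0@bandit.labs.overthewire.org -p 2220
# password: bandit0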

I'm not by any means a l33t hax3r but I found them pretty fun in my learning journey.

 

I saw a post recently about someone setting up parental controls -- screentime, blocked sites, etc. -- and it made me wonder.

In my childhood, my free time was very flexible. Within this low-pressure flexibility I was naturally curious, in all directions -- that meant both watching brainteaser videos and watching Gmod brainrot. I had little exposure to video games other than Minecraft, which ran poorly on my machine, so I tended to surf Flash games and YouTube.

Strikingly, while watching a brainteaser video, tiny me had a thought:

I'm glad my dad doesn't make me watch educational videos like the other kids in school have to.

For some reason, I wanted to hold onto that thought so I could "remember what my thought process was as a child," so that memory has stuck with me.

Onto the meat: if I'd had capped screentime, like a timer I could see, and I'd known I was being watched in some way, I'd have felt pressure. For example,

10 minutes left. Oh no. I didn't have fun yet. I didn't have fun yet!!

Oh no, I'm gonna get in so much trouble for watching another YTP...

and maybe that pressure wouldn't have let me grow into an independent, curious kid -- into the person I am now. Maybe it would've made me fearful or suspicious instead. I was suspicious once, when one of my parents said "I can see what you browse from the other room" -- so I applied the scientific method to verify whether they could. (I wrote "HI MOM" in Paint and checked if her expression changed.)

So what about now? Were we too free, and is it now our job to tighten the reins on the next generation? I said "butthead" often. I loved asdfmovie, but my parents probably wouldn't have. I watched SpingeBill YTPs (at least it wasn't corporatized YouTube Kids).

Or, differently: do we watch our kids without them knowing? Write a keylogger? Or just pull router logs? Do we police them like some sort of panopticon, for their own good?

Or do we completely forgo this? Take an Adventure Playground approach?

Of course, I don't expect a one-size-fits-all answer. Where do you stand, and why?

 

Git cheat sheets are a dime a dozen, but I think this one is awfully concise for its scope.

  • Visually covers branching (WITH the commands -- rebasing the current branch can be confusing for the unfamiliar; see the sketch after this list)
  • Covers reflog
  • Literally almost identical to how I use git (most sheets are either Too Much or Too Little)
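For instance, the kind of flow it covers, more or less (hypothetical branch names):

git switch -c feature         # branch off
git rebase main               # replay feature's commits on top of main
git reflog                    # every place HEAD has been, rebase included
git reset --hard ORIG_HEAD    # back out the rebase if it went sideways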
73
submitted 3 weeks ago* (last edited 3 weeks ago) by fool@programming.dev to c/linux@lemmy.ml
 

What was your last RTFM adventure? Tinker this, read that, make something smoother! Or explodier.

As for me, I wanted to see how many videos I could run at once. (Answer: 60 frames per second, or 60 seconds per frame?)

With my sights on GPUizing some ethically sourced motion pictures, I RTFW'd, graphed, and slapped on environment variables and flags like Lego bricks. I got the Intel VAAPI thingamabob jaunting along (and found that it butterized my mpv videos):

$ pacman -S blahblahblahblahblahtfm
$ mpv --show-profile=fast
Profile fast: 
 scale=bilinear
 dscale=bilinear
 dither=no
 correct-downscaling=no
 linear-downscaling=no
 sigmoid-upscaling=no
 hdr-compute-peak=no
 allow-delayed-peak-detect=yes
$ mpv --hwdec=auto --profile=fast graphwar-god-4KEDIT.mp4
# fucking silk

But there was no pleasure without pain: Mr. Maxwell F. N. 940MX (the N stands for Nvidia) played hooky. So I employed the longest envvars ever:

$ NVD_LOG=1 VDPAU_TRACE=2 VDPAU_NVIDIA_DEBUG=3 NVD_BACKEND=direct NVD_GPU=nvidia LIBVA_DRIVER_NAME=nvidia VDPAU_DRIVER=nvidia prime-run vdpauinfo
GPU at BusId 0x1 doesn't have a supported video decoder
Error creating VDPAU device: 1
# stfu

to try translating Nvidia VDPAU to VAAPI -- of course, here I realized I had RTFMed backwards and should've just tried VDPAU directly. So I did.
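That attempt looked something like this (reconstructed from memory):

$ prime-run mpv --hwdec=vdpau --profile=fast graphwar-god-4KEDIT.mp4
# still no hardware decode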

Juice was still not acquired.

Finally, after some voracious DuckDuckGoing (quacking?), I was blessed with the freeing knowledge that even though post-Kepler cards are supposed to support H.264 decoding, Nvidia is full of lies...

 ______
< fudj >
 ------
          \   ‘^----^‘
           \ (◕(‘人‘)◕)
              (  8    )        ô
              (    8  )_______( )
              ( 8      8        )
              (_________________)
                ||          ||
               (||         (||

and then right before posting this, gut feeling: I can't read.

$ lspci | grep -i nvidia
... NVIDIA Corporation GM108M [GeForce 940MX] (rev a2)
# ArchWiki says that GM108 isn't supported.
# Facepalm

SO. What was your last RTFM adventure?

60
submitted 4 months ago* (last edited 4 months ago) by fool@programming.dev to c/linux@lemmy.ml
 

I have a little helper command in ~/.zshrc called stfu.

stfu() {
    if [ -z "$1" ]; then
        echo "Usage: stfu <program> [arguments...]"
        return 1
    fi

    nohup "$@" &>/dev/null &
    disown
}
complete -W "$(ls /usr/bin)" stfu

stfu runs some other command, detaches it from the terminal, and makes any output shut up. I use it for things such as starting a browser from the terminal without worrying about Ctrl+Z, bg, and disown.

$ stfu firefox -safe-mode
# Will not output stuff to the terminal, and
# I can close the terminal too.

Here’s my issue:

For the second argument and beyond, when I hit tab, how do I get autocomplete to suggest the arguments and command-line switches of the command I'm passing in?

e.g. stfu ls -<tab> should show me whatever ls's completion function would show, rather than listing every /usr/bin command again.

# Intended completion
$ stfu cat -<TAB>
-e                      -- equivalent to -vE                                                                                                                                                     
--help                  -- display help and exit                                                                                                                                                 
--number            -n  -- number all output lines                                                                                                                                               
--number-nonblank   -b  -- number nonempty output lines, overrides -n                                                                                                                            
--show-all          -A  -- equivalent to -vET                                                                                                                                                    
--show-ends         -E  -- display $ at end of each line                                                                                                                                         
--show-nonprinting  -v  -- use ^ and M- notation, except for LFD and TAB                                                                                                                         
--show-tabs         -T  -- display TAB characters as ^I                                                                                                                                          
--squeeze-blank     -s  -- suppress repeated empty output lines                                                                                                                                  
-t                      -- equivalent to -vT                                                                                                                                                     
-u                      -- ignored  

# Actual completion
$ stfu cat <tab>
...a list of all /usr/bin commands
$ stfu cat -<tab>
...nothing, since no /usr/bin commands start with -

(repost, prev was removed)

EDIT: Solved.

I needed to set the curcontext to the second word. Below is my (iffily annotated) zsh implementation, enjoy >:)

stfu() {
  if [ -z "$1" ]; then
    echo "Usage: stfu <program> [arguments...]"
    return 1
  fi

  nohup "$@" &>/dev/null &
  disown
}
#complete -W "$(ls /usr/bin)" stfu
_stfu() {
  # Curcontext looks like this:
  #   $ stfu <tab>
  #   :complete:stfu:
  local curcontext="$curcontext" 
  #typeset -A opt_args # idk what this does, i removed it

  _arguments \
    '1: :_command_names -e' \
    '*::args:->args'

  case $state in
    args)
      # CURRENT comes from the completion system: the index of the word under the cursor
      if (( CURRENT > 1 )); then
        # $words is magic that splits up the "words" in a shell command.
        #   1. stfu
        #   2. yourSubCommand
        #   3. argument 1 to that subcommand
        local cmd=${words[2]}
        # We update the autocompletion curcontext to
        # pay attention to your subcommand instead
        curcontext="$cmd"

        # Call completion function
        _normal
      fi
      ;;
  esac
}
compdef _stfu stfu

Deduced via the docs (look for The Dispatcher), this dude's docs, Stack Overflow, and an overreliance on ChatGPT.

EDIT: Best solution (Andy)

stfu() {
  if [ -z "$1" ]; then
    echo "Usage: stfu <program> [arguments...]"
    return 1
  fi

  nohup "$@" &>/dev/null &
  disown
}
_stfu () {
  # drop "stfu" itself from $words and move the cursor index back one,
  # so _normal completes the wrapped command as if it were typed directly
  shift words
  (( CURRENT-=1 ))
  _normal
}
compdef _stfu stfu