    lieutenant_lowercase

    [–] How do you rank up in solo ranked? lieutenant_lowercase 1 points ago in RocketLeague

    Over enough games you should be the difference-maker, and you can climb fairly easily to your 'rightful' rank. If you can't climb, then that's where you belong.

    [–] How do you rank up in solo ranked? lieutenant_lowercase 2 points ago in RocketLeague

    You should be carrying your team if you're in Plat?

    [–] MongoDB with Dictionary of Dataframes lieutenant_lowercase 1 points ago in learnpython

    Are you wanting to query the tables afterwards, or just save the dataframes? If you just want to save the dataframes, then perhaps look at pickle.
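
    For example, a rough sketch - dfs and the filename here are just placeholders for your dict of dataframes:

    import pickle

    # dfs is an assumed dict of DataFrames, e.g. {'table_a': df_a, 'table_b': df_b}
    with open('dataframes.pkl', 'wb') as f:
        pickle.dump(dfs, f)

    # Load them back later
    with open('dataframes.pkl', 'rb') as f:
        dfs = pickle.load(f)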

    [–] Iterating over Pandas dataframe using zip and df.apply() lieutenant_lowercase 2 points ago in learnpython

    For a start, you don't need to calculate the following line every time you run the function. Calculate it outside the function so it only needs to be calculated once, not 60,000 times:

    zip(df_a['winner_id'],df_a['tourney_date'],df_a['winner_rank'],df_a['loser_rank'],
                             df_a['winner_serve_pts_pct'])
    
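    Something like this, as a rough sketch (assuming df_a has those columns; process_row is just a stand-in for whatever your function does per row):

    # Build the zipped columns once, outside the function
    rows = list(zip(df_a['winner_id'], df_a['tourney_date'], df_a['winner_rank'],
                    df_a['loser_rank'], df_a['winner_serve_pts_pct']))

    # Loop over the precomputed rows instead of rebuilding the zip on every call
    for winner_id, tourney_date, winner_rank, loser_rank, serve_pct in rows:
        process_row(winner_id, tourney_date, winner_rank, loser_rank, serve_pct)  # placeholder for your function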

    [–] attempting to loop over a df and plot but having trouble with code lieutenant_lowercase 1 points ago in learnpython

    did you reset the index?

    pp = df_combined.groupby(['country','year']).mean().reset_index()  # aggregate first (mean() is just an example), then reset the index back to columns
    
    for index, row in pp.iterrows():
        print(row['country'], row['year'])
    

    [–] attempting to loop over a df and plot but having trouble with code lieutenant_lowercase 1 points ago in learnpython

    use iterrows(). you may need to reset the index on your df also

    for index, row in df_combined.iterrows():
        print(row['country'], row['year'])
    

    [–] Running into an issue with a simple webscraping script. lieutenant_lowercase 1 points ago * (last edited 2 days ago) in learnpython

    In your Traxsource code, the tracks list variable is empty. Maybe try:

    tracks = page_soup.findAll('div', {'data-trid':True})
    

    For Trackitdown, I don't know why you are using this selector:

    containers = page_soup.findAll('div', class_=re.compile("featuredTracks track"))
    

    If you look at the source, I would suggest selecting the divs where class="track", like so (https://imgur.com/fJDfohF):

    containers = page_soup.findAll('div', {'class':'track'})
    

    [–] Running into an issue with a simple webscraping script. lieutenant_lowercase 1 points ago in learnpython

    It's how the page you want to scrape loads new data. Open the page in your browser with the developer console open, go to the URL, then click page 2 - the console will show you where the data is being loaded from. The offset variable just increases by 20 each loop and amends the URL to load 20 more tracks.

    [–] Running into an issue with a simple webscraping script. lieutenant_lowercase 1 points ago in learnpython

    iterate over the elements in

    soup.findAll('div', {'class':'featuredTracks track'})
    

    I would do it like this:

    import requests
    import pandas as pd
    from bs4 import BeautifulSoup

    base_url = 'https://www.trackitdown.net/genre/tech_house_minimal/featured_tracks.html?offset={}'
    offset = 0

    track_data = []
    while True:
        # Request the next page of results
        r = requests.get(base_url.format(offset))
        soup = BeautifulSoup(r.text, 'lxml')
        tracks = soup.findAll('div', {'class':'featuredTracks track'})
        # Stop once a page comes back with no tracks
        if len(tracks) == 0:
            break
        for track in tracks:
            # Pull the data attributes and nested elements for each track
            track_dict = {}
            track_dict['track id'] = track['data-track-id']
            track_dict['release date'] = track['data-release-date']
            track_dict['genre'] = track['data-genre']
            track_dict['track name'] = track.find('strong', {'class':'trackTitle'}).text
            track_dict['artist'] = track.find('a', {'class':'artistName'}).text
            track_dict['label'] = track.find('a', {'class':'labelName'}).text
            track_data.append(track_dict)
        # Move on to the next page of 20 tracks
        offset += 20

    df = pd.DataFrame(track_data)
    

    giving you

    https://i.imgur.com/zu3m8tt.png

    [–] How to script a simple ping sweep in python 3.6.4 on linux? lieutenant_lowercase 0 points ago in learnpython

    something like this would work

    import requests.exceptions
    import requests

    base_url = 'http://10.0.0.{}'
    from_ip = 0
    to_ip = 100
    urls = [base_url.format(x) for x in range(from_ip, to_ip+1)]
    for url in urls:
        print("Testing {}".format(url))
        try:
            # A timeout is needed for the Timeout exception below to ever fire (2s is arbitrary)
            r = requests.get(url, timeout=2)
            r.raise_for_status()  # Raises an HTTPError if the status is 4xx, 5xx
        except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
            print("Down")
        except requests.exceptions.HTTPError:
            print("4xx, 5xx")
        else:
            print("All good!")
    

    similar here: https://stackoverflow.com/questions/26682177/in-python-requests-module-how-do-i-check-whether-the-server-is-down-or-500

    [–] Using Pyzbar lieutenant_lowercase 1 points ago in learnpython

    Have you tried reading the documentation? This is on the first page

    >>> from pyzbar.pyzbar import decode
    >>> from PIL import Image
    >>> decode(Image.open('pyzbar/tests/code128.png'))
    [Decoded(data=b'Foramenifera', type='CODE128'),
     Decoded(data=b'Rana temporaria', type='CODE128')]
    

    [–] Is there a better way to generate a modestly large number of objects than this? lieutenant_lowercase 2 points ago in learnpython

    Perhaps use a dictionary.

    deck = {}
    for suit in suits:
        for strength, number in enumerate(numbers):
            # Build the nested dict in one step instead of two lookups
            deck['{} of {}'.format(number, suit)] = {'strength': strength}
    
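    You can then look up a card directly, e.g. (assuming your suits/numbers lists, so a key like 'ace of spades' exists):

    print(deck['ace of spades']['strength'])  # e.g. 12 if 'ace' is the last of 13 ranks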

    [–] Is there a better way to generate a modestly large number of objects than this? lieutenant_lowercase 2 points ago in learnpython

    Sorry, missed that. Change

    deck.append(('{} of {}'.format(number, suit), strength))
    

    to

    deck.append(Card('{} of {}'.format(number, suit), strength))
    
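    For context, a rough sketch assuming Card is something simple like a namedtuple (your actual Card class may differ):

    from collections import namedtuple

    Card = namedtuple('Card', ['name', 'strength'])  # stand-in for the OP's Card class

    deck = []
    for suit in suits:
        for strength, number in enumerate(numbers):
            deck.append(Card('{} of {}'.format(number, suit), strength))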

    [–] Beginner - Automating POST requests from CSV, where to start for best practice going forward? lieutenant_lowercase 3 points ago in learnpython

    Open your developer console in your browser, visit the site, and make a request. Then check the developer console - it will show you what data is being sent via POST. Add those key/value pairs into the post_data dictionary that I put in the code above and then pass that to s.post as json (see the sketch below).
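
    Roughly like this, as a sketch only - the URL and the keys in post_data are placeholders, so use whatever your developer console shows:

    import requests

    post_data = {
        'field_from_console': 'value',  # replace with the key/value pairs shown in the console
        'another_field': 'value',
    }

    s = requests.Session()
    r = s.post('https://example.com/endpoint', json=post_data)  # placeholder URL
    print(r.status_code, r.text)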

    [–] Is there a better way to generate a modestly large number of objects than this? lieutenant_lowercase 7 points ago in learnpython

    enumerate will give you the index number in your list (the strength value) and the string (the card number):

    suits = ['clubs', 'spades', 'hearts', 'diamonds']
    numbers = ['two', ...... 'king', 'ace']
    deck = []
    for suit in suits:
        for strength, number in enumerate(numbers):
            deck.append(('{} of {}'.format(number, suit), strength))
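
    Just to show what enumerate yields:

    >>> list(enumerate(['two', 'three', 'four']))
    [(0, 'two'), (1, 'three'), (2, 'four')]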