too many values to unpack (expected 2) error


I am trying to optimize the hyperparameters of a CNN using PSO, but I cannot fix this error: "too many values to unpack (expected 2)". I am not sure what I am missing in the fitness function. Main code:

Define the PSO optimization:

dimensions = 3  # Number of hyperparameters to optimize
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
lower_limit = np.array([1, 128, 512])
upper_limit = np.array([10, 256, 1024])
bounds = (lower_limit, upper_limit)
optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
best_hyperparameters, _ = optimizer.optimize(fitness_function, iters=20) 

The detailed error for the PSO part:

2023-07-06 19:09:25,049 - pyswarms.single.global_best - INFO - Optimize for 20 iters with {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
pyswarms.single.global_best:   0%|          |0/20
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-6-2b66b04d1847> in <cell line: 11>()
      9 bounds = (lower_limit, upper_limit)
     10 optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
---> 11 best_hyperparameters, _ = optimizer.optimize(fitness_function, iters=20)

2 frames
<ipython-input-4-fecbe8b4fed7> in fitness_function(params)
      1 # Define the fitness function for PSO optimization
      2 def fitness_function(params):
----> 3     hidden_channels, kernel_size, stride = params
      4 
      5     # Set the device

ValueError: too many values to unpack (expected 3)

at this line 6: `optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)`

I have no idea why I am getting this error; I tried some solutions but they did not help.


There are 2 answers

Debi Prasad

Your error arises from the values you passed in the bounds variable. According to the documentation, bounds has to be a tuple with 2 values, i.e. bounds=(lower_limit, upper_limit), where the lower and upper limits are numpy.ndarray or lists covering the range of each hyperparameter. So this version of the code works fine:

import pyswarms as ps

dimensions = 3  # Number of hyperparameters to optimize
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
bounds = ([16, 3, 1], [64, 7, 3])  # (lower_limits, upper_limits) for each hyperparameter
# The line above is the one that was given in the wrong format, which caused the error
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)
best_hyperparameters, _ = optimizer.optimize(fitness_function, iters=20)

As you can see, the bounds variable should have shape (2,), i.e. a (lower, upper) pair, but your input came out as shape (3,). That is why you got the error "ValueError: too many values to unpack (expected 2)" at the line `optimizer = GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds)`: pyswarms expected to unpack 2 values but was given 3.
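To see where the message itself comes from, here is a simplified sketch of the tuple unpacking that pyswarms performs on bounds (illustrative plain Python, not the library's actual internals):

import numpy as np

# bounds as a 2-tuple of arrays, which is what pyswarms expects
good_bounds = (np.array([16, 3, 1]), np.array([64, 7, 3]))
lb, ub = good_bounds  # unpacks cleanly: exactly two items

# a 3-element structure raises the same ValueError as in the title
bad_bounds = ([16, 64], [3, 7], [1, 3])
try:
    lb, ub = bad_bounds
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)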

P.S. You might be new to the community, but try to go through this, as it helps us pin down the bug/query quite easily and keeps the question readable. Happy coding!

Black Swan

A fitness function for the PSO optimization is as follows:

# Define the fitness function for PSO optimization
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from torchvision.transforms import ToTensor

def fitness_function(params):
    # params arrives as floats from PSO, so cast the hyperparameters to int
    hidden_channels, kernel_size, stride = (int(p) for p in params)

    # Set the device
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Create the CNN model
    model = CNN(input_channels=3, output_classes=10, hidden_channels=hidden_channels,
                kernel_size=kernel_size, stride=stride)
    model.to(device)

    # Define the loss function and optimizer
    criterion = nn.CrossEntropyLoss()
    optimizer = optim.Adam(model.parameters(), lr=0.001)

    # Load the CIFAR10 dataset
    train_dataset = CIFAR10(root='./data', train=True, download=True, transform=ToTensor())
    train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

    # Train the model
    num_epochs = 10
    for epoch in range(num_epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)

            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

    # Evaluate the model
    test_dataset = CIFAR10(root='./data', train=False, download=True, transform=ToTensor())
    test_loader = DataLoader(test_dataset, batch_size=64, shuffle=False)

    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for images, labels in test_loader:
            images, labels = images.to(device), labels.to(device)

            outputs = model(images)
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    accuracy = correct / total

    # PSO minimizes the objective, so return the negated accuracy as the fitness value
    fitness = -accuracy
    return fitness
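
One more thing worth noting: pyswarms calls the objective function with the entire swarm at once, an array of shape (n_particles, dimensions), and expects back one cost per particle, an array of shape (n_particles,). That is exactly why unpacking params into three scalars fails inside optimize. A minimal sketch of a wrapper around the per-particle function above (the name swarm_fitness is my own choice, not from the original code):

import numpy as np

def swarm_fitness(swarm):
    # swarm has shape (n_particles, dimensions); evaluate one particle at a
    # time and return an array of shape (n_particles,) as pyswarms expects
    return np.array([fitness_function(particle) for particle in swarm])

# Pass the wrapper to the optimizer; optimize() returns (best_cost, best_pos)
# best_cost, best_pos = optimizer.optimize(swarm_fitness, iters=20)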